8 Comments
Ben D.

There is a much more fundamental question, and it would be great for the titans of the AI industry to address this question honestly. It's this: What are the deep costs to human life of adopting this technology? I think this is ultimately what worries people the most. It's certainly what worries me the most.

I'm literally automating a workflow right now with Claude Code, and reading this while Claude works. When I'm done here I'll have reduced a 2-hour weekly slog to a few minutes. And the actual process of working with Claude to build this is so fun – it feels like magic!

So I love AI, and I'm also terrified of it, because it's really addictive.

When I see people I've known for years doing things they would never have done before, because now it's easy for them to do with the help of AI – I first think, great! Then a moment later I wonder what opportunities for personal growth they've now permanently discarded in favor of easy results. (and if I'm honest, the results often aren't that great...)

Jenna Hermann

Saanya, this was brilliant. Articulated everything I've been seeing flying between SF and middle America to work with Oil and Gas to reduce emissions (an oil and water convo right out of the gate). Silicon Valley is such a bubble. We reinforce our own biases and skip testing them with the masses because it feels inconvenient or hard or wrong because it's not our bubble speak.

We can't disregard the impact this has on individuals. The Frontier Labs need smarter ways to show people that AI is their personal win, not their risk. That's demand creation, a skill we leveraged a ton in cleantech and one people often don't realize they need. I don't see the industry hiring for this very much. They haven't felt the need to because of the fast-paced adoption they experienced, but they're coming up on that wall.

Imagine if the whole world was actually pro-AI. Data centers get built faster and with cleaner energy because communities want them. The user base grows faster. Corporate adoption accelerates. All of that is on the table if they stop forcing it and start earning it. It may feel like it takes longer, but it's often a much faster path than the alternative.

I hope they see there's real power in influencing a democracy rather than bulldozing one and real opportunity in adapting their strategy.

David

It honestly reminds me of the internet/computer revolution when it first started. Back then, my father was handed the book "Who Moved My Cheese" in response to his company's expectation that all Boomers should learn to use a computer and the internet. It was not well received, and Boomers and the traditional media pundits on the 6 O'Clock News screeched and raved about the "evils of computers and the web." Much the same as today, kids were seen as "addicted to evil video games turning them into monsters and rotting their brains." That all turned out to be hogwash, but the people spreading such misinformation were never held accountable for lying about it, so... why not do the same thing again when AI arrived?

The fact of "My kid is addicted to it" speaks volumes. It says that while adults might scoff at AI because it threatens their way of life, their kids are jumping on board. To that end, this reads like those parents who kept their kids off computers in the 1990s and made them read books because computer literacy was a "waste of time." Maybe they'll come out with a new "Pagemaster" remake, but instead of visiting the library, the child protagonist searches Google and Wikipedia instead of asking ChatGPT and discovers the value of "using their own brains." Naturally it would be done in 3D, because who has time for traditional 2D animation methods, but at least it wouldn't use one of those accursed "video models."

Andy Shi

Well, AI is different in that the thing it's replacing is reasoning itself, not mechanical, non-intellectual tools like those of the industrial revolution, web search, etc. "Soft" skills such as the ability to problem-solve and think independently can really only be gained by doing hard things yourself and embracing that struggle. I'm in high school; increasingly I see students offload the actual learning part to LLMs. If the younger generation can't truly reason for themselves or think independently, it leaves society a lot more susceptible to manipulation.

Also note that a large portion of Gen-Z is, in fact, against AI. Look at the top comments on Instagram reels regarding the molotov cocktail and Sam Altman, for instance.

Kyle

I do agree with you that, traditionally, the best way to learn has been a dogged commitment to singing this song: "Oh My God this is a Painfully Brutal Slog through Repetition and Boredom But I am Going to Stick With It Because I Know, At Some Point, Through the Magic of Osmosis, Inspiration and Knowledge Will Take Root and This Awful Season of Drudgery Will Soon Yield an Abundant Harvest."

The above was, most definitely, kind of the only way to do it unless you had an unreasonably kind mentor and teacher shepherding you through life. I've never heard of one of these magical creatures actually existing, but I reserve the possibility that they do.

However, what's different today is that there is a deeply understood science to make learning a helluva lot less painful and a heck of a lot more efficient.

Learning can actually be fun - in fact, it needs to be. And, if the system is designed well, you will be astonished by how quickly your brain can learn.

And what subject matters can be covered? Well. One of the things that AI is probably going to help us do is rapidly develop the materials needed to cover more and more areas of deep expertise.

Is this all speculation?

Nah.

My sister and brother-in-law started one of these companies about a year ago in the healthcare space. I was confident that there absolutely must be existing companies in the space that, the second they saw the angle, would swoop in to crush them.

Apparently, nope.

This seems to be yet another irrationality in our healthcare system. The practitioners have crazy high levels of education and ongoing certification requirements. And, somehow - much to my astonishment - there's no real pre-existing industry that serves the educational needs of these professionals.

And that, I guess, is how your ARR goes from ghost-level zero to WHAT NOW in less than 12 months.

So, my point is: the educational issue is not a structural issue. And AI can actually be part of the solution, where it eliminates the drudgery on the one hand and restores, on the other, the learning that used to happen inside that drudgery. But this second step requires people, and companies, to actually build the right infrastructure to create the educational opportunities that would, I agree, otherwise be lost by younger generations.

Kumar Venkatramani

I agree wholeheartedly with the steps you outline, but the question is: who should take those actions? If it is the same folks who are building this, the incentive structure is broken. If it is the politicians, their trustworthiness is at best dubious and their motives almost certainly hidden. If it is technological peers (like Hinton), the messaging is downright scary. If we want it to be fair and balanced, it needs a panel of economists, academics, and social evangelists to drive it forward. That has never been done in previous iterations of such earth-shattering inventions. In fact, quite the opposite: in capitalistic societies, the people who "owned and benefitted" from the technology got rich and left the masses to simply avail themselves of whatever resources were made available and take advantage of the scraps left behind (electricity, oil, railroads). Clearly that is the model most companies are following: the first two or three to market will take the winnings.

Kyle

I was thinking about your LinkedIn post and this lengthier Substack article further, after the proposed rules came out to eliminate quarterly reporting for public companies.

My LinkedIn comments discussed how so many executives talk to their employees in really silly ways about AI.

e.g., our competitive moat as humans is: EQ.

Ok. That's nonsense. But you know what? I think we are now seeing that our dysfunctional public markets are - actually - now a national security issue.

Since the late 1990s, we have incrementally regulated and legislated a patchwork quilt of insanity that keeps companies from going public at the right stage in their life.

There's nothing wrong with private capital.

But we've got to get companies public much, much, much, much earlier in their J-Curve. Something much more representative of how things worked historically.

Because if we DID get companies public at the right time (e.g., the stage of development at which Amazon went public), then everyone has a choice. You can hedge your risk by investing in all these foundational models that went public in 2024 or 2025 and riding your early-days investment zoom, zoom, zoom -- all the way up.

But in this alternate reality, where our public markets aren't broken and you could invest in disruption the way Americans historically could? Well, Yippee. You are rooting for AI to disrupt, because you are economically aligned with its disruption. And this society-wide participation in wealth creation results in adoption rates that drive us way past China. We're all on the same team, rowing in the same direction.

There are all these compounding benefits that protect us against real national security threats and allow our society to be co-adventurers in the creation of a new and better world.

But you know what happens when you break this model, when companies do not go public until they are much further down their J-Curve path and it's hard to tell whether the real gains have already been soaked up by private capital? Well, you get a lot of people who don't feel like they are beneficiaries of the society they are asked, or compelled, to create.

Because it's all God Damned Risk On. Very little just deserts for hard work.

And this is deeply supported in the life experience and the data for everyone who is an elder millennial or younger. Life has not, actually, been all that fair to these generations. Deep generational covenants have been systemically broken, and the deck has been stacked against them.

So, as the WSJ repeatedly reminds us in its articles: if you are an elder millennial or younger, you are either Scenario A or Scenario B.

Scenario A is: Unfortunately, your beloved parents will die soon. Because that's how the math works. But the gravy is that you are about to be the beneficiary of an incredibly large inheritance - the likes of which this country has never seen - because your parents did nothing particularly special but were decent stewards of their income and assets. I.e., they happened to make decent money that they used to accumulate assets (a nice house and a balanced retirement portfolio placed in the markets) that all skyrocketed in value over the course of their lifetime.

Scenario B is: Unfortunately, your beloved parents will die soon. And whether due to low wages, poor investment decisions, or maybe even something as simple as the wealth-devastating effects of divorce, you are going to inherit nothing. So all the generational forces that have been suppressing your career progression and asset accumulation are going to get even more enraging very soon, when you see all the people who can start taking it easy. But for you? With the rising costs of staying alive -- forget about retiring. And the threat of AI devouring your earning potential in the prime of your career?

Well. Yeah. Let's talk about retraining. And because you got your education because society told you that's what you were supposed to do (even though it was insanely expensive), you know that this "retraining" talk is a verse from the same song and dance that was delivered to the middle-class coal-mining towns in Appalachia. Right before society abandoned them to a far too inconvenient collapse into structural poverty and drug addiction.

And here's the thing: While it may not actually prove to be all that existential for us to beat China to some cliff effect of technological development in the race to AI and quantum dominance, there's a serious enough risk that it is existential that I really do accept that our society must keep the pedal to the metal.

But, Gee, you know what? In light of China's looming demographic collapse, all these efficiency gains are kind of perfect for them. Rather than threatening their society with systemic structural disruption, it may well save them from an unimaginable apocalypse that they would otherwise face (because we have no precedent -- apart from the Bubonic Plague, which I am told wasn't robustly fun -- of societies that suffer the kind of population implosion China faces).

But our society, the way it's structured right now? Maybe winning just means you help create a new Gilded Age.

Actually, this is almost definitely what it means.

Unless you can figure out a way to reinvent yourself for the 15th God Damned time in your career and figure out how to be one of the winners, rather than losers...in this Gilded Age that is hurtling at us.

And you really do not want to be one of the losers if we are entering a Gilded Age. Because what we did to people during the Gilded Age was truly horrific.

All our problems and solutions are inter-related. And we've got to start looking at things holistically.

[Incidentally: I'm sophisticated enough that I think I can make pubco investments that, while unlikely to be as good as getting into these foundational models as early as I would like, at least kinda sorta yield some of the investment growth that would be expected if AI delivers on its greatest promises.

But, while most could figure out how to invest in foundational models, they aren't going to be able to do this more complex investment strategy that I can put together. It's simply too much for them to have to learn for this to be a reasonable way to reduce the risks posed to them by AI in any meaningful way.]

Kerie Roark

I am an AI optimist too. The problem is that a lot of the same people who talk about wanting us to progress to a Star Trek society, or move up a level on the Kardashev scale, are some of AI's biggest detractors. AI is the first step towards any of that. The saying "it takes a village to raise a child" could be said about AI: it's just a child now (not claiming sentience), and how we all contribute to it and interact with it will determine whether we get Jarvis or Ultron... that came out pretty good, I'm going to have to post that on my page. Haha.