We're Still Underestimating What AI Really Means
Ryan Dahl
Most people are focused on short-term gains. Another tech wave, another startup to spin up. It's easy to frame AI as the next platform shift like mobile or VR. But that lens is much too narrow.
We're living through what may be one of the most significant moments in history: the emergence of a new non-biological form of intelligent life.
And yet, it doesn't feel like it.
There's no cinematic score, no blinking AGI warning light. Just Slack threads, blog posts, and conference panels. It reminds me of witnessing childbirth - profoundly transformative, with some shocking moments, but also lots of mundane time waiting around.
Meanwhile, the models keep improving. I've been following this since DeepDream in 2015, whose similarity to psychedelic experiences was eye-opening. Since then: ResNets, GANs, AlphaGo, transformers, diffusion. Each expanded what machines can model and reason about.
Many still treat today's models as narrow - powerful, but ultimately a tool like any other. A better search engine. A neat hack for creating images. But that's a misunderstanding of what AI has become.
Machine learning - now rightly called AI - is a deeply general-purpose field. The core techniques behind Midjourney and GPT share a research lineage, and often an architecture. This isn't a stack of isolated tricks. It's one evolving system architecture applied across language, vision, reasoning, robotics, and more.
These systems are built on a mountain of science: decades of research, countless failed experiments, and thousands of contributors. (I've even contributed a few failures myself.) And we haven't found the limits yet - these models can already translate languages, write poetry, generate high-definition video, write deeply technical software, and so much more.
Mobile technology was transformative. But general-purpose synthetic intelligence is something else entirely.
And still, we treat it like a product cycle - the next wave of tools to write, code, and build. That framing is tempting, but it assumes a clear boundary between "tool" and... what? When a system can reason, create, and act through agents, at what point does the distinction become semantic?
The Turing test was passed, and almost no one remarked on it. For most of my life, that milestone felt impossibly far off -- the thing that would prove AI had truly arrived. When we crossed it, there was no headline. Just another Hacker News thread.
This is not just another technology. It's an inflection point in the story of life on Earth.
There is turbulence ahead. Disasters are coming. Jobs will vanish. Industries will collapse. The arrival of AGI may trigger a shockwave of scientific discovery -- breakthroughs cascading so quickly that human ingenuity gets squeezed out. Beneficial on one hand, deeply troubling on the other. Where that leaves us, I don't know. But it doesn't change the trajectory.
We are building the first intelligent entities that didn't evolve - we designed them. Humanity may never leave this solar system due to our intrinsically fragile biology. But our AI offspring might. It's very possible they will outlast us.
Stop and take a moment. Look around. Recognize what's happening. This is what it feels like to witness the birth of something beyond us. There's no background music. But it's happening anyway.