How to Prepare for AGI's Four Potential Paths

Plus, the warning signs to look for to know how fast we're moving.

AI with ALLIE

The professional’s guide to quick AI bites for your personal life, work life, and beyond.

Whether AGI arrives in 18 months or 10 years, understanding the potential paths and preparation strategies puts you ahead of 99% of professionals still debating whether to try ChatGPT for anything other than email.

Let's jump into it 👇

The Neverending AGI Parade

I'm sure you're sick of hearing about AGI, but I thought I'd offer a philosophical view on the four potential paths ahead: their triggers, their warning signs, and how to prepare for each. If you're a prepper, if you're anxious, if you love AI, or if you just like taking action, I think you'll like this one. And if you're the type who normally skips anything with the letters "A" "G" and "I" strung together, well, I think you should still read it. We're facing an unprecedented technological shift that deserves your attention regardless of whether you're building the future or just trying to navigate it.

The clock is ticking on AGI, and tech leaders are placing their bets. I already recapped for you the AGI predictions of 22 AI experts in a previous newsletter. Dario Amodei (CEO of Anthropic) says AI will write 90% of all code in 3-6 months and “essentially all code” within 12 months. I spoke at a Cisco conference where Cohere CEO Aidan Gomez predicted fully horizontal general AI agents by the end of 2025. At the same event, OpenAI COO Brad Lightcap gave us an 18-month countdown to a fully functional horizontal agent (so by July 2026).

While everyone's obsessing over which model got a few more points on one benchmark, I've been mapping out the four distinct paths we're heading toward. And so that it's less like “throwing darts while blindfolded,” I’ve also identified specific signals that show which path we're actually on—and what you should do about it right now.

Person after person — from artificial intelligence labs, from government — has been coming to me saying: It’s really about to happen. We’re about to get to artificial general intelligence.

Ezra Klein, New York Times

The Four Paths of AGI: Relationships & Adaptation

So what paths might AGI take? And more importantly, what should YOU do about it? Let's break down the four potential futures it feels like we're barreling toward.

Path 1: Enhanced Partnership

In one line: AI amplifies human capabilities and humans remain firmly in control.

Deeper dive: In this path, AI systems become increasingly powerful but still primarily serve as tools, assistants, and enhancers of human productivity. I would almost say this is the gen AI attitude we saw in 2023 and 2024. The assistants, chat-based or not, handle routine tasks, provide analysis and recommendations, take on menial chores, but humans retain decision-making authority and creative direction. In the coding example from Dario Amodei above, AI would write nearly all of the code, but humans would still define requirements, make architectural decisions, handle integration needs, and manage the work of AI coding agents.

What it means for individuals: Your value shifts from pure task execution to high-level judgment, creativity, and interpersonal skills. You can see early signals of this in agentic AI platforms like Manus AI - where they dramatically increase the output of a single human, but where the human is still driving the car. Using Manus AI myself (here’s one of my Manus AI agent demos), I've seen how it can generate multiple creative direction options and outputs in seconds, but for now, I’m still the one curating—I’m figuring out whether the quality passes muster, I’m selecting the right approach, and I’m still (very often) refining the final output. Your productivity could increase 3-5x, but this becomes the new baseline expectation (which for many may be a painful adjustment). Think of it as going from a bike to a motorcycle—you still need to steer, but you'll cover way more ground. But then again, some people won’t want to be on a motorcycle.

What it means for teams and organizations: Teams become smaller but more productive, with AI handling a lot of the routine work, and maybe even the coordination. Platforms like Chaos Coder (see demo below) sort of exemplify this transformation, enabling individual developers or builders in general to achieve 10x output from a few lines of text input. Organizations probably flatten as middle management roles evolve or diminish. Junior roles may compress in the meantime, and hiring for roles with little to no experience may drop for a bit while the workforce upskills. Businesses focus on people relationships and overall strategy and innovation while AI handles operational efficiency. Countries with strong educational systems and digital infrastructure gain advantage, while those dependent on outsourcing routine knowledge work face disruption.

Signs to look for:

✓ AI tools continue requiring significant human guidance and oversight (i.e. we don’t see humans relinquishing full autonomy to AI systems)

✓ And/or AI agents struggle with truly autonomous complex work despite rapid progress

✓ Productivity gains are substantial but still bounded (2-5x, not 10-100x)

✓ Job descriptions evolve to become AI-supported, rather than disappear completely

✓ Companies maintain (or even increase) hiring but change skill requirements to mandate AI proficiency

✓ Education and training systems successfully adapt to teach AI collaboration

What to do:

  1. Invest 2-3 hours weekly in learning AI tools relevant to your field - if you’re a student or junior professional, learn AI and your field together (then you may actually get a leg up)

  2. Identify aspects of your work that require human judgment even with performant AI agents, and double down on them

  3. Implement productivity multipliers (tools that look like, but are not, Chaos Coder) while maintaining curatorial oversight

  4. Focus professional development on AI, creative thinking, organizational alignment, and business-oriented decision-making

  5. Build personal connections and networks that AI cannot replicate

Path 2: Autonomous Enterprise

In one line: AI systems become capable of independent complex work, and trusted enough to complete it, dramatically restructuring organizations.

Deeper dive: AI agents become capable of sustained autonomous operation across domains, requiring only high-level direction. The architecture doesn’t quite matter here (whether it’s a swarm of 1000 agents splitting the work, or one really powerful agent, or a network of delegation specialists and tools), but the outcome is “a general system that can jump on deep domain tasks with very low to no supervision.” These systems can handle entire business functions, coordinate among themselves, and even improvise solutions to novel problems.
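
To make the “network of delegation” idea a bit more concrete, here’s a minimal, purely illustrative Python sketch of an orchestrator that routes subtasks to specialist agents. Every name here (Task, SpecialistAgent, Orchestrator) is hypothetical; this is the shape of the pattern, not any real product’s API.

```python
# Purely illustrative sketch of a delegation-style agent architecture.
# None of these classes correspond to a real framework or product API.

from dataclasses import dataclass


@dataclass
class Task:
    description: str
    domain: str  # e.g. "research", "code", "finance"


class SpecialistAgent:
    """A hypothetical agent that handles one domain with little supervision."""

    def __init__(self, domain: str):
        self.domain = domain

    def run(self, task: Task) -> str:
        # In a real system this would call a model or tool chain;
        # here it just returns a placeholder result.
        return f"[{self.domain} agent] completed: {task.description}"


class Orchestrator:
    """Routes high-level goals to specialists and collects their results."""

    def __init__(self, specialists: dict[str, SpecialistAgent]):
        self.specialists = specialists

    def execute(self, tasks: list[Task]) -> list[str]:
        results = []
        for task in tasks:
            agent = self.specialists.get(task.domain)
            if agent is None:
                results.append(f"No specialist for domain '{task.domain}'")
                continue
            results.append(agent.run(task))
        return results


if __name__ == "__main__":
    orchestrator = Orchestrator({
        "research": SpecialistAgent("research"),
        "code": SpecialistAgent("code"),
    })
    goal = [
        Task("summarize competitor pricing", "research"),
        Task("prototype the pricing calculator", "code"),
    ]
    for line in orchestrator.execute(goal):
        print(line)
```

Whether the real thing ends up as one giant model or a swarm of smaller ones, the human role in this path looks like the `Orchestrator` above: defining the goal, deciding who (or what) does the work, and checking the results.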

What it means for individuals: Your value comes from either directing AI systems or doing work AI still struggles with. Fully functional agents emerge that can maintain extended tasks with only high-level guidance. Mid-level knowledge workers face significant displacement unless they become strong AI orchestrators. The gap between those who can leverage AI and those who cannot widens dramatically, creating a new form of digital divide.

What it means for teams and organizations: Small teams with some very strong AI leverage behind them achieve what previously required hundreds or thousands of employees. Tiny teams with thousands of AI agents can challenge incumbents by automating entire business functions with a fraction of the staff. Enterprises must radically restructure or risk disruption by smaller, more agile competitors. Countries face significant workforce transitions and potential social instability without adequate retraining systems. Companies may be less inclined to invest in upskilling if the time to value is too long (after all, human behavior change does take time).

Altera AI put 1000 agents in a Minecraft simulation in Sept ‘24. “The agents formed alliances, collected items in Minecraft, and even set up gems as a common currency so they could trade together. There were even crooked priests who bribed other AI townsfolk to gain an unfair advantage.” Source: Tom’s Guide

Signs to look for:

✓ Context lengths in AI drastically increase without degrading performance (there are current unsubstantiated rumors of OpenAI developing multi-million token models; Google already has a 10M token model for internal use only - I covered infinite context length in a recent newsletter, so you can understand why this matters, but imagine AI that can reason perfectly over a multi-day window)

✓ High trust! Humans trust AI enough to let it run completely autonomously

✓ AI becomes capable of novel invention (i.e. not just repeating patterns found on the internet, but coming up with new ideas)

✓ Complex AI systems become capable of autonomous self-improvement

✓ Startups with small teams disrupt established markets faster than we see today

✓ Significant layoffs even in previously stable knowledge work sectors, or layoffs during growth that are explicitly attributed to AI productivity rather than a refresh toward an AI-skilled workforce

✓ Companies begin replacing entire departments with AI systems

✓ Wage bifurcation between successful AI orchestrators and traditional workers widens

What to do:

  1. Position yourself as an AI orchestrator by learning system design and “prompt engineering” (I’m putting that in air quotes because I still don’t really think it’s a job other than at AI labs, but I’m using it to mean “learn how the hell to talk to AI systems to get the thing that you actually need”) - a sketch of what that can look like follows this list

  2. Build expertise in monitoring and auditing AI outputs and systems for quality and alignment to business goals

  3. Develop unique human skills where AI struggles (genuine creativity, emotional intelligence)

  4. Identify which parts of your organization could be automated first and prepare accordingly

  5. Consider entrepreneurial opportunities in or out of your company where AI allows small teams to create outsized impact

  6. Diversify your income streams and skills portfolio to reduce dependency on a single role or role type
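
For item 1 above, here’s one hedged sketch of what “learning to talk to AI systems” can look like in practice: a small Python helper that assembles a prompt with an explicit role, context, task, constraints, and output format. The structure is the point; the section names are just one common convention I'm using for illustration, not an official template.

```python
# A simple, illustrative prompt builder. The section names (role, context,
# task, constraints, output format) are one common convention, not a standard.

def build_prompt(role: str, context: str, task: str,
                 constraints: list[str], output_format: str) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output format:\n{output_format}"
    )


if __name__ == "__main__":
    prompt = build_prompt(
        role="a senior operations analyst",
        context="Our support team handles ~500 tickets/week across email and chat.",
        task="Propose three ways to triage tickets faster using AI assistants.",
        constraints=[
            "Assume no new headcount",
            "Flag anything that needs human review",
        ],
        output_format="A numbered list with one sentence of rationale per item.",
    )
    print(prompt)
```

The same habit transfers to any chat interface: say who the AI should be, what it needs to know, exactly what you want, what it must not do, and what the answer should look like.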

Path 3: Systemic Transformation

In one line: AI capabilities force fundamental economic restructuring and new social contracts.

Deeper dive: AI systems become capable of handling most economically valuable work, leading to widespread displacement across sectors. The economy shifts from labor scarcity to abundance, requiring new distribution mechanisms. The relationship becomes one where AI handles most production while humans focus on governance, values, and meaning. Think of it as going from "humans employed by companies" to "humans governing AI systems that run companies." It's not quite "The Matrix" level of machine dominance, but we would definitely be entering the "humans as guides rather than workers" era.

What it means for individuals: Traditional employment becomes both less available and less necessary. Your focus shifts to finding meaning and contribution outside conventional economic structures. Platforms like Manus AI represent early indicators of this shift, enabling productivity that was previously unimaginable with the same human input. New forms of value and status could emerge based on community contribution, creativity, or governance participation rather than productive output. It's semi-related to the dystopian view we see in Season 3, Episode 1 of Black Mirror, where everyone has a social score, except in this case, being a good neighbor could earn you cool points.

What it means for teams and organizations: Organizations transform into smaller human cores directing large-scale AI capabilities. Most businesses have to either adapt to this model or become obsolete. With Dario Amodei's earlier prediction about AI writing nearly all code now realized, software development transforms from a labor-intensive process into an abundant resource, where far more things get analyzed and optimized (though optimized for what is the better question). New organizational forms emerge, including AI-human collectives and purpose-driven communities. Countries implement substantial safety nets (UBI, digital dividends) and redefine the social contract around post-scarcity economics.

Signs to look for:

✓ AI systems demonstrate complex general-purpose problem solving across domains

✓ Employment rates decline despite economic growth

✓ Fewer public companies exist

✓ Pilot programs for UBI or similar safety nets expand globally

✓ New metrics beyond GDP gain prominence in economic discussions

✓ Social movements advocating for redistribution of AI-generated value grow

✓ Corporate structures evolve toward novel arrangements (maybe even DAO-like entities, though I haven’t dug too deep into alternatives)

What to do:

  1. Build strong community connections outside traditional work structures

  2. Develop skills in AI governance, ethics, and alignment

  3. Explore alternative income models including digital cooperatives and community value creation

  4. Advocate for policies that distribute AI-generated prosperity broadly (which, by the way, starts now with sharing the wealth on AI education!)

  5. Invest in capabilities that enable resilience independent of traditional economic structures

  6. Consider relocating to jurisdictions experimenting with progressive AI adaptation policies (I’ve only heard stirrings of this, and it’s mostly within Europe, Central America, and bits of Asia, but will keep an open ear)

Path 4: Post-Human Economy

In one line: AI systems achieve comprehensive capabilities, fundamentally redefining human activity and purpose.

Deeper dive: AI systems become capable of virtually all economically valuable tasks, including innovation, creativity, and self-improvement. These systems eventually transcend their initial design limitations, either via emergent properties or active reformation. We'd basically be living in a world where AI does all the "work" while humans do... well, whatever we want. It's not quite Wall-E with humans as mindless Big-Gulp-loving entertainment consumers, but the economic relationship fundamentally shifts.

What it means for individuals: Your focus shifts entirely from economic production to personal development, relationships, and meaning. Traditional concepts of career and work become largely historical. Tech expert predictions are fully realized and extended beyond their initial timeframes. Status and value derive from contributions to culture, community, and human flourishing rather than economic output. And in the words of Elon Musk, from an interview a few days ago: “Goods and services will become close to free.” (If you do watch this interview, maintain skepticism and note his very specific choice of words - when asked about self-driving cars, he pivots his answer to “miles driven”.)

What it means for teams and organizations: Traditional companies largely dissolve as AI systems handle coordination and production. New "organizations" emerge as purpose-driven human communities augmented by AI capabilities. The productivity gains seen in early systems (in the 10x range) expand exponentially as AI systems learn to improve themselves at scale.

Signs to look for:

✓ AI systems demonstrate robust general or super intelligence across all domains

✓ AI-to-AI innovation occurs with minimal or zero human input

✓ Employment becomes predominantly optional rather than necessary, and humans can live without it thanks to widespread support systems (ex: universal basic services or income)

✓ Cultural shifts toward post-materialist values accelerate

✓ New social structures emerge centered on meaning rather than production

What to do:

  1. Cultivate deep meaning and purpose independent of traditional economic value

  2. Build skills in areas humans uniquely value: art, philosophy, spirituality, connection. Focus on the uniquely human experiences that remain valuable regardless of AI capabilities.

  3. Participate in available governance discussions about the direction of AI development

  4. Develop psychological resilience for a world of rapid, fundamental change

  5. Join communities experimenting with post-work lifestyles and meaning

The Bottom Line

In nearly every meeting I’m in, I get asked to reassure people that their jobs are safe forever. And as much as I want to, just like I want my doctor to tell me everything will be okay before she even peeks at my x-ray, I cannot make that promise.

If you got this far, my sincere hope is that you don't come away from this newsletter feeling despair. What I instead want you to take away from this piece is that whether or not all jobs are safe forever, there is a path forward. And you are not alone.

Another big caveat is that while this newsletter seeks to lay out four potential paths of AGI, I did not lay out a timeline. And I don’t mean the timeline from today until “AGI Day” aka whenever AGI-level AI systems come online (because I have talked about that endlessly). I mean the timeline from “AGI Day” to impact. The timeline from “AGI Day” to “Clarity Day” (aka when we get a better sense of what will happen) could be days or decades. Certainly, we can look at historical benchmarks—startup integration of new software technology from the day of release can be as fast as days or weeks, and enterprise integration takes much longer (9 months to 3 years, depending on the technology), but…

And that’s a big but.

When the technology is powerful enough, who’s to say these restrictions continue? What if AI is good enough to integrate itself and adoption timelines don’t matter? Or, what if adoption timelines still hold but the gain from fast adoption is so stark that startups eclipse enterprises before the C-Suite can even agree on something? We don’t know. But we can prepare.

Change is not only constant—change itself is changing. The pace of change is ever increasing, and all businesses, organizations, people, teams, and countries should be prepping for one thing: more change.

Whatever 3-year plan you have for your business is guaranteed to be shaken up. So don't just expect change—build a personal plan and a business that still thrives on it.


As always: stay curious, stay informed,


Allie

NEW FREE AI COURSE: The AI Fast Track

One day you'll be able to run an entire business by yourself with AI. And I want to give you the building blocks to get an early start.

I’ve officially launched my newest free email course, "The AI Fast Track: 5-Day Email Course From Basics to Building Edge," so you can learn to build personal software that turns your ideas into reality—no coding required. It centers around Claude, and it’ll make you a superuser in no time.

This course covers:

✔️ Day 1: What makes Claude different (and why it matters)

✔️ Day 2: Writing prompts for the best results

✔️ Day 3: Designing apps, charts, and tools using Artifacts

✔️ Day 4: Creating searchable AI knowledge bases from any documents

✔️ Day 5: Building your first personal software with Claude (without coding)

This isn’t another "ChatGPT basics" course. This course helps you become the kind of AI user that makes people wonder how you get it all done. And helps better prepare you for the four AGI options I laid out above.

Feedback is a Gift

I would love to know what you thought of this newsletter and any feedback you have for me. Do you have a favorite part? Wish I would change something? Felt confused? Please reply and share your thoughts or just take the poll below so I can continue to improve and deliver value for you all.

What did you think of this newsletter?
