Here's what Sam Altman thinks about AGI and having AI girlfriends

My unfiltered reactions to Altman’s predictions on AI agents and human jobs

AI with ALLIE

The professional’s guide to quick AI bites for your personal life, work life, and beyond.

June 6, 2024

I just had a cozy 46-minute hangout with Sam Altman and Logan Bartlett (virtually, of course; I was watching their YouTube video), and phew… this one was a doozy.

Sam's take on everything from AI coding to humanoid robots had me both nodding and raising my eyebrows. So, if you're keen on hearing both Sam’s thoughts and my reactions, grab your popcorn, kettle corn preferably, and let’s dive in.

From agentive AI to AI in education to regulation: Here's what Sam actually thinks

Hey friends,

I recently watched the entire 46-minute Sam Altman interview with Logan Bartlett, and I have to say, it’s one of the most jam-packed, fascinating interviews with Altman I’ve ever seen.

If you haven't seen it yet, you can watch the Sam Altman Interview here (and use Claude to summarize it for you—whatever works for you).

Consider this my unfiltered reaction to the interview. When my friends and I watch AI videos together, we often pause and react, sharing our thoughts in real time.

So, I’m opening my virtual door to all of you and inviting you into my living room.

Other than the two of them sitting weirdly close to each other, here’s what stood out to me, broken down by topic, with my reactions to each.

AGENTIVE AI

Sam has been sorta quietly hinting at what OpenAI's agentive AI efforts could be. I think most people assume it's "oh wow, I'm going to control 500 AI agents!" but there's the potential that it'll be a sort of ensemble-model, hub-and-spoke approach, where a human sends tasks to one core brain and that AI system delegates out to hundreds or thousands of agents. See the drawing deeper in the newsletter, and the toy sketch just below.
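To make that hub-and-spoke idea concrete, here's a toy sketch in Python. To be clear, this is purely my illustration, not anything OpenAI has shared; every name in it is hypothetical, and real worker agents would be LLM calls with tools rather than stub functions.

    # Toy sketch of the hub-and-spoke agent idea (my illustration, not OpenAI's design).
    # One "core brain" takes a human request, splits it into subtasks, fans them
    # out to many worker agents in parallel, and collects the results.
    from concurrent.futures import ThreadPoolExecutor

    def worker_agent(subtask: str) -> str:
        # Stand-in for a real agent (an LLM with tools, a browser agent, etc.)
        return f"result of {subtask!r}"

    def core_brain(request: str, num_agents: int = 8) -> list[str]:
        # Stand-in for the planner: a real system would use a model to decompose
        # the request; here we just fake a fixed decomposition.
        subtasks = [f"{request} / part {i + 1}" for i in range(num_agents)]
        with ThreadPoolExecutor(max_workers=num_agents) as pool:
            return list(pool.map(worker_agent, subtasks))

    # The human talks to one brain; the brain talks to hundreds of agents.
    for r in core_brain("plan my product launch", num_agents=4):
        print(r)

The point of the shape: the human only ever talks to the hub, and the hub decides how many spokes to spin up.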

SUCCESS METRICS OF AI

I find it strange that Sam's main KPI for "changing the world" is GDP growth. I don't have a perfect replacement, but here are some candidates: happiness, creativity metrics, scientific discovery rate, health, and the relative wealth of all classes. Sam cites GDP in this interview and others.

AI AND CODING

Sam talks about how big AI will be for coding. Read that again: coding. He doesn’t say helping software engineers (SWEs) or machine learning experts, he says coding. That could mean the state we have today, where SWEs are coding and they get a big boost from AI and copilots. But it could also be that…

  1. non-coders are coding OR

  2. AI is writing its own code to execute tasks (see the toy sketch just below)
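To illustrate option 2, here's a toy sketch, again entirely hypothetical: the generate_code function is a stub standing in for a real model call, and a real system would sandbox execution rather than exec-ing generated code directly.

    # Toy illustration of option 2: an AI writing its own code to finish a task.
    # generate_code() is a stub standing in for a real model call.
    def generate_code(task: str) -> str:
        if task == "sum the first 100 integers":
            return "result = sum(range(1, 101))"
        raise ValueError(f"no canned code for {task!r}")

    def run_task(task: str):
        namespace = {}
        # A real system would execute in a sandbox; never exec untrusted code.
        exec(generate_code(task), namespace)
        return namespace["result"]

    print(run_task("sum the first 100 integers"))  # prints 5050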

HUMANOID ROBOTS

Sam says that the world is designed for humans, so things that operate in that world, like humanoid robots, may have an easier time—you know, getting around, picking stuff up, doing our laundry. While I agree with him, I don't want it to be that way. I worked with a waste management startup years ago that found retrofit tech gave them nearly a 10x leg up on the competition. Humanoid robots feel like retrofit tech to me. I want us to question the status quo and our defaults more.

Interestingly, after this interview was filmed (recently, I might add), OpenAI formally relaunched its robotics team, which had been shut down in 2020. Forbes reported it here. Sam often drops subtle hints of new announcements in his interviews a few weeks or months in advance.

AI DEVICES

Regarding AI devices like Humane and Limitless, Sam just says "it's early." That's a nice way of saying he doesn't think they're good enough, or, thinking optimistically, that they will get better. Personally, I would pay $100 for a device that is NOT my phone that runs ChatGPT mobile and has optimized voice performance and interface. Sam's comments are a very light signal to me that OpenAI isn't releasing in this space. Or, more strategically for their partners, maybe Sam just wanted to downvote these devices because of OpenAI's newly reported deal with Apple (here).

CHATGPT WRAPPERS AND STARTUPS 

I really think Sam mischaracterizes ChatGPT wrappers. He commonly craps (that's the scientific word for it, actually) on folks building these wrappers, but guess what: these builders aren't dumb. They know the next model will do better. And they know the model after that will be better still. They just don't know the OpenAI roadmap. What they're betting on is that people want something better than just the model as it exists today. And that IS the right bet. I would also add that there are builders like Levels IO making $2M a year from ChatGPT wrappers.

You can either build a business that bets against the next model being really good, or one that bets on that happening and benefits from it happening.

Sam Altman

NEW JOBS IN AI

One of the more shocking moments of the interview for me was when Logan asked Sam what new jobs AI would create, and Sam said HE HAD NEVER BEEN ASKED THAT.

Literally, what.

I had to spit out my Poppi and rewatch that. I get asked that question probably every other week. Maybe he's only talking to C-suite execs, world leaders, and engineers, who are the least likely cohorts to ask it. His prediction was something in art or creative fields; he said "human, in person, fantastic experiences". So hey, if you're an event planner or Burning Man organizer type…that's you. Congrats.

I have other predictions like AI Avatar Manager, AI Operations, and Action Lead. Here’s an old video of mine on that topic.

OPENAI BUSINESS MODEL

When asked about the OpenAI business model, Sam said “ChatGPT subscription model, like, really works well for us, like surprisingly” - aka they’re pulling in tons of $$ (they’re rumored to have made $1.6B last year, reported by The Information). Good to know 🤑

My drawing of our assumptions (left) and OpenAI’s potential AI Agent plan (right)

NEW OPENAI STRUCTURE

He said they're "close" to being able to talk about the new OpenAI structure. It sounded like he was about to say "a few months" and then quickly said instead, "this year, we'll be ready to talk about it." The leadership changes have been going on since November 2023, with some, like new board members, already announced. Now might be a good time to watch Helen Toner's interview with Bilawal Sidhu on the TED AI podcast, covering the board and Sam Altman (watch here).

And here's Sam's reply at the UN AI for Good Summit (watch here)

AGI AND EXPONENTIAL GROWTH

Sam doesn't think that AGI is a moment in time. Instead, he sees AI's progress as a continuous exponential curve, with year-over-year growth. This is interesting for two reasons:

  1. clearly he doesn’t see an AI plateau coming soon

  2. this frees him from committing to any clear definition of AGI

If it's a spectrum, and there's no binary "ah, we don't have AGI today" and "grandpa, wake up, we have AGI!", then there is no need to distinguish between non-AGI and AGI. Though I agree it's a spectrum, I also think that if your entire company's mission is to build responsible AGI that benefits everyone, then you should know what each of those words means and share that definition with the world, regularly.

The arrival of intelligence of a non-human form is really a big deal for the world. It’s coming, it’s here, it’s about to happen, it happens in stages.

Eric Schmidt, co-founder of Schmidt Futures and former Google CEO (source)

SCIENTIFIC DISCOVERY

When talking about the benefits of stronger AI, Sam often talks about NEW or RESEARCH or DISCOVERY. I'd say this is one of the biggest hopes for AI. Notably, he says that moment is not close, but he wouldn't rule it out. In my opinion, unbelievable scientific support can already be provided (and therefore, hopefully, progress made) in pattern matching, especially pattern matching across modalities, industries, themes, and timeframes. This is one of the spaces I'm most excited about.

AI ADOPTION

Sam at one point on the pod agrees that GPT-4 is widely adopted, and Logan calls it "mainstream". What's important here is that he is NOT giving the talking point of "we're so early, tons of people are still not using this". In my personal experience, based on the hundreds of people I talk to each month and thousands of comments I see, the adoption curve for individuals (not businesses) has changed in the last year and REALLY changed in the last 6 months. We've seen statistics all over the place on AI adoption, but here's a recent study from Microsoft on the topic.

AI RISK

Sam says that he believes that AI models will pose “significant catastrophic risks to the world”. This is such an important note. When Ilya Sutskever left OpenAI and the superalignment team was disbanded (read Jan Leike’s X thread here), the rumor mill was shouting “they must have realized AI is not risky!” and pushed for even more acceleration and regulation delay. Well, if you trust Sam’s words, that is wrong.

AI REGULATION

Sam says, "Regulation has, net, been really bad for technology". I'm honestly surprised he said this so boldly. Then he says we might feel differently after some AI threshold (basically: regulation of our current AI is bad, but regulation of some higher-performing AI may be good).

Random but related, and I wanted to make sure these thoughts are in writing: I disagree with basing AI regulation on model size. It should be based on risk level, capabilities, use cases, data access, and consequentialism. Size may actually be a pretty weak indicator if and when new algorithms come out, and it doesn't address a multi-model approach to agentive AI.

OPEN SOURCE

To Logan’s question, “Do you think open source models themselves present inherent danger in some ways?”

Sam replies, “No current one does, but I could imagine one that could.”

And to that reply I say, "Mmmm…what?" There's a chance he is distinguishing between the model itself and the model in the wrong hands, so I'll give him some grace. But I would say the latter is already an issue, with hate-speech-generating examples like GPT-4chan. In my opinion, the model itself becomes a larger issue with autonomy and no checks, access to capital, and self-improvement.

THE SAFETY BINARY

Sam says that safety is not binary. He seems to be rejecting a lot of binary assumptions in areas like AGI and safety, suggesting that everything is actually on a spectrum. I think this perspective is correct and helps people adopt a mindset and set of actions suited to continuous (if exponential) AI improvement. However, big however, I don't want this approach to come at the expense of responsible definitions and education in the space.

ROMANTIC RELATIONSHIPS WITH AI

As gung-ho about AI as Sam is, he makes a shocking prediction almost 40 minutes into the podcast. He says he reads headlines or internet comments about AI romance, recapping, “Everyone’s gonna fall in love with ChatGPT… everyone’s gonna have a ChatGPT girlfriend… I bet not. I think we’re so wired to care long-term about other humans.” Again, giving him grace, I also don’t think everyone will have an AI romantic relationship, but I’ve already met people–real people!–that do.

Last year, while giving an AI keynote, I met a very normal man on the AV team. He told me he was lonely and had created an AI boyfriend for himself. He was proud of it. He gave the AI a name and literally talks to him about his day and flirts with him. And despite working in AV, he told me he is not a crazy early adopter of tech. He even asked me if I thought there would be a future AI-human wedding. As I walk around the world, I think loneliness and nihilism are two of our biggest emotional pandemics, and AI can potentially help mitigate one while also potentially increasing the other.

THE IMPORTANCE OF SHIPPING CODE

Sam keeps talking about iterative deployment. And we've heard this narrative quite a bit from OpenAI (like his quote a year or so ago saying that the team's success came from a bunch of tiny things combined rather than one big thing). Though here's where we disagree: Sam frames iterative deployment as a public-trust thing, but I'm convinced it's also a massive moat. This is how startups are gaining speed on enterprises, and this is the right way to build a company in the AI age, when things are changing so quickly. I wrote more about it in my newsletter on the evolution age (read here) if you want to read about the future of business processes.

DIGITAL TWINS

This is one of the weirder things I caught in the interview. Sam talks about an AI system meant to mimic a human, and he keeps referring to it as "Sam's AI Assistant" or "Sam's AI Ghost". The standard industry terms would be "Sam's AI Avatar" or "Sam's Digital Twin", but he never uses those (in this interview, at least). He wants this "AI Ghost" to be a separate entity, not an extension of himself; he specifically says, "I think of it as NOT me." This feels like a big differentiation to call out. Will people consider their "AI" an extension, a copy, or a secondary? If I were forced to bet, I'd say we'll see the Sora model get involved in this zone, either creating our "ghosts" or creating the worlds our "ghosts" live in.

But it's not the same intelligence that I have, right? I think one of the most unfortunate names is artificial intelligence. I wish we had called it a different intelligence.

Satya Nadella, Microsoft CEO (source)

AI IN SCHOOLS

Sam and I agree here: he wants AI to be required in schools and education, just like calculators, though he says there will be times when we can't use it. It's the same method Ethan Mollick uses in his classroom (read here).

Surprise surprise, weeks after this interview, following up on their success signing Arizona State, OpenAI is launching a dedicated effort for education groups called ChatGPT Edu (read here). As I said, these in-depth Sam interviews are a prediction tool. Who knows, maybe you could build a forecasting model based on OpenAI executive interviews to predict launch dates.

THE SCALE OF AI PRODUCTIVITY

As I shared in my last newsletter (read here), I've seen AI improvements on tasks range from 20-30% to 5x to 16x. But when asked to predict what the future looks like, Sam describes a much different productivity scale, thinking about what it looks like when "one person can do the work of hundreds or thousands of well-coordinated people." It's hard to imagine this happening without the workflow described in the AGENTIVE AI section at the top of this newsletter and in the drawing above. But…we'll just have to wait and see…

As always: stay curious, stay informed,

Allie

25% off my top-rated AI course — ends Sunday!

Big news: my AI for Business Leaders course earned top course status on Maven 🎉 

For the first time, Maven is offering 25% off picks across AI, product, design, and engineering. That's the biggest discount Maven has ever offered.

When I set out to create a course that would help business leaders and executives seamlessly integrate AI, I had no idea it would resonate with hundreds of students from industry giants like Amazon, Adobe, Apple, Google, Meta, Microsoft, Ralph Lauren, Walt Disney, General Mills, CVS, Novartis, Thomson Reuters, Verizon, ReMax, FedEx, MetLife, Mastercard, Cornell, Pepsi, Aetna, Cisco, McKinsey, and more.

Here’s what you can expect:

  • 📚 40+ modules

  • 🎤 5 guest speaker modules

  • 💬 1 live Q&A session

  • 📂 Free access to my full enterprise AI mastery toolkit (60+ pages of worksheets and resources on creating your first AI use case, building a responsible AI policy, implementation strategies, and more)

Tools, courses, and blogs that caught my eye

I’ve pulled together some of the top releases with my take on each one. Check it out.

  1. Perplexity launched ‘Perplexity Pages’ — Perplexity has introduced "Perplexity Pages" for Pro users, allowing you to create and share comprehensive, linked web pages on any topic within seconds, with interactive follow-up questions and public engagement metrics. However, users must verify the data as Perplexity may generate inaccurate statistics and unreliable links. Some are calling it the “Wikipedia killer” and others say it might ruin the internet. How fun. (read it) (my thoughts)

  2. Reverse Turing Test Experiment with AIs — In a mind-bending reverse Turing Test, four AIs (GPT-4-Turbo, Claude 3 Opus, Llama 3, Gemini Pro), created in Unity and voiced by ElevenLabs, compete to identify the human among them. The human barely tries, and the AIs find him immediately. (watch it)

  3. My AI Prompts for Fun and Work — In a world driven by efficiency, here are some fun and work-related AI prompts I’ve tried, like analyzing love letters for patterns, finding ideal neighborhoods in new states, translating Italian, and understanding Gen Z slang (read it)

  4. Hugging Face NYTW Panel — Insights from the Hugging Face NY Tech Week research panel last night highlighted the importance of understanding data flaws, skepticism about training LLMs on the whole internet, the sustained relevance of transformers, and the complexities of data quality and open-source practices in AI development. Best tech panel I've seen in a while (my thoughts)

Feedback is a Gift

I would love to know what you thought of this newsletter and any feedback you have for me. Do you have a favorite part? Wish I would change something? Felt confused? Please reply and share your thoughts or just take the poll below so I can continue to improve and deliver value for you all.

What did you think of this month's newsletter?
