Should you cancel your ChatGPT subscription?

AI voice demos + 24 hours left for free AI course

Hi there,

We’re going to talk more deeply about ChatGPT Advanced Voice Mode (that thing that broke the internet in May of this year), but first…

This is the final countdown.


Your chance to get my "How to Use Generative AI: Building an AI-First Mindset" course for FREE expires tomorrow, August 6 at 2:30pm ET. After that, it will only be accessible through a LinkedIn Learning subscription.

Don't let procrastination cost you. This is your final opportunity to join the thousands of professionals who have already taken this course and started transforming their careers.


NOW, MY REAL THOUGHTS ON THE NEW OPENAI FEATURE

OpenAI has its work cut out for it. Google is making inroads, Meta is setting out to become the most used AI assistant in the world, and Anthropic’s Claude is topping benchmarks left and right.

But in my opinion, OpenAI and Anthropic are both focused on one thing: the long game.

And that’s the biggest reason why you shouldn’t cancel your subscription.

They’re not just sprinkling in AI features for fun; they’re thinking about multimodality from scratch, which includes new voice features, interactive features, coding features, and more.

I have early access to OpenAI’s new Advanced Voice Mode and wanted to share with you even more of my experiments, so you can get a sense of where all of this is headed (see below).

But four callouts here:

First, this feature is still in alpha. That means it may make mistakes, interrupt itself, not work when you try to launch it, misunderstand your requests, carry unintended risks, or fail to respond. Trolls can’t see past that; early adopters can.

Second, it is not magic. Just because an AI system can pick up on voice, pitch, pace, diction, intonation, or tone does not mean it understands emotions. Even as humans, we make a best guess based on available data, and someone might just be acting happy.

Third, my favorite features are the least flashy. As a voice AI addict, hands-free mode is critical so I can dictate while cleaning my home or running errands. There are countless features in this release (tone detection, sound detection, voice impressions, alt pacing), but as a user on the hunt for the most beneficial and functional features, my top pick is the ability to interrupt the AI voice. When you hear it going off the rails, being able to interject and right the ship is critical.

Fourth, why does this matter? Why do I care if ChatGPT can act out a scene from Braveheart? Why do I care if ChatGPT can detect a sneeze? Because not all high-value audio is human speech, and listening for non-human sounds unlocks a huge set of use cases. Imagine if your AI assistant could hear that the TV is on and silence your phone notifications. Or hear your smoke alarm going off and call the fire department for you. Or hear a manufacturing machine start whirring and pre-order a replacement part. Don’t think of it as a character voice actor; think of it as global sound and speech detection.
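If you like to think in code, here’s a toy sketch of that idea. None of this is OpenAI’s API; the sound labels and actions below are hypothetical, just to show what routing detected sounds to assistant actions could look like:

    def handle_sound_event(label: str) -> str:
        # Map a detected non-speech sound to an assistant action.
        # Labels and actions are made up purely for illustration.
        actions = {
            "tv_playing": "silence phone notifications",
            "smoke_alarm": "call emergency services",
            "machine_whirring": "pre-order a replacement part",
        }
        return actions.get(label, "no action")

    for event in ["tv_playing", "smoke_alarm", "doorbell"]:
        print(event, "->", handle_sound_event(event))

The point isn’t the lookup table; it’s that once the assistant can hear the world and not just your voice, any sound it recognizes can become a trigger.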

NOW, ONTO THE DEMOS.

ALTERNATIVE ENDING TO TITANIC

There was room on the door, Rose.

RON WEASLEY LOST IN AN IKEA

The ending had me Ron Wheezing with laughter.

SOUND DETECTION FOR COUGHING AND LAUGHING

Maybe ChatGPT has a bigger place in healthcare.

TAKE 40 MINUTES IN THE NEXT 24 HOURS

My new course ‘How to Use Generative AI: Building an AI-First Mindset’ has already been taken by thousands of people. Product managers and CIOs. Surgeons and psychologists. Project managers and HR leaders. Google and Ford. Microsoft and Mattel. Nvidia and Scotiabank. PhDs and CROs.

Everyone can learn AI. And that includes you.

You’re one click away from understanding where the future is headed—for yourself, your team, your kids, and your community.