Boo… Hoo… Monday!
Hello, Curse and Coffee friends,
Today, we explore how humans are hindering AI’s progress.
Hit reply and let us know what you think (we read all of your kind words).
Coffee at the ready…
The Big Sip

The take: OpenAI dressed a governance complaint as a productivity problem.
What happened: Alexander Embiricos, OpenAI's Codex product lead, told Lenny's Podcast the "limiting factor" to AGI is "human typing speed"—meaning humans reviewing AI output.
Why it matters: The ask is for less oversight. "Rebuild systems to let the agent be default useful" means AI validates its own work.
What to watch: Embiricos predicts productivity "hockey sticks" in 2026. The labs are setting the timeline. Regulators haven't been invited.
The EU spent three years requiring human oversight. OpenAI spent one podcast calling it the problem.
Sponsor Break
Before we slurp into today’s brew…
Here are some wordies from today’s sponsor.
200+ AI Side Hustles to Start Right Now
AI isn't just changing business—it's creating entirely new income opportunities. The Hustle's guide features 200+ ways to make money with AI, from beginner-friendly gigs to advanced ventures. Each comes with realistic income projections and resource requirements. Join 1.5M professionals getting daily insights on emerging tech and business opportunities.
Here’s Your Brew

The EU AI Act came into force in August 2024. Article 14 requires high-risk AI systems to be designed so humans can "effectively oversee" them during use.
The regulation explicitly addresses "automation bias"—the tendency to trust AI output even when you have contradictory information.
Organizations must train supervisors not to rely too heavily on AI decisions.
OpenAI's pitch inverts this entirely.
The problem, in their framing, is that catching errors slows down the hockey stick.
Embiricos wants systems "rebuilt," so AI operates "by default" without constant review. A governance redesign is sold as an efficiency measure.
Anthropic CEO Dario Amodei predicts AGI by 2027. Google DeepMind says 2030 is "plausible."
The labs are racing.
And the bottleneck to artificial general intelligence turns out to be regular human intelligence, asking, "Are you sure?"
Wait… Let’s answer your question.
Q: WTF is AGI?
A: AGI means a computer that can learn and figure things out the way you can.
Right now, AI is like a toy that's really good at one game. It can beat anyone at chess, but if you ask it to play hide and seek, it doesn't know what you're talking about.
AGI would be like a toy that can play any game you teach it—chess, hide-and-seek, building blocks, make-believe—and get good at all of them without needing a different toy for each one.
The peeps building AI think this might happen soon. Some people are excited. Some people are worried. The OpenAI exec in today's newsletter is basically saying: "We could build this faster if people stopped asking us to check our homework."
Two Sides, One Mug

Pro: AI that validates its own work could genuinely accelerate shipping—OpenAI claims Codex users merge 70% more pull requests weekly.
Con: "Complete oversight may no longer be viable"—that's how academics phrase "we don't know who's accountable when AI ships the bug."
Our read: The labs want speed. Regulators want accountability. OpenAI just told you which side they're picking.
Receipt of the Day
[Primary] Lenny's Podcast, December 14, 2025
"If we can rebuild systems to let the agent be default useful, we'll start unlocking hockey sticks."
Why it matters: It's a design philosophy: AI should run without humans in the loop by default. The word "rebuild" is doing a lot of work.
Lenny's Newsletter
Spit Take
Forecasters now give AGI a 25% chance by 2027. In 2020, the median prediction was 2058. — 80,000 Hours
Your Coffee Break Links (and water cooler chatter)
VentureBeat: AI 2027 forecast predicts superintelligence "months" after AGI. Plan accordingly.
OpenAI: Nearly all OpenAI engineers now use Codex, up from half in July. The future is already internal.
ScienceDirect: Academic paper asks: "Is human oversight of AI systems still possible?" The answer involves many "strategic interventions."
Join your team of caffeinated skeptics.
Opinionated world news that respects your time.
One bold take, the best counter, and the receipt(s) that prove it (all in sixish minutes).
Mugshot Poll 📊
OpenAI says human review slows AGI. Your take?
You can read all our back issue newsletters for free here.
For the love of coffee, see you tomorrow!
Enjoy your Monday, keep it caffeinated.
How did we do?
Thanks for reading!
Not subscribed yet?
Be sure to get your daily curse and coffee fix by hitting the button below.