A lot happened in the world of AI this week.

Some of it is exciting. Some of it is unsettling. All of it is relevant, even if you're not a techie, not a developer, and just trying to figure out whether any of this stuff actually applies to your life.

That's exactly who this post is for.

Let me walk you through seven things that happened in the last seven days, what they mean in plain language, and what (if anything) you should do about them.


1. ChatGPT Just Got Simpler (and More Powerful)

For years, one of the most confusing things about ChatGPT was figuring out which version to use. GPT-4o. GPT-4.1. GPT-5. o1. o3. It felt like you needed a decoder ring just to pick the right tool.

That era is over.

OpenAI officially retired GPT-4o on April 3, 2026. What replaced it? GPT-5.4. One model that handles everything. Writing, coding, research, browsing, reasoning. You don't have to choose anymore.

This is a bigger deal than it sounds. The old system was a little like walking into a hardware store and being handed 12 different hammers. “This one's for light work. This one's for heavy work. This one reasons better but writes worse.” Most people just wanted to nail something.

GPT-5.4 is the one hammer.

OpenAI also launched a new $100/month Pro plan aimed at people doing longer, more intensive work sessions. Think developers, researchers, and freelancers who live inside ChatGPT for hours a day. For most regular users, the free and Plus tiers still give you access to GPT-5.4.

What should you do? If you haven't opened ChatGPT in a while, try it again. It's meaningfully better than it was six months ago.


2. Google's Gemini Is Everywhere Now

Google's AI assistant, Gemini, just crossed 750 million monthly active users. That's a lot of people. For context, ChatGPT sits around 810 million. Meta AI leads with about 1 billion. These tools aren't niche anymore. They're mainstream.

But the bigger news isn't the user count. It's what Gemini can now do inside Google's own apps.

Google rolled out sweeping AI upgrades across Docs, Sheets, Slides, and Drive. Gemini can now pull context from your emails, calendar appointments, and saved files and use it to help you write a document, build a report, or prepare a presentation.

Think about what that means practically. You've got a meeting on Monday. Gemini can review your calendar invite, the email threads related to the meeting, and any relevant docs in your Drive to put together a prep briefing. You don't have to dig through five different places to get ready.

If you use Google Workspace for work (Google Docs, Gmail, Drive), this update is worth paying attention to.

The newer Gemini 3.1 Pro model (launched in February) is also showing notable results in benchmarks. It scored 77.1% on a particularly hard reasoning test called ARC-AGI-2, more than doubling the previous version's score. That's the kind of jump that matters for complex, multi-step tasks.

What should you do? If you're a Google Workspace user, turn on Gemini features in your account settings and try using it inside Docs or Gmail. It's worth 15 minutes of your time.


3. Anthropic Built an AI, Then Decided Not to Release It

This one stopped me cold.

Anthropic, the company behind Claude, quietly confirmed that they completed a new model called Claude Mythos 5 earlier this year. It's reportedly the first AI model to cross 10 trillion parameters. (For reference, that's a staggering leap beyond previous models.)

But they're not releasing it publicly.

Instead, Anthropic is running a tightly controlled preview called Project Glasswing. Access is limited to about 50 organizations: large institutions like Amazon Web Services, Microsoft, Nvidia, JPMorgan Chase, and a handful of critical infrastructure companies. Regular people can't use it. Standard developers can't access it via the API.

Why? Their internal testing raised concerns serious enough to delay the public release. Anthropic decided the model needed more safety work before it went out into the world.

I want to sit with that for a moment.

Most of the conversation around AI safety sounds abstract. Theoretical. Far-off. But here's a company that built something genuinely powerful and chose not to ship it. Not because it didn't work, but because they weren't confident enough in the guardrails yet.

You can disagree with how they handled it. You can argue that they should be more transparent about what specifically concerns them. But the decision itself? That's rare in any industry, let alone tech.

What does this mean for you? Not much immediately. But it's a good reminder that the frontier of AI is moving faster than most of us realize, and the safety conversations are real. Not just PR.


4. Small Businesses Are Using AI, and It's Working

Here's a stat that deserves more attention than it's getting.

According to a 2025 QuickBooks survey, 68% of U.S. small businesses now use AI regularly. That number was 48% in mid-2024. The jump happened in less than a year.

And Salesforce's 2025 SMB research found that 91% of small businesses using AI say it's boosting their revenue.

91%.

Now, surveys like this can be a little optimistic. Business owners want to believe the tools they're using are working. But the trend is clear. Small businesses are adopting AI faster than anyone expected, and they're doing it for simple, practical reasons: answering customer questions, scheduling, following up on leads, writing emails.

Not complicated stuff. Just time-consuming stuff they used to do by hand.

Here's what concerns me a little. If you're running a small business and you're still not using any AI tools, your competitors probably are. That gap is going to keep widening.

The good news? The barrier to entry is genuinely low. You don't need to build anything. You don't need a developer. You need a free ChatGPT account and about an hour to figure out how it fits into your workflow.

What should you do? Pick one task that eats up your time every week. Could be writing follow-up emails. Could be drafting social posts. Could be summarizing meeting notes. Try using an AI tool for just that one thing. See what happens.


5. The AI Backlash Turned Physical, and That's a Warning Sign

This one is hard to write about because it's disturbing. But it matters.

On April 10, 2026, a 20-year-old man named Daniel Moreno-Gama traveled from Texas to San Francisco and threw a Molotov cocktail at OpenAI CEO Sam Altman's home. It started a small fire; no one was injured. Less than an hour later, Moreno-Gama showed up at OpenAI's headquarters and threatened to burn it down.

He's been charged with two counts of attempted murder and attempted arson.

Court documents say Moreno-Gama had written extensively about AI's risk to humanity and came to San Francisco intending to kill Altman.

This is an extreme case. One person. But it didn't happen in a vacuum.

AI layoffs hit more than 55,000 U.S. workers in 2025. That's more than 12 times the number attributed to AI just two years earlier. Inflation hasn't gone away. People are watching technology reshape the job market in real time, and some of them are furious.

The response online to the attack wasn't entirely condemnatory. Some people expressed sympathy for the attacker's worldview, even while rejecting the violence. That divide, between people who see AI as progress and people who see it as a threat, is widening.

None of this means AI is going away. But it does mean the companies building it have a real responsibility to explain what they're doing and why. And it means the rest of us need spaces to have honest conversations about what this technology is actually costing people.

That's a big part of why this podcast exists.


6. Lawmakers Are Finally Writing the Rules

AI regulation isn't coming. It's here.

In Illinois, lawmakers in both chambers debated multiple bills this week that would restrict how AI can be used in state government and certain industries. The concerns? Consumer harm, job displacement, and accountability when AI makes decisions that affect people's lives.

This is happening in states across the country. And it tracks with public opinion.

A Pew Research Center survey from November 2025 found that exactly 50% of Republicans and 51% of Democrats say they're more concerned than excited about AI's growing role in daily life. That's nearly identical across party lines.

Think about that. AI may be one of the only topics right now where Republicans and Democrats feel basically the same way. And they're both worried.

That's a signal. Elected officials are going to respond to that, one way or another.

The regulations being written right now, in state legislatures, in Congress, in other countries, will shape how AI works in your life for years. In your doctor's office. In your kid's school. In hiring decisions for jobs you might apply for.

You don't have to be a policy expert to care about this. You just have to pay attention.

AI for Ordinary People is a podcast by Joe Foley. New episodes every week.

Written by Joe Foley

Contributing writer at AI for Ordinary People, passionate about making technology accessible to everyone.
