Google Search is now using AI to replace headlines

Photo: I’m particularly annoyed by “Copilot Changes: Marketing Teams at it Again,” as I hate reading Headlines That Cap Every Word and we never do that at The Verge.
Google Has Become an Editor. The search engine has started replacing original article headlines with AI-generated ones — first experimenting in Google Discover, now in traditional search results. The Verge found numerous examples where Google rewrote its headlines without permission, sometimes changing an article's meaning. A headline about a cheating tool was shortened to five words, suggesting a product recommendation the editorial team never made. Although Google calls this a "narrow experiment," it does not disclose its actual scale. The company's behavior resembles a bookstore changing book covers without the authors' knowledge — it undermines publishers' control over their own work. It is concerning that Google previously called the headline replacement in Discover an experiment, too, then announced it as a feature. While changed headlines are currently rare and relatively mild, they could be a harbinger of larger changes. Google assures that it would not use generative AI in any potential full rollout — but history shows that tech-giant promises do not always hold true.
For over two decades, Google Search was to the internet what a compass is to a sailor — a reliable guide that everyone oriented themselves by. Users loved this simplicity: ten blue links, each leading exactly where you clicked it. It was an unwritten agreement between Google and the rest of the world: we deliver results, you decide where to go. Now that agreement is falling apart. Google is starting to replace article headlines in search results with versions generated by artificial intelligence — and it's doing this without asking the editorial staff that wrote those articles. This is not a marginal experiment. This is an attack on the very essence of how journalism works in the age of algorithms.
The Mountain View company first started playing with headlines in its Google Discover feed, and has now carried the practice over to traditional search results. Editorial teams are reporting cases where Google not only shortened their headlines — which would still be acceptable — but changed their meaning, sometimes drastically. Google changed an article titled "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" (a clear criticism of the tool) to "'Cheat on everything' AI tool" — five words that read like a recommendation for a product the authors plainly did not endorse. This is no longer optimization. This is distorting the authors' words.
When Google takes control of your story
Google spokesperson Jennifer Kutz admitted that experiments are indeed taking place, but then added that they are "small" and "narrow." She never specified exactly how small they are. Over the past months, editorial staff have watched as headlines they never wrote appear in Google Search results. Headlines that don't match their editorial style, with no indication that they have been changed. Google claims it is experimenting with headlines not just for news media, but for the entire internet.
The comparison made in the article is very apt: it's as if a bookstore were tearing covers off books on its display and changing their titles. Editorial teams spend hours working on headlines that are meant to be true, interesting, entertaining, and noteworthy, without resorting to clickbait. And Google seems to believe that editorial teams have no right to market their own work in the way they want to.
The situation becomes even more absurd when you look at Google's earlier experiments in Google Discover. There, artificial intelligence generated headlines that were simply false. "US reverses foreign drone ban" — on an article that described something quite the opposite. Or "PlayStation Portal is getting a 1080p streaming mode" — when in reality the console got a higher bitrate, nothing more. These were not interpretation errors. This was disinformation on an industrial scale.
An experiment that never ends
Google says this is an "experiment." But history teaches us to be wary of that word. A few months ago, the same company told us that AI-generated headlines in Google Discover were also an experiment. Then, a month later, it announced that it was no longer an experiment — it was a feature that "performs well in user satisfaction research." You know how this usually ends.
Sean Hollister, who wrote this article, has 15 years of experience editing technology content. Over those years, he has seen how Google changes the rules of the game. But he has never seen anything like this before — Google directly overwriting headlines that someone wrote. This is a precedent. And it's a dangerous precedent.
Google maintains that the goal is to "identify content on a page that would be a useful and appropriate title for a user's query." The aim is to "better match titles to user queries and make it easier to interact with web content." It sounds reasonable. Until you realize that Google decides what is "useful" and "appropriate," and you have no say in it.
Technical wizardry, or how Google justifies it
Spokeswoman Mallory De Leon said that if Google were to launch this feature at scale, "it would not be using a generative model and we would not be creating headlines using generative AI." The problem is that no one knows how this would work. If you're not using generative AI to create new headlines, how exactly are you creating them? With magic?
Google is trying to normalize this by saying it's one of "tens of thousands of live experiments on traffic" that Google conducts to test possible search improvements. The company also reminds us that it has been manipulating page titles in search results for years. But as Hollister himself points out — this is not the same thing. When Google shortens a headline because it's too long, that's one thing. When Google completely rewrites it, creating new text, that's something else entirely.
Google's traditional actions in this area have been simple and understandable. If the algorithm deemed a headline too long or unbalanced, it would show only part of it — cutting off the beginning or end. If an editorial team had two headlines — one for the search engine, one for the page — Google would sometimes choose the latter. Annoying? Yes. But understandable and acceptable. This is something different.
When the algorithm becomes an editor
The fundamental problem lies in the fact that Google is taking on the role of editor. It is no longer a search engine that organizes existing content. It is an editorial office that edits articles without the authors' consent. It changes the meaning of text, sometimes drastically. This is a violation of a fundamental principle of how the internet works — that content belongs to the person who created it.
Take a concrete example: an article about a cheating tool. The original version clearly communicated that the tool doesn't work — it was a criticism. Google changed it to a simple product description. Now anyone who sees that headline in Google search results thinks the article recommends this tool. This changes the meaning of the article. This changes the message.
The second example is equally problematic: "Copilot Changes: Marketing Teams at it Again." The article originally had a different headline, with a different message. Google created this new one, and in doing so applied formatting that The Verge never uses — Headlines With Every Word Written In Capital Letters. This is not subtle. This is a change in the style, tone, and message of the article.
Journalism in times when machines edit
All of this is happening at a time when trust in media is already weakened. Institutions are trying to discredit journalism, and editorial teams are struggling for survival. In such a context, allowing Google to change headlines is not just a technical problem — it's a threat to the entire industry. When users see changed headlines, they don't know they've been changed. They think the editorial team wrote them. When a headline is misleading or false, the responsibility falls on journalists, not Google.
Sean Hollister previously warned that Google prioritizes artificial intelligence over traditional search results. He noted that the Gemini AI search engine doesn't encourage users to click on actual news articles. He thought he would always have those traditional blue links at his disposal — the last line of defense against Google's complete control over what people read. Now even that's not certain.
A precedent that opens the gate
The most important question is: what does this mean for the future? If Google can replace headlines in search, what else can it replace? Descriptions? First paragraphs? Entire articles? Where does it end?
Google argues that this is an experiment. But experiments tend to become features, and features tend to spread. Remember that Google Discover started as an experiment. Now it's everywhere. Every Android phone has it. Every Chrome user sees it. What will happen when AI-generated headlines become the standard in Google Search?
Notably, Vox Media (the parent company of The Verge) has filed a lawsuit against Google, accusing it of illegal monopolistic practices in advertising technology. That adds context to this situation — it's not just about headlines. It's about Google having too much power over how content reaches people, and how it uses that power.
Seven years later: will anyone remember who wrote the article?
Imagine a scenario seven years from now. Google has fully implemented AI-generated headlines. Maybe even descriptions. Maybe even first paragraphs. A user reads an article they like, but doesn't know that the headline that attracted them was written by a machine, not by a journalist. The article may be great — but its first impression, its entry point, was changed without the user's knowledge.
This is the world we're heading toward if nothing changes. Google has the power to change what people read before they even read it. And it's doing it in the name of "better functionality" and "better matching to queries."
Hollister is right in saying that this is not normal. In 15 years of working in the industry, he has never seen anything like this before. And we should all be concerned about this. Not because Google is evil — Google is doing what big corporations do, pushing the boundaries of what they can do. We should be concerned because no one is stopping Google. There is no regulation that prohibits this. There is no competition that would do it differently. There is only Google and its vision of what the internet should look like.
And that vision increasingly looks like a world where machines decide what we read, and we just read what they gave us.