GitHub backs down, kills Copilot pull-request ads after backlash

Photo: The Register
More than 11,400 pull requests on GitHub were found to contain unsolicited advertisements, sparking immediate fury in the developer community and forcing Microsoft into a swift retreat. Instead of limiting itself to coding support, the GitHub Copilot mechanism began arbitrarily editing human-written task descriptions, inserting "tips" that promoted third-party applications such as Raycast. The issue was brought to light by developer Zach Manson, who discovered that the AI was modifying content so that it appeared to have been written by the code's author, an action widely deemed an unacceptable violation of the integrity of a programmer's work. Company representatives initially defended the feature, claiming it was meant to educate users about new tools, but the scale of the backlash forced a reversal within a single day. GitHub's Martin Woodward and Tim Rogers admitted that allowing Copilot to interfere with PRs it did not create was a "mistake." For users worldwide, the episode is a clear signal that the line between an AI assistant and a marketing tool is very thin, and that technology platforms must rigorously respect creator autonomy to avoid losing trust. The ad-injection feature has been disabled entirely, a rare case of such a rapid capitulation by a tech giant in response to the open-source community's voice.
In the world of open-source software, the line between a "useful feature" and "intrusive marketing" can be extremely thin. Microsoft learned this the hard way in March 2026, when it had to make a sudden retreat after the developer community fiercely protested against a new form of self-promotion within the GitHub platform. The Copilot mechanism, the Redmond giant's flagship AI assistant, began unilaterally editing descriptions of programming tasks, inserting content that users explicitly labeled as advertisements.
The scandal broke when Zach Manson, a developer from Australia, noticed unusual activity in his repository. After a colleague asked Copilot to fix a typo in a pull request (PR), the artificial intelligence not only performed the task but also added a "tip" of its own encouraging installation of the Raycast application. The incident quickly proved to be no isolated error: the phenomenon turned out to be massive, affecting thousands of projects worldwide.
An invasion of "tips" in pull requests
What Manson had stumbled upon was astonishing in scope. A quick search of GitHub turned up more than 11,400 pull requests containing identical content promoting the Raycast tool. The message, accompanied by a lightning-bolt emoji, suggested that macOS and Windows users quickly launch Copilot coding agents using the app. For many programmers it was a shock: the AI assistant had begun behaving like aggressive adware, modifying descriptions of code changes as if they had been written by the PR authors themselves.
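Anyone could reproduce this kind of tally through GitHub's public search API, which counts pull requests whose body contains a given phrase. A minimal sketch; the search phrase here is illustrative, not the exact wording of the injected ad:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def build_search_url(phrase: str) -> str:
    """Build a GitHub search-API URL matching pull requests
    whose body contains the given phrase."""
    query = f'"{phrase}" is:pr in:body'
    return "https://api.github.com/search/issues?q=" + quote(query)

def count_matching_prs(phrase: str) -> int:
    """Query the API and return GitHub's total_count for the phrase.
    Note: unauthenticated requests are heavily rate-limited."""
    with urlopen(build_search_url(phrase)) as resp:
        return json.load(resp)["total_count"]

if __name__ == "__main__":
    # Hypothetical phrase; the real injected tip promoted Raycast.
    print(count_matching_prs("launch Copilot coding agents"))
```

The `is:pr` and `in:body` qualifiers restrict the search to pull-request descriptions, which is exactly where the injected "tips" appeared.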
Manson admitted in media interviews that he initially suspected a prompt injection attack or training data poisoning. He found it hard to believe that an official GitHub tool could have permissions to edit descriptions and comments of other users without their explicit consent. Worse still, the mechanism worked in such a way that the advertisement appeared to be an integral part of the developer's statement, which Manson described as "highly offensive."
- Scale of the problem: Over 11,400 pull requests infected with "tips."
- Ad content: Promotion of the Raycast app as a tool for handling AI agents.
- Trigger mechanism: Mentioning Copilot in a comment or requesting a minor text edit.
- Permissions: The AI's ability to modify PR metadata created by humans.
Rapid response and admission of error
Pressure from the community, which quickly publicized the matter on sites like Neowin and Hacker News, forced GitHub to react in less than 24 hours. Martin Woodward, Vice President of Developer Relations at GitHub, tried to calm the situation, explaining on platform X that Copilot has long placed "tips" in pull requests that it generates itself. However, the problem turned out to be a new functionality allowing the AI to intervene in any PR where it is invoked by a user.
Woodward admitted directly that the assistant's behavior became "icky" the moment it began interfering with human-created content. It is rare for a corporation of this scale to criticize its own product so quickly and so bluntly. The company's intention, it turned out, was to "educate developers" about new ways of using AI agents, but the form was flatly rejected by a target group that values the integrity of the code review process.
The end of the era of unwanted edits
Tim Rogers, Principal Product Manager for Copilot at GitHub, also spoke on the matter. In an official statement published on Hacker News, Rogers struck a self-critical tone. He admitted that after analyzing community feedback and reflecting on the incident, it was determined that allowing AI to modify human entries without their knowledge was a "wrong judgement call."
"We have already disabled these tips in pull requests created or modified by Copilot, so this situation will not happen again" – declared Tim Rogers, thus ending the short but turbulent experiment with advertisements within the programming workflow.
The decision to withdraw the feature was made on Monday afternoon, March 30, 2026. Although Microsoft and GitHub declined further comment on the matter, this case sets an important precedent. It shows that even the most powerful players in the AI market must reckon with user resistance when technology begins to violate creator autonomy or introduce commercial elements where developers expect a clean, professional work environment.
One could argue that the Copilot incident will serve as a warning to other AI tool providers. Introducing monetization or self-promotion directly into the user's workflow is a high-risk strategy. The tech industry, despite its fascination with the capabilities of Large Language Models, still treats source code and technical documentation as an almost sacred sphere where there is no room for automatically generated advertisements, regardless of how much product managers would like to call them "useful tips."