
Grammarly’s sloppelganger saga


Cath Virginia / The Verge, Getty Images

In March 2026, Grammarly (now rebranded as Superhuman) was forced to capitulate over its controversial "Expert Review" feature. Instead of the promised professional consultations, the tool served users AI-generated advice falsely attributed to famous authors, scientists, and even deceased professors. The roster of "experts" included Stephen King and Neil deGrasse Tyson, and editorial tests revealed that the system cloned the identities of working journalists without their consent, offering generic and largely useless editorial tips. The scandal erupted when it emerged that the company not only lacked permission from the people involved but also displayed misleading verification icons next to their names. Superhuman initially defended the practice by pointing to publicly available source texts, but a wave of criticism over ethics and the violation of personality rights led to the feature's complete deactivation.

For users worldwide, the episode is a clear signal that the line between inspiration and digital identity theft in AI tools is becoming a new privacy battleground. It also pushes software developers away from an "opt-out" model and toward full transparency and genuine expert control over their own likenesses. The race to build the most intelligent writing assistant cannot come at the expense of the credibility of the very people whose authority the AI tries to appropriate.

The market for writing assistants has undergone a drastic metamorphosis in recent years, but the case of the company formerly known as Grammarly will go down in history as a textbook example of how not to implement generative artificial intelligence. What began as an ambitious rebranding toward Superhuman quickly spiraled into a PR and legal nightmare when the behind-the-scenes details of the Expert Review feature came to light. The use of names of well-known journalists, scientists, and even deceased professors without their consent became a flashpoint in the debate over AI ethics.

From correcting commas to digital doppelgängers

The evolution of Grammarly gained momentum in June 2025, when the company acquired the email platform Superhuman Mail. Just a few months later, in October, the official rebranding of the entire enterprise to Superhuman was announced. The company's CPO, Noam Lovinsky, assured at the time that the Grammarly brand would not disappear but would undergo a transformation—from a simple spell-checking tool into a hub for intelligent AI agents. The centerpiece of this strategy was to be the Expert Review feature, quietly introduced as early as August 2025.

The promise was enticing: users were to receive advice from "leading professionals, authors, and subject matter experts." In practice, the app's sidebar began displaying suggestions attributed to names such as Stephen King, Neil deGrasse Tyson, and Carl Sagan, complete with a verification icon reminiscent of the "checks" known from social media. Although the company added a fine-print disclaimer stating that these experts do not endorse the product, the presentation suggested something entirely different.

Superhuman's aggressive implementation of AI features led to serious ethical controversies.

The "inspiration" mechanism under editorial scrutiny

The scandal erupted in full force in early March 2026, when reports from Wired and tests by The Verge editorial staff revealed the scale of the practice. It turned out that Superhuman algorithms were generating opinions attributed to living tech journalists without their knowledge. Among the "experts" were Nilay Patel, David Pierce, and Tom Warren. The suggestions generated under their names were often described as annoying or even useless—for example, an AI impersonating Patel suggested adding "intrigue" to headlines using generic phrases.

When Alex Gay, VP of Product Marketing at Superhuman, was asked why the affected individuals were not notified, he responded evasively that experts appear in the system because their work is "publicly available and widely cited." However, technical analysis revealed deeper problems:

  • Source links leading to the experts' alleged work were often broken or directed to completely unrelated content.
  • The system bypassed paywalls by using copies of articles from content archiving services.
  • The AI suggestions had no basis in the actual working methods of the people whose names were used.

Rapid retreat and legal consequences

The company's response to the crisis was chaotic. On March 10, 2026, a special email inbox was launched through which experts could request the removal of their names from the database (an opt-out model). However, just a day later, under a wave of criticism, Superhuman completely disabled the Expert Review feature. Ailian Gan, Director of Product Management, admitted that the feature requires "rethinking" to give experts real control over their likeness.

Company CEO Shishir Mehrotra also apologized, conceding on LinkedIn that concerns about the misrepresentation of expert opinions were valid. That did not stop the legal machinery, however. The same day, investigative journalist Julia Angwin filed a class-action lawsuit against Superhuman, alleging violations of privacy, right-of-publicity, and related personality-rights protections in New York and California.

The use of publicly available work to train AI models without authors' consent is drawing opposition from creators.

The extractive nature of modern AI

The confrontation between Shishir Mehrotra and Nilay Patel on the Decoder podcast shed light on a fundamental difference in the perception of authorship. Mehrotra claimed that the system merely "attributed" knowledge to sources, while Patel pointed out that attributing AI-fabricated advice to someone is not citation, but fabrication. Despite the failure of Expert Review, the Superhuman CEO still believes in a vision of a "creator economy" where authors train their own AI agents to interact with audiences—only this time, with their consent.

The Expert Review case is more than a failed product experiment. It is a glaring example of the extractive model of AI development, in which data and reputations that experts spent years building are treated as free fuel for algorithms. Although Superhuman promises to bring the feature back in a new form, the creative industry's trust has been severely damaged. The tech industry has received a clear signal: "publicly available" does not mean "free for AI to repurpose at will."

In the era of copyright battles over generative AI, the Superhuman case draws a new boundary. If tech companies want to build tools on the authority of real people, they must abandon the "apologize later" model in favor of transparent partnership. Otherwise, instead of a productivity revolution, we face a wave of lawsuits that could effectively stall the development of next-generation AI assistants.

Source: The Verge AI