Confronting the CEO of the AI company that impersonated me

Photo: A photo illustration of Superhuman CEO Shishir Mehrotra.
As many as 12 million daily active users rely on Grammarly's tools, but few expected that under the Expert Review feature their texts would be evaluated by digital clones of real journalists who never consented to the practice. Shishir Mehrotra, CEO of the company (currently rebranding as Superhuman) and former head of product at YouTube, is facing allegations of identity theft after editors at The Verge discovered their own names inside the AI system. The incident casts new light on the ethics of training language models and of borrowing the authority of specific individuals to validate the output of generative artificial intelligence. For creators and specialists worldwide, it is a clear signal that the line between inspiration and impersonation in the AI world has become dangerously thin. Mehrotra, who also sits on the board of Spotify, argues that the technology is meant to support creativity; Grammarly's practices, however, show that major tech corporations still struggle with the definition of intellectual and image property. In practice, users must now reckon with increasingly frequent cases of algorithms "hallucinating" authorities, which forces rigorous verification of sources even in tools considered market standards. The industry now faces the task of developing strict opt-in standards so that brands cannot monetize other people's reputations under the guise of innovation.
In the world of technology, the line between inspiration and identity theft is becoming increasingly thin, and the latest conflict between the media and AI providers is a stark example of this. Shishir Mehrotra, the current CEO of Superhuman (formerly known as Grammarly), is facing serious allegations after his platform began offering a service to "clone" the personalities of well-known journalists without their knowledge or consent. This is not just another theoretical dispute over copyright for texts in training databases; it is a direct hit on the personal brands of creators who have built their credibility over decades.
Mehrotra is no newcomer to Silicon Valley. As the former Chief Product Officer at YouTube and a current board member at Spotify, he understands perfectly the mechanisms governing platforms and the creator economy. However, his latest venture, the flagship product of Grammarly, has crossed a barrier that many in the industry did not expect to be breached so quickly. In August of last year, the company introduced the Expert Review feature, which promised users corrections and stylistic suggestions from virtual "experts." The problem was that these experts turned out to be real people, including editors from The Verge, whose digital avatars began dispensing advice without any legal agreement or compensation.

The mechanism of identity appropriation in Expert Review
The Expert Review feature was intended by its creators to be a revolution in text editing. Instead of generic grammatical corrections, the user was shown how a specific, recognized professional would write a given passage. In practice, it turned out that Grammarly used the names and writing styles of specific journalists, creating digital copies of them. This approach escalates the problem of AI cloning. While training models on publicly available data is already the subject of numerous lawsuits, offering a specific person's name as a "product" is a direct violation of personal rights.
For the journalists whose names appeared in the Expert Review selection menu, the discovery was shocking. Superhuman (as the parent company) argues that these models are optimized for specific narrative styles; however, the lack of transparency in how these "experts" were selected sparked a wave of outrage. In the creative industry, style is a signature. If an algorithm can reliably mimic it, and does so under the author's name, trust between reader and creator erodes. Who is actually responsible for the words an AI suggestion attributes to a real person?
- Expert Review: A feature offering suggestions from digitally cloned experts.
- Lack of consent: Journalists' names were used without their knowledge or authorization.
- Scale of the phenomenon: The feature covered many reporters from leading technology newsrooms.
- Legal context: A person's name and likeness were used in a commercial AI tool without a licensing agreement.

Product ethics versus aggressive growth
The stance of Shishir Mehrotra reflects a broader trend in the generative artificial intelligence sector: "deploy first, apologize later." As a man who helped build the power of YouTube, Mehrotra knows that platforms are built on user-generated content, but Grammarly went a step further. Instead of just being a tool for error correction, the software began to actively parasitize the reputations of external experts to validate its own AI results. This strategic shift from "writing assistance" to "authority simulation" is a turning point for the entire SaaS software category.
In conversations about the future of software, Mehrotra often emphasizes how AI will change the way we interact with interfaces. However, the case of Expert Review shows the dark side of this vision. If every writing app can summon the style of any writer, the value of a unique voice in the media will drop drastically. Superhuman, despite the name change and attempts to position itself as a premium tool for professionals, must now answer a fundamental question: does their product support creativity or cannibalize it?
"No one ever asked for permission to use our names in this way. Many reporters were outraged by the fact that their professional identity was reduced to a function in a dropdown menu."
A new paradigm of intellectual property
The dispute surrounding Grammarly and the actions taken under the aegis of Superhuman is just the tip of the iceberg. We are witnessing a redefinition of the concept of intellectual property in the era of language models. Traditional copyright protected a specific work, but not necessarily a "style" or "professional personality" from being copied by a machine. Mehrotra, sitting on the board of Spotify, sees similar processes in the music industry, where AI generates tracks that sound like Drake or The Weeknd. Grammarly's transfer of this model to the world of journalism and copywriting is a logical, if ethically questionable, step in the evolution of platforms.
The use of AI-cloned experts in a commercial product, with no revenue-share model for the original creators, is a signal to the entire creative industry. If giants like Superhuman do not develop standards for identity licensing, a wave of lawsuits could block the development of the most promising AI features. The key limitations the company currently faces are not technological, but the social and legal barriers it raised itself by ignoring the agency of authors.
The aggressive approach to implementing features like Expert Review without an ethical foundation heralds a deep crisis in the relationship between big-tech and the creative sector. Shishir Mehrotra and Superhuman will have to revise their strategy, because in a world where AI can be anyone, authenticity becomes the most expensive currency. If AI tools are perceived as parasites feeding on human achievement, their adoption in professional environments will encounter resistance that no algorithm update will break. The industry needs clear rules of the game, and the current conflict surrounding Grammarly is a painful but necessary step toward establishing them.
