Writer denies it, but publisher pulls horror novel after multiple allegations of AI use

Photo: Getty Images
Hachette has withdrawn the horror novel "Shy Girl" by Mia Ballard from the British market following a New York Times investigation suggesting significant use of AI in its writing. The decision also cancels the book's planned US release. The novel, which gained popularity on social media in 2025 as a self-published work, tells the story of a depressed woman with OCD who becomes involved with a "sugar daddy" and must live as his pet. Reviews were mixed: some praised the writing style, while others called the work "absolute nonsense", full of repetition and poorly executed. Suspicions of AI use intensified in January 2026, when an editor posted a lengthy analysis on Reddit pointing to hallmarks of text generated by an LLM. The criticism spread rapidly: a 2.5-hour YouTube video garnered 1.2 million views, and AI detection companies such as Pangram reported signs of extensive generator use. Ballard denies the accusations, but the publisher's swift response shows how seriously the publishing industry now takes questions of authorship in the era of advanced language models.
When Shy Girl by Mia Ballard reached bookstore shelves under the Hachette imprint, it looked like a success story made for the contemporary internet: a self-published novel that gained mass popularity on social media and then caught the attention of one of the world's largest publishing houses. Less than a year later, the publisher withdrew the book from the British market and canceled plans for its release in the United States. The reason? A New York Times investigation suggesting that significant portions of the work were created with artificial intelligence. It is one of the first cases in which a major publisher has had to face such an accusation, at a time when the publishing industry is only beginning to grasp the challenges posed by generative models.
This story became a turning point in the debate about the authenticity of literary creation in the age of AI. It's not just a matter of ethics or honesty — it's a more fundamental problem about what literature is, who can be considered an author, and how the publishing industry should respond to increasingly advanced text-generating tools. Ballard denies all accusations, but the fact that Hachette decided to withdraw the book says much about how seriously publishers take these allegations — and how uncertain they are about their ability to verify the authenticity of work.
From social media phenomenon to scandal
Shy Girl is the story of Gia, a woman struggling with depression and obsessive-compulsive disorder, who meets a man offering a radical solution to her financial problems. The condition? She must live like an animal, his animal. The plot soon takes a far more disturbing turn when the protagonist literally transforms into a beast. It is exactly the kind of surreal, grotesque horror that thrives on TikTok and BookTok, platforms where user recommendations can push an unknown novel to bestseller status within weeks.
Initially, reviewers were delighted. Readers on Goodreads wrote about being "obsessively fascinated" by the way Ballard writes. The book quickly gained a cult following, with readers sharing passages on social media, discussing their meaning and speculating about the author's intentions. It was the organic buzz every publisher dreams of, and Hachette moved to capitalize on it, signing a publishing deal and planning international expansion.
However, not all readers were convinced. Alongside praise, critical voices also emerged — and not just about the quality of the narrative. Some reviewers described the text as "absolutely terrible, overloaded, repetitive, poorly executed" with "disgusting formatting". But criticism soon went beyond the realm of traditional literary reviews.
When readers begin to suspect artificial intelligence
The pivotal moment came in January 2026, when someone claiming to be an experienced book editor published a long Reddit thread containing a detailed analysis of the text. The post pointed to what its author considered hallmarks of text generated by an LLM (large language model): repetitive sentence structures, unnatural transitions between paragraphs, and formulations that were overly polished yet seemed devoid of an authentic authorial voice. "If this isn't AI, then she's a terrible writer," the editor wrote. "Her writing is really indistinguishable from a language model."
That was just the beginning. Shortly afterward, a nearly two-and-a-half-hour YouTube video appeared in which its creator presented a detailed analysis of Ballard's text, comparing it with samples generated by ChatGPT, GPT-4, and other popular AI models. The video quickly gained 1.2 million views, and its comment section filled with debate about whether literature can be authentic if an algorithm generated it.
Even AI detection companies joined the debate. Pangram, one of the leading platforms for detecting AI-generated text, analyzed Shy Girl and found that large portions of it exhibited characteristics typical of machine-generated writing. AI detectors are known to be unstable and sometimes misclassify human-written text as AI-generated, but the fact that several of them pointed to the same conclusion was a signal publishers could not ignore.
Publisher's response and author's position
When the accusations began to spread, Mia Ballard confronted them directly. She denied any suggestion that she had used AI to write Shy Girl, insisting the text was entirely her own work. Her defense was firm, but against mounting analyses and expert opinions, her word alone carried less and less weight. In a world where anyone can run a text through several AI detection tools and get a result within minutes, a simple denial is no longer enough.
Hachette, for its part, faced a dilemma. It could stand by Ballard and defend the book, risking its reputation if the accusations proved true, or it could withdraw from the project and avoid the fallout. The publisher chose the second option: it pulled Shy Girl from the British market and canceled plans for a US release, a decision that all but confirms Hachette believed the accusations, even if it never said so directly.
The irony of the situation is not lost on industry observers. Hachette, one of the "Big Five" traditional publishers and an active opponent of AI in disputes over copyright and model training, found itself caught in a trap: publishing a book that may be a product of the very technology it sharply criticizes.
AI detection tools — are they trustworthy?
This entire story sheds light on one of the key challenges of our time: how can we know whether a text was written by a human or a machine? AI detection tools such as Pangram, Turnitin's AI detector, or GPTZero promise an answer, but the reality is far more complicated. These systems work by analyzing statistical patterns in text, searching for features characteristic of language-model output: overly uniform grammar, an absence of the errors typical of human writers, or predictable sentence structures.
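To illustrate the kind of statistical signal such detectors look at, here is a minimal sketch of one commonly cited feature, sentence-length "burstiness": human prose tends to mix very short and very long sentences, while model output is often more uniform. This toy heuristic is an illustration only, not the method Pangram, Turnitin, or GPTZero actually uses, and it is far too crude to detect anything on its own:

```python
import re
import statistics

def sentence_lengths(text):
    # Split on sentence-ending punctuation; crude, but fine for a demo.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Coefficient of variation of sentence length (stdev / mean).

    Low values mean uniform sentence lengths, one weak hint of
    machine-generated text; high values mean varied, "bursty" prose.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Uniform sentences (every sentence is six words long).
uniform = ("The cat sat on the mat. The dog lay on the rug. "
           "The bird perched on the branch.")
# Varied sentences (one word, then eighteen, then two).
varied = ("Stop. The storm that had been gathering over the hills all "
          "afternoon finally broke, and rain hammered the windows. "
          "Silence followed.")

assert burstiness(uniform) < burstiness(varied)
```

Real detectors combine many such signals, typically including token-level probabilities from a reference language model, which is why a single surface statistic like this proves nothing about any individual text.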
However, research has shown that these tools are far from perfect. They can misclassify text written by non-native English speakers as AI-generated. Conversely, AI text that has been reworked, especially text subsequently edited by a human, can pass through a detector unnoticed. It is a game of cat and mouse in which both sides keep improving.
In the case of Shy Girl, the fact that multiple tools pointed to AI, combined with editors' analyses and community opinion, created enough circumstantial evidence for Hachette to act. But it also shows how shaky this ground is: the publisher effectively admitted it could not be certain, but preferred caution to regret.
Precedent for the publishing industry
The story of Shy Girl will have long-term consequences for the entire publishing industry. For the first time, a major publisher has withdrawn a book over AI-related suspicions, and without definitive proof. This sets a precedent other publishers will have to reckon with. Should they be more careful when vetting authors? Should they require declarations that a text was written by a human? Should they invest in better AI detection tools?
For authors, especially those starting their careers on platforms like Amazon KDP or Smashwords, this is also a warning. If you want your work to be published by a traditional publisher, you will have to be ready for a more intensive verification process. Some publishers have already begun requiring authors to state that their works were not generated by AI — but how to verify this remains an open question.
What we learned about the reader community
One of the most interesting aspects of the affair is the role the reader community played in uncovering the potential fraud. There were no scientists and no formal investigation; there were readers who noticed something strange in the text and decided to share their observations. Their collective intelligence and persistence carried the matter into the mainstream media and ultimately onto the pages of the New York Times.
This shows how powerful an online community can be when it has a common goal. But it also shows the risk — what if the accusations were wrong? Readers could destroy the career of an innocent author based on statistical anomalies that may simply be the result of personal writing style. The story of Shy Girl is a reminder that even when we have access to analytical tools and the ability to share information across the world, we must be careful in making judgments.
The future of authenticity in literature
While Mia Ballard claims she is innocent, and the fact that Hachette withdrew the book is not formal proof of guilt, this case highlights an increasingly urgent problem: how will we verify the authenticity of literature in an era when AI models can generate text that is almost indistinguishable from human-written text?
Some suggest that the answer lies in blockchain and distributed ledger technologies, which could create an immutable history of the creative process. Others propose that publishers require a more engaged editing process that would be harder to fake with AI. Still others argue that ultimately it doesn't matter — if the text is good, does it matter whether it was written by a human or by AI?
But this last perspective ignores something fundamental: readers and publishers clearly believe it does matter. The story of Shy Girl shows that authenticity — or at least the belief that we are reading work written by a human — matters to the publishing industry. When this authenticity is questioned, even commercial successes can quickly turn into a PR nightmare. It is a lesson that every publisher should remember when wondering how to deal with the wave of AI-generated content.