ChatGPT did not cure a dog’s cancer

Photo: The Verge AI
# AI's Role in Dog Cancer Treatment Overstated, Media Dramatizes Story

A story about an Australian entrepreneur who allegedly used ChatGPT to save his dog from cancer has circulated through global media as evidence of an AI revolution in medicine. The reality is decidedly more complicated. Paul Conyngham, who has no medical experience, used ChatGPT to research treatment options for his dog Rosie's tumor. The chatbot pointed to immunotherapy, and Conyngham subsequently collaborated with scientists from the University of New South Wales on a personalized mRNA vaccine. Several weeks after the first injection, the tumors shrank, though they did not disappear completely, and one did not respond at all. The problem is that media outlets dramatized the story. Newsweek wrote about "inventing a cancer cure," while the reality is far less spectacular. ChatGPT did not design the vaccine; it merely served as a research assistant for reviewing the literature. Scientists developed the vaccine. Moreover, Rosie received another immunotherapy at the same time, making it impossible to determine whether the mRNA vaccine had any effect on her improvement. The role of AlphaFold and Grok remains unclear. The story demonstrates how AI receives too much credit for scientists' work.
When an Australian tech entrepreneur with no experience in biology or medicine announced that ChatGPT helped save his dog from cancer, the story spread at breakneck speed. It was exactly the kind of validation that tech giants have been seeking for years — proof that artificial intelligence will revolutionize medicine and tackle one of its most dangerous diseases. Reality, as usual, turned out to be far more complicated.
The story of Paul Conyngham from Sydney and his dog Rosie quickly became one of those narratives that seem too perfect to be true. An entrepreneur without medical training, a desperate pet owner, a few chatbots — all the elements came together in a scenario straight out of an article promoting an AI revolution in healthcare. Except when you dig deeper, it turns out that reality is far more chaotic, and the role of AI in this case has been drastically misrepresented.
It's worth examining this case not because it's trivial, but precisely because it's instructive. It shows how easily a technological narrative can overshadow actual human achievement, how media can distort a story in search of clicks, and how the AI industry uses such cases to build its image as humanity's savior.
## The story that flooded the internet — and how it was distorted
It all started in 2024 when Rosie, a Staffordshire bull terrier and Shar Pei mix, was diagnosed with cancer. It's an ordinary, sad story that thousands of pet owners experience. Chemotherapy initially slowed the disease but didn't shrink the tumors. When veterinarians said "nothing more can be done", Conyngham made a decision that ultimately turned his personal tragedy into a media sensation.
The version of the story that media outlets spread, first among them the Australian newspaper The Australian, sounded relatively straightforward. Conyngham allegedly used ChatGPT to brainstorm treatment options. The chatbot supposedly suggested immunotherapy and directed him to scientists from the University of New South Wales. Conyngham then used both ChatGPT and Google's protein-structure-prediction model AlphaFold to understand the results of the genetic profiling of Rosie's tumor. Working with Professor Pall Thordarson from UNSW, he reportedly developed a personalized mRNA vaccine tailored to the mutations in the dog's tumor.
A few weeks after the first vaccine injection in December, Rosie's tumors were supposed to have shrunk, and the dog was supposed to feel better — even chasing rabbits in the park. It sounded like a triumph. Except Conyngham himself was cautious in his words: "I have no illusions that this is a cure, but I believe this treatment gave Rosie significantly more time and quality of life".
However, this caution didn't survive first contact with the media. Newsweek announced: "Medical-experience-free owner invents cure for terminal dog cancer". The New York Post shouted that a "Tech pro saves his dying dog using ChatGPT to code custom cancer vaccine". On social media, accounts described Rosie as "cured" and held her up as evidence of a new era of personalized medicine. Greg Brockman, president and co-founder of OpenAI, not only shared the story but added his own enthusiasm. Elon Musk couldn't resist either, pointing out that Grok, the chatbot from his company xAI, also played a role, a detail that early reports had conspicuously omitted.
## Where AI ends and real science begins
The problem with this story begins with a fundamental misunderstanding of what ChatGPT and other AI tools actually did. The narrative spread by media suggests that artificial intelligence designed and created the treatment. That's simply not true. ChatGPT didn't design or create Rosie's treatment — scientists did.
At best, the chatbot served as a research assistant, helping Conyngham analyze medical literature. That's genuinely impressive — AI's ability to search, summarize, and explain complex scientific texts is real progress. But that's something entirely different from what the "AI saved the dog" narrative suggests.
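To make that "research assistant" role concrete, here is a minimal sketch of the pattern: asking a chatbot to summarize a paper abstract in plain language. It assumes the official OpenAI Python SDK, an API key in the environment, and a placeholder model name; it illustrates the workflow, not a reconstruction of what Conyngham actually did.

```python
# Minimal sketch of the "research assistant" pattern: have a chatbot
# summarize a paper abstract for a non-specialist. Assumes the official
# OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model
# name is an assumption, not what was used in Rosie's case.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = """Personalized mRNA vaccines encode patient-specific tumor
neoantigens to prime a targeted immune response..."""  # paste any abstract

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize this oncology abstract for a non-specialist, "
                "and list any terms I should look up:\n\n" + abstract
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

This is genuinely useful, and it is also the full extent of what "AI helped" plausibly means here: text in, text out.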
The mRNA vaccine itself wasn't generated by the chatbot. Scientists from UNSW performed genetic profiling of the tumor, identified its mutations, and then developed a vaccine targeting them. That was expert work: physical work in the lab, testing, and validation. AI didn't do that; it could only help think through problems and review the literature.
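For a sense of what even the purely computational slice of that pipeline involves, here is a deliberately toy sketch of one early step in neoantigen selection: turning a point mutation into candidate peptide windows for later immunogenicity screening. The sequence and mutation are invented, and this is not the UNSW team's actual method; real pipelines add MHC-binding prediction, expression filtering, and extensive wet-lab validation.

```python
# Toy illustration of one neoantigen-pipeline step: given a protein
# sequence and a point mutation, emit the mutant peptide windows that
# would later be screened for immunogenicity. Purely illustrative;
# not the UNSW team's actual pipeline.

def mutant_peptide_windows(protein: str, pos: int, alt: str, size: int = 9):
    """Yield every `size`-mer that spans the mutated residue (1-based pos)."""
    mutated = protein[: pos - 1] + alt + protein[pos:]
    # A size-mer covers the mutation if it starts at most size-1 residues
    # before it and no later than the mutated position itself.
    start_lo = max(0, pos - size)
    start_hi = min(len(mutated) - size, pos - 1)
    for start in range(start_lo, start_hi + 1):
        yield mutated[start : start + size]

# Hypothetical example: a G -> V substitution at residue 11 of a made-up
# 20-residue protein (the sequence here is invented for illustration).
toy_protein = "MKTAYIAKQRGVSTLMNPQW"
for peptide in mutant_peptide_windows(toy_protein, pos=11, alt="V"):
    print(peptide)
```

Even this trivial step presumes you already have sequenced, validated mutation data; everything upstream and downstream of it is lab work that no chatbot performs.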
David Ascher, professor and director of biotechnology programs at the University of Queensland, explained this perfectly to The Verge: AlphaFold "could contribute to structural hypotheses about proteins, but it's not a ready-to-use system for designing cancer vaccines". Moreover, official guidelines warn that AlphaFold is not validated for predicting the effects of certain mutations and doesn't model "several biologically important contexts".
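For context on what AlphaFold does offer, its public database serves precomputed structure predictions keyed by UniProt accession. Below is a minimal sketch of fetching one, assuming the documented EBI REST endpoint and the third-party requests library; the accession is just a familiar human protein, unrelated to Rosie's case.

```python
# Minimal sketch: fetch an AlphaFold structure prediction from the public
# AlphaFold DB REST API. Assumes the documented EBI endpoint and the
# third-party `requests` library; P04637 (human p53) is used only as a
# familiar example, unrelated to this story.
import requests

accession = "P04637"  # UniProt accession for human p53
url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()

entries = resp.json()  # the API returns a list of prediction records
for entry in entries:
    # Print whatever fields the record exposes rather than assuming names.
    for key, value in entry.items():
        print(f"{key}: {value}")
```

What comes back is a predicted protein structure, not a vaccine design, which is precisely Ascher's point.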
The role of Grok, xAI's chatbot, is even more unclear. Conyngham wrote on X that "the final vaccine design for Rosie was designed by Grok", but it's unclear what that specifically means or what input the model received. Ascher suggests that Grok realistically falls into the same category as ChatGPT: a tool that "could help with literature searches, summarizing articles, translating jargon, suggesting workflows, editing code or documents, and helping the user think through options". A useful role, but far from what the phrase "designing a cancer vaccine" suggests.
## The human work that media completely ignored
What really stings about this story is the complete disregard for the enormous human effort behind it. Alvin Chan, an assistant professor at Nanyang Technological University in Singapore who works on AI for biomedical and pharmaceutical discovery, put it perfectly: the "AI did it" frame ignores the massive human effort without which "AI output would remain just text on a screen".
Think about it for a moment. To go from "ChatGPT suggested immunotherapy" to "mRNA vaccine tailored to specific mutations of a dog's tumor" required:
- Access to specialist scientists from an elite Australian university
- Ability to conduct advanced genetic profiling of the tumor
- Deep knowledge of immunotherapy and mRNA vaccines
- Access to a laboratory capable of synthesizing mRNA
- Ability to produce and test the vaccine
- Significant funds — estimated at tens of thousands of dollars
- Veterinary skills to safely administer the treatment
ChatGPT provided none of these elements. It could only help formulate questions and analyze answers. Crediting it with the vaccine is like crediting GPS with designing a house: it can show you the terrain, but it doesn't build anything.
Professor Ascher summed it up well: Rosie's case "is better viewed as an extraordinary, very specific proof of concept than as a template that ordinary people can easily replicate". It required "significant" expert effort, not just "a chatbot and a few prompts".
## Was Rosie actually cured? The answer is complicated
Reading the first headlines, one might think Rosie was completely cured of cancer. That would be an astounding discovery, revolutionary in the world of veterinary and human medicine. But reality is far more nuanced — and far less triumphant.
First: Rosie's tumors didn't disappear. They shrank, but they remained. One tumor didn't respond to treatment at all. That isn't a cure; it's slowing the disease, which matters for the dog's quality of life but isn't the same as "saving" or "curing" her.
The second problem is even more fundamental: it's unknown whether the mRNA vaccine had any effect at all. The vaccine was administered simultaneously with another form of immunotherapy, a checkpoint inhibitor, a drug designed to help the immune system attack tumors. This makes it practically impossible to determine whether the vaccine did anything. One of the scientists involved in the project, Martin Smith, said the team is running tests to measure the immune response, but those results haven't been published or peer reviewed yet.
This is a crucial distinction. In medicine, both human and veterinary, you can't simply say "the patient feels better, so my treatment worked". Too many variables are in play. It could have been the checkpoint inhibitor. It could have been a placebo effect on the owner's side that changed how the dog was cared for. Rosie might simply have been lucky, her immune system attacking the tumors on its own. Without controlled trials, without reproducibility, without transparency, it's all speculation.
## Why the story sounds like a PR stunt — because it might be
The entire story has a certain unsettling smell that's hard to shake. The extraordinary combination of elements — a desperate pet owner, a hopeless medical situation, a tech genius who "invents" a solution, and a happy ending — is too perfect. Too clean. Too convenient for the narrative that tech companies want to spread.
In the world of tech funding, where bold claims built on questionable foundations using unclear methods are the norm, this story fits perfectly. mRNA vaccines — like the broader promise of personalized medicine — remain largely unproven as a cancer treatment in humans, let alone dogs. While the case may be real, it seems too carefully composed and conveniently omits the tens of thousands of dollars and significant expertise required to turn an idea into viable treatment.
When contacted by The Verge, Conyngham didn't respond. But his X profile says a lot. The bio reads "Ending Cancer for Dogs" and links to a Google form describing his "dream to make this process something everyone could have access to". The form asks whether your dog has cancer, whether you're a researcher or scientist who wants to get involved, and, crucially, whether you're an investor.
This doesn't look like a scientist's project to share a discovery with the world. This looks like building a business.
## AI as a research tool — real value hidden beneath the noise
Despite all the caveats, it would be a mistake to completely dismiss Rosie's story as nonsense. Here lies real progress, real value of AI, though it's far more subtle than the headlines suggest.
Artificial intelligence is indeed making science more accessible to ordinary people. A chatbot that can search, summarize, and explain dense scientific texts lowers a barrier that once took years of training to clear; now it can walk you through a paper in minutes. This democratizes access to information.
But — and this is a huge but — democratizing access to information is not the same as democratizing access to care. Conyngham had access to scientists from an elite university, a laboratory capable of synthesizing mRNA, funds to conduct everything, and a veterinarian capable of administering the treatment. Most people — even in wealthy countries — don't have access to any of these elements.
Rosie's story shows that AI can be a helpful tool in the hands of experts with access to resources. But it doesn't show that AI can replace experts, lab equipment, funds, or the physical work of science. This is an important distinction that media completely missed.
## Special case or template for the future?
The question that really matters is whether Rosie's case can be reproduced. Can an ordinary pet owner with access to ChatGPT now design a vaccine for their dog? The answer is decidedly no.
As Ascher made clear, Rosie's case is a very specific proof of concept, not a template that ordinary people can replicate. It required an extraordinary alignment of circumstances: an owner with enough money and determination to contact scientists, scientists interested in experimenting with a new idea, access to advanced laboratory equipment, and those scientists' time.
By comparison, most people in the world, even in wealthy countries, don't have access to resources like these for themselves, let alone for their pets. Where does that leave us? With the conclusion that AI can be a useful tool for scientists who already have resources. That's important, but it's not a revolution in medicine.
## A lesson for the tech industry — and for us
Rosie's story is instructive in many ways. First, it shows how easily a technological narrative can be distorted, especially when it's convenient for companies that want to promote their technology. Second, it shows how media can push for sensation at the expense of accuracy. Third, it shows how important it is to be skeptical of claims about AI "breakthroughs".
For the AI industry, this story should be a warning. The hype around AI is already high — perhaps too high. When companies like OpenAI, Google, and xAI amplify exaggerated versions of stories, they contribute to building unrealistic expectations. And when expectations are unrealistic, disappointment is inevitable.
For Polish tech creators and entrepreneurs, this story is particularly important. Poland's AI industry is growing, but if we build our reputation on exaggerated claims and distorted stories, we'll share the broader tech industry's fate: losing public trust when reality doesn't match the promises.
The real value of AI in medicine and science is real. But it's more subtle, more demanding, and less dramatic than the headlines suggest. AI is a tool that can support scientists and doctors, but it cannot replace them. It can help analyze data, but it cannot replace the physical work of the laboratory. It can suggest hypotheses, but it cannot test them.
Rosie's story is a story about what happens when we mix real technological progress with uncontrolled hype and media seeking sensation. The result is a distorted narrative that harms both public expectations and real scientific progress.