Kagi Translate's AI answers the question "What would horny Margaret Thatcher say?"

The Kagi Translate tool, an alternative to Google Translate, has turned out to be far more versatile than it appeared. Instead of traditional languages, it can be asked to "translate" into invented "languages", from "LinkedIn Speak" through "Gen Z slang" to "horny Margaret Thatcher". The discovery spread across the internet only in recent weeks, although initial reports appeared a year ago on Hacker News. Kagi, known primarily as a paid competitor to Google, launched Kagi Translate in 2024 as a "simply better" alternative to DeepL. The tool uses a combination of large language models to optimize translation results. The discovery demonstrates both the creative potential of LLMs and their dangers: users can manipulate URL parameters to set unexpected "target languages". It also reveals a fundamental problem: when a system is sufficiently flexible, the boundary between actual translation and text generation becomes completely blurred.
Remember the times when playing around with language models was just fun? When instead of worrying about alignment, hallucinations, and AI safety, you could just type an absurd instruction and wait for the result? It turns out those times haven't ended at all — at least not for users of Kagi Translate, a translation tool that has become an unofficial internet meme in recent weeks. All because its creators apparently didn't anticipate that people would want to translate text into "the language of an excited Margaret Thatcher" or "speaking in LinkedIn style".
This story tells us something important about the state of contemporary AI tools. Kagi Translate is not the first application that unintentionally became a toy for meme creators, but it is a particularly instructive case. It shows how generic language models, when left without strict constraints, can do things completely outside their intended scope — and how this sometimes leads to surprisingly funny results.
How a translation tool became an absurdity generator
Kagi is primarily a search engine — a paid alternative to Google that promises better privacy and fewer ads. In 2024, the company expanded its offering with Kagi Translate, positioning it as "simply better" than Google Translate and DeepL. The tool was supposed to use "a combination of language models, selecting and optimizing the best result for each task" — while acknowledging that this might "sometimes lead to oddities that we're actively working on".
For most of its history, Kagi Translate functioned like any other translation tool: a user would type text, select a source and target language from a list of 244 available options, and receive a translated result. Boring, predictable, safe. Exactly what a tool for the serious task of text translation should be.
But in February 2025 — though the story only circulated online recently — an anonymous Hacker News user discovered something interesting: the tool's URL parameters could be manipulated. Instead of selecting one of the 244 predefined languages, you could type any text as the "target language". And the system didn't protest. It didn't throw an error. It just... worked.
From that moment on, the internet started experimenting. People discovered that you could ask Kagi Translate to translate into "the language of an excited Margaret Thatcher", "speaking like a rude guy with a Boston accent", "LinkedIn Speak", "Gen Z slang", or even "pirate speak". And the tool — instead of rejecting these absurd instructions — actually executed them, generating results that were both funny and surprisingly coherent.
When a generic LLM becomes a tool for everything
The key to understanding why Kagi Translate behaved this way is that a generic language model, not a narrowly specialized translation algorithm, works under the hood. Traditional translation tools of the old generation were built on statistical or neural architectures strictly limited to the task of translation between known languages; modern LLMs, by contrast, are built on the principle of "do what I say".
For a model like Claude or GPT, the instruction "translate this text into the language of an excited Margaret Thatcher" is perfectly sensible. The model has sufficient knowledge of how Margaret Thatcher speaks (from films, speeches, historical transcripts), can recognize the concept of "excitement" or enthusiasm in language, and can combine these elements into a coherent output. This is exactly what LLMs are good at: interpreting instructions in natural language and executing them.
The problem — or perhaps rather a feature — appears when a tool designed for one task (translation between natural languages) is powered by a system capable of much more. Kagi Translate didn't refuse to execute absurd instructions because it had no reason to. The URL parameters were accepted, the text was processed, the model received an instruction and executed it. No validation, no hardcoded restrictions.
This is exactly what happens when AI tool creators assume users will use the product "correctly". In this case, the assumption was reasonable for 99.9% of users — but it took just one curious hacker to discover the system's vulnerabilities.
Security through obscurity vs. security through design
The Kagi Translate story reveals a classic conflict in AI tool design: security through obscurity versus security through design. The first approach relies on the assumption that users won't know what they can do. The second is based on the system being designed so that even if someone wants to abuse it, they can't.
Kagi clearly chose the first strategy. The web interface of the tool doesn't offer the option to enter a custom "language" — the list of 244 languages is closed, readable, controlled. But the URL parameters were not validated. Someone who knew they could manipulate the URL could do so. This is security through obscurity: the system is "safe" as long as no one knows about the hole.
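To see what that gap looks like in practice, here is a minimal sketch in Python. The host and parameter names are invented for illustration; Kagi's real endpoint may differ. The point is that the same query-string slot can carry either a legitimate language code or arbitrary text:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only.
BASE = "https://translate.example.com/"

# What the web UI would send: one of the 244 supported codes.
normal = urlencode({"source": "auto", "target": "fr", "text": "Hello"})

# What a curious user can send: arbitrary text in the same slot.
crafted = urlencode({
    "source": "auto",
    "target": "excited Margaret Thatcher",
    "text": "We must defend the pound",
})

print(BASE + "?" + crafted)
```

If the server forwards the `target` value straight into the model's prompt without checking it, the "language" list in the UI is a suggestion, not a constraint.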
In practice, this means the tool can be used for things Kagi never intended to support. If someone were creative enough, they could theoretically ask Kagi Translate to translate into "instructions for producing dangerous substances" or "how to defraud insurance systems". Would it work? Hard to say without testing — but the fact that Kagi Translate accepted "excited Margaret Thatcher" suggests the model has a very liberal approach to what it considers a valid instruction.
A better approach would be to validate URL parameters at the application level — check whether the requested "language" is on the list of supported ones, and reject everything else. This would be security through design: there's no way to bypass the system because the system simply doesn't accept invalid inputs.
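A minimal sketch of that server-side check, assuming the full allowlist of 244 codes is available (the abbreviated set and function name below are illustrative, not Kagi's code):

```python
# Illustrative allowlist; a real deployment would load all 244
# supported codes, not this abbreviated set.
SUPPORTED_LANGUAGES = {"en", "fr", "de", "pl", "ja"}

def validate_target(lang: str) -> str:
    """Normalize the requested code and reject anything off the list."""
    code = lang.strip().lower()
    if code not in SUPPORTED_LANGUAGES:
        # In a web application this would map to HTTP 400 Bad Request.
        raise ValueError(f"unsupported target language: {lang!r}")
    return code

print(validate_target("FR"))
```

A few lines of code close the hole entirely: "excited Margaret Thatcher" never reaches the model because it never survives validation.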
Why this is funny and why it's a problem
Before we start complaining, it's worth admitting: the Kagi Translate discovery is really funny. The internet discovered that you can ask AI to generate text in the style of "excited Margaret Thatcher", and it actually works. This is exactly the kind of AI play that made people start liking these tools — before the entire field turned into a battle over alignment, safety, and ethics.
But there's also a problem in this, and it's serious. First, Kagi Translate stops being a translation tool when users start using it to generate absurd texts. That's not its purpose. Second, if URL parameters can be manipulated in this way, what else can be manipulated? Can you ask Kagi Translate to generate potentially harmful content if you just know the right instruction?
Third — and this might be the most important — this discovery shows how easy it is to create a "jailbreak" for a generic AI tool. A jailbreak is a technique of bypassing a model's safety restrictions through clever instruction formulation. In this case, the jailbreak was trivial: just change the URL parameter. But the concept is the same. If a system's restrictions exist only in the interface, users will find a way around them.
Polish perspective: when AI fun becomes a minefield
For Polish creators and developers, the Kagi Translate story is particularly instructive. The Polish AI community is small but active — and many Polish startups are building tools based on LLMs. Kagi Translate shows how easy it is to create a tool that works great for 99% of use cases but has huge gaps for 1%.
If you're building an AI tool in Poland — whether it's a chatbot for a client, a translation system, or anything else — you need to remember that the internet is full of creative people who will want to test what your tool can do. It's not always malicious intent. Sometimes it's just curiosity. But the result is the same: if you don't take care of security through design, someone will find the gap.
In the Polish context, this is particularly important because Polish AI regulations are still developing. If your tool is used to generate potentially harmful content — even if unintended — you could find yourself in a difficult legal situation. That's why security should be planned from the beginning, not added "at the end".
How Kagi should have handled this
If I were the architect of Kagi Translate, how would I design it differently? First, validate URL parameters at the application level. Check whether the requested language is on the list of supported ones. If not — return a 400 Bad Request error. End of story.
Second, validate instructions at the model level. Before we send an instruction to the LLM, we check whether it makes sense in the context of translation between natural languages. If the instruction contains keywords like "porn", "dangerous", "cheating" — we flag it and reject it.
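A rough sketch of such a pre-filter, using the keywords named above. Keyword lists are crude and easy to evade, so in practice this would only complement, never replace, model-side moderation:

```python
# Crude keyword pre-filter over the instruction before it reaches the LLM.
# The blocklist terms are the examples from the text; a real list would be
# maintained and paired with model-side moderation.
BLOCKLIST = {"porn", "dangerous", "cheating"}

def passes_prefilter(instruction: str) -> bool:
    """Return False if the instruction contains a blocklisted keyword."""
    words = (w.strip(".,!?\"'") for w in instruction.lower().split())
    return not any(w in BLOCKLIST for w in words)
```

Flagged instructions can then be rejected or routed to review instead of being sent to the model.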
Third, logging and monitoring. If a user tries to manipulate URL parameters, we should know about it. We should monitor what "languages" people try to use and respond to anomalies.
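That audit step can be as simple as the sketch below (logger name, allowlist, and function are invented for illustration): every off-list "language" is logged so anomalies surface in monitoring rather than disappearing silently.

```python
import logging

# Illustrative audit hook; the logger name and allowlist are invented.
logger = logging.getLogger("translate.audit")
SUPPORTED_LANGUAGES = {"en", "fr", "de", "pl", "ja"}

def audit_target(lang: str, client: str) -> bool:
    """Log any off-list target language so anomalies show up in monitoring."""
    ok = lang.strip().lower() in SUPPORTED_LANGUAGES
    if not ok:
        logger.warning("unexpected target language %r from %s", lang, client)
    return ok
```

With this in place, a wave of "pirate speak" requests shows up in the logs on day one, not in a viral thread a year later.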
None of these steps is complicated. None requires advanced knowledge of AI security. These are basic software engineering practices that every developer should know. The fact that Kagi didn't implement them suggests that either no one thought of this scenario, or — more likely — it was assumed that "nobody will do this".
The future of playing with AI
The Kagi Translate story is a symptom of something bigger: the shift from specialized AI tools to generic ones. Twenty years ago, if you wanted to translate text, you used a specialized translation tool. Today you can use ChatGPT or Kagi Translate. Tomorrow, something else.
The problem with generic tools is that they are inherently less controllable. A specialized translation tool can be designed to only do translations. But a generic tool based on an LLM can do practically anything — and that's why it's difficult to restrict safely.
In the future, I expect we'll see more such "discoveries" — users who discover that AI tools can do things outside their intended scope. Some of them will be funny, like Kagi Translate. Others will be less funny. But all of them will show the same problem: AI tools are powerful but difficult to control.
For Kagi, the matter is now simple: they need to fix the gap. They've probably already done it, or are in the process. But for the entire AI industry, the lesson is more general: security should be planned from the beginning, not added later. URL parameters should be validated. Instructions should be monitored. And we should always assume that if something can be done, someone will do it.