The gen AI Kool-Aid tastes like eugenics

Photo: An AI-generated image of white men gathered around a half-full pitcher of Kool-Aid on an elevated stage.
Some 150 years ago, ideas took shape that, according to director Valerie Veatch, now form the foundation of modern generative artificial intelligence. In her new documentary *Ghost in the Machine*, Veatch challenges technological optimism by drawing direct links between contemporary machine learning and 19th-century eugenics. She argues that the term "artificial intelligence" is primarily a marketing ploy, one that diverts attention from the fact that algorithms inherit the biases of their creators and rest on statistical models originally developed by proponents of scientific racism such as Francis Galton and Karl Pearson.

For users and creators alike, the implications are fundamental: when models generate racist or sexist content, these are not mere system "glitches" but consequences of the data architecture itself. Understanding that this technology is not a neutral tool but the product of a specific history of social thought demands greater vigilance when using tools such as *Sora* or *ChatGPT*. Instead of uncritically accepting the narrative of an approaching "superintelligence," we should see AI as a system deeply rooted in human prejudice, one that requires a radical change in our approach to digital ethics and algorithmic oversight.
The Statistical Legacy of Francis Galton
The key to understanding the problem is not an analysis of the code itself but the history of statistics on which modern **Machine Learning** rests. In her film, Veatch goes back to Victorian England and the figure of **Francis Galton**, Charles Darwin's cousin and the father of eugenics. Galton believed humanity could be "improved" by eliminating traits deemed inferior, and his work on multidimensional modeling served, among other things, to rank the attractiveness of women by their ethnic origin. Although Galton never built a computer, his protégé **Karl Pearson** forged these prejudices into mathematical tools, formalizing the correlation and regression techniques that remain pillars of data classification algorithms to this day. *Ghost in the Machine* posits that the attempt to quantify human intelligence and behavior, initiated by the eugenicists, is inscribed into the very architecture of AI. If the foundation of the system is a mathematical attempt to segregate and rank human traits, it is hardly surprising that the final product reproduces those same biases.
Marketing Fog and the Terminology Trap
Veatch rightly points out that the term "artificial intelligence" is itself one of the most successful, and most harmful, marketing maneuvers in the history of technology. Coined in the mid-1950s by **John McCarthy** in part to secure research funding, the term suggests an entity capable of reasoning. In reality, we are dealing with advanced statistical systems that do not "understand" the world but merely predict the next token or pixel from historical data. Anthropomorphic language lets technology companies dodge responsibility for systemic errors: when a model generates harmful content, it is framed as a "hallucination," an unforeseen side effect of a complex machine mind. As Veatch notes, it is simply the logical consequence of feeding algorithms data saturated with historical biases. The **GenAI** industry runs on a cycle of constant hype designed to hide the fact that beneath the shiny interfaces lie mechanisms that perpetuate the status quo.
Silence as a Defense Mechanism
One of the director's most striking observations concerns the AI community's reaction to criticism. In Slack groups where every generated image drew enthusiastic reactions, voices raising systemic racism were met with absolute silence. This is the phenomenon of "techno-optimism," which refuses to acknowledge the flaws in the foundations on which it builds its identity.
- GIGO (Garbage In, Garbage Out): Systems are only as good as the data they learn from. If history is full of discrimination, AI will replicate it as a pattern of correctness.
- White Spaces: Algorithms often associate prestigious locations (such as art galleries or boardrooms) with specific phenotypic traits, which is a direct result of bias in the training database.
- Lack of Transparency: Companies like OpenAI or Anthropic are increasingly reluctant to share details about their training datasets, making independent ethical audits impossible.
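The GIGO point can be made concrete. The sketch below is an illustration of the article's argument, not code from the film: the hiring dataset and thresholds are invented. It trains a minimal logistic regression, the kind of classifier descended from early statistical methods, on a synthetic hiring history in which one group was historically held to a stricter bar. The trained model then scores two equally qualified candidates differently, purely because of group membership:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the cross-entropy loss of a logistic model."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of cross-entropy w.r.t. the logit
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Synthetic "historical" hiring data: qualification is identically
# distributed in both groups, but past decisions demanded more of group 1.
random.seed(0)
X, y = [], []
for _ in range(500):
    q = random.random()                 # qualification score in [0, 1]
    g = random.choice([0, 1])           # group membership
    threshold = 0.5 if g == 0 else 0.8  # the biased historical bar
    X.append([q, g])
    y.append(1 if q > threshold else 0)

w, b = train_logreg(X, y)

# Two candidates with identical qualification, differing only in group.
p_group0 = sigmoid(w[0] * 0.7 + w[1] * 0 + b)
p_group1 = sigmoid(w[0] * 0.7 + w[1] * 1 + b)
# p_group1 comes out lower: the model has learned the historical bias.
```

Nothing in the optimizer is "biased": gradient descent faithfully minimizes loss on the history it is given, and that is precisely the GIGO point.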
Algorithmic Predestination
The modern gold rush around generative artificial intelligence rests on a belief in inevitable progress and a "superintelligence" that will solve humanity's problems. The analysis in *Ghost in the Machine* suggests something quite different: if we do not redefine the foundations on which these models are built, AI will become not a tool of liberation but the most powerful instrument in history for preserving historical injustice.

"The truth is that the term 'artificial intelligence' means nothing; it's a marketing term and it always has been. We have to be precise in the words we use, because their lack of precision allows for the concealment of harmful ideologies" – Valerie Veatch.

Looking at the development direction of models such as **GPT-5** or subsequent iterations of **Sora**, it is clear that the industry prioritizes scale over fixing foundations. Optimizing algorithms for efficiency and visual realism comes at the expense of ethical integrity. My prediction is clear: in the coming years we will see a deepening rift between the aesthetic perfection of generated content and its sociological toxicity. Without a radical change in data selection and algorithmic transparency, artificial intelligence will remain a digital monument to eugenics dressed in the robes of modern engineering. The narrative of "neutral technology" will eventually collapse, giving way to a hard discussion about the political and ideological nature of every line of code we write.