Industry · 5 min read · The Register

AI will write code, but prepare to babysit it - and be sure you speak its language

Pixelift Editorial Team

Photo: The Register

Instructing artificial intelligence to act like a "programming expert" paradoxically degrades the quality of the code it generates. Recent research, discussed in March 2026 by The Kettle podcast team, punctures the myth of AI autonomy in software development. Although language models can produce sophisticated structures, the practice known as "vibe coding" still requires constant human curation: trusting the algorithm uncritically replicates errors that only experienced specialists can catch. For the global technology market the signal is clear. Cutting development teams in the hope of full automation is a risky strategic mistake. AI does not replace the programmer; it becomes a tool that needs "babysitting", meaning continuous supervision and bug fixing. Users and companies must understand that the key to success is not generating code but verifying what the machine delivers. The practical implication for the creative and IT industries follows directly: the expert's role is shifting from writing from scratch toward critical editing and debugging. Ultimate responsibility for an application's stability and security remains in human hands, because AI still lacks the intuition needed to understand business context.

The vision of a world where an army of algorithms replaces legions of programmers is becoming an increasingly popular topic in the boardrooms of technology corporations. However, reality, as is usually the case, turns out to be much more complex. Although artificial intelligence can generate a neat poem or a function in Python, the final result almost always requires the watchful eye of an expert to catch subtle errors and logical gaps. In the industry, the term vibe coding is beginning to dominate, perfectly capturing the superficial nature of the current generation of AI tools – they write code that "looks" good, but does not necessarily work as intended.

In the latest episode of The Kettle podcast, Brandon Vigliarolo, along with Tobias Mann and Tom Claburn, attempted to demystify the role of AI in software engineering. Their diagnosis is clear: any company planning to reduce development teams in the hope of full automation of the code creation process is exposing itself to serious risk. Artificial intelligence in its current form is not an autonomous engineer, but an assistant requiring constant supervision, whose mistakes can be more costly than human labor.

The expert paradox and the prompting trap

One of the most surprising threads raised by The Register's editors is the research Tom Claburn cites. It shows that the common prompting technique of instructing a model to "act like an expert in the field of software engineering" produces the opposite of the intended effect. Instead of raising the quality of the generated code, models given such a persona often deliver lower-quality solutions, which sheds new light on how Large Language Models (LLMs) actually work.

This phenomenon exposes a fundamental limitation of current systems. Models such as GPT-4 or Gemini do not understand programming logic the way a human does; they operate on the statistical probability of successive tokens. When we impose an "expert" role on them, they can fall into the trap of over-complicating the code or generating structures that are statistically associated with advanced projects but incorrect or inefficient in the given context. As a result, the programmer may spend more time "babysitting" – monitoring and correcting the code – than they would have spent writing it from scratch.
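Mechanically, persona prompting usually means prepending a system message before the actual task. A minimal sketch, assuming an OpenAI-style chat message format; the task and persona strings here are illustrative, not drawn from the cited study:

```python
def build_messages(task, persona=None):
    """Assemble OpenAI-style chat messages, optionally adding a persona."""
    messages = []
    if persona:
        # This is the "act like an expert" framing the research found can backfire.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": task})
    return messages

task = "Write a Python function that removes duplicates from a list, preserving order."

# Plain prompt: just the task.
plain = build_messages(task)

# Persona prompt: identical task, plus the expert instruction.
expert = build_messages(task, "Act like an expert in the field of software engineering.")
```

The only difference between the two requests is that extra system message, which is what makes the finding striking: a one-line addition that intuitively should help measurably hurts.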

Vibe coding and the illusion of productivity

The term vibe coding is becoming increasingly popular in discussions about creative technologies and programming. It describes developers relying on the "feel" of the model, accepting snippets of code that seem correct at first glance. The problem is that in software engineering a "vibe" is not enough: code must be deterministic, secure, and scalable. AI can produce a sophisticated structure that, on closer analysis, turns out to be riddled with logical errors or security vulnerabilities.

  • AI models generate code that requires constant verification by senior developers.
  • Attempts to replace experienced engineers with AI lead to technical debt.
  • Tools like GitHub Copilot or DeepSeek are aids, not replacements for human logic.
  • Understanding the language AI uses is key to not falling victim to model hallucinations.
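The gap between code that "looks" right and code that is safe can be a single line. A hypothetical sketch (not taken from the podcast) of the kind of snippet an assistant might plausibly produce, next to the fix a reviewer should insist on:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Plausible-looking AI output: reads fine, but string interpolation
    # makes the query vulnerable to SQL injection.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Reviewed version: a parameterized query treats the input as data, not SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
# The unsafe version returns every row for the injection payload,
# while the parameterized version correctly returns nothing.
leaked = find_user_unsafe(conn, payload)
safe = find_user_safe(conn, payload)
```

Both functions pass a casual glance and a happy-path test with a normal username; only the reviewer who knows what to look for catches the difference.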

The editors of The Kettle emphasize that AI is great at repetitive "boilerplate" tasks but fails in moments requiring a deep understanding of system architecture. Without people who can "talk" to the code and understand its structure under the hood, projects based solely on AI quickly become impossible to maintain. It is this necessity of "caring" for the code that makes the programming profession safer than press headlines might suggest.

Why you won't fire your programmers

The financial perspective often tempts managers to seek savings where AI promises speed. However, as Brandon Vigliarolo notes, reducing development teams based on belief in the capabilities of artificial intelligence is currently a strategic mistake. Companies that decide on such a step may quickly discover that the cost of fixing AI-generated errors exceeds the savings resulting from fewer positions. Experts are essential not only for writing code but, above all, for auditing and integrating it within larger ecosystems.

Modern programming using AI is becoming a form of editing. The developer shifts from the position of creator toward an editor-in-chief who must approve every "text" (code snippet) provided by their assistant. If the editor does not know the language and rules of grammar (programming logic), they will not be able to spot errors that could lead to a total system failure. Therefore, proficiency in programming languages remains a key competence, even if the physical typing of characters on the keyboard is taken over by an algorithm.
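In practice, that editorial role often comes down to writing the tests an assistant's output must pass. A hypothetical sketch: a subtly wrong "AI-generated" median function (mishandling even-length input is a typical class of such errors), caught by a reviewer's assertion:

```python
def median_ai(values):
    # Plausible AI output: sorts and takes the middle element, which
    # silently returns the wrong answer for even-length lists.
    return sorted(values)[len(values) // 2]

def median_reviewed(values):
    # Reviewed version: average the two middle elements when needed,
    # and refuse the undefined empty case explicitly.
    if not values:
        raise ValueError("median of empty sequence")
    s = sorted(values)
    mid = len(s) // 2
    if len(s) % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# The editor's test exposes the gap the "vibe" missed:
# median of [1, 2, 3, 4] is 2.5, but median_ai returns 3.
```

The point is not that the AI draft is useless; it is that only someone who knows what the correct answer should be can tell the two versions apart.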

The future of software engineering does not belong to artificial intelligence alone, but to a hybrid model of cooperation. AI will write code, but humans will have to "raise" it, debug it, and adapt it to real business needs. Companies that understand that AI is a tool that increases the capabilities of an expert, rather than a replacement for them, will gain a competitive advantage. Those who believe that an algorithm alone will carry the weight of creating complex systems will be left with code that has a great "vibe" but is not suitable for production deployment.

Source: The Register