The future of code is exciting and terrifying

Photo: The Verge AI
Programming is changing at a pace that surprises even experienced developers. Instead of writing code themselves, more and more people are managing AI agents and projects. That shift is the main topic of the latest episode of The Vergecast, in which Paul Ford describes his journey into the world of "vibe coding." Ford admits that he builds more than ever before, solves problems faster, and takes on more interesting projects, yet at the same time feels deeply ambivalent about the transformation. Tools like Claude Code make coding accessible to everyone, but this raises questions about the future of the profession and the quality of the software being created. Ford explains why you can love and hate AI at the same time: the technology opens new possibilities while stoking concerns about the industry's future.

The episode also touched on the differences between the phone market in the USA and the rest of the world. It turns out that buyers in the United States are missing out on the best cameras, found today in Xiaomi, Oppo, and Honor models, although getting them means accepting less conventional designs.
A few years ago, writing code was reserved for specialists. Today anyone, from marketers to entrepreneurs, can open Claude Code and start building. This is not an exaggeration; it is a reality that arrived faster than anyone expected. But this democratization of programming brings questions the industry is not yet ready to answer. What does code become when anyone can write it? And what becomes of programmers when they must shift from creating it to managing its creation?
Paul Ford, a writer, entrepreneur, and longtime technology thinker, uses the term "vibe coding" for this phenomenon. The term captures the new reality well: writing code based not on precise specifications but on feel, intuition, and experimentation. Ford himself is living through the transformation. He builds more than ever before, solves real problems, and takes on interesting projects, yet at the same time feels a strange ambivalence. It turns out you can love and hate AI at the same time, and you have to.
This is not merely an anecdote about one entrepreneur. It is a signal of profound changes across the entire software industry. Developers increasingly rarely sit down to write line after line of code. Instead, they spend time managing AI agents, defining directions, testing results, and making architectural decisions. Code still gets created, but its origin and creation process have undergone a fundamental transformation.
Coding for everyone — is this really democratization?
Claude Code and similar tools make programming accessible to people without traditional training. You don't need five years of college; you don't need a bootcamp. Open the app, describe what you want to build, and the AI starts writing. It sounds like the fulfillment of an old industry promise: programming for everyone.
But here comes the first complication. Access to a tool is not the same as the ability to use it effectively. Anyone can type a prompt, but can everyone judge whether the generated code is secure, efficient, and easy to maintain? Can everyone debug when something goes wrong? The history of technology shows that tools that democratize access often create a new class of users who don't fully understand what they're doing. In the case of code, this can have significant consequences.
Of course, for many people this is not a problem. If you're writing a simple automation script, a small web application, or a data analysis tool, AI can do in an hour what would traditionally take a day. That is real value. But when you start building systems that are critical to your business, that need to scale to millions of users, or that handle sensitive data — suddenly this new skill becomes insufficient.
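To make the "simple automation" case concrete, here is a sketch of the kind of small tool an AI assistant can plausibly produce in minutes rather than a day. The task, function name, and data are my own illustration, not something from the episode:

```python
# Hypothetical "one-hour" task: sum spending per category from CSV data.
# This is the scale of problem where AI-generated code delivers clear value.
import csv
import io
from collections import defaultdict

def spend_per_category(csv_text: str) -> dict:
    """Sum the 'amount' column per 'category' in CSV text."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["category"]] += float(row["amount"])
    return dict(totals)

data = """category,amount
food,12.50
travel,40.00
food,7.50
"""
print(spend_per_category(data))  # {'food': 20.0, 'travel': 40.0}
```

A script at this level is easy to verify by eye, which is exactly why the democratization argument works here and gets shakier as systems grow.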
Ford describes this as the moment when you discover that you can love this new possibility, but at the same time fear its consequences. The ability to quickly prototype and build is exciting. But the responsibility for the code you create — even if its lines were written by a machine — remains human.
From writing to managing — the new role of the programmer
The traditional image of a programmer — a person sitting in front of a computer, writing code hour after hour — is already a thing of the past for an increasing number of professionals. Today, experienced developers describe their work completely differently: managing AI agents, defining architectures, making decisions about what should be automated and what requires human input.
This is a radical change. Code still gets created, but the programmer becomes more like a conductor of an orchestra than a musician playing an instrument. They must understand what each agent does, how they communicate with each other, where bottlenecks or potential errors might be. This requires a different set of skills — less about syntax, more about systems thinking.
For many senior developers, this is a challenge. They spent years mastering specific programming languages, specific frameworks, specific optimization techniques. That knowledge still matters, but in a completely different context: it is no longer about writing the best code yourself, but about directing the process in which code is created.
At the same time, for younger developers this could be an opportunity. If you don't have to spend the first years of your career writing boilerplate and simple functions, you can move on to more interesting problems sooner. But this also means that traditional educational paths may become less relevant. Why learn all the intricacies of Python if AI can do it for you?
Code quality in an AI-generated world
This is a question that keeps engineers around the world up at night. When a human writes code, they are responsible for its quality. When AI writes code, who is responsible? The developer who accepted it? The company that created the tool? Both?
Claude and other AI models are impressive, but they are not infallible. They can generate code that looks good but has subtle security flaws. They can write functions that work for typical cases but fail on edge cases. They can create solutions that are inefficient when scaled to larger data sets. All of this requires human review, but reviewing code generated by AI is different from reviewing code written by a human.
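A toy illustration of the "works for typical cases, fails on edge cases" failure mode (my own example, not one from the episode): the first function below looks correct and passes a casual test, but a reviewer has to think to probe the empty-input case.

```python
# Plausible-looking generated code: correct for typical input.
def average(values):
    return sum(values) / len(values)

# But the edge case a reviewer must remember to test, an empty list,
# raises ZeroDivisionError. A hardened version makes the case explicit:
def safe_average(values):
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0 — fine, until someone passes []
```

Catching this class of bug is exactly the review skill that accepting generated code still demands.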
When you read code written by someone else, you try to understand their intentions, their logic, their approach to the problem. This helps in finding logical errors. But code generated by AI? It's often a black box. You know what it does, but you don't always know why it does it one way and not another. This can lead to a situation where you accept code because it works, but you don't fully understand it.
For security, this can be a problem. If no one on the team fully understands how a critical piece of code works, and a security vulnerability appears — how quickly can you fix it? How do you ensure that the fix doesn't introduce new problems?
The Polish developer industry — is it prepared?
In Poland, the IT market is growing dynamically, but it is largely based on outsourcing and teams working for foreign clients. This is a model that relies on talent availability and low labor costs. Tools like Claude Code could fundamentally change this dynamic.
On one hand, Polish companies can use these tools to increase the productivity of their teams. Instead of hiring new people, they can do more with existing resources. This could be a competitive advantage. On the other hand, if a foreign client starts using Claude Code to generate code that they previously outsourced to Polish companies — what will happen to those companies?
This is not a theoretical question. We are already seeing the first signs of this trend. Companies are starting to experiment with AI to generate code. Some are discovering that they can do internally what they previously outsourced. This does not mean that the IT industry in Poland will collapse — the industry always adapts — but it does mean that it must change.
Polish talent should focus on what AI cannot do: architecture, understanding the business, managing complex projects, mentoring. These skills will only become more valuable. But that requires a shift in thinking, from "I write code" to "I manage the process by which code is created."
Security and responsibility in the era of AI-generated code
Code generated by AI is code that must be subjected to particularly thorough security testing. Language models can learn patterns from training data, but they don't always understand the security context. They can generate code that is vulnerable to SQL injection, cross-site scripting, buffer overflow — classic vulnerabilities that every experienced programmer has learned to avoid.
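The SQL injection case can be shown in a few lines. This is a minimal sketch using Python's standard sqlite3 module; the table and queries are hypothetical, but the vulnerable pattern in the first function is exactly the kind of string interpolation a model can reproduce from training data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable pattern: attacker-controlled text becomes part of the SQL.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats `name` as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row in the table
print(find_user_safe(payload))    # []
```

Both functions behave identically on honest input, which is why this flaw survives casual testing and demands a reviewer who knows to look for it.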
This creates new challenges for security teams. Traditionally, code security review focused on finding logical errors and implementation gaps. Now it must also focus on finding errors that may be artifacts of the generation process. Did the AI model trained on GitHub code also learn bad practices? How much can we trust code whose origin is partially unknown?
Legal responsibility is another issue that still awaits resolution. If code generated by Claude causes financial loss or violates personal data — who is responsible? The developer who accepted it? Anthropic, which created Claude? The company that implemented it? The legal regulations are unclear here, and this creates risk for all parties.
Ambivalence as a new standard in tech
Paul Ford talks about being able to love and hate AI at the same time. This is not a contradiction — it is a realistic approach to the transformation that is happening in the industry. Tools like Claude Code are truly powerful and can truly change the way we work. But this change brings with it uncertainty, threats, and unanswered questions.
The excitement is justified. The ability to quickly prototype, build applications, solve problems — that is real value. But the fear is also justified. Fear of losing skills, fear of responsibility for code you don't fully understand, fear of changes in the job market, fear of the security of systems that depend on this code.
This is an ambivalence that the industry must accept. You cannot be completely optimistic or completely pessimistic. You have to be realistic — seeing both the potential and the threats, and acting based on this complete vision.
Transformations in education and talent development
If programming is changing so fundamentally, then education must change as well. Traditional bootcamps and online courses focused on learning specific programming languages may become less relevant. Instead, education should focus on skills that AI cannot easily replace — critical thinking, problem-solving, business understanding, communication.
But this does not mean that learning traditional programming becomes useless. Quite the opposite. To use tools like Claude Code effectively, you must understand what the code does, what good practice looks like, and where the pitfalls lie. Programming fundamentals become more important, not less. Only the context changes: you are learning not to write code, but to manage the process of creating it.
Universities and technical schools must adapt. They may no longer teach specific frameworks, but they must teach architecture, algorithms, security, testing. These fundamentals will be important regardless of whether you write code manually or generate it using AI.
The future of code — between promise and threat
The future of programming is truly exciting and truly terrifying. Tools like Claude Code can democratize access to software creation, can increase productivity, can allow teams to do more with fewer resources. But they can also lead to situations where code becomes less understandable, less secure, less maintainable.
The key is finding balance. AI should be a tool that empowers people, not replaces them. Developers should be trained in using these tools, but also in critically evaluating their results. Organizations should have processes that ensure code generated by AI meets the same security and quality standards as manually written code. Legal regulations should be clear about responsibility.
Paul Ford was right: you can love and hate AI at the same time. That is not weakness; it is wisdom. An industry that can hold this ambivalence, staying both optimistic and cautious, will be best positioned to benefit from the transformation while minimizing its threats. The future of code will be what we build it to be, and it is up to us whether it will be a future we love.