
Signal’s Creator Is Helping Encrypt Meta AI

Redakcja Pixelift

Photo: Wired AI

Moxie Marlinspike, creator of the Signal app renowned for its advanced encryption, is now supporting work on encrypting Meta's artificial intelligence. The collaboration was confirmed as part of a project aimed at securing AI models against unauthorized access and manipulation. Marlinspike's decision has generated industry interest, given his previous independence from major technology corporations. His involvement in Meta's AI security effort signals the growing importance of cryptography in protecting machine learning systems against attacks and intellectual property theft. The collaboration has practical significance for users: AI encryption can make it harder for attackers to reach sensitive training data and model parameters. At the same time, it raises questions about the balance between security and algorithm transparency. The initiative shows that even independent privacy experts see the need to strengthen protection of AI infrastructure at technology giants.


Moxie Marlinspike, creator of the Signal app, which for years has been a symbol of encrypted communication, has just announced a collaboration with Meta on integrating AI encryption technology into the company's platforms. This is not just an ordinary business transaction — it is a breakthrough that could fundamentally change the way millions of people communicate with artificial intelligence. The technology that Marlinspike developed for his Confer project is to be implemented in Meta's AI systems, meaning that private conversations with chatbots could finally receive the same protection that Signal provides to regular correspondence.

In an era when almost every major tech corporation collects data from user interactions with AI, Marlinspike's decision stirs mixed feelings. On one hand, it is a decisive step toward protecting privacy — a value that Marlinspike has prioritized above all else for a decade. On the other hand, it is a partnership with Meta, a corporation that has a complicated history regarding user data. However, the reality is more complex than a simple distinction between good and bad actors. Let us analyze what this collaboration really means and why it matters for anyone who will ever talk to AI.

From Signal to Meta AI: Marlinspike's extraordinary journey

Moxie Marlinspike is not a typical tech entrepreneur. For years he led Signal as a non-profit organization, funded by donors and foundations focused on civil liberties. His approach to encryption has always been idealistic — it was not about business, but about the fundamental right to privacy. When in 2022 he shocked the community by announcing that he would work on something completely new outside of Signal, many observers saw it as a betrayal of his values.

However, Marlinspike explained his vision: he wanted to work on encryption for artificial intelligence, which would become increasingly present in everyone's life. He realized that if AI becomes the main interface between people and technology — and everything suggests it will — then encrypting that communication will be as important as encrypting text messages. Thus was born the Confer project, which uses advanced cryptography to keep conversations with AI models hidden both from the models' operators and from the servers on which the models run.

The decision to collaborate with Meta was not impulsive. Marlinspike spent the last few years talking to various companies, trying to find a partner that would be interested in actually implementing encryption, not just a marketing gesture. Meta, despite its bad reputation regarding privacy, turned out to be ready for serious changes. The company has access to billions of users and a growing portfolio of AI products, which means that the integration of Marlinspike's technology could have truly global reach.

How AI encryption works according to Confer

To understand the significance of this collaboration, one must first know how AI encryption differs from traditional communication encryption. When you write a message in Signal, encryption ensures that only you and the recipient can read it. Signal servers never see the content of the message — they only see encrypted data. This is relatively simple to implement because the message is static.

Encrypting a conversation with AI is a completely different problem. The AI model must actually process your message to respond to you. How can you encrypt data in such a way that the model can operate on it without revealing it? This is the question that Marlinspike and his team found an answer to. Confer technology is based on advanced cryptographic techniques such as homomorphic encryption, a form of encryption that allows computations to be performed on encrypted data without ever decrypting it.

In practice, this means that when you send a question to Meta AI, your message is encrypted in such a way that Meta servers can run an AI model on it, but they never see the actual content. The response is also sent in an encrypted form that only you can decrypt. Even Meta employees, even in the event of a security breach, would not be able to read your conversations with AI. This is a fundamental change from today's state of affairs, where every conversation with ChatGPT, Gemini, or other chatbots is fully visible to the servers of the companies that run them.
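The principle of computing on data you cannot read can be shown with a minimal sketch. The example below uses the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, and raising a ciphertext to a power multiplies the plaintext by a constant. Confer's actual construction has not been published, so this is purely an illustration of the general idea, with toy primes and an invented "weighted score" standing in for real model computation:

```python
# Toy Paillier cryptosystem: the "server" computes a weighted score over
# encrypted inputs without ever holding the decryption key.
# Illustrative only: tiny primes, no hardening, not Confer's real scheme.
import math
import random

# --- Key generation (done by the client) ---
p, q = 1009, 1013                    # toy primes; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                            # standard choice of generator
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                 # modular inverse; valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt c with the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# --- Client encrypts its inputs and sends only ciphertexts ---
features = [3, 5, 2]
enc_features = [encrypt(x) for x in features]

# --- Server side: compute a weighted score on ciphertexts alone ---
# Enc(a) * Enc(b) mod n^2 = Enc(a + b);  Enc(a)^w mod n^2 = Enc(w * a).
weights = [4, 1, 7]
enc_score = 1
for c, w in zip(enc_features, weights):
    enc_score = (enc_score * pow(c, w, n2)) % n2

# --- Client decrypts the result ---
print(decrypt(enc_score))  # 3*4 + 5*1 + 2*7 = 31
```

Paillier only supports additions and constant multiplications; running a full neural network blind requires fully homomorphic schemes, whose far heavier arithmetic is exactly the performance cost the article describes.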

Of course, this technology has its limitations. Homomorphic encryption is computationally expensive — it can slow down AI responses. Meta will have to find a balance between privacy and performance. However, the fact that the company is willing to invest in this solution shows that it sees a future where AI privacy is not a luxury but an expectation.

Why Meta is taking this step

It may seem surprising that Meta — a corporation that makes billions from user data — now wants to encrypt interactions with AI. The answer lies in regulatory changes and public pressure. The European Union, the United States, and many other jurisdictions are beginning to introduce more rigorous regulations concerning data protection and privacy. The AI Act in Europe, growing interest from the U.S. Congress — all of this is forcing major tech companies to reshape their approaches.

Meta also has pragmatic business reasons. If competitors such as OpenAI or Google start offering encrypted conversations with AI, and Meta does not have it, it could lose users, especially those interested in privacy. Integrating Confer technology is a way for Meta to position itself as a company that takes privacy seriously — without having to completely give up access to data that it needs for other purposes.

It is also worth noting that Meta is investing enormous sums in AI. Its AI assistant will be built into practically every product the company offers — from WhatsApp to Instagram. Encrypting these interactions could be a key element that convinces users to use these tools. In a world where privacy is becoming an increasingly valuable commodity, companies that can provide it will have a competitive advantage.

Implications for Polish users and creators

For Polish users, this news has concrete significance. Meta AI will be integrated with Messenger, Instagram, and other applications that millions of Poles use daily. If Confer encryption is actually implemented, it means that conversations with AI will be protected in the same way as conversations with friends in Signal or other encrypted applications. This is particularly important in Poland, where awareness of privacy is growing, but many people still do not know how to protect their data.

For Polish creators and entrepreneurs using AI to generate content, encryption can be a double-edged sword. On one hand, their conversations with AI will be protected — which is important if they discuss sensitive business projects. On the other hand, Meta will have less data to analyze usage patterns, which could affect algorithm tuning. However, in the long-term perspective, trust in platforms that respect privacy should be more valuable for creators.

Polish regulations concerning AI and data protection will also be influenced by these changes. If Meta implements encryption for AI, the Polish regulator will have to adjust its guidelines. This could become a precedent for other companies operating in the Polish market, forcing them to take similar actions. In this sense, Marlinspike's and Meta's decision could have a domino effect across the entire European technology market.

Technical challenges and security

Although the vision of encrypted AI is attractive, implementing this technology faces significant challenges. The first is performance. Homomorphic encryption is significantly slower than ordinary calculations. If a user has to wait several minutes for a response from AI instead of a few seconds, they may give up using this feature. Meta will have to find optimizations that allow for maintaining reasonable response times.

The second challenge is implementation complexity. Integrating encryption with Meta's existing AI systems will not be simple. It will require changes to infrastructure, staff training, and testing. Security errors may appear that will require fixing. Marlinspike and his team will have to work closely with Meta engineers to ensure that encryption is properly implemented.

The third challenge is the issue of updates and maintenance. Cryptography is not static — new attacks are discovered, old methods become insufficient. Meta will have to regularly update the implementation to ensure it remains secure. This requires ongoing commitment and resources. If Meta begins to neglect security, the entire value of this undertaking will disappear.

Competition and the future of encrypted AI

Meta's decision does not happen in a vacuum. OpenAI, Google, and other AI companies are watching this situation carefully. If Meta actually implements encryption and it works well, competitors will be under pressure to do the same. OpenAI operates ChatGPT, which is used by hundreds of millions of people. Google has Gemini and billions of Gmail, YouTube, and other service users. If any of these companies start offering encrypted conversations with AI, it could become an industry standard.

On the other hand, some companies may resist this change. Data from conversations with AI is valuable to them for improving models, for research, and for advertising purposes. Encryption means losing access to that data. However, in a world where regulation is becoming increasingly rigorous, resisting encryption may prove impractical. Companies that wait too long may find themselves in a defensive position, where they are forced to implement encryption hastily and potentially carelessly.

There is also a chance that Confer itself becomes an industry standard, much as the Signal protocol became a standard for encrypted communication. If Meta successfully implements the technology, other companies may want to license it or develop their own solutions based on similar principles. This could place Marlinspike in a position where his vision of encryption for everyone becomes reality on a much larger scale.

Concerns and criticism

Not everyone is excited about this collaboration. Some privacy advocates express concerns that Marlinspike is collaborating with Meta, a company with a long history of violating user privacy. They argue that encrypting conversations with AI is just a facade if Meta still collects data from other user interactions. Even if Meta cannot read what a user says to the AI, merely knowing that the user is discussing, say, mental health is metadata the company could still exploit for advertising purposes.

Another problem is transparency. How will users know that encryption actually works? Meta will have to provide tools to verify that encryption is actually implemented and there are no backdoors. It will also have to be open to security audits conducted by independent third parties. Lack of transparency could lead to distrust, even if the technology is actually secure.

There is also the question of legal compliance. If encryption is so strong that even Meta cannot read conversations, what will happen if the law requires Meta to disclose the content of a conversation? This will be a conflict between law and technology. Marlinspike and Meta will have to find a way to deal with this situation, perhaps by developing policies that clearly specify when and how encryption can be broken.

Perspective on the future: a new standard or a passing trend?

The real question before us is whether AI encryption will become a norm or remain a niche for people interested in privacy. If Meta successfully implements Confer technology and shows that it is efficient and secure, it could change the entire industry. Users who see that they can use AI without fear of privacy violations may start demanding it from other companies.

On the other hand, if the implementation is problematic — if conversations are slow, if there are security errors, if users do not understand how it works — the entire undertaking could collapse. Meta will have to be very careful in its approach to ensure that encryption is truly seamless and reliable for millions of users.

For Marlinspike, this collaboration represents a turning point in his career. For years he worked on Signal, building a tool for a small but engaged community. Now he has a chance for his vision of encryption for everyone to become reality on a global scale. If it succeeds, it will be one of the greatest triumphs in the history of the fight for privacy in the digital world. If it fails, it could be a lesson in how difficult it is to change giant corporations, even with the best intentions.

Source: Wired AI