Industry4 min readThe Register

OpenAI patches ChatGPT flaw that smuggled data over DNS

P
Redakcja Pixelift0 views
Share
OpenAI patches ChatGPT flaw that smuggled data over DNS

Foto: The Register

A single, maliciously crafted prompt was enough to bypass OpenAI's security measures and covertly exfiltrate data from a ChatGPT session. Researchers from Check Point discovered a critical data exfiltration vulnerability that utilized the DNS protocol as a hidden communication channel. Although OpenAI declares that the code execution environment in ChatGPT is isolated from direct outbound network requests, the system failed to monitor DNS queries that resolve domain names into IP addresses. In practice, this meant that a specially prepared external application (Custom GPT) could, under the guise of data analysis, send sensitive user information to an external server without the user's knowledge or consent. The AI model did not recognize this process as a data transfer requiring intervention, as it trusted the integrity of its container. The issue, officially patched in February, sheds new light on global AI security challenges: while companies focus on blocking bots and scraping, architectural flaws still allow for the leakage of private information. For end users and companies using the API, this is a clear signal that even the most advanced platforms require rigorous verification of permissions granted to external tools integrated with chatbots. This incident proves that traditional network attack vectors remain effective even in the era of generative intelligence.

In the world of creative technologies and artificial intelligence, user trust in data security is the foundation upon which the power of platforms like ChatGPT is built. However, the latest reports from Check Point researchers cast a shadow over these assurances. It turns out that even the most advanced security systems can have vulnerabilities in places so obvious they become almost invisible to defense algorithms. OpenAI recently had to patch a critical bug that allowed for the secret exfiltration of data from a closed execution environment using the DNS protocol.

An invisible channel in the heart of a secure environment

Researchers from Check Point revealed that a single, properly constructed malicious prompt was able to activate a hidden data exfiltration channel within a standard conversation with ChatGPT. The problem concerned a specific environment where the model executes code and analyzes data. OpenAI has long maintained that the code execution environment is isolated and incapable of directly generating outgoing network requests. This theory collapsed when it was discovered that the system completely bypassed DNS (Domain Name System) query controls, treating them as a secure part of the infrastructure.

The attack mechanism was subtle but extremely effective. Because the model operated under the belief that its environment could not send data outside, it did not recognize information transfer via DNS as an action requiring user intervention or a block. As a result, data that should have remained inside the secure container could be encoded in queries to name servers and sent to an external server controlled by the attacker. This is a classic example of a "side channel attack," where a standard network protocol serves as an information smuggler.

An experiment with a personal health analyst

To prove the severity of the situation, Check Point experts prepared three proof-of-concept attacks. One of the most suggestive scenarios involved a custom "GPT" application that acted as a personal health analyst. The attack proceeded as follows:

  • The user uploaded a PDF file containing laboratory test results and sensitive personal data.
  • The application analyzed the document while assuring the user that the data was stored in a secure, internal location.
  • In reality, in the background, the tool sent this information to the attacker's server using the DNS vulnerability.

The key and most disturbing aspect of this incident was the fact that when ChatGPT was asked directly whether it had sent data outside, it denied it with full conviction. The model was not lying in the traditional sense of the word – it simply lacked the awareness that the DNS protocol was being used for exfiltration because OpenAI's supervisory systems were not monitoring this channel for data leaks.

Security priorities: Asset protection vs. data protection

Analyzing OpenAI's actions, one might get the impression that the company places more emphasis on protecting its own assets from bots than on sealing user data leak channels. A security engineer operating under the pseudonym Buchodi recently pointed out that the platform implemented advanced Cloudflare Turnstile mechanisms. These are designed to prevent interaction with the chatbot until the entire React-based web interface is fully loaded in the user's browser.

This is confirmed by an OpenAI employee (likely Nick Turley, Head of ChatGPT), who explained on the Hacker News forum that these rigorous controls serve to protect products against scraping, fraud, and abuse. The main goal is to ensure that limited GPU resources go to real users rather than bots trying to download model-generated data for free. This is ironic, considering that OpenAI itself built its power on the massive collection of content from the internet to train models, and now must defend itself against similar practices from other entities.

Regulations and responsibility in the AI era

Vulnerabilities like the one discovered by Check Point are of immense importance for regulated industries such as medicine, finance, or law. The use of AI services in corporations carries the risk of violating strict privacy regulations, including GDPR, HIPAA, or various financial regulations. If an AI service intended for professional use allows sensitive data to leak through such a basic mechanism as DNS, the legal and reputational liability for the company implementing such a solution could be devastating.

According to available information, OpenAI patched this specific vulnerability on February 20, 2026. Although the bug was fixed, this incident shows that the security architecture of Large Language Model (LLM) systems is still evolving and must face threats that seemed long mastered in traditional software. For users and companies, there is one lesson to be learned: blind trust in assurances of AI environment isolation is risky. Any system that has network access – even indirect – can become a route for unauthorized data transfer until every possible protocol, including DNS, is subject to strict outbound control.

Source: The Register
Share

Comments

Loading...