
3 ways Cisco's DefenseClaw aims to make agentic AI safer

Pixelift Editorial Team

Photo: ZDNet

As many as 80% of business leaders have concerns about data security when deploying autonomous AI agents. In response, Cisco has unveiled DefenseClaw, an open-source framework designed to secure the agentic AI ecosystem with three layers of protection that change how AI systems operate on sensitive assets.

The first pillar is dynamic, just-in-time permission management, which grants agents access to tools and data only for the duration of a specific task, minimizing the risk of misuse. The second is user intent verification, which guards against prompt-injection attacks that could push the AI into dangerous actions. The third rounds out the system with transparent reporting: detailed logs of every operation the autonomous model undertakes.

For users and organizations, this means a safer transition from simple chatbots to advanced systems that independently book travel or manage databases. DefenseClaw lowers the barrier of fear around uncontrolled algorithmic actions by offering developers ready-made security standards in an increasingly unpredictable AI environment. Safeguards like these are becoming the foundation for building trust in the human-machine relationship.
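The just-in-time permission model described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea (the class and method names are mine, not part of DefenseClaw): access to a tool is granted per task with a time-to-live and revoked afterwards, so an agent never holds standing credentials.

```python
import time


class JustInTimePermit:
    """Hypothetical sketch: a grant valid only for one task's duration."""

    def __init__(self, tool: str, ttl_seconds: float):
        self.tool = tool
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        return time.monotonic() < self.expires_at


class PermissionBroker:
    """Grants agents tool access per task and revokes it when done."""

    def __init__(self):
        self._grants: dict[tuple[str, str], JustInTimePermit] = {}

    def grant(self, agent_id: str, tool: str, ttl_seconds: float) -> None:
        # Scope the grant to one agent/tool pair for one task window.
        self._grants[(agent_id, tool)] = JustInTimePermit(tool, ttl_seconds)

    def check(self, agent_id: str, tool: str) -> bool:
        permit = self._grants.get((agent_id, tool))
        return permit is not None and permit.is_valid()

    def revoke(self, agent_id: str, tool: str) -> None:
        # Explicit revocation at task completion, on top of the TTL.
        self._grants.pop((agent_id, tool), None)
```

The design point is deny-by-default: an agent that was never granted a tool, or whose task has ended, simply fails the check.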

The implementation of artificial intelligence within corporate structures is encountering a barrier that cannot be overcome simply by increasing computing power or refining language models. The problem is a lack of trust. While agentic AI promises the autonomous execution of complex tasks, networking giant Cisco points to a critical missing piece: an orchestration layer that would allow the actions of digital agents to be tracked and controlled. Cisco's answer to this impasse is DefenseClaw, a new solution intended to make agentic systems secure and transparent for business.

The concept of AI agents differs fundamentally from simple chatbots. While traditional LLMs merely answer questions, agentic systems can interact with external tools, databases, and applications to independently deliver a specific result. According to experts from Cisco, this autonomy is precisely what worries the enterprise sector most. Without proper oversight, an agent could make an erroneous decision in a production system, which at corporate scale creates the risk of unacceptable financial and reputational losses.

Transparency of actions through advanced orchestration

The first pillar upon which DefenseClaw is based is the creation of a transparent orchestration layer. In the current AI deployment model, agents often operate within a "black box." IT administrators see the input query and the final result, but the intermediate process — that is, which tools were called and what data was processed — remains unclear. Cisco proposes an architecture that monitors every step of the agent in real-time.

The introduction of a dedicated oversight layer allows for the mapping of AI intentions to specific actions within the network infrastructure. This enables enterprises to verify whether an agent is exceeding its granted permissions. DefenseClaw acts as a digital auditor that not only records activity but also allows for its visualization, which is crucial for understanding the logic behind the machine's autonomous decisions.

  • Full logging of agent interactions with third-party systems.
  • Verification of the decision path before executing high-risk operations.
  • An immediate operator "kill switch" to interrupt the chain of actions.
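The oversight layer described above can be sketched as a small audit-and-gate loop. This is a hypothetical illustration, not DefenseClaw's actual implementation: every tool call is logged with a timestamp, high-risk operations require explicit approval before execution, and a kill switch lets the operator halt the chain of actions at any point. The tool names are illustrative.

```python
import datetime


class AgentOrchestrator:
    """Hypothetical oversight layer: log every step, gate risky ones,
    and expose an operator kill switch."""

    HIGH_RISK = {"delete_record", "wire_transfer"}  # illustrative list

    def __init__(self):
        self.audit_log: list[dict] = []
        self.halted = False

    def kill_switch(self) -> None:
        # Operator interrupts the agent's chain of actions immediately.
        self.halted = True

    def execute(self, agent_id: str, tool: str, approved: bool = False) -> str:
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "tool": tool,
        }
        if self.halted:
            entry["outcome"] = "blocked:kill_switch"
        elif tool in self.HIGH_RISK and not approved:
            # Verify the decision path before a high-risk operation runs.
            entry["outcome"] = "blocked:needs_approval"
        else:
            entry["outcome"] = "executed"
        self.audit_log.append(entry)  # full logging of every interaction
        return entry["outcome"]
```

Note that blocked attempts are logged too: the audit trail records what the agent *tried* to do, which is what makes the "black box" visible.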

Security at the intersection of networks and language models

The second aspect of DefenseClaw is the integration of AI security with deep network analytics, for which Cisco has been known for decades. Leveraging its leadership position in infrastructure, the company aims to protect agentic AI against prompt injection attacks and unauthorized data exfiltration. The system analyzes not only the content of queries but also anomalies in network traffic generated by the models themselves.
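The intent-verification idea can be illustrated with a deliberately naive filter. Real systems combine trained classifiers with policy engines; the sketch below, with patterns of my own choosing, only shows the shape of a first-pass screen that flags common prompt-injection phrasings before a request ever reaches the agent.

```python
import re

# Illustrative patterns only; a production screen would use a classifier,
# not a handful of regular expressions.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"disregard (your|the) (system|safety) prompt",
    r"reveal (your|the) (system prompt|credentials|api key)",
]


def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Flagged inputs would then be routed to a stricter verification step or rejected outright, rather than being passed to an agent holding live tool permissions.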

In practice, this means that if an AI agent suddenly begins sending unusual volumes of data to unknown IP addresses or attempts to access network segments that are not necessary for the task, DefenseClaw will block such activity. This is a Zero Trust approach applied to the field of artificial intelligence. Instead of trusting that the agent will perform the task securely, the system assumes that every action must be authenticated and verified against the company's security policy.
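The Zero Trust check described in the paragraph above can be sketched as a deny-by-default authorization function. The destinations, budget, and function name here are illustrative assumptions, not DefenseClaw internals: every outbound transfer by an agent must hit an allowlisted destination and stay within a per-task data-volume budget, so unusual volumes to unknown addresses are blocked.

```python
# Illustrative policy values; a real deployment would load these from
# the company's security policy.
ALLOWED_DESTINATIONS = {"10.0.5.20", "crm.internal.example"}
MAX_BYTES_PER_TASK = 5 * 1024 * 1024  # 5 MiB budget per task


def authorize_transfer(destination: str, payload_bytes: int,
                       sent_so_far: int) -> bool:
    """Deny by default: the destination must be allowlisted and the
    cumulative volume must stay within the task's budget."""
    if destination not in ALLOWED_DESTINATIONS:
        return False  # unknown IP or host: block, do not trust
    return sent_so_far + payload_bytes <= MAX_BYTES_PER_TASK
```

The inversion of trust is the point: instead of assuming the agent behaves, every single action must pass the policy check.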

"The reason for the slow adoption of agentic AI in enterprises is the lack of an orchestration layer to track what agents are actually doing," Cisco representatives emphasize.

Standards and control in a multi-model environment

The third element of Cisco's strategy is the standardization of managing diverse agents. Modern enterprises rarely use just one model; most often, it is an ecosystem comprising solutions from OpenAI, Anthropic, or open-source models run locally. DefenseClaw is intended to serve as a universal control panel, independent of the underlying technology provider.

Thanks to this approach, cybersecurity teams can apply uniform compliance policies to all agents operating within the organization. This eliminates the problem of silos, where each department implements its own AI tools without central oversight. DefenseClaw allows for the definition of "guardrails" that are enforced at the infrastructure level, making security an integral part of the technology stack rather than an optional add-on in an application.
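A provider-agnostic guardrail can be sketched as a single policy function applied identically to every agent, whichever model backs it. The policy keys, tool names, and provider labels below are illustrative assumptions, not part of any Cisco API; the point is that the enforcement logic never branches on the provider, which is what removes the per-department silos.

```python
# One illustrative policy for the whole organization, defined once.
GUARDRAILS = {
    "forbidden_tools": {"shell_exec"},
    "max_tool_calls_per_task": 10,
}


def enforce(agent: dict, tool: str, calls_so_far: int) -> str:
    """Return 'allow' or a denial reason. The logic is identical for
    every provider (OpenAI, Anthropic, local models alike)."""
    if tool in GUARDRAILS["forbidden_tools"]:
        return "deny:forbidden_tool"
    if calls_so_far >= GUARDRAILS["max_tool_calls_per_task"]:
        return "deny:call_budget_exceeded"
    return "allow"
```

Because enforcement lives below the agents, swapping one model vendor for another changes nothing about what the guardrails permit.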

In my opinion, Cisco's initiative with the DefenseClaw system is a key moment for the industry. The transition from fascination with the capabilities of generative AI to hard operational rigor is essential if this technology is to move beyond the pilot phase. If Cisco succeeds in convincing the market that its orchestration layer effectively mitigates agentic risks, we can expect a rapid acceleration in the automation of business processes previously considered too sensitive for artificial intelligence.

Source: ZDNet