Skip to content
← Back to insights Digital audit

CERT-FR Warns Against Autonomous AI Agents: What OpenClaw and Claude Cowork Mean for Your Security

Published on April 15, 2026
By F&P Digital Consulting
Topic Digital audit

Autonomous AI agents are no longer experimental curiosities. Tools like OpenClaw and Claude Cowork allow large language models to browse the web, execute code, manage files, and chain tasks without human approval at each step. The French national CERT (CERT-FR) has now issued a clear warning: these autonomous agents introduce serious, poorly understood risks into enterprise environments. For any organization considering or already using such tools, the question is no longer whether to pay attention but how quickly you can assess your exposure.

Why this matters beyond the hype. Traditional software operates within boundaries its developers defined. Autonomous AI agents, by design, operate with a degree of freedom that makes their behavior harder to predict and harder to audit. CERT-FR highlights several concrete concerns: uncontrolled data exfiltration when an agent accesses internal resources, prompt injection attacks that hijack agent behavior, excessive permissions granted by default, and the difficulty of tracing exactly what an agent did after the fact. These are not theoretical issues. They are architectural consequences of giving an AI system broad tool access and minimal human oversight.

A practical framework for evaluating your risk. Before deploying or tolerating any autonomous AI agent in your environment, apply these four criteria. First, scope of permissions: does the agent have access to production data, internal APIs, or credentials? If yes, treat it as a privileged user and apply the same controls. Second, auditability: can you reconstruct every action the agent took, every file it read, every external call it made? If your logging does not cover agent activity, you have a blind spot. Third, human-in-the-loop design: is there a mandatory approval step before the agent performs irreversible actions such as sending data externally, modifying configurations, or deleting resources? Fourth, supply chain trust: who built the agent framework, who controls updates, and what third-party plugins or tools does it call? Each dependency is an attack surface.

Common mistakes organizations make. The most frequent error is treating AI agents as standard SaaS tools and onboarding them through normal procurement without involving security teams. Another pitfall is granting broad permissions during a proof of concept and never revoking them. Some teams also assume that because the underlying model comes from a reputable provider, the agent layer built on top of it inherits the same trust level. It does not. The agent framework, its plugins, and its configuration are separate trust domains. A thorough digital audit of your current AI tooling and permissions is the most direct way to identify these gaps before an incident forces the conversation.

Limits to keep in mind. No framework fully solves the problem today. Autonomous agents are evolving faster than the security tooling designed to monitor them. Sandboxing helps but is not always compatible with the tasks organizations want agents to perform. Logging standards for agent activity are still immature. The CERT-FR advisory is a signal that regulators and national security bodies are watching this space closely, which means compliance expectations will tighten.

The takeaway is straightforward. Autonomous AI agents offer genuine productivity gains, but they also introduce a category of risk that most organizations have not yet accounted for in their security posture. Map what agents are active in your environment, audit their permissions, enforce human approval for sensitive actions, and monitor their behavior with the same rigor you apply to any privileged access. The CERT-FR warning is not a reason to panic. It is a reason to act now, methodically, before the gap between capability and control becomes an incident report.

/ Contact

Have a project in mind? Let's talk.

Tell us about your situation in a few lines. We will get back to you within 24 hours with an honest first read, no commitment required.

Get in touch
Link copied
Chat on WhatsApp