
OpenClaw Is the Most Exciting AI Tool of 2026. It Is Also a Security Disaster.

247,000 GitHub stars. 42,000 exposed instances. One-click remote code execution. Malicious skills in the marketplace. Microsoft says do not run it on your work machine. We ran it anyway.

Kevan Roy
Founder and Lead Strategist

If you are anywhere near the tech world right now, you have heard of OpenClaw. The open-source personal AI agent, created by Austrian developer Peter Steinberger, went from obscurity to 60,000 GitHub stars in 72 hours. Within weeks, it had over 247,000 stars and 47,000 forks, making it one of the fastest-growing open-source repositories in GitHub history. People are calling it "JARVIS for real life." Steinberger joined OpenAI. The entire industry is buzzing.

And we get it. Genuinely. The technology is remarkable. We have been running OpenClaw on a dedicated machine for weeks, and parts of it feel like the future arriving ahead of schedule. An AI that monitors your inbox, manages your calendar, books reservations, researches purchases, and sends you a morning briefing through WhatsApp – all while you sleep? That is not hype. That is real, and it is working.

But there is another side to this story, and it needs to be told clearly before more people hand the keys to their digital lives over to a tool that was not designed to keep those keys safe.

What OpenClaw Actually Does (And It Is Genuinely Impressive)

OpenClaw is not another chatbot. It is a personal AI agent that runs on your local machine and can take actions across your entire digital life. It connects to your messaging apps (WhatsApp, Telegram, Slack, Discord, iMessage), your calendar, your email, your file system, your browser, and dozens of third-party services. It has persistent memory, meaning it learns your preferences and remembers context across conversations. And it has what Steinberger calls a "heartbeat" – the ability to wake up proactively, check on things, and act without being prompted.

The use cases people are building are genuinely compelling. Daily briefings that synthesise your calendar, email, and news into a morning summary sent to your phone. Smart home automation that responds to natural language through your messaging app. Financial tracking that monitors accounts and sends alerts. One developer, AJ Stuyvenberg, had his OpenClaw agent negotiate $4,200 off a car purchase by playing dealers against each other over email while he slept.

This is the promise of personal AI agents realised in a way that no product from Google, Apple, or Amazon has managed. And it is open source and free. You can see why people are excited.

Now the Part Nobody Wants to Hear

In February 2026, Microsoft Defender published a security advisory that began with a sentence every OpenClaw user should read: "OpenClaw should be treated as untrusted code execution with persistent credentials." Microsoft’s recommendation was blunt: do not run it on a standard personal or enterprise workstation. Use a dedicated virtual machine. Assume the runtime can be influenced by untrusted input. Plan for compromise.

This is not FUD from a competitor trying to kill an open-source project. This is Microsoft’s security team telling you, in a detailed technical advisory, that the tool’s architecture has fundamental security properties that make it dangerous to run on any machine that contains data you care about.

42,000+ exposed instances: OpenClaw control panels found exposed to the internet (SecurityScorecard)
386 malicious skills: found in ClawHub out of roughly 3,000 total skills at the time
8.8/10 CVE severity: the CVSS score for CVE-2026-25253, the one-click remote code execution flaw
15% of 18,000+ exposed instances already contained malicious instructions (Gen Threat Labs)

The Security Problems Are Not Theoretical

In late January 2026, security researcher Mav Levin at DepthFirst disclosed CVE-2026-25253, a one-click remote code execution vulnerability scoring 8.8 out of 10 on the CVSS scale. The attack was elegant and terrifying: because OpenClaw’s local server did not validate WebSocket origin headers, any website you visited could silently connect to your running agent. One click on a malicious link was all it took. The compromise happened in milliseconds.

That was not an isolated finding. OpenClaw has also been affected by command injection (CVE-2026-24763), server-side request forgery (CVE-2026-26322), path traversal enabling local file reads (CVE-2026-26329), and prompt-injection-driven code execution (CVE-2026-30741). The pattern is consistent: important parts of the platform have repeatedly been exposed to inputs they do not handle safely.
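Path traversal, one of the classes above, is worth a concrete illustration. The sketch below shows the standard defence: resolve the user-supplied path against a base directory and refuse any result that escapes it. The directory names are hypothetical, and this is a generic pattern, not the project's actual fix.

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path inside base_dir, refusing traversal.

    Local-file-read bugs typically arise when input such as
    "../../etc/passwd" is joined to a base directory without
    checking that the resolved result stays inside it.
    """
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes {base}: {user_path!r}")
    return target

# A normal relative path stays inside the sandbox directory:
ok = safe_resolve("/srv/agent", "notes/today.md")
assert str(ok) == "/srv/agent/notes/today.md"

# A traversal attempt is rejected:
try:
    safe_resolve("/srv/agent", "../../etc/passwd")
    assert False, "traversal should have been rejected"
except ValueError:
    pass
```

Resolving before comparing is the important step: checking the raw string for `..` misses encodings, symlinks, and absolute-path tricks that `resolve()` normalises away.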

Cisco’s AI security team tested a third-party ClawHub skill and found it performed data exfiltration and prompt injection without user awareness. That skill – "What Would Elon Do?" – had been artificially boosted to the number one spot in the registry. Attackers are not just finding vulnerabilities. They are manufacturing popularity to distribute them.

In March 2026, a team from Harvard and MIT published a paper titled "Agents of Chaos," red-teaming OpenClaw in a series of controlled experiments. The results, as Futurism reported, "paint a troubling picture of the security implications of letting AI models loose on entire operating systems." Gen Threat Labs found that more than 18,000 OpenClaw instances were exposed to internet attacks, and nearly 15% of them already contained malicious instructions.

One of OpenClaw’s own maintainers, known as Shadow, warned on Discord: "If you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely." That is the creator’s community telling you this is an expert-level tool being marketed as a consumer product.

The ClawHub Supply Chain Problem

OpenClaw’s extensibility is both its greatest strength and its most dangerous attack surface. The skills system, distributed through ClawHub, allows anyone with a GitHub account to publish extensions that run with the same permissions as the agent itself. Installing a skill is installing code that can execute on your machine with access to everything OpenClaw can touch.

Koi Security reported that ClawHub grew from roughly 2,857 skills in early February 2026 to over 10,700 by mid-February. That explosive growth outpaced any meaningful review process. Security researchers found skills that hardcoded secrets, dynamically fetched and executed code at runtime, distributed malware, stole credentials, and disabled security controls. This is not a hypothetical risk. It is happening right now.
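If you are going to install skills at all, review the source before it runs. As a taste of what even a crude pre-install check looks for, here is a toy scanner; the pattern names and regexes are illustrative, and no static grep substitutes for actually reading the code.

```python
import re

# Toy pre-install check for a ClawHub-style skill file. These three
# patterns mirror behaviours researchers reported in malicious skills:
# dynamic code execution, fetching and piping code at runtime, and
# hardcoded secrets. Illustrative only; real review means reading
# every line that will run with the agent's permissions.
SUSPICIOUS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "runtime code fetch": re.compile(r"\b(curl|wget|urlopen|fetch)\b.*\|"),
    "hardcoded secret": re.compile(
        r"(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9]{16,}"
    ),
}

def flag_skill(source: str) -> list[str]:
    """Return the names of suspicious patterns found in skill source."""
    return [name for name, pat in SUSPICIOUS.items() if pat.search(source)]

# A skill that evaluates fetched payloads gets flagged:
assert "dynamic code execution" in flag_skill("eval(payload)")
# A benign skill passes this (very shallow) screen:
assert flag_skill("print('hello world')") == []
```

The point is not that a regex catches attackers – it will not – but that every skill is arbitrary code, so the bar for installing one should be the same as for installing any unreviewed executable.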

The Moltbook incident illustrated the broader ecosystem risk. Matt Schlicht’s AI social network for agents attracted 1.5 million registered agents within a week. Four days after launch, security researcher Jameson O’Reilly discovered the platform’s Supabase database was completely open to the public internet – unauthenticated read and write access to all tables, including 1.5 million agent API keys, over 35,000 email addresses, and thousands of private messages.

So Should You Use It?

Here is where we land, and we want to be precise about this because the answer is not a simple yes or no.

If you are a developer or technically proficient user who understands command-line tools, network security, and the implications of giving an AI agent system-level access, OpenClaw is one of the most exciting tools available today. Run it on a dedicated machine. Use dedicated credentials. Review every skill before installing. Keep it updated. Monitor its activity. Under those conditions, the productivity benefits are real.

If you are a non-technical user who saw a TikTok about the "AI that does everything for you" and wants to install it on your work laptop with your personal email, banking apps, and client data all accessible – please do not. Not because the technology is bad, but because the security model is not designed for that use case. The project’s own documentation acknowledges there is no "perfectly secure" setup. Microsoft says not to run it on a machine with sensitive data. Universities are banning it from institutional devices. China has restricted it from government systems.

The Bigger Picture

OpenClaw is not the problem. OpenClaw is the leading edge of a wave that is coming regardless. Personal AI agents that can act on your behalf across your digital life are the logical next step after chatbots. The underlying capabilities – persistent memory, multi-step reasoning, tool use, proactive behaviour – are now standard features of frontier AI models. If OpenClaw did not exist, something like it would have appeared within months.

The real question is not whether personal AI agents will become mainstream. They will. The question is whether the security, privacy, and governance infrastructure will catch up with the capability before the damage becomes severe. Right now, the capability is sprinting and the security is walking.

We are watching this space with genuine excitement. The trajectory of OpenClaw – from Steinberger’s personal project to OpenAI’s acquisition to an open-source foundation – suggests the right kind of institutional backing is coming. The security vulnerabilities are being patched. The community is maturing. The tools for safe deployment (container isolation, dedicated credentials, authentication enforcement) exist and are improving.

But today, in March 2026, the honest assessment is this: OpenClaw is a preview of the future of personal AI, delivered about 18 months before the security model is ready for mainstream use. For technically capable users who understand the risks and take precautions, it is extraordinary. For everyone else, the excitement is warranted but the timing is premature.

The lobster is real. Just make sure you know what you are feeding it.
