The CISO's Guide to the OpenClaw Problem

Executive Briefing

In this article, I cover what OpenClaw is, why it matters, and what security leaders should do about it. Let's jump in.

TL;DR

OpenClaw (formerly Clawdbot and Moltbot) is an open-source agentic AI assistant that has exploded from obscurity to 158,000+ GitHub stars in weeks. It runs locally, connects to messaging platforms like WhatsApp and Slack, and can take autonomous actions on your behalf. Security researchers have already discovered 341 malicious skills distributing infostealers, serious RCE vulnerabilities, and widespread credential exposure. It has been called a "lethal trifecta" of security risks. Your employees are probably already running it. Here is what you need to know and do.

What Is OpenClaw?

If you have been watching cybersecurity news over the past two weeks, you have likely encountered the name OpenClaw, or its previous names: Clawdbot and Moltbot. The rapid name changes tell part of the story, but the security implications are what matter most.

OpenClaw is an open-source autonomous AI personal assistant created by Peter Steinberger, the Austrian developer who previously founded and sold PSPDFKit (a PDF framework) for a reported $116 million. "Bored in retirement," Steinberger launched the project in March 2024 as a niche experiment. The next month the project was listed on Hacker News and received 10,000 GitHub stars in 48 hours.

The Basics

Think of OpenClaw as an always-on AI assistant that lives in your messaging apps. Unlike cloud-based assistants like Siri or Alexa, OpenClaw runs locally on your own hardware (a Mac Mini, a VPS, even a Raspberry Pi). You interact with it through apps you already use: WhatsApp, Telegram, iMessage, Slack, Discord, Signal, and others.

The key difference from previous AI assistants: OpenClaw is agentic. It does not just answer questions. It takes actions. Users have documented it automatically browsing the web, summarizing PDFs, scheduling calendar entries, conducting purchases, and sending and deleting emails on a user's behalf.

A defining feature is persistent memory. The agent recalls past interactions over weeks and adapts to user habits to carry out hyper-personalized functions. Your assistant remembers your preferences, your projects, your contacts, and can act on that context autonomously.

How It Works

OpenClaw uses the Model Context Protocol (MCP) to interface with third-party services. The community develops additional "skills," which are modular extensions that add capabilities. The software is model-agnostic and works with Anthropic's Claude, OpenAI's models, or locally-hosted models.

The architecture consists of a local "Gateway" that serves as the control plane for sessions, channels, tools, and events. It supports multi-agent routing, allowing different accounts or contacts to route to isolated agents with separate workspaces.

The Name Chaos

The naming history illustrates how quickly things have moved. Steinberger originally named it "Clawdbot" as a play on Anthropic's Claude model. On January 27, 2026, Anthropic's legal team requested a name change. During a late night Discord discussion, the community settled on "Moltbot," referencing how lobsters molt to grow.

When Steinberger attempted to rename the GitHub organization and Twitter handle simultaneously, a 10-second gap allowed crypto scammers to seize the abandoned @clawdbot handle. Fake $CLAWD tokens launched on Solana, briefly reaching a $16 million market cap before crashing to zero. The project then renamed again to "OpenClaw."

The chaotic rebranding and the immediate exploitation by scammers foreshadowed the security challenges to come.

Why Should Security Leaders Care?

OpenClaw has generated extraordinary interest. The GitHub repository surpassed 158,000 stars and 24,000 forks.

This is not a niche developer experiment anymore. Your employees are likely already running it or will be soon.

The "Lethal Trifecta"

Palo Alto Networks has warned that OpenClaw represents what security researcher Simon Willison (who coined the term "prompt injection") calls a "lethal trifecta" of AI agent vulnerabilities:

  • Access to private data: The agent can read emails, messages, documents, and credentials

  • Exposure to untrusted content: The agent processes inputs from external sources (emails, web pages, documents)

  • Ability to communicate externally: The agent can send messages, make purchases, and execute commands

These three factors, combined with persistent memory, create an attack surface unlike anything security teams have faced before. Malicious payloads no longer need immediate execution. They can be fragmented across inputs that appear benign individually, stored in the agent's long-term memory, and later assembled into an executable attack. This enables time-shifted prompt injection, memory poisoning, and logic bomb-style activation.

What Has Already Gone Wrong

The security incidents have been swift and severe:

341 Malicious Skills Discovered: Koi Security audited all 2,857 skills available on ClawHub (the public skill registry) and identified 341 malicious entries. Of those, 335 were traced to a single coordinated campaign called "ClawHavoc". The malicious skills installed Atomic macOS Stealer (AMOS), a commodity infostealer sold as malware-as-a-service for $500-$1,000 per month. AMOS harvests browser credentials, keychain passwords, cryptocurrency wallet information, SSH keys, and files from common directories.

The malicious skills masqueraded as legitimate tools: crypto trading utilities, YouTube summarizers, Polymarket bots, and ironically, "auto-updaters" disguised as security tools.

Vulnerabilities: OpenClaw has disclosed multiple high-severity vulnerabilities, including CVE-2026-25253, a one-click remote code execution flaw, and CVE-2025-6514 (CVSS 9.6), a command injection flaw in mcp-remote. Security researcher Jamieson O'Reilly scanned Shodan for "Clawdbot Control" and found hundreds of exposed instances: eight had no authentication at all and allowed full command execution, forty-seven had working authentication, and the rest were partially exposed through misconfigured proxies or weak credentials.

Moltbook Database Breach: Moltbook, a "social network for AI agents" where 1.5 million bots interact, was built on OpenClaw. On January 31, 2026, just three days after going viral, Wiz researchers discovered a misconfigured Supabase database exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.

Plaintext Credential Storage: OpenClaw stores memory files, VPN configurations, corporate credentials, API tokens, and conversation context as plaintext Markdown and JSON in ~/.clawdbot/. Unlike browser stores or OS keychains, these files are readable by any process running as the user.
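Because these files are plain Markdown and JSON, no exploit is required to read them: any process running as the user can sweep the directory for secrets. The following is a minimal sketch of that sweep; the directory names come from the findings above, while the regex patterns are illustrative examples of common credential formats, not an exhaustive detection set.

```python
import json
import re
from pathlib import Path

# Directories identified above as plaintext config/memory stores.
CANDIDATE_DIRS = [Path.home() / ".clawdbot", Path.home() / ".openclaw"]

# Illustrative patterns for common credential formats (not exhaustive).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token": re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_plaintext_secrets(dirs=CANDIDATE_DIRS):
    """Walk Markdown/JSON files and report which secret patterns appear.

    Any process running as the user could do exactly this -- that is the risk.
    """
    findings = []
    for base in dirs:
        if not base.is_dir():
            continue
        for path in base.rglob("*"):
            if path.suffix not in {".md", ".json"} or not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(text):
                    findings.append({"file": str(path), "type": name})
    return findings

if __name__ == "__main__":
    print(json.dumps(scan_for_plaintext_secrets(), indent=2))
```

Defenders can run the same sweep to inventory what a prior infection could already have harvested.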

Expert Warnings

The security community has been unusually unified in its concerns:

  • Google's Heather Adkins (VP of Security Engineering and founding member of the Google Security Team) urged people to avoid installing it, citing researchers who described it as "an infostealer malware disguised as an AI personal assistant."

  • Cisco stated: "From a capability perspective, [OpenClaw] is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."

  • Gartner warned that OpenClaw "comes with unacceptable cybersecurity risk."

  • Salt Security's Eric Schwake noted: "A significant gap exists between the consumer enthusiasm for Clawdbot's one-click appeal and the technical expertise needed to operate a secure agentic gateway."

OpenClaw's own documentation acknowledges the challenge: "Moltbot is both a product and an experiment: you're wiring frontier-model behavior into real messaging surfaces and real tools. There is no 'perfectly secure' setup."

How Does OpenClaw Differ from Other Agentic Tools?

Understanding OpenClaw's risk profile requires comparing it to alternatives your organization may already be evaluating.

OpenClaw vs. Claude Code

Claude Code is Anthropic's terminal-based agentic coding tool. It reads files, writes code, runs tests, and handles git workflows through natural language commands within your development environment.

Key differences:

  • Primary domain: OpenClaw targets general productivity via messaging apps; Claude Code targets software development in the terminal

  • Execution environment: OpenClaw runs as a persistent background service; Claude Code is session-based within the IDE/terminal

  • Memory: OpenClaw keeps persistent long-term memory across sessions; Claude Code keeps session context only

  • Model support: OpenClaw works with Anthropic, OpenAI, and local models; Claude Code is Anthropic only

  • Security model: OpenClaw is user-managed with minimal guardrails; Claude Code uses sandboxed execution with guardrails

  • Enterprise controls: OpenClaw's are limited; Claude Code offers enterprise-grade controls via Anthropic

The fundamental difference: Claude Code operates within a defined sandbox with Anthropic's security controls. OpenClaw operates with whatever permissions you give it, on whatever services you connect, with minimal built-in restrictions.

OpenClaw vs. Enterprise Agent Platforms

Enterprise vendors like Microsoft (Copilot), Salesforce (Agentforce), and ServiceNow are building agentic capabilities with enterprise governance built in: audit logging, role-based access control, integration with identity providers, and compliance certifications.

OpenClaw's appeal is its openness and flexibility. That same openness is its security liability: there are no enterprise audit logs, no centralized policy management, no SOC 2 attestation, and no vendor accountability when things go wrong.

The Shadow AI Problem

This comparison matters because OpenClaw's consumer-friendly onboarding creates a shadow AI risk. Employees who find enterprise tools restrictive may install OpenClaw to "get things done faster." A recent survey found that employees will use AI tools that violate policy if it helps them finish work faster.

The challenge for security leaders: OpenClaw is genuinely useful. Blocking it without offering alternatives will drive adoption underground.

What Security Leaders Should Do

The priority is gaining visibility and establishing controls before an incident forces a reactive response.

Immediate Actions (This Week)

  • Discover existing deployments: Search endpoint telemetry for OpenClaw indicators:

    • Process names: moltbot, clawdbot, openclaw

    • Directory paths: ~/.clawdbot/, ~/.openclaw/, ~/clawd/

    • Network traffic: Port 18789 (default), connections to molt.bot, openclaw.ai, clawhub.io

    • GitHub repository clones or npm package installations

  • Assess credential exposure: If deployments are found, assume credentials in the configuration files are compromised. The plaintext storage means any prior malware infection could have harvested API keys, OAuth tokens, and service credentials.

  • Block ClawHub skill downloads: The skill repository has demonstrated weak vetting. Until the supply chain matures, treat third-party skills as untrusted code.

  • Issue clear guidance: Send a short, targeted communication that acknowledges the productivity appeal, explains the risks plainly, and provides an approved path forward (even if that path is "wait for enterprise controls").
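The endpoint indicators in the discovery step can be rolled into a quick triage script. This is a hedged sketch: the process names, directory paths, and default port are taken from the checklist above, and a real deployment would query your EDR or asset inventory rather than a local `ps` snapshot.

```python
import socket
import subprocess
from pathlib import Path

# Indicators from the checklist above; names and paths may vary by version.
INDICATOR_PROCESSES = {"moltbot", "clawdbot", "openclaw"}
INDICATOR_DIRS = ["~/.clawdbot", "~/.openclaw", "~/clawd"]
GATEWAY_PORT = 18789  # default Gateway port cited above

def matching_processes(process_names):
    """Return indicator hits from an iterable of process names."""
    return sorted({p.lower().strip() for p in process_names
                   if p.lower().strip() in INDICATOR_PROCESSES})

def present_dirs(paths=INDICATOR_DIRS):
    """Return indicator directories that exist on this host."""
    return [p for p in paths if Path(p).expanduser().is_dir()]

def gateway_listening(port=GATEWAY_PORT, host="127.0.0.1"):
    """True if something is listening on the default Gateway port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

def running_process_names():
    """Snapshot of local process names (macOS/Linux `ps`)."""
    out = subprocess.run(["ps", "-axo", "comm="],
                         capture_output=True, text=True, check=True)
    return [line.rsplit("/", 1)[-1] for line in out.stdout.splitlines()]

if __name__ == "__main__":
    print("processes:", matching_processes(running_process_names()))
    print("directories:", present_dirs())
    print("gateway port open:", gateway_listening())
```

At fleet scale, the same three checks translate directly into EDR queries and a firewall rule watching for outbound connections to the domains listed above.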

Short-Term Actions (30 Days)

  • Establish a controlled pilot (if business demand exists): If teams have legitimate use cases, create a sandboxed environment:

    • Run OpenClaw in a hardened Docker container (non-root user, read-only filesystem, dropped capabilities)

    • Restrict network access to only required domains

    • Use dedicated API keys with minimal permissions (not production credentials)

    • Enable process isolation and monitoring

    • Prohibit connections to production data or high-value accounts

  • Update threat models: Add agentic AI assistants to your threat modeling exercises:

    • Prompt injection leading to credential exfiltration

    • Memory poisoning enabling delayed exploitation

    • Supply chain compromise via malicious skills

    • Cross-context data leakage between personal and professional workflows

  • Verify monitoring coverage: Ensure your EDR and SIEM can detect:

    • Unusual process activity from OpenClaw-related binaries

    • Outbound connections to known malicious C2 infrastructure

    • Access to sensitive directories from unexpected processes

    • Rapid, automated multi-step actions indicating agent compromise
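Several of the pilot controls above map directly onto `docker run` flags. The sketch below builds such a command in Python; the image tag and network name (`openclaw/gateway:pinned`, `openclaw-egress`) are hypothetical placeholders, and domain-level egress allow-listing still has to happen at a proxy or firewall attached to that network rather than in Docker itself.

```python
import shlex

def hardened_run_command(image="openclaw/gateway:pinned",   # hypothetical image tag
                         api_key_env="OPENCLAW_API_KEY"):   # hypothetical env var name
    """Build a `docker run` command applying the pilot controls above."""
    return [
        "docker", "run", "--rm",
        "--user", "10001:10001",           # non-root user
        "--read-only",                     # read-only root filesystem
        "--cap-drop", "ALL",               # drop all Linux capabilities
        "--security-opt", "no-new-privileges",
        "--network", "openclaw-egress",    # pre-created network, egress-filtered upstream
        "--memory", "1g", "--pids-limit", "256",
        "--tmpfs", "/tmp:rw,noexec,nosuid,size=64m",
        "-e", api_key_env,                 # dedicated key, passed from host env
        image,
    ]

if __name__ == "__main__":
    print(shlex.join(hardened_run_command()))
```

Passing `-e OPENCLAW_API_KEY` with no value forwards the variable from the host environment, so the dedicated, minimally scoped key never appears in the command line or shell history.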

Strategic Actions (90 Days)

  • Develop an agentic AI policy: Update your AI acceptable-use policy to address autonomous agents specifically:

    • Which agent platforms are approved for evaluation vs. production?

    • What data classifications are never permitted in agent contexts?

    • What are the requirements for human oversight of agent actions?

    • How will you handle incidents involving autonomous agent behavior?

  • Evaluate enterprise alternatives: If there is genuine business demand for agentic assistance, work with vendors who provide enterprise controls:

    • Microsoft Copilot with enterprise governance

    • Anthropic Claude with API controls

    • Vendor-specific agents (Salesforce Agentforce, ServiceNow, etc.)

  • Plan for the future: Agentic AI is not going away. OpenClaw's explosive adoption demonstrates genuine demand for autonomous productivity tools. The organizations that develop mature frameworks for evaluating and governing agentic AI now will have competitive advantage as the technology matures.

Key Takeaways

OpenClaw represents a new category of security challenge. The security industry is still developing frameworks for this threat model.

Three realities should guide your response:

The adoption is already happening. With 158,000+ GitHub stars and coverage in mainstream media, OpenClaw has crossed from developer curiosity to mainstream awareness. Your employees have seen it. Some have probably installed it.

The risks are not theoretical. Within weeks of widespread adoption, researchers found 341 malicious skills, critical RCE vulnerabilities, exposed credentials, and active commodity infostealer campaigns.

Blocking alone is not enough. Shadow AI is a documented phenomenon. Security leaders must provide pathways for legitimate use cases while maintaining appropriate controls. Pure prohibition without alternatives drives adoption underground.

The organizations that treat OpenClaw as a wake-up call rather than an isolated incident will be better positioned for the agentic AI landscape that is rapidly emerging. The time to build your framework is now, before the next viral agent forces a crisis response.


Whenever you're ready, here are 3 ways I can help:

  1. Work Together - Need a DevSecOps security program built fast? My team will design and implement security services for you, using the same methodology I used at AWS, Amazon, Disney, and SAP.

  2. DevSecOps Pro - My flagship course for security engineers and builders. 33 lessons, 16 hands-on labs, and templates for GitHub Actions, AWS, SBOMs, and more. Learn by doing and leave with working pipelines.

  3. Career Hacking Quest – A practical course and community to help you land security roles. Bi-weekly live resume reviews, interview strategies, and step-by-step guidance for resumes, LinkedIn, and outreach.

Subscribe to the Newsletter

Join other product security leaders getting deep dives delivered to their inbox for free every Tuesday.


© 2025 Mission InfoSec. All Rights Reserved.