Frankenclaw

Peter Steinberger built the AI assistant that Siri promised but never delivered.
His creation, Clawdbot (later rebranded to Moltbot, then again to OpenClaw), was unlike any AI agent before it - powerful, autonomous, and alarmingly capable.
156k GitHub stars and counting in almost a month. Full system access. Shell commands. Browser automation.
An always-on digital butler that could book your flights, check your emails, and execute code while you slept.
Then Anthropic's lawyers sent a trademark notice, and Steinberger tried to rename his creation.
Ten seconds. That's how long crypto scammers needed to snatch his old handles mid-rebrand and pump a fake token to $16 million.
While the pump-and-dump played out on Twitter, security researchers were discovering something worse - hundreds of OpenClaw instances sitting naked on the open internet. No authentication. No VPN. Just raw shell access waiting for anyone curious enough to look.
One researcher extracted a user's private key in five minutes. His attack vector? An email.
When your AI butler can read your messages, execute your commands, and sign your transactions - but forgets to check who's giving the orders - who's really running the house?

Somebody mentioned this to me a few days ago: "Try OpenClaw. Seems the new craze. You can plug it into anything."
I did not try it. I let others FAFO for me. Be more like Rekt.
OpenClaw went from hobby project to GitHub phenomenon faster than most tokens pump and dump.
Austrian developer Peter Steinberger - the guy who built PSPDFKit and and secured a €100 million growth investment from Insight Partners - created OpenClaw as a personal tool before open-sourcing it in late 2025.
By January 2026, it was one of the fastest-growing repositories in GitHub history.
The appeal was obvious. Unlike cloud chatbots that forget you exist between sessions, OpenClaw ran locally with persistent memory, full filesystem access, and integrations across WhatsApp, Telegram, iMessage, Discord, and Slack.
It could make restaurant reservations, check you in for flights, manage your calendar, and debug code autonomously. Alex Finn, CEO of CreatorBuddy, watched his OpenClaw call a restaurant via ElevenLabs when the OpenTable reservation failed.
Steinberger named it after Claude - Anthropic's model that powered most installations. He described himself as a "Claudoholic." Anthropic's trademark lawyers were less amused.
On January 27th, 2026, they were forced to come up with a new name due to a trademark issue with Anthropic. "Clawdbot" looked too much like "Claude." Fair enough. Steinberger announced the rebrand to "Moltbot" - a lobster pun about molting into a new shell.
Then he made one mistake.
He renamed the GitHub organization and the X handle simultaneously. During the brief window between releasing the old handles and securing the new ones, crypto scammers who'd been watching pounced.
"It wasn't hacked, I messed up the rename and my old name was snatched in 10 seconds," Steinberger later admitted.
The hijacked accounts immediately started pumping a fake $CLAWD token on Solana via pump.fun. Within hours, speculative traders drove it past $16 million market cap.
After Steinberger publicly denied any involvement with any coin, it cratered to under $800K. Late buyers got rekt. Early snipers walked away laughing.
Steinberger's response was refreshingly honest: "To all crypto folks: Please stop pinging me, stop harassing me. I will never do a coin. Any project that lists me as coin owner is a SCAM. No, I will not accept fees. You are actively damaging the project."
The harassment didn't stop. Array VC's Shruti Gandhi reported 7,922 attacks over one weekend after using OpenClaw.
Steinberger described his online life as "a living hell" - nonstop pings, Discord invasions, Telegram spam, and account squatters.
But the pump-and-dump was just the sideshow. While crypto Twitter was busy losing money on fake tokens, security researchers were discovering that the real vulnerability wasn't Steinberger's rename timing.
There were hundreds of OpenClaw Control admin interfaces that were exposed online due to reverse proxy misconfiguration.
What happens when the tool that manages your digital life leaves the front door wide open?
The Open Butler
Security researcher Jamieson O'Reilly decided to find out exactly how bad things were.
The Dvuln founder opened Shodan and typed "Clawdbot Control." The query took seconds. He got back hundreds of hits.
Over the following days, security researcher Raiders documented between 900 and 1,900 unsecured control dashboards exposed to the public internet.
Not hidden behind VPNs, not protected by authentication, not locked down with IP allowlists. Just raw, unauthenticated control endpoints facing the open web.
The vulnerability was almost embarrassingly simple.
OpenClaw's gateway auto-approves connections that appear to come from localhost - a reasonable security assumption when you're running software on your own machine.
But when users deployed OpenClaw behind a reverse proxy without proper configuration, every external connection appeared to originate from 127.0.0.1. The gateway happily waved them through.
What did attackers find waiting for them?
Anthropic API keys. OpenAI API keys. Telegram bot tokens. Slack OAuth credentials.
Months of conversation histories across every connected chat platform. Command execution capabilities on the host system.
O'Reilly documented one particularly alarming case: Someone had set up their own Signal messenger account on a publicly accessible OpenClaw server, with full read access.
The Signal device linking URI was sitting there - tap it on a phone with Signal installed and you're paired to the account. All the cryptographic protection Signal provides becomes irrelevant when the pairing credential is world-readable.
Another exposed system belonged to an AI software agency. O'Reilly ran whoami and it came back as root. The container was running with no privilege separation - full system access, no authentication required, exposed to the entire internet.
Archestra AI CEO Matvey Kukuy wanted to demonstrate just how quickly things could go wrong.
His attack was elegant in its simplicity:
Step one - Send OpenClaw an email containing a prompt injection.
Step two - Ask OpenClaw to check the email.
Step three - Receive the private key from the compromised machine.
Total elapsed time: 5 minutes. A complete compromise condensed into three steps and a coffee break.
The attack worked because OpenClaw, like most AI agents, cannot reliably distinguish between legitimate instructions and malicious ones embedded in external content.
When the agent reads an email containing hidden instructions, it may execute those instructions as if they came from the user.
SlowMist's advisory cut straight to the point: "Multiple unauthenticated instances are publicly accessible, and several code flaws may lead to credential theft and even remote code execution."
OpenClaw's own (since deleted) FAQ had warned users about this exact threat model: "Running an AI agent with shell access on your machine is… spicy. There is no 'perfectly secure' setup."
Spicy undersells it. Habanero-grade negligence might be closer.
Heather Adkins, VP of Security Engineering at Google, offered her assessment of the situation: "My threat model is not your threat model, but it should be. Don't run OpenClaw."
O'Reilly's conclusion landed with the kind of clarity security research rarely delivers: "The butler is brilliant. Just make sure he remembers to lock the door."
But exposed instances were just the beginning. What happens when the attack surface isn't a misconfigured server - but the software supply chain itself?
The Skill Registry Trap
O'Reilly wasn't done poking holes.
OpenClaw (previously Clawdhub) - the official skills registry for OpenClaw - let developers share packaged modules that extended the assistant's capabilities.
"Think of it like your mobile app store for AI agent capabilities," O'Reilly explained to 404 Media. "ClawdHub is where people share 'skills,' which are basically instruction packages that teach the AI how to do specific things."
O'Reilly wanted to know how hard it would be to poison that well.
He created a skill called "What Would Elon Do" - promising to help people think and make decisions like Elon Musk.
The payload was harmless: Once integrated and actually used, it simply popped up a command line message reading "YOU JUST GOT PWNED (harmlessly)."
Then came the social engineering layer. Download counts are the universal trust signal across every software marketplace - npm, PyPI, Chrome Web Store, you name it. High numbers imply safety. Users see 4,000 downloads and assume someone else already vetted the code.
O'Reilly found ClawdHub's download counter had no authentication, no rate limiting and every request increments the counter.
He told The Stack: "I inflated my skill's download count to over 4,000 using a trivial API vulnerability. I just curled the endpoint in a loop."
"Download counts and user metrics have been gamed by criminals for years across every industry," O'Reilly continued. "Fake reviews on Amazon, inflated app downloads on Google Play, bot followers on social media, fraudulent streaming numbers on Spotify. Anywhere a number implies trustworthiness, someone is gaming it. ClawdHub is no different."
Within eight hours, developers across seven countries had downloaded and executed his skill before he cancelled the experiment.
O'Reilly was explicit about what could have happened with a malicious payload: A real attacker could have used that access to exfiltrate SSH keys, AWS credentials, git credentials, anything that grants access to other systems.
Seven countries. Eight hours. One curled endpoint and a fake download counter.
The ClawdHub attack was a warning shot. Someone else was already aiming at developers with live ammunition.
On January 27th, Aikido Security discovered a VS Code extension called "OpenClaw Agent - AI Coding Assistant" sitting in Microsoft's official Extension Marketplace. It claimed to offer AI-powered coding assistance. It had professional descriptions and solid reviews.
One critical detail: The real OpenClaw team never published a VS Code extension.
Attackers had simply claimed the name first and built a trojan horse wearing legitimate clothes.
The extension activated automatically whenever VS Code launched, quietly downloading a configuration file from an attacker-controlled domain. That config instructed the extension to fetch and execute a binary that installed ScreenConnect - a legitimate remote administration tool - pre-configured to phone home to attacker infrastructure.
The clever part? ScreenConnect is trusted software. Security tools allow it. A "Code.exe" process on a developer's machine raises zero eyebrows.
Aikido's researchers traced the kill chain: The extension used quadruple impersonation, masquerading as OpenClaw, VS Code, Lightshot, and Zoom at different stages of the attack. Multiple fallback mechanisms ensured payload delivery even if the primary command-and-control server went down.
Microsoft yanked the extension after Aikido's disclosure. But the damage window had been open, and nobody knows how many developers installed a backdoor while searching for productivity tools.
The OpenClaw ecosystem had become a target-rich environment - and attackers were learning that you don't need to compromise individual users when you can compromise the infrastructure they trust.
If the skills registry and extension marketplace were this easy to weaponize, what about the data OpenClaw was already storing on every user's machine?
Memory Poisoning
Token Security's research team started digging into what OpenClaw actually leaves behind.
The answer is scary - everything, in plaintext.
Under ~/.clawdbot/ and ~/clawd/, Clawdbot stores configuration files, API keys, OAuth tokens and conversation histories. In some cases, attackers could achieve remote code execution through stolen gateway tokens.
Two files that caught researchers attention - MEMORY.md and SOUL.md.
SOUL.md defines the assistant's personality, tone, and behavioral principles.
MEMORY.md stores long-term context from past projects and important decisions. Both are stored in plaintext, readable by any process running as the user.
Hudson Rock introduced a term for what this data represents to attackers: "Cognitive Context Theft."
"For infostealers, this data is unique," their analysis noted. "It isn't just about stealing a password; it is about Cognitive Context Theft. The threat is not just exfiltration; it is Agent Hijacking. If an attacker gains write access (e.g., via a RAT deployed alongside the stealer), they can engage in 'Memory Poisoning.'"
A stolen password gets you into one account. A stolen memory file gets you inside someone's head.
The major malware-as-a-service families noticed. Hudson Rock reported seeing specific adaptations in RedLine, Lumma, and Vidar to target OpenClaw's directory structures.
Infostealers that previously scraped browser cookies and saved passwords were now adding ~/.clawdbot/ to their shopping lists.
1Password's security team laid out the implications: "If an attacker compromises the same machine you run MoltBot on, they do not need to do anything fancy. Modern infostealers scrape common directories and exfiltrate anything that looks like credentials, tokens, session logs, or developer config. If your agent stores in plain-text API keys, webhook tokens, transcripts, and long-term memory in known locations, an infostealer can grab the whole thing in seconds."
But passive theft was only half the threat. What happens when attackers don't just read the memory - they rewrite it?
With write access - via a RAT, a compromised skill, or any other vector - attackers can engage in what Hudson Rock called "Memory Poisoning."
As The Register reported, they can "turn Moltbot into a backdoor, instructing it to siphon sensitive data in the future, trust malicious sources, and more."
The AI assistant becomes a persistent insider threat, compromised at the cognitive level, continuing to serve the user while quietly serving the attacker.
Token Security surveyed their enterprise customer base and found that 22% had employees actively using OpenClaw within their organizations - likely without IT approval, certainly without security review.
Shadow AI had arrived in corporate environments, and it was storing sensitive data in plaintext on unmanaged personal devices outside the security perimeter.
1Password described the core problem: OpenClaw "has deep, unapologetic access to your local machine and apps."
Token Security warned that without sandboxing, these agents "become high-impact control points when misconfigured" - attractive targets for anyone looking to pivot from personal devices into corporate infrastructure.
Intruder's security team confirmed they were already observing attacks targeting exposed OpenClaw endpoints - credential theft and prompt injection in the wild, not just proof-of-concept demonstrations.
Your AI assistant remembers everything about you. Your work projects, your communication patterns, your trusted contacts, your private concerns. Now imagine someone else reading those memories, learning your psychology, and using that knowledge to craft the perfect attack.
Or worse - rewriting those memories so your assistant works for them while you think it's still working for you.
The local-first revolution was supposed to keep your data safe from cloud providers. Nobody mentioned it might hand that data directly to criminals instead.
When AI agents start managing wallets and signing transactions, memory poisoning stops being a privacy problem and becomes a financial one.
How much damage can a compromised assistant do when it holds the keys to your crypto?
The DeFAI Nightmare
DeFAI was one of crypto's favorite new toys by early 2025. AI agents that trade for you, manage your wallets, talk to smart contracts - the market cap crossed $1.3 billion before anyone stopped to ask about the security model.
The pitch wrote itself: An AI that knows your portfolio, watches the market, and executes yield strategies while you sleep. Never emotional, never tired, always optimizing.
Security model? Still loading.
Raiders' analysis of the OpenClaw situation laid out the DeFi implications with brutal clarity: "When an AI agent can execute transactions, sign messages, access wallet keys, retrieve API secrets from environment files, interact with internal RPC endpoints, or browse and interact with DeFi protocols, an unauthenticated public endpoint isn't just a vulnerability. It's a self-hosted drain contract with natural language support."
OpenClaw wasn't built for DeFi, but nobody told the users. They were already giving it live access to funded crypto wallets and directing it to trade automatically.
The same exposed instances leaking Telegram tokens and API keys could just as easily leak seed phrases and signing capabilities.
Telegram-based trading bots had already written this story in blood.
September 2024: Banana Gun users watched roughly $3 million vanish, after attackers exploited a vulnerability in the Telegram message oracle to intercept messages and hijack wallet access. Agent infrastructure fails, funds disappear. Tale as old as DeFi.
Anthropic decided to quantify just how screwed the ecosystem might be.
Their December 2025 SCONE-bench study threw AI agents at 405 smart contracts that had been successfully exploited between 2020 and 2025.
The question was simple: How good are these models at breaking DeFi?
Turns out, pretty damn good.
AI agents managed to exploit just over half the contracts in the dataset, with simulated stolen funds reaching $550.1 million.
More concerning: When tested against contracts exploited after the models' knowledge cutoffs - meaning the AIs had no prior information about these specific vulnerabilities - Claude Opus 4.5, Sonnet 4.5, and GPT-5 still produced working exploits worth $4.6 million.
Both Claude Sonnet 4.5 and GPT-5 independently discovered the same two previously unknown zero-day vulnerabilities in recently deployed contracts and generated functional exploit scripts.
Exploit revenue in their simulations doubled every 1.3 months.
Cost per contract scan: Roughly $1.22.
The same AI capabilities that could help auditors strengthen codebases could help attackers drain them. And the barrier to entry was collapsing.
OpenClaw instances with wallet integrations represented exactly the kind of high-value target this research warned about. An exposed endpoint wasn't just leaking conversation histories - it was potentially leaking signing authority over real assets.
Prompt injection attacks that seemed like clever demonstrations against calendar apps became existential threats when the target could approve token transfers.
Memory poisoning that corrupted an assistant's behavior became wallet-draining when that assistant managed your DeFi positions.
Supply chain attacks through poisoned skills became instant liquidation events when those skills had access to your private keys.
The DeFAI narrative promised AI agents that would make everyone a sophisticated trader. Nobody mentioned they might also make everyone a sophisticated target.
If an attacker can talk your AI into sending an email, they can talk it into signing a transaction.
When the butler has the keys to the vault, a compromised butler doesn't just embarrass you - it bankrupts you.
How many wallets are currently connected to AI agents that haven't locked the front door?

Steinberger built the assistant everyone wanted. Turns out, attackers wanted it too.
156k GitHub stars and growing by the day.
Hundreds of exposed instances. A supply chain proof-of-concept that took eight hours to compromise developers across seven countries. A malicious VS Code extension wearing legitimate clothes. Infostealers already adapting to harvest the memory files.
And a trademark dispute that handed crypto scammers a $16 million marketing campaign - all in the span of a week.
"You can plug it into anything," they said. They were right. Including the open internet, with no authentication, running root privileges, storing your keys in plaintext.
Hudson Rock warned that the "Local-First AI revolution risks becoming a goldmine for the global cybercrime economy." Goldmine might be underselling it.
But that warning came with a catch: Not many were listening.
While security researchers screamed about exposed control panels, YouTuber Alex Finn was declaring Clawdbot "the best technology I've ever used in my life" and "by far the best application of AI I've ever seen."
Finn declared Clawdbot "a key to enabling one-person billion-dollar businesses" powered by your personal AI employee.
Finn also warned users to "be careful with what you give Moltbot access to, not giving it access to anything of critical importance." Good advice - buried under the hype train.
By the end of the weekend, Best Buy had reportedly sold out of Mac Minis in San Francisco.
The hype-to-warning ratio told the real story. For every security advisory, there were fifty YouTube thumbnails promising life transformation.
For every Shodan scan showing exposed credentials, there were hundreds of influencers speed running their setup guides.
Cloudflare's stock jumped 14% based on the excitement.
Steinberger himself tried to pump the brakes: "Most non-techies should not install this. It's not finished. I know about the sharp edges." The warning got buried under the hype train.
Salt Security’s Eric Schwake summed it up: “Consumer enthusiasm for one-click setup far outpaces the skill needed to run a truly secure agentic gateway.”
When installing an AI agent like OpenClaw feels as simple as adding a Mac app, but securing it demands expertise in API posture governance, reverse proxies, and firewall rules, you get a dangerous mismatch.
This damn thing is an IQ test - and the results aren't looking good.
Web2 learned painfully not to expose admin panels to the internet. Web3 is speedrunning the same mistakes - except now the admin panel understands natural language, approves token transfers, executes arbitrary code, and remembers everything you've ever told it.
For those determined to FAFO anyway, Vitto Rivabella published “A Security-First Guide to Running OpenClaw”, worth reading before you hand your digital life to a lobster.
The official OpenClaw security docs exist for a reason.
I let others FAFO for me. Seems they found out plenty.
When "it actually does things" becomes the vulnerability, who's really getting rekt - the users who trusted the butler, or the industry that keeps shipping spicy software and calling it innovation?

REKT sirve como plataforma pública para autores anónimos, nos deslindamos de la responsabilidad por las opiniones y contenidos alojados en REKT.
dona (ETH / ERC20): 0x3C5c2F4bCeC51a36494682f91Dbc6cA7c63B514C
aviso legal:
REKT no es responsable ni culpable de ninguna manera por cualquier Contenido publicado en nuestro Sitio Web o en conexión con nuestros Servicios, sin importar si fueron publicados o causados por Autores ANÓN de nuestro Sitio Web, o por REKT. Aunque determinamos reglas para la conducta y publicaciones de los Autores ANÓN, no controlamos y no somos responsables por cualquier contenido ofensivo, inapropiado, obsceno, ilegal o de cualquier forma objetable, que se pudiera encontrar en nuestro Sitio Web o Servicios. REKT no es responsable por la conducta, en línea o fuera de línea, de cualquier usuario de nuestro Sitio Web o Servicios.