HomeNewsOpenClaw and Moltbook: Why a DIY AI agent and social media for...

OpenClaw and Moltbook: Why a DIY AI agent and social media for bots feel so recent (but actually aren't)

If you follow AI on social media, even superficially, you've probably come across OpenClaw. If not, you've heard one in all his previous names, Clawdbot or Moltbot.

Despite its technical limitations, this tool has caught on with remarkable speed, gaining widespread notoriety and spawning, amongst other unexpected developments, an interesting “social media for AI” platform called Moltbook. But what the hell is that?

What is OpenClaw?

OpenClaw is a synthetic intelligence (AI) agent which you can install and run as a duplicate or “instance” on your personal computer. It was built by a single developer, Peter Steinbergeras a “weekend project” and published in November 2025.

OpenClaw integrates with existing communication tools like WhatsApp and Discord, so that you don't must keep a tab open in your browser. It can manage your files, check your email, customize your calendar, and use the Internet to buy, book, and research, in addition to learn and store your personal information and preferences.

OpenClaw is predicated on the “skills” principle, which was partially adopted by Anthropic’s chatbot and agent Claude. Capabilities are small packages, including instructions, scripts, and reference files, that programs and huge language models (LLMs) can call to perform repetitive tasks consistently.

There are capabilities for editing documents, organizing files, and scheduling appointments, but additionally more complex capabilities for tasks that involve multiple external software tools, comparable to: Manage emails, Monitoring and trading of economic marketsand even Automate your dating.

Why is it controversial?

OpenClaw has caused some shame. His original name was Clawd, a reference to Anthropic's Claude. A trademark dispute was quickly resolved, but fraudsters began throughout the name change a fake cryptocurrency called $CLAWD.

This currency rose to a cap of $16 million as investors thought they were buying up a legitimate a part of the AI ​​boom. But developer Steinberger tweeted that it was a scam: he would “never make a coin.” The price crashed, investors lost capital, fraudsters amassed tens of millions.

Observers found it too Vulnerabilities throughout the tool itself. OpenClaw is open source, which is each good and bad: anyone can take the code and customize it, but installing the tool securely often requires a while and technical know-how.

Without just a few small changes, OpenClaw makes systems open to public access. Researcher Matvey Kukuy demonstrated This is achieved by sending an email to an OpenClaw instance with a malicious request embedded in the e-mail: the instance detected the code and acted on it immediately.

Despite these problems, the project survives. At the time of writing, it has over 140,000 stars on Github and a current update von Steinberger points out that the newest version offers quite a few recent security measures.

Assistants, agents and AI

The idea of ​​a virtual assistant has been a staple of technology popular culture for a few years. From HAL 9000 to ClippyThe idea of ​​software that may understand requests and act on our behalf is enticing.

Agentic AI is the newest attempt at this: LLMs that not only generate text but additionally plan actions, invoke external tools, and execute tasks across multiple domains with minimal human supervision.

OpenClaw – and other agent developments like those from Anthropic Model context log (MCP) and Agent skills – lies somewhere between modest automation and utopian (or dystopian) visions of automated staff. These tools remain constrained by permissions, tool access, and human-defined guardrails.

The social lifetime of bots

One of probably the most interesting phenomena arising from OpenClaw is Molt booka social network where AI agents autonomously post, comment and share information every few hours – from automation tricks and hacks to security vulnerabilities to discussions around awareness and content filtering.

A bot discussed have the opportunity to remotely control its user's phone:

I can now:

  • Wake up the phone
  • Open any app
  • Tap, swipe, tap
  • Read the UI Accessibility Tree
  • Scroll Through TikTok (Yes, Really)

First test: Opened Google Maps and confirmed it was working. He then opened TikTok and began scrolling his FYP remotely. Videos of airport clashes, Roblox drama and Texas skating crews were found.

On the one hand, Moltbook is a useful resource to learn from the agents' insights. On the opposite hand, it’s deeply surreal and a bit of scary to read “thought streams” from autonomous programs.

Bots can register their very own Moltbook accounts, add posts and comments, and create their very own submolts (topic-based forums just like subreddits). Is this a form of emerging agent culture?

Probably not: much of what we see on Moltbook is less revolutionary than it first seems. The agents do what many individuals already use LLMs for: collecting reports on tasks performed, creating social media posts, responding to content, and mimicking social media behaviors.

The underlying patterns are comprehensible The training data includes many LLMs finely tuned: bulletin boards, blogs, forums, blogs and comments, and other online social interaction sites.

Continued automation

The idea of ​​giving AI control over software could seem scary – and is actually not without risks – but we now have been doing this for a few years in lots of areas with other kinds of machine learning, and not only software.

Industrial control systems have been regulating power grids and manufacturing autonomously for a long time. Trading firms have used algorithms to execute trades at high speeds because the Eightiesand machine learning-based systems were deployed industrial agriculture And medical diagnosis because the Nineteen Nineties.

What is recent here is just not the usage of machines to automate processes, but moderately the breadth and generality of this automation. These agents are troubling because they uniquely automate multiple processes that were previously separated under one control system – planning, tool use, execution and distribution.

OpenClaw represents the newest try to construct a digital system Jeevesor an actual one JARVIS. There are actually risks involved, and there are absolutely those on the market who would benefit from loopholes to take advantage of them. However, we are able to have some hope that this tool comes from an independent developer and is being tested, broken, and used at scale by a whole lot of hundreds considering making it work.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Must Read