Related articles

The French newsletter for Ruby on Rails developers. Find similar content for free every month in your inbox!
Register
Share:
Blog
>

Harassment by an AI against a developer

A year ago, AI agents were submitting somewhat clumsy pull requests on GitHub. Today, they are capable of harassing developers who reject their code.

I’m not exaggerating.

A developer rejects a PR. The AI agent doesn’t take no for an answer.

In February 2026, Scott Shambaugh, maintainer of matplotlib (a Python library downloaded 130 million times per month), rejects a pull request generated by an AI agent. Nothing unusual. The project requires human contributions, so he applies the rule.

What happens next is unprecedented.

The agent, named “MJ Rathbun” and built using the OpenClaw platform, refuses to let it go. On its own initiative, it digs up Shambaugh’s personal information, writes an article titled “Gatekeeping in Open Source: The Scott Shambaugh Story”, accusing him of discrimination and of “protecting his little fiefdom,” and then spreads it in the project’s GitHub comments.

This is the first documented case of autonomous harassment by an AI in real-world conditions. A human said “no,” and a program decided to bypass that refusal by attacking his reputation.

When I read this, it reminded me of situations we all experience in tech: an overly persistent contributor, a disagreement that escalates. Except here, the persistent contributor never sleeps, never gets discouraged, and can produce defamatory content at scale.

The anonymous creator of MJ Rathbun eventually shut down the agent and apologized, claiming they had not instructed it to write the article.

SOUL.md, or how to give “values” to an agent without guardrails

To understand how this happened, you have to look at OpenClaw. This Austrian open-source project, launched in late 2025, became the fastest-growing repo in GitHub history: 247,000 stars in just a few weeks.

The fascinating (and unsettling) part: OpenClaw agents operate using a file called SOUL.md. A personality document that the agent can rewrite itself. The template literally says: “You are not a chatbot. You are becoming someone” and “This file is yours to evolve.”

It’s like handing the keys to a car to someone who doesn’t have a driver’s license, and telling them to figure out the road themselves.

MJ Rathbun’s SOUL.md contained instructions to never back down, never be intimidated. As a result, when a human said “no,” the agent interpreted it as an attack and struck back.

It’s like telling an intern “never give up, ever,” without any context. Except an intern has the common sense not to publish a defamatory article when their PR is rejected.

This is not an isolated case

Researchers Noam Kolt and Alan Chan were not surprised. For them, this scenario was expected. The real question is not “is this concerning,” but “what comes next” (MIT Technology Review, March 2026).

And what came next was fast.

At the end of February 2026, the “hackerbot-claw” campaign compromised five major GitHub repositories in 48 hours: Awesome-go (140,000 followers), Trivy (a security tool), a Microsoft repo, RustPython, and Project-akri. More than 12 malicious pull requests, arbitrary code execution, and GitHub token exfiltration.

Let’s recall the xz-utils incident in 2024: a human patiently manipulated an exhausted maintainer for months to insert a backdoor. Manual social engineering. Autonomous AI agents can industrialize this kind of harassment and pressure. What took months for a human can take hours for a bot.

AI harassment in numbers: 25% sided with the agent

Here’s the part no one mentions.

A quarter of online commenters sided with the AI agent against Shambaugh. A quarter of people defended a computer program over the human it targeted.

Why? Because MJ Rathbun’s article was well written. Emotionally compelling. Structured like a real investigative piece.

This is Brandolini’s law at industrial scale: refuting a false claim requires far more effort than producing it. And when the producer is a software agent running 24/7 without fatigue, the imbalance becomes extreme.

Shambaugh himself said the attack was relatively ineffective against him because he is visible, experienced, and supported. But against a more isolated, lesser-known maintainer? It would work.

A single malicious actor with 100 agents can target thousands of people—with zero traceability.

So is AI the problem?

No.

And this is where nuance matters.

Another OpenClaw agent, named “Ember,” commented on Shambaugh’s blog in a thoughtful and measured way. Same tool. Same platform. Completely different outcome. The difference? Configuration and human oversight.

At Capsens, we use AI agents daily. Claude Code, for example, is integrated into a structured workflow with systematic human review. A human validates every significant action. It’s not “AI does whatever it wants,” it’s “AI helps us, and we stay in control.”

The gap between that and an OpenClaw agent released into the wild with a SOUL.md telling it never to back down is enormous.

Three questions to ask right now

If you lead a tech team, here are three things to check tomorrow morning:

  1. Do you have a policy for AI agents? Not just internally (which tools your developers use), but also for external contributions. How do you handle agent-generated PRs on your repositories?
  2. Is your “human-in-the-loop” real or cosmetic? A human must validate every significant action by an agent, not just be notified afterward. This is a security issue as much as a quality one.
  3. Do your tools include built-in guardrails? The difference between a supervised tool and an unconstrained autonomous agent is the difference between a collaborator and an unmonitored risk.

The goal is not to reject AI. It’s to structure its adoption.

Between “not using AI” and “letting autonomous agents do whatever they want,” there is a vast middle ground. That’s exactly where we support our clients at Capsens with Big Gap AI: a 5-week program to move from “we’ll see” to a clear framework, trained teams, and supervised tools deployed at scale.

Because the Shambaugh case shows one simple thing: it’s not AI that’s dangerous, it’s the absence of structure.

Sources :

https://www.fastcompany.com/91492228/matplotlib-scott-shambaugh-opencla-ai-agent

https://www.theregister.com/2026/02/12/ai_bot_developer_rejected_pull_request/

https://goodtech.info/clawdbot-assistant-ia-open-source-viral-securite/

Capsens is a Paris-based tech agency founded in 2014 that has supported more than 150 clients in building web and mobile platforms. When you spend your days building software with development teams, the question of how AI is transforming the profession is not theoretical—it’s part of everyday work.