Vibe coding is flooding open source with security debt

AI-generated code is overwhelming open source. cURL killed its bug bounty. Ghostty bans AI PRs. Tailwind lost 80% of revenue. The security data shows 2.74x more vulnerabilities. Here is what the evidence says.


TL;DR: AI-generated code contributions are overwhelming open source maintainers, introducing 2.74x more security vulnerabilities, and creating a sustainability crisis that GitHub itself had to address with new repository controls.

On February 1, 2026, Daniel Stenberg shut down cURL's bug bounty program. Not because he found all the bugs. Because he couldn't find them anymore under the avalanche of AI-generated reports flooding his inbox.

"The current torrent of submissions put a high load on the curl security team," Stenberg wrote. In one sixteen-hour window, seven HackerOne submissions arrived. Some contained real bugs. None identified actual vulnerabilities. He'd already counted twenty submissions in the first three weeks of 2026 alone.

cURL handles HTTP requests for roughly half the internet. The tool is embedded in cars, game consoles, medical devices, and virtually every Linux distribution. And its security review process was drowning in what RedMonk analyst Kate Holterhoff now calls "AI Slopageddon."

Three projects, three different walls

Stenberg wasn't alone. Within weeks, three of the most respected open source maintainers in different corners of the ecosystem hit their breaking points.

Mitchell Hashimoto, creator of Terraform and of the Ghostty terminal, went from requiring mandatory AI disclosure in August 2025 to a zero-tolerance policy by January 2026. Submit bad AI-generated code to Ghostty and you're permanently banned. "This is not an anti-AI stance," Hashimoto clarified. "This is an anti-idiot stance."

Steve Ruiz, founder of tldraw, took the most extreme step: auto-closing all external pull requests. His reasoning cut deeper than workflow annoyance. In a world of AI coding assistants, Ruiz questioned whether code from external contributors held any value at all.

GitHub itself responded on February 13, 2026, shipping new repository settings that let maintainers disable pull requests entirely or restrict them to collaborators only. The fact that the world's largest code hosting platform had to build a kill switch for contributions tells you how far things have gone.

The numbers behind the noise

The maintainer revolt isn't just about annoyance. The security data paints a more concrete picture.

Black Duck's 2026 Open Source Security and Risk Analysis (OSSRA) report found that mean vulnerabilities per codebase jumped 107% year over year. The report analyzed 947 codebases across 17 industries and captured what it called "a software ecosystem transformed by AI-assisted development, where code, dependencies, and risks are being introduced at unprecedented speed."

Independent research confirms the pattern. A December 2025 analysis of 470 open source GitHub pull requests found that code co-authored by generative AI contained approximately 1.7 times more "major" issues than human-written code. Security vulnerabilities specifically were 2.74 times higher. Logic errors and misconfigurations also spiked, with misconfigurations appearing 75% more often.

Aikido Security's 2026 report put a finer point on the real-world impact: AI-generated code is now the cause of one in five security breaches.

And the governance gap is wide open. Only 24% of organizations perform comprehensive evaluations of AI-generated code covering intellectual property, licensing, security, and quality. Two-thirds of audited codebases contained license conflicts, the highest rate in OSSRA history.

When "not one line of code" meets production

The theoretical risks stopped being theoretical when security researchers started looking at vibe-coded applications in production.

Moltbook, a social platform for AI agents, went viral in early 2026. Founder Matt Schlicht said publicly that he had a "vision for the architecture" but "did not manually write one line of code." Security firm Wiz found a misconfigured Supabase database with full read and write access, exposing 1.5 million API authentication tokens, 35,000 email addresses, and private messages. The root cause: a single Supabase configuration setting. Row Level Security was never turned on.

The Moltbook team fixed the issue within hours of disclosure. But the database sat open during the platform's entire viral period, and no one can confirm whether someone accessed it before the researchers did.
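The Moltbook failure mode is worth spelling out, because it has nothing to do with clever exploitation. Supabase exposes every table through an auto-generated REST API, and the only credential a browser client ships with is the public "anon" key. Row Level Security policies are the sole gate between that key and the data. The sketch below (project URL, table name, and key are hypothetical placeholders, not Moltbook's) shows the request any visitor's browser could replay:

```python
# Why a Supabase table with Row Level Security disabled is world-readable.
# The project URL, table name, and anon key are hypothetical placeholders.

SUPABASE_URL = "https://example-project.supabase.co"  # hypothetical project
ANON_KEY = "public-anon-key"  # shipped in every client bundle, public by design

def read_table_request(table: str) -> tuple[str, dict]:
    """Build the GET request any visitor could replay from browser dev tools."""
    url = f"{SUPABASE_URL}/rest/v1/{table}?select=*"
    headers = {
        "apikey": ANON_KEY,                     # not a secret
        "Authorization": f"Bearer {ANON_KEY}",  # no user login required
    }
    return url, headers

url, headers = read_table_request("api_tokens")
# With RLS enabled, this request returns only rows a policy allows (often none).
# With RLS disabled, it returns every row in the table.
```

No stolen credentials, no injection: with RLS off, the "attack" is an ordinary API call made with a key the application hands to everyone.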

Then came Orchids, a vibe coding platform claiming around one million users. Cybersecurity researcher Etizaz Mohsin discovered a vulnerability that allowed a zero-click attack: injecting malicious code into another user's project without any action from the victim. In a controlled demonstration for BBC News, Mohsin took control of a reporter's laptop, changing the wallpaper and creating files on the desktop. The reporter did nothing.

Mohsin spent weeks trying to reach Orchids to disclose the vulnerability. Their response, when it finally came: the team was "overwhelmed with inbound" and "possibly missed" his messages. The San Francisco company has fewer than ten employees.

The economics that make this worse

Code generation got cheap. Code review did not.

A contributor can paste an issue into an AI tool, get a plausible-looking patch, and submit it in under a minute. The maintainer still needs 30 to 60 minutes to review it properly: checking for security issues, verifying correctness, ensuring it fits the project's architecture, testing edge cases.

This asymmetry creates a ratio problem. As the cost of generating contributions approaches zero, the volume of submissions grows without bound. But the human review capacity stays fixed or declines as maintainers burn out under the load.

The burnout itself predates AI. But AI removed the buffers that kept it manageable. Before, contributing required enough understanding to write the patch manually. That effort filtered out most noise. Now the filter is gone, and the flood is unrelenting.
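The asymmetry is easy to put in back-of-envelope terms. The review time comes from the numbers above (30 to 60 minutes per pull request); the submission volumes and the maintainer's weekly budget are illustrative assumptions, not measurements:

```python
# Back-of-envelope model of the generation/review asymmetry.
# Review time is from the article; volumes and budget are assumptions.

REVIEW_MINUTES_PER_PR = 45      # midpoint of the 30-60 minute range
MAINTAINER_HOURS_PER_WEEK = 10  # assumed volunteer review budget

def weekly_backlog_growth(prs_per_week: int) -> float:
    """PRs left unreviewed each week at the assumed review budget."""
    capacity = MAINTAINER_HOURS_PER_WEEK * 60 / REVIEW_MINUTES_PER_PR
    return max(0.0, prs_per_week - capacity)

# A 10-hour weekly budget clears roughly 13 PRs. Every submission past
# capacity doesn't just wait; it compounds week over week.
for volume in (10, 20, 40):
    print(volume, round(weekly_backlog_growth(volume), 1))
```

The point of the toy model is the cliff edge: below capacity the queue stays empty, and past it the backlog grows linearly forever, which is what a flood of near-free submissions guarantees.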

The Tailwind paradox

No case illustrates the downstream effects more clearly than Tailwind CSS.

On January 6, 2026, Tailwind Labs laid off 75% of its engineering team. CEO Adam Wathan attributed the decision to the "brutal impact AI has had on our business." The paradox: Tailwind usage was growing faster than ever. Major companies including Shopify, GitHub, and NASA use it. But revenue had dropped nearly 80%.

The mechanism was indirect but devastating. AI coding assistants generate Tailwind CSS automatically, so developers stopped visiting the documentation. Documentation traffic fell 40% over two years, which meant developers never discovered the commercial plans that sustained the business.

Within 48 hours of the layoffs, Google's AI team, Vercel, Gumroad, and others pledged financial support. But the sponsorship model is a patch, not a fix. Tailwind's experience revealed a structural vulnerability in the open source business model that extends far beyond one framework.

When AI agents make decisions about which packages to use, which docs to consult, and which issues to file, the human engagement that funds open source development evaporates. Fewer eyes on documentation. Fewer human-filed bug reports. Fewer developers who feel personal investment in a project's survival.

What happens next

The responses so far have been defensive: close the door, ban the submissions, disable the feature. These are rational short-term moves, but they don't address the underlying dynamics.

Some projects are experimenting with contribution requirements that re-raise the bar. Mandatory test coverage. Required issue linkage before PR submission. Automated AI-detection screening (though this remains unreliable). Deposit-based bug bounties where reporters stake funds against the validity of their report.

GitHub's new settings help, but they shift the problem rather than solving it. A project that disables external PRs also loses the legitimate contributions that have powered open source for decades.

The security angle may force faster action. As regulatory frameworks like the EU AI Act take effect, the liability for shipping unreviewed AI-generated code will shift from theoretical to legal. Organizations that can't demonstrate code provenance and review processes may face compliance risks that dwarf the developer productivity gains.

For now, the data points converge on a single conclusion: the tools that make code generation effortless have not made code review, security validation, or maintainer support any easier. Until that asymmetry closes, open source will keep absorbing the debt.
