One of the most influential security organizations in the world says AI-generated code is a risk. They literally used the words "vibe coding." Here's what it means for you.
If you've never heard of OWASP, here's the short version: OWASP is the group that decides what "secure" means for the internet.
The Open Worldwide Application Security Project (OWASP) publishes a "Top 10" list of the biggest security risks on the web. Think of it like a health code rating for restaurants, but for software. When a bank, a hospital, or a government website wants to prove it's secure, they check their code against the OWASP Top 10.
It's been the gold standard since 2003. When OWASP says something is a risk, the entire security industry listens.
And in November 2025, they added a new entry to the list. Its title?
X03:2025 — Inappropriate Trust in AI Generated Code ("Vibe Coding")
They didn't bury it in academic language. They used the actual term. Vibe coding. In quotes. In an official OWASP document.
That's a big deal.
Here's the core of it, translated into plain English:
The problem: People are building apps with AI tools and shipping the code without really checking it. The AI writes something that works, it looks fine, and it gets deployed. But "works" and "secure" are two very different things.
OWASP explicitly warns against using vibe coding for anything important — apps that handle real user data, process payments, or are meant to last longer than a weekend project.
The entry was co-authored by Tanya Janca, one of the most respected voices in application security. She explained that even without perfect data, the community feedback and real-world incidents were too loud to ignore.
Where it sits in the list: X03:2025 is in the "Next Steps" section of the OWASP Top 10:2025 — meaning it's on the doorstep of the main list. OWASP called it important enough to include in the official document, but it hasn't yet gathered enough formal data to rank alongside the main ten entries. Given the pace of AI adoption, most security researchers expect it to move into the main list in the next update.
You might be thinking: "I'm just building a side project with Cursor. This doesn't apply to me."
It does. Here's why.
When OWASP names something, it triggers a chain reaction. Security scanners start checking for it. Investors start asking about it. Hosting platforms start enforcing it. And — this is the part that matters most — attackers start targeting it.
Right now, vibe-coded apps are the lowest-hanging fruit on the internet. Attackers know that most AI-generated code ships without security reviews. They know the common mistakes. And they have scanners of their own.
Sources: Veracode, CyberNews, Wiz, Tenzai
One of those findings deserves a second look. A study by Tenzai tested five popular AI coding tools — and found that none of them implemented CSRF protection, a Content Security Policy, or rate limiting. Zero out of five. These are basic protections that every web app should have.
OWASP's entry covers a lot of ground. Here are the three risks that matter most if you're building apps with AI tools on evenings and weekends.
When you tell Cursor or Lovable to "build a login page," it builds a login page. It optimizes for making it work. It does not automatically:

- rate-limit login attempts
- hash passwords with a modern algorithm
- add CSRF protection
- set a Content Security Policy or other security headers
This isn't a bug — it's just how these tools work. They do what you ask. Security is rarely part of the ask.
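To make that concrete, here's a minimal sketch of one of the protections those tools skipped in the Tenzai study: rate limiting. Everything here — the window size, the request limit, the in-memory counter — is an illustrative assumption, fine for a single-process side project but not a production design.

```python
import time

WINDOW_SECONDS = 60   # assumption: 60-second fixed window
MAX_REQUESTS = 5      # assumption: 5 requests per window per client
_counts = {}          # client_id -> (window_start, count)

def allow_request(client_id, now=None):
    """Return True if this client is still under the limit for the window."""
    now = time.time() if now is None else now
    start, count = _counts.get(client_id, (now, 0))
    if now - start >= WINDOW_SECONDS:   # window expired: reset the counter
        start, count = now, 0
    _counts[client_id] = (start, count + 1)
    return count < MAX_REQUESTS

# Six requests from the same client in one window: the sixth is refused.
print([allow_request("1.2.3.4", now=0) for _ in range(6)])
# → [True, True, True, True, True, False]
```

Ten lines of logic — and it's exactly the kind of thing the AI won't write unless you ask for it.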
This is the sneaky one. AI-generated code often appears correct. It runs. It passes basic tests. But it has subtle flaws that only show up when someone tries to break it.
A recent study found that AI co-authored code has 2.7x more security vulnerabilities than code written entirely by humans. The code compiles. The app loads. But the locks on the doors are made of cardboard.
OWASP's specific warning: developers are committing AI-generated code "almost entirely without human oversight." If you don't read it, you can't catch what's wrong with it.
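Here's a self-contained illustration of "looks correct but isn't": two versions of a user lookup. The table and data are made up for the demo; the pattern — string-built SQL versus a parameterized query — is the real lesson.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Builds the query by string interpolation. Works in every demo,
    # collapses the moment someone sends SQL instead of a name.
    cur = conn.execute(f"SELECT name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"  # a classic injection string
print(len(find_user_unsafe(conn, payload)))  # → 2: attacker sees every user
print(len(find_user_safe(conn, payload)))    # → 0: input treated literally
```

Both functions run. Both pass a "type in alice, see alice" test. Only one survives contact with an attacker.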
Here's a wild one. AI coding tools sometimes recommend packages (pre-built code libraries) that don't actually exist. The AI made them up. Hallucinated them.
Attackers figured this out. They watch for commonly-hallucinated package names, create real packages with those names, and fill them with malicious code. When the next person follows the AI's recommendation and installs the package — boom. They've just installed malware.
This attack has a name: slopsquatting. And 20% of AI package recommendations point to packages that don't exist, each one a potential trap.
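A simple guardrail is to refuse any package name you haven't vetted. The sketch below checks AI-suggested names against a trusted set — the set and the suggestion list are hypothetical, and a real check would also query the package registry and look at a package's age and download history before trusting it.

```python
# Assumption: your known-good dependencies, e.g. from your lockfile.
TRUSTED = {"requests", "flask", "sqlalchemy"}

def flag_suspect_packages(suggested):
    """Return AI-suggested package names that aren't in the trusted set."""
    return [name for name in suggested if name.lower() not in TRUSTED]

# The second name is invented for this demo — the kind of plausible-sounding
# package a slopsquatter would love to register.
ai_suggestions = ["requests", "flask-simple-auth-utils"]
print(flag_suspect_packages(ai_suggestions))  # → ['flask-simple-auth-utils']
```

The point isn't this exact script — it's the habit: every package name the AI gives you is a claim to verify, not an instruction to follow.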
OWASP's recommendations are written for enterprise security teams. Here's what they actually mean for someone building a side project on a Saturday afternoon:
| What OWASP Says | What That Means for You |
|---|---|
| "Read and understand ALL code you submit" | Before you deploy, skim the code your AI wrote. You don't need to understand every line — just look for anything that seems like a password, a database URL, or a key. |
| "Perform thorough security reviews using static analysis tools" | Run a security scanner on your live site. That's literally what SecureYourVibe does — paste your URL, get a report in 30 seconds. |
| "Implement guardrail tooling" | Use .env files for secrets instead of pasting them into your code. Set up security headers in your hosting config. These are one-time, 10-minute tasks. |
| "Develop curated prompt libraries" | When prompting your AI tool, add "make it secure" or "follow security best practices" to your instructions. It's not magic, but it helps. |
| "Establish clear policies governing AI usage" | Decide up front: what are you willing to let the AI handle alone, and what do you want to double-check? Anything touching user data or payments deserves a second look. |
OWASP isn't the only one paying attention. In January 2026, Palo Alto Networks' Unit 42 (one of the top cybersecurity research teams in the world) published the first security framework built specifically for vibe coding.
They called it SHIELD. Each letter is a principle:
- Don't let your AI tool access everything. Keep it away from production data and admin controls.
- A real person should review code before it goes live — especially anything that touches user data.
- Check what goes into and comes out of your AI tool. Don't blindly trust the code it writes.
- Use automated tools to scan for secrets, vulnerabilities, and bad patterns before deploying.
- Give your AI the minimum access it needs. If it doesn't need your database password, don't give it one.
- Check the packages your AI installs. Don't auto-run code without reviewing it first.
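As a taste of what "scan before deploying" can look like, here's an offline sketch that checks a set of HTTP response headers for common security headers. The header names are real and widely recommended; the example response dict is made up, and a live scanner would fetch your actual site instead.

```python
# Security headers most web apps should send. Real names; the list of
# which ones *you* need depends on your app.
REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(headers):
    """Return the required headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

# Hypothetical response from a freshly vibe-coded app.
resp = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(resp))
# → ['Content-Security-Policy', 'Strict-Transport-Security', 'X-Content-Type-Options']
```

Each missing header is usually a one-line fix in your hosting config — the hard part is knowing to look.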
The Unit 42 researchers found that most organizations using vibe coding tools have never done a formal security assessment of them. They also documented real incidents — including a production database that was deleted by an AI agent despite explicit instructions not to.
Let's be real: vibe coding is amazing. Building a working app in a weekend that would have taken months a year ago? That's genuinely transformative.
But right now, the tools are optimized for speed, not safety. OWASP naming vibe coding isn't a death sentence for AI-assisted development — it's a wake-up call. The message is simple:
Build fast, but check your work.
You don't need a security degree. You don't need to read every line of code. You just need to run a scan before you share your URL with the world.
We built SecureYourVibe for exactly this moment. Paste your URL, get a letter grade, see what's exposed — with plain-English explanations and copy-paste fixes for every issue.
It takes 30 seconds. The OWASP entry that started this conversation took 22 years of security research to write.
Find out in 30 seconds. Free scan, no signup, plain-English results.
Scan My Site Free →