AI Software Security Risks: Why You Should Be Careful With Vibe Coding
AI has changed software development at a pace very few people predicted.
You can now generate websites, apps, automations, and entire software platforms in hours instead of months. At Back9 Digital we use AI ourselves; it’s a genuinely powerful tool when applied with care.
But there’s a growing pattern we’re concerned about: businesses trusting AI-generated software, seemingly without considering what’s running underneath. And in the last twelve months, that concern has stopped being theoretical.
The Real Risk Isn’t That AI-Built Software “Looks AI-Built”
A lot of commentary on AI-generated software focuses on whether products feel generic or visually polished but operationally weak. Those are real observations — but they’re not the issue that should be keeping business owners awake at night.
The real issue is security.
Modern software is deeply integrated with the systems businesses depend on every day:
- customer information
- payment systems
- CRMs
- email platforms
- operational tools
- automations
- business databases
When vulnerabilities exist inside any of those systems, the consequences move from “annoying bug” to “genuine business risk” very quickly.
AI Generates Vulnerabilities Just as Quickly as It Generates Code
AI is excellent at producing software that works.
But working is not the same as secure, and that distinction is where most of the danger sits.
AI-generated code routinely introduces issues such as:
- exposed API keys
- insecure or missing authentication
- weak permissions handling
- exposed database endpoints
- missing or misconfigured access control rules
- poor input validation
- vulnerable third-party dependencies
- insecure server configurations
- cross-site scripting (XSS) vulnerabilities
- SQL injection vulnerabilities
The dangerous part is that none of this is visible to the average business owner. The app loads, the dashboard works, the forms submit. Underneath, the gaps may already be wide open.
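To make a few of those concrete, here’s a short hypothetical sketch (the framework, endpoint, and credentials are illustrative, not taken from any real incident) of code that works in every visible way while carrying three of the issues above:

```typescript
// Hypothetical example: an endpoint that "works" but is insecure.
import express from "express";
import { Pool } from "pg";

const app = express();

// Issue 1: a database credential hardcoded into the source,
// where it ends up in version control and build artifacts.
const db = new Pool({
  connectionString: "postgres://admin:s3cret@db.example.com/app",
});

// Issue 2: no authentication check, so anyone can call this route.
app.get("/orders", async (req, res) => {
  // Issue 3: user input concatenated straight into SQL, a classic
  // SQL injection vulnerability. The fix is a parameterized query:
  //   db.query("SELECT * FROM orders WHERE customer = $1", [req.query.customer])
  const result = await db.query(
    `SELECT * FROM orders WHERE customer = '${req.query.customer}'`
  );
  res.json(result.rows);
});

app.listen(3000);
```

None of this stops the app from demoing perfectly.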
Recent Incidents That Show Why This Matters
If you want to understand why this issue is urgent, two real incidents from the last year tell the story clearly.
1. Lovable + Supabase: AI-Generated Apps Exposed Real User Data (CVE-2025-48757)
This is the case study every business considering vibe coding should know about.
Lovable is a popular AI app-building platform that generates apps backed by Supabase databases. In 2025, security researchers found that a large share of Lovable-generated apps had broken or missing Row Level Security, the rule layer that controls who can read what data in the database.
The result: attackers didn’t need credentials. The public API key already embedded in the app’s frontend code was enough to query the database directly and pull out full user lists, payment records, and even other API keys.
Around 170 apps were affected, exposing data belonging to roughly 13,000 users — about 10% of all Lovable applications scanned.
This wasn’t a Supabase failure. Supabase’s security model works correctly when configured properly. The failure was in the AI-generated code, which produced apps that looked finished but didn’t lock down the database access controls underneath.
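To see what “didn’t need credentials” means in practice, here’s a minimal sketch using the official supabase-js client; the project URL, anon key, and table name are placeholders:

```typescript
// Minimal sketch: querying a Supabase backend with nothing but
// the public anon key already shipped in the app's frontend bundle.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://YOUR-PROJECT.supabase.co", // visible in the deployed app
  "PUBLIC_ANON_KEY"                    // also visible; it is not a secret
);

// With RLS disabled or misconfigured, this returns the full table:
// every user, every payment record, no login required.
const { data, error } = await supabase.from("users").select("*");
console.log(error ?? data);
```

With properly configured RLS policies, the same query returns only the rows the caller is entitled to see, which is why testing the policies matters as much as enabling them.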
It’s a near-perfect example of the risk: software that ships fast, looks polished, and quietly leaks user data.
2. Vercel (April 2026): When Connected AI Tools Become an Attack Path
In April 2026, Vercel, one of the largest hosting platforms for modern web apps, disclosed a breach that began with a compromised third-party AI tool (Context.ai) used by one of its employees.
From there, attackers were able to pivot into Vercel’s internal environment and read environment variables that were not flagged as “sensitive” (and therefore not encrypted at rest) across affected customer accounts. Stolen data was later listed for sale on a hacker forum for $2 million.
The lesson here isn’t that Vercel built insecure software. It’s that environment variables (the same place AI-generated apps tend to dump API keys, database credentials, and third-party tokens) are a high-value target, and that a single weak link in the AI tool chain can cascade into hundreds of downstream environments.
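Environment variables can’t be made risk-free, but basic discipline reduces the blast radius: keep secrets out of source code, read them from the environment at runtime, and fail fast when one is missing. A minimal sketch, with hypothetical variable names:

```typescript
// Minimal sketch: load secrets from the environment rather than
// the source code, and refuse to start if any are missing.
const REQUIRED = ["DATABASE_URL", "STRIPE_SECRET_KEY", "SMTP_PASSWORD"];

for (const name of REQUIRED) {
  if (!process.env[name]) {
    // Failing at startup is far cheaper than discovering a
    // missing or leaked credential in production.
    throw new Error(`Missing required environment variable: ${name}`);
  }
}

export const config = {
  databaseUrl: process.env.DATABASE_URL!,
  stripeKey: process.env.STRIPE_SECRET_KEY!,
  smtpPassword: process.env.SMTP_PASSWORD!,
};
```

On hosting platforms that support it, mark these values as sensitive or encrypted at rest; in the Vercel incident, the unflagged variables were precisely the ones that could be read.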
3. The Pattern Across Both Incidents
The common thread: AI accelerated the build, but the security thinking didn’t keep up.
In Lovable’s case, the AI generated code that skipped a critical access control layer. In Vercel’s case, the supply chain wrapped around AI tooling created an attack path nobody had stress-tested.
Neither would have been caught by “does the app work?” The damage in both cases was invisible from the surface.
What Exposed API Keys Actually Allow Attackers To Do
When people hear “API key leak”, it can sound abstract. In practice, it usually leads to one or more of the following:
- Direct database access. Attackers can query, modify, or dump entire tables.
- Impersonation. Acting as a trusted system to call internal APIs.
- Infrastructure abuse. Spinning up resources or sending requests on the business’s bill.
- Lateral movement. Using the leaked key to reach connected systems (email, payments, analytics).
- Targeted phishing campaigns. Once attackers have customer names, emails, order history, or transaction patterns, they can craft phishing messages that look uncannily legitimate — referencing real orders, real account numbers, real product names. These campaigns convert at far higher rates than generic spam, which is why exposed customer data is so valuable on the criminal market.
That last point is the full chain worth understanding: a leaked API key can lead to user data exposure, which can lead to highly targeted phishing, and from there to account takeover, fraud, or further compromise.
Why Most Businesses Don’t See the Danger
This deserves its own section, because it’s the heart of the problem.
Most business owners using AI to build or commission software:
- can’t read the code being generated
- don’t know what an environment variable is, let alone whether it’s encrypted at rest
- assume that “the AI wouldn’t generate something insecure”
- judge the product by what they can see — the UI, the dashboard, the working forms
- have no internal benchmark for what “secure” even looks like
That’s not a criticism. Software security is a specialised discipline, and historically it sat behind a wall of expert review before anything went live.
AI vibe coding has removed that wall, but it hasn’t replaced what the wall was doing.
AI Doesn’t Understand Consequences
This is the part many businesses overlook.
AI predicts likely code patterns. It does not truly understand:
- operational security
- compliance obligations (GDPR, PCI-DSS, HIPAA, NZ Privacy Act)
- breach recovery planning
- data privacy implications
- business continuity
- infrastructure hardening
- legal exposure if a breach occurs
It can generate functional code at remarkable speed. It cannot take responsibility for what happens when something goes wrong — and in a breach, the responsibility falls on the business that deployed it.
Speed Is Creating False Confidence
One of the biggest risks with AI development is how quickly businesses can move from idea → prototype → live product.
That speed creates a false sense of confidence. Launching software fast is not the same as launching software safely. Security reviews, dependency audits, permissions checks, environment hardening, and proper architecture still matter, arguably more than ever: skipping them costs little in the short term and a great deal in the long term.
How To Use AI Safely In Software Development
We’re not anti-AI. Far from it. Used well, AI is one of the most useful tools to enter the industry in years. Used carelessly, it’s a liability waiting to surface.
The practical baseline we recommend:
- Never deploy AI-generated code without human review — particularly anything touching authentication, database access, payments, or user data.
- Treat every AI-generated environment variable as if it could leak. Use platforms’ “sensitive” or encrypted-at-rest options. Rotate keys regularly.
- Verify access control at the database layer, not just in the app code. For Supabase-style setups, that means actually testing Row Level Security policies, not just enabling them.
- Run dependency and secret scanning in CI/CD. Tools like GitGuardian, Snyk, and GitHub’s built-in scanners catch a lot of low-hanging fruit.
- Test from an attacker’s perspective. A simple curl against your public API endpoints will tell you more than most automated reports (see the sketch after this list).
- Engage proper engineering oversight for anything connected to real user data, payments, or critical workflows.
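For the access-control and attacker-perspective points above, here’s a minimal sketch of that test in code, a fetch equivalent of the curl check, with the endpoint, table, and key as placeholders:

```typescript
// Minimal sketch: probe your own public API the way an attacker
// would: logged out, armed only with the public key.
const PROJECT_URL = "https://YOUR-PROJECT.supabase.co";
const ANON_KEY = "PUBLIC_ANON_KEY";

const res = await fetch(`${PROJECT_URL}/rest/v1/users?select=*`, {
  headers: {
    apikey: ANON_KEY,
    Authorization: `Bearer ${ANON_KEY}`,
  },
});

const body = await res.json();

// A correctly locked-down table returns an empty list or an error
// here. If real rows come back, RLS is not doing its job.
console.log(res.status, body);
```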
Final Thoughts
AI is going to play a huge role in the future of software development. That’s not in question.
What is in question is whether businesses will treat AI-generated code with the same scrutiny they’d apply to anything else running their operations. The pattern of recent incidents (Lovable, Vercel, and the broader rise in AI-related supply chain attacks) suggests many won’t until something forces them to.
Software that looks good on the surface can still leak customer data, expose credentials, and become the launch pad for targeted phishing campaigns against the very people who trusted the business with their information.
In an environment where anyone can generate working software in an afternoon, the real differentiators become security, stability, and trust.
That’s the bar worth building to.
Frequently Asked Questions
Is AI-generated software secure?
AI-generated software can be secure — but only when it is properly reviewed, tested, and engineered. Left unchecked, AI routinely produces code with exposed API keys, weak access controls, and vulnerable dependencies.
What are the biggest AI software security risks?
The most common risks include:
- exposed API keys and environment variables
- insecure or missing authentication
- missing database access controls (e.g. Row Level Security)
- vulnerable third-party packages
- weak permissions handling
- targeted phishing campaigns built from leaked customer data
Can exposed API keys lead to phishing attacks?
Yes. If attackers gain access to customer information through exposed keys or misconfigured APIs, that data is often used to build highly targeted phishing campaigns that reference real orders, accounts, or transactions — making them far more convincing than generic spam.
Is AI bad for software development?
No. AI is an incredibly powerful development tool. The risk comes from skipping the security review and engineering oversight that used to be built into the development process by default.
Why should businesses be concerned about AI-generated code?
Because most business owners can’t see the security gaps that AI-generated software often contains. The product looks finished, the dashboard works, customers can sign up — and yet the database might be queryable by anyone with the public API key. Recent incidents like CVE-2025-48757 (which exposed user data across 170+ Lovable-built apps) show this isn’t theoretical. Without proper review, the first sign of a problem is often a breach notification.