Updated May 2, 2026

How to ship a vibe-coded app without security holes

A security checklist for taking AI-generated apps from prototype to production without trusting the first draft blindly.

The fastest way to create a fragile app in 2026 is to generate a convincing prototype and treat it as production software. AI app builders can make real things quickly, but they are optimized for momentum. Security is a different mode. Before a vibe-coded app handles users, payments, private data, or admin actions, it needs a deliberate hardening pass.

Begin by leaving the preview environment. Export the code or recreate the app in a normal repository, install from a clean clone, and write down the runtime pieces: framework, database, auth provider, storage, queues, email, payments, and deploy target. If the app only works in the builder preview, the security review is mostly theater.

Give the first agent a mapping task, not a patch:

Map this generated app. List every route, server action, API handler, database table, external service, environment variable, and place where user-controlled input reaches storage or payment logic. Do not edit files.

That map becomes the checklist for the hardening pass.

Start with authentication. Confirm which provider is used, where sessions are stored, how tokens expire, and whether protected routes are truly protected on the server. It is common for generated apps to hide UI links while leaving backend routes too permissive. Security is not a button disappearing from the sidebar. Security is the server refusing the wrong user.
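The server-side check can be as small as a guard that runs before any handler logic. A minimal sketch, assuming a hypothetical session shape (the `Session` interface and `authorize` helper here are illustrative, not any specific framework's API):

```typescript
// Hypothetical session shape; your auth provider's will differ.
interface Session { userId: string; role: "user" | "admin"; }

// Decide the HTTP status on the server, before doing any work.
// Hiding a link in the UI does not replace this check.
function authorize(session: Session | null, requiredRole: "user" | "admin" = "user"): number {
  if (session === null) return 401;                                      // not signed in
  if (requiredRole === "admin" && session.role !== "admin") return 403;  // signed in, wrong role
  return 200;                                                            // allowed
}
```

The point is placement: every protected route calls a guard like this on the server, so removing a sidebar button is never the only thing standing between a user and an admin mutation.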

Next review authorization. Multi-tenant SaaS apps need tenant boundaries everywhere. A user from one workspace should not read another workspace's records by changing an ID in a URL or request body. Ask your AI tool to identify every query that touches tenant-owned data, then inspect the result yourself. Add tests that prove cross-tenant access fails.

Use a focused prompt for that work:

Find every database query that reads or writes tenant-owned data. For each query, say where the tenant or workspace id comes from, whether it is checked on the server, and what test proves cross-tenant access fails. Do not change code yet.
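The fix the map should lead to is structural: the tenant id comes from the server session, never from the request body or URL. A minimal sketch with an in-memory table standing in for the database (names here are hypothetical):

```typescript
// Hypothetical shapes; the key property is that workspaceId lives in the session.
interface TenantSession { userId: string; workspaceId: string; }
interface Row { id: string; workspaceId: string; title: string; }

// Every read of tenant-owned data filters by the session's workspace,
// so a guessed or tampered record id from another tenant returns nothing.
function getRow(table: Row[], session: TenantSession, rowId: string): Row | null {
  return table.find(r => r.id === rowId && r.workspaceId === session.workspaceId) ?? null;
}
```

With a real ORM the same rule applies: the `where` clause includes the session's tenant id on every query, and a test asserts that a user from another workspace gets back nothing.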

Secrets are another frequent problem. Generated code may include placeholder keys, client-visible environment variables, or logging that prints sensitive data. Search the repository for keys, tokens, passwords, private URLs, and anything that looks like a credential. Make sure server-only secrets are not exposed through client bundles or public config.
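A rough scan can be automated. The patterns below are an illustrative assumption, not an exhaustive list; pair a heuristic like this with a dedicated secret scanner in CI:

```typescript
// Heuristic patterns for credential-shaped strings. Illustrative only --
// a real scanner covers far more formats and checks git history too.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]+/,                // Stripe-style live secret key
  /AKIA[0-9A-Z]{16}/,                    // AWS access key id shape
  /-----BEGIN( RSA)? PRIVATE KEY-----/,  // pasted private key material
];

function looksLikeSecret(source: string): boolean {
  return SECRET_PATTERNS.some(p => p.test(source));
}
```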

Database rules need human attention. Check migrations, indexes, cascade behavior, required fields, and whether destructive actions are reversible. AI can invent schemas that are convenient for a demo but awkward for real product changes. If the app stores user content, add deletion behavior, export behavior, and a policy for what happens when an account is closed.

Payments deserve a separate review. Stripe or another billing provider should be integrated through server-side webhook verification, not optimistic client state. Make sure paid access is granted from verified payment events, not from a browser redirect alone. Test failed payments, canceled subscriptions, plan changes, and replayed webhooks.
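The shape of server-side verification is an HMAC check over the raw request body. A generic sketch using Node's standard crypto module; real providers such as Stripe add timestamps and their own signature header format, so prefer the provider SDK's verifier in production:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic HMAC-SHA256 webhook check over the raw body (sketch only).
// The secret is the signing secret the provider gives you, kept server-side.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  // Constant-time compare; a length mismatch already means an invalid signature.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

Only after this check passes should the handler touch billing state, and a replayed or unsigned payload should be rejected before any database write.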

This is the minimum test set for a generated SaaS app:

Area: test that should exist
Auth: Anonymous users cannot call protected server routes.
Tenant data: User A cannot read or update User B's workspace records by changing an id.
Billing: Paid access changes only after a verified webhook event.
Webhooks: Replayed or unsigned webhook payloads are rejected.
Secrets: No server secret is bundled into client code or logged in production paths.
Admin actions: Non-admin users cannot reach admin mutations directly.

Ask the agent to write these tests before broad cleanup. Tests reveal whether the generated architecture is understandable enough to harden.

Add the boring defenses. Rate-limit expensive endpoints. Validate input on the server. Use prepared queries or safe ORM patterns. Add CSRF protection where appropriate. Set secure cookie flags. Add basic audit logs for admin actions. Keep dependency versions current. Run static analysis and dependency scanning. Ask CodeRabbit, Greptile, Copilot, or Claude Code to review for issues, but treat their output as a checklist, not proof.
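Rate limiting is a good example of a boring defense that fits in a few lines. A minimal in-memory fixed-window sketch; it assumes a single process, so multi-instance deployments need shared state (Redis or an edge-provided limiter) instead:

```typescript
// Minimal fixed-window rate limiter (sketch; single-process only).
class FixedWindowLimiter {
  private windows = new Map<string, { start: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true while the caller identified by `key` is under the limit.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 }); // new window
      return true;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}
```

Keyed by user id or IP and placed in front of expensive endpoints, even a crude limiter like this blunts credential stuffing and runaway API costs.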

Finally, reduce blast radius before launch. Use a staging environment. Rotate any keys that touched generated or public demos. Turn on backups. Add error monitoring. Keep the first user cohort small. Put risky actions behind admin review. The goal is not to make AI-generated code morally pure. The goal is to make the product understandable, observable, and recoverable.

Vibe coding is excellent for finding the shape of a product. Shipping is where discipline returns. The teams that succeed will keep the creative speed while refusing to outsource trust.
