Hakuna matata? More like secura your data...
I'll be real: when I launched Historia and Duhbate, security was not on my mind. Not even a little. I was in full product mode — designing, prompting, shipping, iterating. User base of maybe six people (three of whom were me testing from different devices). What was I going to secure? The fact that I'd used the wrong shade of teal?
But something shifts when your projects stop being experiments and start being real products. Real users. Real data. Real sessions. And that's when I had my "oh no" moment.
Because here's the thing nobody warns you about when you start building with AI: the AI will happily help you ship insecure code and never once flag it as a problem. It's not trying to hurt you. It's just focused on what you asked for. And if you didn't ask for security, you probably didn't get it.
The Product Leader Blind Spot
For most of my career, security wasn't my problem to solve. It was on the list, yes — right there in the PRD under "technical considerations" — but a developer was going to own it. I was the person who handed off the design and trusted that the right safeguards would be built in by people who actually knew what they were doing.
That's a completely reasonable way to work when you have a team.
When you're a one-person show building with Claude or Cursor, you are the team. You're the designer, the PM, the developer, and now (whether you like it or not) the security engineer. Except you only trained for one of those four jobs.
This isn't a reason to stop building. It's a reason to get a little smarter about what to ask for.
What AI Gets Right (and What It Quietly Skips)
AI coding assistants are incredible at making things work. They'll wire up your auth flow, connect your database, handle your API calls, and ship you a fully functional feature in the time it used to take to write a Jira ticket about it.
What they won't do automatically is ask: "Hey, should we be validating this input before it hits the database?" or "Is it cool that this API route has no rate limiting?" or "Do you want to make sure authenticated users can only access their own data?"
Those aren't oversights. They're just outside the default scope. The AI is solving the problem you described, not the problem you didn't know to describe.
So the fix is straightforward, even if the implementation isn't: you have to know enough to ask the right questions.
The Non-Technical Builder's Security Checklist
This isn't an exhaustive list for an enterprise security team. It's the stuff I wish someone had handed me before I started shipping. If you're a designer, a PM, or a creative person building real products with AI, start here.
🔐 Authentication and Authorization
- Are you using a trusted auth provider? Don't roll your own auth. Tools like Supabase Auth, Clerk, or NextAuth exist specifically so you don't have to. Use them.
- Can a user access someone else's data? This is Row Level Security (RLS) in Supabase, or equivalent in your stack. If your database has no rules about who can read what, every authenticated user can potentially read everything. Ask your AI to set up RLS policies explicitly.
- Are your admin routes protected? Just having an `/admin` path doesn't mean it's locked. Make sure middleware is actually checking session state before rendering protected pages.
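One way to make the "can a user access someone else's data" question concrete is to centralize the decision in a single deny-by-default function that every route goes through. Here's a minimal TypeScript sketch — the `Session` shape is made up for illustration, and your auth provider's session object will look different:

```typescript
// Hypothetical session shape; real auth providers (Supabase, Clerk, etc.)
// expose something richer, but the decision logic is the same.
type Session = { userId: string; role: "user" | "admin" } | null;

// Deny by default: access requires a session, plus ownership or admin role.
function canAccessResource(session: Session, resourceOwnerId: string): boolean {
  if (!session) return false; // unauthenticated -> no access, ever
  if (session.role === "admin") return true;
  return session.userId === resourceOwnerId;
}

// Admin routes need an explicit role check, not just a path prefix.
function canAccessAdmin(session: Session): boolean {
  return session !== null && session.role === "admin";
}
```

The point isn't these exact functions; it's that the access rule lives in one place, so "did we check this route?" becomes a question you can actually answer.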
🧱 Input Validation
- Are you validating data before it touches your database? Never trust what comes in from a form, an API call, or a URL parameter. Ask your AI to add server-side validation (not just client-side) for every input that affects your database.
- Are you protecting against SQL injection? If you're using an ORM or a tool like Supabase, you're mostly covered by default — but confirm it. If you're writing raw queries, ask explicitly about parameterized queries.
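Server-side validation doesn't have to be fancy. Here's a minimal hand-rolled sketch for a hypothetical "create post" input — in practice you might reach for a schema library, but the shape of the check is the same: reject anything that isn't exactly what you expect, before it touches the database.

```typescript
// Hypothetical input shape for illustration.
type CreatePostInput = { title: string; body: string };

// Runs on the server, so it can't be bypassed the way client-side
// checks can. Throws on anything unexpected.
function validateCreatePost(raw: unknown): CreatePostInput {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("Invalid payload");
  }
  const { title, body } = raw as Record<string, unknown>;
  if (typeof title !== "string" || title.trim().length === 0 || title.length > 200) {
    throw new Error("Invalid title");
  }
  if (typeof body !== "string" || body.length > 10_000) {
    throw new Error("Invalid body");
  }
  return { title: title.trim(), body };
}
```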
🚦 Rate Limiting and Abuse Prevention
- Can someone spam your API endpoints? Without rate limiting, a single bad actor (or just an accidental loop in your own code) can rack up serious costs or take your app down. Ask your AI to add rate limiting to any public-facing endpoint, especially ones that trigger AI calls or send emails.
- Is your signup flow protected? Unprotected signup flows are an invitation for bot accounts. Even a basic CAPTCHA or email verification step helps.
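To make the rate-limiting idea concrete, here's a minimal in-memory fixed-window limiter in TypeScript. It's a sketch under real limitations: it only works within a single server process, so for production you'd usually reach for a Redis-backed limiter or your hosting platform's built-in one instead.

```typescript
// Fixed-window rate limiter: at most `limit` requests per key per window.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // `key` is typically an IP address or user ID.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    // No entry yet, or the previous window expired: start a fresh one.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count >= this.limit) return false; // over the limit
    entry.count += 1;
    return true;
  }
}
```

Even something this crude stops the "accidental loop in your own code" failure mode, which is often the first one that bites.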
🗝️ Secrets and Environment Variables
- Are your API keys actually secret? If you're building a frontend app and calling an API directly from the browser, those keys can be exposed. Backend routes or edge functions should handle anything sensitive.
- Is your `.env` file in your `.gitignore`? Yes, this sounds basic. Check anyway. I've seen people push API keys to public repos and not realize it for weeks.
- Have you rotated any keys that might have been exposed? If you're not sure, rotate them. It takes five minutes and the alternative is worse.
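The "keep keys server-side" pattern boils down to this: secrets come from server-side environment variables, and the browser only ever talks to your own backend route. A sketch of the shape — the `UPSTREAM_API_KEY` name and the `callUpstream` stand-in are both hypothetical:

```typescript
type ProxyResult = { status: number; body: string };

// The browser calls this server-side handler; it never sees the key.
// `env` would be process.env on a real server; `callUpstream` stands in
// for whatever API client you'd actually use.
function handleProxyRequest(
  userInput: string,
  env: Record<string, string | undefined>,
  callUpstream: (apiKey: string, input: string) => string
): ProxyResult {
  const apiKey = env.UPSTREAM_API_KEY; // hypothetical variable name
  if (!apiKey) return { status: 500, body: "Server misconfigured" };
  const upstream = callUpstream(apiKey, userInput);
  // Only the upstream payload goes back to the browser, never the key.
  return { status: 200, body: upstream };
}
```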
🗄️ Database Access
- Is your database open to the public? Supabase projects have `anon` and `service_role` keys. The `service_role` key has full access to your database with no RLS restrictions. Keep that one server-side only, always.
- What happens if someone calls your API without being logged in? Test it. Actually log out and try to hit the endpoints your app uses. If you can access other users' data as an unauthenticated user, you have a problem.
📦 Dependencies and Updates
- Are you running outdated packages? Security vulnerabilities get patched in updates. Run `npm audit` periodically. You don't need to fix every warning, but you should know what's there.
- Do you know what's actually in your dependencies? You probably don't need to audit every package, but if an AI suggests adding an obscure library you've never heard of, take 10 seconds to Google it first.
🧪 Testing Your Own Security
- Have you tried to break your own app? Log in as User A. Copy a URL that shows User A's data. Log in as User B. Paste the URL. Can you see User A's data? If yes, that's an authorization failure.
- Have you asked your AI to review your auth flow specifically for security issues? Not just "does this work" but "is this secure." Those are different questions and they get different answers.
The Prompt That Changed How I Build
I started adding a line to almost every session where I'm building something that touches user data:
"Before we finish, review what we just built specifically for security issues. Flag any inputs that aren't validated, any routes that aren't protected, any places where one user could access another user's data, and any secrets that might be exposed."
It's not magic. But it shifts the AI's focus from "did we ship the feature" to "did we ship it safely." Those are different modes and you have to explicitly ask for the second one.
You Don't Have to Become a Security Engineer
None of this means you need to go deep on OWASP Top 10 or get a certification. It means you need to know enough to ask the right questions, and to not assume that because something works, it's safe.
The beauty of building with AI is real. The speed is real. The ability to bring a product vision to life without a team behind you is genuinely remarkable. But the AI is a contractor, not a co-founder. It builds what you spec. You have to spec for safety.
Your users are trusting you with their data. That's true even if you have 12 of them.