Tags: security, ai code, vulnerabilities, code review, owasp

Security Best Practices for AI-Generated Code: What Every Developer Needs to Know

Learn how to review AI-generated code for security vulnerabilities and implement safeguards to prevent common attack vectors.

Bootspring Team
Engineering
February 20, 2026
7 min read

AI coding assistants are powerful, but they're trained on billions of lines of code—including insecure code. Without careful review, AI-generated code can introduce vulnerabilities into your application. Here's how to stay secure.

The Security Risk Reality#

Studies have found that when AI assistants generate security-sensitive code, roughly 30-40% of the output contains vulnerabilities. Common issues include:

  • SQL injection vulnerabilities
  • Cross-site scripting (XSS)
  • Hardcoded credentials
  • Insecure cryptographic practices
  • Path traversal vulnerabilities

Top Vulnerabilities in AI-Generated Code#

1. SQL Injection#

AI often generates string concatenation for queries:

```python
# AI-generated (vulnerable)
def get_user(username):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return db.execute(query)

# Secure version
def get_user(username):
    query = "SELECT * FROM users WHERE username = %s"
    return db.execute(query, (username,))
```

Always use: Parameterized queries, ORMs with proper escaping.

2. Cross-Site Scripting (XSS)#

AI may not escape user input in templates:

```javascript
// AI-generated (vulnerable)
function displayMessage(message) {
  document.getElementById('output').innerHTML = message;
}

// Secure version
function displayMessage(message) {
  document.getElementById('output').textContent = message;
}

// Or with sanitization when HTML is needed
import DOMPurify from 'dompurify';
function displayRichMessage(message) {
  document.getElementById('output').innerHTML = DOMPurify.sanitize(message);
}
```

Always use: Text content, HTML sanitization libraries, framework escaping.

3. Hardcoded Secrets#

AI often uses placeholder credentials that look real:

```javascript
// AI-generated (dangerous)
const config = {
  apiKey: 'sk_live_abc123xyz', // Looks like a real key
  dbPassword: 'admin123',
  jwtSecret: 'super-secret-key'
};

// Secure version
const config = {
  apiKey: process.env.API_KEY,
  dbPassword: process.env.DB_PASSWORD,
  jwtSecret: process.env.JWT_SECRET
};
```

Always use: Environment variables, secrets managers, never commit secrets.

4. Insecure Cryptography#

AI may suggest outdated or weak cryptographic methods:

```python
# AI-generated (weak)
import hashlib
password_hash = hashlib.md5(password.encode()).hexdigest()

# Secure version
import bcrypt
password_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
```

Always use: bcrypt/argon2 for passwords, modern algorithms (AES-256, RSA-2048+).

5. Path Traversal#

AI may not validate file paths:

```python
# AI-generated (vulnerable)
def read_file(filename):
    with open(f'/uploads/{filename}', 'r') as f:
        return f.read()

# Attack: filename = "../../../etc/passwd"

# Secure version
import os
def read_file(filename):
    base_path = '/uploads'
    safe_path = os.path.normpath(os.path.join(base_path, filename))
    # Compare against the prefix plus a separator so '/uploads_evil' can't slip past
    if not safe_path.startswith(base_path + os.sep):
        raise ValueError("Invalid path")
    with open(safe_path, 'r') as f:
        return f.read()
```

Always use: Path validation, chroot, allowlists.

Security Review Checklist#

Use this checklist for every AI-generated code block:

Input Handling#

  • All user input is validated
  • Input length limits are enforced
  • Input types are verified
  • Special characters are handled
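
A minimal sketch of these input-handling checks in Python (the length limit and allowed character set are illustrative, not a recommendation for every field):

```python
import re

MAX_USERNAME_LEN = 32
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

def validate_username(value):
    """Check type, length, and character set before the value is used anywhere."""
    if not isinstance(value, str):
        raise ValueError("username must be a string")
    if not 1 <= len(value) <= MAX_USERNAME_LEN:
        raise ValueError("username length out of range")
    if not USERNAME_RE.match(value):
        raise ValueError("username contains invalid characters")
    return value
```

Rejecting early with an allowlist pattern is usually safer than trying to strip dangerous characters after the fact.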

Authentication#

  • Passwords are hashed with strong algorithms
  • Sessions are managed securely
  • Authentication tokens are properly validated
  • Rate limiting is implemented
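
For the token-validation item, one common pattern is an HMAC-signed token checked with a constant-time comparison. A sketch using Python's standard library (the in-memory `SECRET` is for illustration only; real deployments load it from a secrets manager):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # illustrative; never generate per-process in production

def sign_token(user_id):
    """Produce 'user_id.signature' with an HMAC-SHA256 signature."""
    payload = str(user_id).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token):
    """Return the user_id if the signature is valid, else None."""
    try:
        user_id, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, preventing timing attacks
    return user_id if hmac.compare_digest(sig, expected) else None
```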

Authorization#

  • Access controls are enforced
  • Privilege escalation is prevented
  • Resource ownership is verified
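
The ownership check is the one AI assistants most often omit: they fetch a resource by ID and trust that the caller is allowed to see it. A minimal sketch (the dict-backed `db` and field names are hypothetical):

```python
def get_document(db, requesting_user_id, document_id):
    """Fetch a document only if the requester owns it."""
    doc = db.get(document_id)
    if doc is None:
        return None
    # Verify resource ownership instead of trusting the client-supplied ID
    if doc["owner_id"] != requesting_user_id:
        raise PermissionError("access denied")
    return doc
```

Without the ownership comparison, any authenticated user could enumerate IDs and read other users' documents (an IDOR vulnerability).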

Data Protection#

  • Sensitive data is encrypted at rest
  • Data is encrypted in transit (HTTPS)
  • PII is handled according to regulations
  • Logs don't contain sensitive data
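
Keeping secrets out of logs is easiest with a redaction pass before anything is written. A sketch with illustrative patterns (real deployments need patterns tuned to their own data):

```python
import re

REDACT_PATTERNS = [
    # key=value or key: value pairs for common secret names
    (re.compile(r"(?i)(password|token|api[_-]?key)\s*[=:]\s*\S+"), r"\1=[REDACTED]"),
    # long digit runs that may be card numbers
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-PAN]"),
]

def redact(message):
    """Strip obvious secrets from a log line before it is written."""
    for pattern, replacement in REDACT_PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

Pattern-based redaction is a safety net, not a guarantee; the primary rule is still to never pass secrets to the logger in the first place.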

Injection Prevention#

  • SQL queries are parameterized
  • Shell commands don't include user input
  • Template injection is prevented
  • LDAP injection is prevented
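
For the shell-command item above, the safe pattern is to pass user input as a discrete argument list rather than interpolating it into a shell string. A minimal Python sketch (using `echo` purely for illustration):

```python
import subprocess

def safe_echo(user_input):
    """Run a command with user input as a separate argument, never via a shell string."""
    # Argument-list form: the input can never be parsed as shell syntax,
    # so ';', '|', '&&' etc. are treated as literal text
    result = subprocess.run(["echo", user_input], capture_output=True, text=True)
    return result.stdout.strip()
```

The equivalent `subprocess.run(f"echo {user_input}", shell=True)` would let an input like `"; rm -rf ~"` execute as a command.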

Automated Security Scanning#

Static Analysis Tools#

Integrate security scanning into your workflow:

```yaml
# .github/workflows/security.yml
name: Security Scan
on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/secrets
            p/owasp-top-ten

      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

Runtime Protection#

```javascript
// Add security headers
app.use(helmet());

// Rate limiting
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // limit each IP to 100 requests per windowMs
}));

// CSRF protection
app.use(csrf({ cookie: true }));
```

AI-Specific Prompting for Security#

Ask AI to consider security explicitly:

```text
Generate a user authentication function with:
- Parameterized queries only
- bcrypt password hashing
- Rate limiting consideration
- Input validation
- No hardcoded values
- Secure session handling

Include comments explaining security decisions.
```

Better prompts lead to more secure code:

```typescript
// Well-prompted AI output
export async function authenticateUser(
  email: string,
  password: string,
  ipAddress: string
): Promise<AuthResult> {
  // Validate input format
  if (!isValidEmail(email)) {
    // Don't reveal if email exists
    return { success: false, error: 'Invalid credentials' };
  }

  // Check rate limiting
  const attempts = await getRateLimitAttempts(ipAddress);
  if (attempts > MAX_LOGIN_ATTEMPTS) {
    await logSecurityEvent('rate_limit_exceeded', { ipAddress });
    return { success: false, error: 'Too many attempts' };
  }

  // Use parameterized query
  const user = await db.user.findUnique({
    where: { email: email.toLowerCase() }
  });

  if (!user) {
    // Constant-time comparison to prevent timing attacks
    await bcrypt.compare(password, DUMMY_HASH);
    return { success: false, error: 'Invalid credentials' };
  }

  // Verify password with bcrypt
  const valid = await bcrypt.compare(password, user.passwordHash);
  if (!valid) {
    await incrementLoginAttempts(user.id);
    return { success: false, error: 'Invalid credentials' };
  }

  // Generate secure session
  const session = await createSecureSession(user.id);

  return { success: true, session };
}
```

Security Training for AI Usage#

Team Guidelines#

Create a policy document:

```markdown
# AI Code Security Policy

## Required Reviews
- All AI-generated code handling user input
- All AI-generated authentication/authorization code
- All AI-generated cryptographic code
- All AI-generated file operations

## Prohibited Patterns
- Direct SQL string concatenation
- innerHTML with user content
- eval() or equivalent
- Hardcoded credentials (even for testing)

## Required Tools
- Run Semgrep before committing
- Enable IDE security linting
- Use dependency scanning
```

Security-Focused Code Review#

When reviewing AI-generated PRs:

  1. Check trust boundaries: Where does user input enter?
  2. Trace data flow: How does data move through the system?
  3. Verify output encoding: Is output properly encoded for context?
  4. Test edge cases: What happens with malformed input?

Dependency Security#

AI often suggests packages without checking security:

```javascript
// AI suggests a package
import coolParser from 'cool-parser'; // Is this safe?

// Check before using
// 1. npm audit
// 2. Snyk vulnerability database
// 3. Package maintenance status
// 4. Download counts and community trust
```

```bash
# Check package security
npm audit
npx snyk test

# Check package health
npm view cool-parser
# Look for: recent updates, maintainer activity, issue count
```

Incident Response for AI Vulnerabilities#

If you discover a vulnerability in AI-generated code:

  1. Assess scope: What data/systems are affected?
  2. Patch immediately: Fix the vulnerability
  3. Audit similar code: Search for the same pattern elsewhere
  4. Update prompts: Refine AI instructions to prevent recurrence
  5. Add tests: Create security tests for this vulnerability type
```javascript
// Add regression test
describe('SQL Injection Prevention', () => {
  it('rejects SQL injection attempts', async () => {
    const maliciousInput = "'; DROP TABLE users; --";

    // Should not throw or execute dangerous query
    const result = await getUser(maliciousInput);
    expect(result).toBeNull();

    // Verify table still exists
    const count = await db.user.count();
    expect(count).toBeGreaterThan(0);
  });
});
```

Tools and Resources#

| Tool | Purpose | Integration |
| --- | --- | --- |
| Semgrep | Static analysis | CI/CD, IDE |
| Snyk | Dependency scanning | CI/CD, Git |
| Trivy | Container scanning | CI/CD |
| OWASP ZAP | Dynamic testing | CI/CD |
| Burp Suite | Penetration testing | Manual |

Learning Resources#

  • OWASP Top 10
  • CWE/SANS Top 25
  • NIST Secure Coding Guidelines
  • Your framework's security documentation

Conclusion#

AI-generated code isn't inherently insecure—but it isn't inherently secure either. Treat it like code from any junior developer: review it carefully, test it thoroughly, and never assume it's correct.

Security is your responsibility, not the AI's.


Bootspring includes built-in security scanning for AI-generated code. Catch vulnerabilities before they reach production.
