AI coding assistants are powerful, but they're trained on billions of lines of code—including insecure code. Without careful review, AI-generated code can introduce vulnerabilities into your application. Here's how to stay secure.
## The Security Risk Reality
Studies have found that roughly 30-40% of AI completions for security-sensitive tasks contain vulnerabilities. Common issues include:
- SQL injection vulnerabilities
- Cross-site scripting (XSS)
- Hardcoded credentials
- Insecure cryptographic practices
- Path traversal vulnerabilities
## Top Vulnerabilities in AI-Generated Code
### 1. SQL Injection

AI often builds queries by concatenating strings with user input. Always use parameterized queries, or an ORM that escapes values properly.
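A minimal sketch with Python's stdlib `sqlite3` (table and column names are illustrative):

```python
import sqlite3

def find_user(conn, username):
    # Parameterized query: the driver treats `username` as data, never SQL,
    # so input like "alice' OR '1'='1" cannot change the query's structure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(find_user(conn, "alice"))             # the real row
print(find_user(conn, "alice' OR '1'='1"))  # injection attempt finds nothing
```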
### 2. Cross-Site Scripting (XSS)

AI may interpolate user input into templates without escaping it. Always set text content rather than raw HTML, and rely on HTML sanitization libraries and your framework's built-in escaping.
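A minimal escaping sketch using Python's stdlib `html` module (the `<p>` wrapper is illustrative):

```python
import html

def render_comment(comment: str) -> str:
    # Escape before interpolating into markup so "<script>" arrives as
    # inert text (&lt;script&gt;) instead of executable HTML.
    return f"<p>{html.escape(comment)}</p>"

print(render_comment("<script>alert(1)</script>"))
```

In practice, prefer your template engine's auto-escaping over manual calls; this shows what that escaping does under the hood.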
### 3. Hardcoded Secrets

AI often inserts placeholder credentials that look real enough to ship. Always load secrets from environment variables or a secrets manager, and never commit them to version control.
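A sketch of the environment-variable pattern (the `DB_PASSWORD` name is an assumption; use whatever your deployment defines):

```python
import os

def get_db_password() -> str:
    # Fail loudly when the secret is missing rather than falling back to a
    # hardcoded default that might silently ship to production.
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set")
    return password

os.environ["DB_PASSWORD"] = "example-only"  # set by your deployment, never in code
print(get_db_password())
```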
### 4. Insecure Cryptography

AI may suggest outdated or weak cryptographic methods, such as fast general-purpose hashes for passwords. Always use bcrypt or argon2 for password hashing, and modern algorithms (AES-256, RSA-2048 or stronger) elsewhere.
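A stdlib-only sketch of the salted, slow-hash, constant-time-compare pattern using PBKDF2. In production prefer bcrypt or argon2 via their dedicated libraries, as recommended above; the iteration count here follows current OWASP guidance for PBKDF2-HMAC-SHA256:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow to resist brute force

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```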
### 5. Path Traversal

AI may build file paths from user input without validating them. Always validate paths against an allowlist and confine file access to a known root directory (e.g. via chroot).
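A minimal containment check with `pathlib` (the `/srv/uploads` root is an assumed example; note that symlinks inside the root still deserve separate review):

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/uploads").resolve()  # assumed base directory

def safe_path(filename: str) -> Path:
    # Resolve the full path (collapsing "..") and confirm it is still
    # inside UPLOAD_ROOT before touching the filesystem.
    candidate = (UPLOAD_ROOT / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT):
        raise ValueError(f"path escapes upload root: {filename!r}")
    return candidate

print(safe_path("report.pdf"))
# safe_path("../../etc/passwd")  -> raises ValueError
```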
## Security Review Checklist
Use this checklist for every AI-generated code block:
### Input Handling
- All user input is validated
- Input length limits are enforced
- Input types are verified
- Special characters are handled
### Authentication
- Passwords are hashed with strong algorithms
- Sessions are managed securely
- Authentication tokens are properly validated
- Rate limiting is implemented
### Authorization
- Access controls are enforced
- Privilege escalation is prevented
- Resource ownership is verified
### Data Protection
- Sensitive data is encrypted at rest
- Data is encrypted in transit (HTTPS)
- PII is handled according to regulations
- Logs don't contain sensitive data
### Injection Prevention
- SQL queries are parameterized
- Shell commands don't include user input
- Template injection is prevented
- LDAP injection is prevented
## Automated Security Scanning
### Static Analysis Tools
Integrate security scanning into your CI pipeline so every AI-generated change is checked before merge.
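For example, a minimal GitHub Actions job running Semgrep on every pull request (a sketch; pin versions and tune rulesets for real use):

```yaml
# .github/workflows/security.yml
name: security-scan
on: [pull_request]
jobs:
  semgrep:
    runs-on: ubuntu-latest
    container: semgrep/semgrep
    steps:
      - uses: actions/checkout@v4
      - run: semgrep scan --config auto --error  # non-zero exit on findings
```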
### Runtime Protection

Static checks catch many issues before deployment; complement them with runtime defenses such as a web application firewall, rate limiting, and defensive HTTP response headers.
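One lightweight runtime layer is middleware that attaches defensive headers to every response. A minimal WSGI sketch (the header set is illustrative, not exhaustive):

```python
def security_headers(app):
    # WSGI middleware: wrap start_response to append defensive headers.
    def middleware(environ, start_response):
        def start(status, headers, exc_info=None):
            headers = headers + [
                ("X-Content-Type-Options", "nosniff"),
                ("X-Frame-Options", "DENY"),
                ("Strict-Transport-Security", "max-age=63072000"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return middleware

# Demo with a trivial app and a fake start_response that records headers.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

seen = {}
def fake_start_response(status, headers, exc_info=None):
    seen["status"], seen["headers"] = status, headers

body = security_headers(app)({}, fake_start_response)
print(seen["headers"])
```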
## AI-Specific Prompting for Security
Ask the AI to consider security explicitly. Better prompts lead to more secure code: state the threat model, name the inputs you don't trust, and require specific mitigations.
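For example, instead of "write a login endpoint", a security-aware prompt might read (wording illustrative):

```text
Write a login endpoint. Requirements:
- Use parameterized queries; never concatenate user input into SQL
- Hash passwords with bcrypt; compare digests in constant time
- Rate-limit attempts per IP and per account
- Do not log passwords, tokens, or session IDs
```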
## Security Training for AI Usage
### Team Guidelines
Create a policy document covering which AI tools are approved, what review AI-generated code must pass before merging, and how to escalate suspected vulnerabilities.
## Security-Focused Code Review
When reviewing AI-generated PRs:
- Check trust boundaries: Where does user input enter?
- Trace data flow: How does data move through the system?
- Verify output encoding: Is output properly encoded for context?
- Test edge cases: What happens with malformed input?
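The last check can be turned into quick tests. A sketch with a hypothetical `parse_age` validator, probing it with malformed input:

```python
def parse_age(value: str) -> int:
    # Hypothetical input validator, for illustration only.
    if not value.isdigit():  # rejects "", "-1", "1e9", "abc"
        raise ValueError("age must be a non-negative integer")
    age = int(value)
    if age > 150:
        raise ValueError("age out of range")
    return age

print(parse_age("42"))
for bad in ["", "-1", "1e9", "abc", "200"]:
    try:
        parse_age(bad)
        print(f"FAIL: accepted {bad!r}")
    except ValueError:
        print(f"rejected {bad!r}")
```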
## Dependency Security
AI often suggests packages without vetting their security. Before adopting a suggested dependency, check it for known CVEs, maintenance activity, and typosquatting, e.g. with Snyk or your package manager's audit command.
## Incident Response for AI Vulnerabilities
If you discover a vulnerability in AI-generated code:
1. Assess scope: What data/systems are affected?
2. Patch immediately: Fix the vulnerability.
3. Audit similar code: Search for the same pattern elsewhere.
4. Update prompts: Refine AI instructions to prevent recurrence.
5. Add tests: Create security tests for this vulnerability type.
## Tools and Resources
### Recommended Tools
| Tool | Purpose | Integration |
|---|---|---|
| Semgrep | Static analysis | CI/CD, IDE |
| Snyk | Dependency scanning | CI/CD, Git |
| Trivy | Container scanning | CI/CD |
| OWASP ZAP | Dynamic testing | CI/CD |
| Burp Suite | Penetration testing | Manual |
### Learning Resources
- OWASP Top 10
- CWE/SANS Top 25
- NIST Secure Coding Guidelines
- Your framework's security documentation
## Conclusion
AI-generated code isn't inherently insecure—but it isn't inherently secure either. Treat it like code from any junior developer: review it carefully, test it thoroughly, and never assume it's correct.
Security is your responsibility, not the AI's.
Bootspring includes built-in security scanning for AI-generated code. Catch vulnerabilities before they reach production.