The rapid ascent of Agentic AI and vibe coding in 2025 has transformed how software is built, promising unprecedented productivity and economic gains. But as organisations rush to capitalise on these trends, the security implications of AI-generated code demand urgent attention.
The Security Dilemma: More Code, More Vulnerabilities
AI-powered coding tools and agentic systems are rewriting the rules of software development. Yet research consistently shows that code generated by large language models (LLMs) is frequently riddled with vulnerabilities unless explicitly guided otherwise. These vulnerabilities range from poor input validation and hardcoded secrets to outdated dependencies and insufficient error handling.
A 2025 study found that nearly half of the code snippets produced by leading LLMs contained impactful bugs or security flaws, opening the door to malicious exploitation. AI-generated code is not inherently secure, and the risk of introducing business logic errors, SQL injection, cross-site scripting (XSS), and other critical vulnerabilities is significantly higher than for code written by experienced human developers.
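To make the SQL injection risk concrete, here is a minimal sketch in Python (using an in-memory sqlite3 database and a hypothetical users table, not taken from any real codebase) contrasting the string-built query generated code often emits with the parameterised form a careful developer would write:

```python
import sqlite3

# In-memory database with a hypothetical users table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Vulnerable pattern often seen in generated code: string interpolation
# lets the attacker's quote break out of the literal and match every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'"
).fetchall()

# Safe pattern: a bound parameter is treated as data, never as SQL,
# so the same malicious input simply matches no user.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # the injected clause returns every user
print(safe)        # the parameterised query returns no rows
```

The fix costs nothing in readability, which is exactly why a security-focused review of AI output should insist on it.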
Why Is AI-Generated Code So Risky?
Several factors contribute to the heightened risk profile:
- Blind Trust in AI Output: Developers may deploy AI-generated code without fully understanding its inner workings, leading to hidden vulnerabilities.
- Lack of Context: LLMs may not consistently apply security best practices, especially with ambiguous or generic prompts.
- AI-Specific Attack Vectors: New threats such as prompt injection can manipulate agentic tools into producing insecure or even malicious code.
- Complacency and Speed: The push for rapid delivery can bypass essential security checks, peer reviews, and testing, allowing critical flaws to slip into production.
- Training Data Risks: Models trained on unsanitised code repositories risk inheriting vulnerabilities or even backdoors intentionally seeded by attackers.
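One of the simplest flaws in that list to eliminate, the hardcoded secret, can be sketched as follows (the key value and the SERVICE_API_KEY variable name are hypothetical, chosen only for illustration):

```python
import os

# Anti-pattern frequently produced by code assistants: the credential
# ships with the source and ends up in version control.
API_KEY = "sk-live-abc123"  # hardcoded secret -- flagged by most scanners

# Safer pattern: pull the credential from the environment and fail
# loudly if it is missing, so a misconfigured deploy cannot run.
def load_api_key() -> str:
    key = os.environ.get("SERVICE_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

Secrets scanners catch the first pattern precisely because it is so mechanical, which is one reason automated scanning (discussed below) pairs well with AI-assisted development.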
Secure-by-Design: Is It Achievable?
While secure-by-design is possible in theory, with robust prompting, detailed guardrails, and security-focused review processes, it remains a work in progress. The industry is still maturing, and most organisations lack the comprehensive systems needed to ensure that AI-generated code is consistently safe.
The Productivity Trap: Security as an Afterthought
The promise of Agentic AI is irresistible: faster delivery, lower costs, and democratised software creation. But this very promise can lead organisations to deprioritise security, relying on point solutions that generate high volumes of false positives and security noise. This overwhelms security teams and leaves developers without actionable guidance.
Empowering Developers: From Noise to Action
To address these challenges, security teams must shift from reactive, fragmented approaches to proactive, orchestrated strategies:
- Automated Security Scanning: Integrate tools for SAST, SCA, secrets detection, DAST, IaC, container scanning, and SBOM analysis to catch vulnerabilities early and often.
- Correlated, Deduplicated Findings: Use platforms that aggregate and correlate scan results across tools, deduplicating alerts to highlight only the most critical, exploitable issues.
- Actionable Fix Guidance: Surface findings with clear, context-rich remediation steps so developers can address vulnerabilities confidently and efficiently.
- Continuous Security Education: Equip developers with ongoing training on secure coding and the unique risks of AI-generated code.
- Security-Focused Code Reviews: Prioritise specialised reviews of AI-generated code, especially for input validation, authentication, and authorisation.
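The correlation and deduplication step above can be sketched in a few lines. This is a toy model, not any real scanner's output format: findings are keyed by rule, file, and line, and alerts that multiple tools agree on are surfaced as higher-signal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str   # which scanner reported it
    rule: str   # e.g. a CWE identifier
    path: str   # file where the issue lives
    line: int

def deduplicate(findings):
    """Collapse findings that point at the same rule and location,
    tracking which tools agree -- multi-tool agreement is a useful
    signal that an issue is real and worth prioritising."""
    merged = {}
    for f in findings:
        key = (f.rule, f.path, f.line)
        merged.setdefault(key, set()).add(f.tool)
    return {key: sorted(tools) for key, tools in merged.items()}

raw = [
    Finding("sast-a", "CWE-89", "app/db.py", 42),
    Finding("sast-b", "CWE-89", "app/db.py", 42),  # same issue, second tool
    Finding("secrets", "CWE-798", "config.py", 7),
]
report = deduplicate(raw)
print(len(report))  # 3 raw alerts collapse to 2 unique issues
```

Real orchestration platforms add fuzzier matching, severity scoring, and exploitability context, but the core idea is the same: fewer, better-ranked findings mean developers act instead of triaging noise.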
Meet Smithy: Security Orchestration for the AI Era
Imagine a platform, Smithy, that orchestrates all your security scans (SAST, SCA, secrets detection, DAST, IaC, container scanners, SBOMs), correlates and deduplicates findings, and surfaces only the most critical, actionable vulnerabilities with clear fix guidance. Smithy empowers security teams to cut through the noise, enabling developers to ship secure code faster, even in the age of Agentic AI and vibe coding.
Agentic AI is rewriting the future of software, but security cannot be an afterthought. With the right orchestration, actionable insights, and a relentless focus on secure-by-design, organisations can harness the power of AI without sacrificing trust or safety.