
A new study from Black Duck looks at how organizations worldwide are transforming their software security initiatives (SSIs) to manage the risks introduced by AI adoption, respond to increasing regulatory pressure, and deliver more agile security training.
The 16th edition of the Building Security In Maturity Model (BSIMM) shows that AI is now the defining challenge in application security. Organizations are simultaneously securing AI-powered coding assistants and defending against AI-enabled attacks.
The report highlights three major shifts: a 10 percent rise in teams using attack intelligence to track emerging AI vulnerabilities; a 12 percent increase in using risk-ranking methods to determine where LLM-generated code is safe to deploy; and a 10 percent uptick in applying custom rules to automated code review tools to catch issues unique to AI-generated code.
Government mandates are pushing organizations to strengthen application security, with a sharp focus on software supply chain transparency and securing development environments. Nearly 30 percent more organizations are now producing software bills of materials (SBOMs) to meet transparency requirements. BSIMM16 also reports a surge of more than 50 percent in automated verification of infrastructure security, and growth of more than 40 percent in streamlining responsible vulnerability disclosure, driven by the EU Cyber Resilience Act and evolving US government requirements.
Organizations are also expanding their focus beyond internally developed code to secure the entire software supply chain ecosystem. In addition to the significant increase in SBOM adoption for deployed software, BSIMM16 observes more than a 40 percent rise in establishing standardized technology stacks.
Application security training is changing too. Traditional multi-day security courses are being replaced by just-in-time, bite-sized learning that fits modern development workflows and learner preferences. BSIMM16 reports a 29 percent increase in organizations delivering expertise through open collaboration channels, giving teams instant access to security guidance.
“The real risk of AI-generated code isn’t obvious breakage — it’s the illusion of correctness. Code that looks polished and professional can still conceal serious security flaws,” says Jason Schmitt, CEO of Black Duck. “We’re witnessing a dangerous paradox: developers increasingly trust AI-produced code that lacks the security instincts of seasoned experts. That’s why the surge in SBOM adoption reported in BSIMM16 is so critical, since it gives organizations the transparency to understand exactly what’s in their software — whether written by humans, AI, or third parties — and the visibility to respond quickly when vulnerabilities surface. As regulatory mandates expand, SBOMs are moving beyond compliance — they’re becoming foundational infrastructure for managing risk in an AI-driven development landscape.”
You can get the full BSIMM16 report from the Black Duck site.
Image credit: sdecoret/depositphotos.com