The security challenges in AI-assisted software development
As artificial intelligence (AI) tools become more widely used in the software development process, their impact on security is becoming clearer.
According to recent research, nearly 70% of organisations have discovered vulnerabilities caused by AI tools, while one in five has experienced a serious incident as a result of those vulnerabilities.
When asked which factors are responsible, 45% of security leaders, developers, and application security engineers point the finger at developers. This is not helped by the fact that, in 2025, 46% of developers said they do not trust the accuracy of the AI tools they use, up from 31% in 2024.
The message from this research is clear: the human element of the software development process cannot be ignored. Senior IT leaders need to ensure that developers are proficient in secure development, aware of the potentially harmful security mistakes AI tools can make, and able to detect and remediate the resulting flaws.
The issue requires immediate attention: research shows that 94% of development teams now use AI tools, driven primarily by the need for greater productivity and efficiency.
The problem is exacerbated by the rapid growth of shadow AI within many organisations. Indeed, research has revealed that more than 50% of developers bypass the AI tools their organisation provides, instead deploying tools the IT department has not authorised.
This trend makes it much more difficult to track down where errors may exist and which parties are responsible. The consequences for organisations can range from negative brand perception and customer churn to costly fixes and lost revenue.
Why self-governance is vital
For this reason, having self-governance requirements in place is increasingly important, and will remain so until regulatory policies are devised and implemented.
Until then, security leaders must not delay in working with development teams to ensure that approved tools, security processes, and best practices are in place to produce high-quality, safe software.
This needs to incorporate a baseline of foundational rules regarding tool deployment, as well as the upskilling of developers so they can readily identify inconsistencies and security errors when reviewing AI-written code.
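What do these security errors look like in practice? One of the most common classes, and the kind of pattern a trained reviewer should catch on sight, is SQL injection via string concatenation. The sketch below, written in Python with the standard-library sqlite3 module, is purely illustrative (the users table and function names are invented for this example); it contrasts the vulnerable pattern with its parameterised remediation.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str) -> list:
    # Anti-pattern frequently flagged in code reviews: SQL built by string
    # concatenation. Input such as "' OR '1'='1" rewrites the query logic.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str) -> list:
    # Remediation: a parameterised query treats the input purely as data,
    # so it can never alter the structure of the SQL statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    # Illustrative in-memory database to demonstrate the difference.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    hostile = "' OR '1'='1"
    print(find_user_insecure(conn, hostile))  # leaks every row in the table
    print(find_user_secure(conn, hostile))    # correctly returns nothing
```

A reviewer who has been trained on examples like this can reject the first form reflexively, whichever tool generated it.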
This level of self-governance is vital because, when an attack damages an organisation, stakeholders demand accountability. It is no defence for senior managers to say that there were no regulations to guide their decisions and actions.
At the moment, developers are often the unintended, de facto arbiters of which guardrails, if any, exist. This happens without input from legal or compliance teams, and it can have dangerous consequences.
Establishing and enforcing effective policies
For this reason, security leaders need to step in to establish and enforce policies, so that the productivity pressures driving widespread deployment of AI across the software development lifecycle (SDLC) do not result in unacceptable, or even grave, risks. Three ways this can be achieved are:
- Invest in knowledge-building and upskilling:
Senior security leaders must set baselines for foundational security rules and work with development teams so members understand what the rules are and why they matter. As part of this, ongoing training built on hands-on, real-world scenarios will enable software engineers to recognise the patterns in day-to-day production code that are most often linked to vulnerabilities.
- Evaluate the AI tech stack:
Care needs to be taken to trace all AI tools in use and to observe how developers are deploying them. Through thorough evaluation, backed by metrics-driven reporting, leaders can assess whether the activity falls within an acceptable range of risk; a minimal inventory sketch follows this list.
- Oversee policy refinement and enforcement:
A team of leaders and experienced, high-performing software engineers should be established and given the brief of staying up to date with evolving trends and adapting policies as needed. This will also foster peer enforcement of approved AI usage, while discouraging shadow AI.
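As a starting point for evaluating the AI tech stack, even a simple inventory of which tools leave traces in source repositories can feed the metrics-driven reporting described above. The following is a minimal sketch, assuming a directory of checked-out repositories at /srv/repos (a hypothetical path); the marker filenames are illustrative and would need to reflect the tools actually in scope for a given organisation.

```python
from collections import Counter
from pathlib import Path

# Illustrative marker files left behind by AI coding assistants; extend
# this map with the tools relevant to your own environment.
AI_TOOL_MARKERS = {
    ".cursor": "Cursor",
    ".aider.conf.yml": "Aider",
    ".github/copilot-instructions.md": "GitHub Copilot",
}

def inventory_ai_tools(repos_root: str) -> Counter:
    """Count how many repositories show traces of each AI coding tool."""
    found = Counter()
    for repo in Path(repos_root).iterdir():
        if not repo.is_dir():
            continue
        for marker, tool in AI_TOOL_MARKERS.items():
            if (repo / marker).exists():
                found[tool] += 1
    return found

if __name__ == "__main__":
    for tool, count in inventory_ai_tools("/srv/repos").most_common():
        print(f"{tool}: present in {count} repositories")
```

A real programme would pair a scan like this with endpoint and network telemetry, since shadow AI usage rarely leaves such a tidy file behind; the point is simply that evaluation should rest on observable evidence rather than self-reporting.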
The human factor cannot be overlooked in this situation. Everything starts with building a culture that positions developers to prevent threats before they happen.
Without clear guardrails for developers, AI risks becoming a liability rather than an asset for firms. When AI plays a role in a data breach, responsibility will trace back not to the software but to the executives and development teams that deployed it.
Companies should therefore adopt a proactive self-governance approach to developer risk, encompassing upskilling, awareness programmes, oversight of AI usage, and ongoing policy refinement and enforcement, rather than waiting for external regulation to dictate standards. After all, the developers themselves are not to blame; the fault lies with the long-standing system in which they have operated, where secure coding best practices were neither taught nor enforced.