Gartner finds security leaders need to champion AI TRiSM
By 2026, organisations that operationalise artificial intelligence (AI) transparency, trust and security will see their AI models achieve a 50% improvement in adoption, business goals and user acceptance, according to Gartner.
Mark Horvath, VP Analyst at Gartner, says, “CISOs can’t let AI control their organisation. AI requires new forms of trust, risk and security management (TRiSM) that conventional controls don’t provide.”
“Chief information security officers (CISOs) need to champion AI TRiSM to improve AI results by, for example, increasing the speed of AI model-to-production, enabling better governance or rationalising the AI model portfolio. This can eliminate up to 80% of faulty and illegitimate information.”
Not only does AI pose considerable data risks as sensitive datasets are often used to train AI models, but the accuracy of model outputs and the quality of the data sets might vary over time, which can cause adverse consequences.
The implementation of AI TRiSM enables organisations to understand what their AI models are doing, how well they align with the original intentions and what can be expected in terms of performance and business value.
AI TRiSM is a team sport
Jeremy D’Hoinne, VP Analyst at Gartner, says, “AI TRiSM cannot be led by a single business unit. It calls for education and cross-team collaboration. CISOs must have a clear understanding of their AI responsibilities within dedicated AI teams, which can include staff from the legal, compliance, IT and data analytics teams.”
Without a robust AI TRiSM program, AI models can work against the business, introducing unexpected risks that cause adverse model outcomes, privacy violations, substantial reputational damage and other negative consequences.
AI risk management priorities
Because AI is often treated like any other application, CISOs might need to recalibrate expectations both within and outside their team. Once expectations are set, the CISO and their teams should take the following five AI risk management actions:
1. Capture the extent of exposure by inventorying AI used in the organisation, and ensure the right level of explainability.
2. Drive staff awareness across the organisation by leading a formal AI risk education campaign.
3. Support model reliability, trustworthiness and security by incorporating risk management into model operations.
4. Eliminate exposures of internal and shared AI data by adopting data protection and privacy programs.
5. Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience.
About Gartner for Cybersecurity Leaders
Gartner for Cybersecurity Leaders equips security leaders with the tools to help reframe roles, align security strategy to business objectives and build programs to balance protection with the needs of the organisation.
Gartner delivers actionable, objective insight that drives smarter decisions and stronger performance on an organisation’s mission-critical priorities.