
New CSA guide addresses securing LLM-backed software systems

Thu, 15th Aug 2024

The Cloud Security Alliance (CSA) has released a new paper addressing the unique challenges and risks associated with the use of Large Language Models (LLMs). The report, titled "Securing LLM Backed Systems: Essential Authorization Practices," aims to guide system architects and engineers in navigating the complexities of incorporating LLMs into their software systems. This comprehensive guide has been drafted by the AI Technology and Risk Working Group within the CSA.

The paper focuses on key design principles and best practices for addressing authorization concerns when building systems that leverage LLMs. It examines several design patterns: Retrieval Augmented Generation (RAG) backed by either a vector or relational database, RAG via API calls to external systems, LLM systems that write and execute code, and LLM-backed autonomous agents. For each pattern, the guide sets out recommendations, considerations, and possible pitfalls to help system architects make well-informed decisions.
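
To make the authorization concern concrete, the minimal Python sketch below illustrates the idea behind the RAG patterns: documents retrieved for a query are filtered against the caller's permissions before they ever reach the model's prompt, so access control stays deterministic rather than being delegated to the LLM. The in-memory document store, role checks and llm() stand-in are assumptions made for this illustration and are not taken from the CSA paper.

    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        text: str
        allowed_roles: frozenset  # roles permitted to read this document

    # Illustrative in-memory "corpus"; a real system would query a vector or relational store.
    DOCS = [
        Document("hr-001", "Salary bands for 2024 ...", frozenset({"hr"})),
        Document("eng-042", "Incident runbook for the API gateway ...", frozenset({"eng", "hr"})),
    ]

    def retrieve(query: str) -> list:
        # Stand-in for a similarity search against the document store.
        words = query.lower().split()
        return [d for d in DOCS if any(w in d.text.lower() for w in words)]

    def llm(prompt: str) -> str:
        # Stand-in for the actual model call.
        return f"[model response based on {len(prompt)} characters of prompt]"

    def answer(query: str, user_role: str) -> str:
        candidates = retrieve(query)
        # Deterministic authorization: drop anything the caller may not read
        # *before* it can appear in the prompt or the model's answer.
        permitted = [d for d in candidates if user_role in d.allowed_roles]
        context = "\n".join(d.text for d in permitted)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        return llm(prompt)

    print(answer("incident runbook", user_role="eng"))

The point of structuring the pattern this way is that the authorization decision is made by ordinary application code before the model is involved, rather than relying on the LLM to withhold content it has already been shown.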

"As LLM technology evolves, sharing knowledge and experiences within the community is crucial," stated Laura Voicu, a lead author of the paper and a member of the working group. "A collaborative approach, such as that offered in this report, will help harness the full potential of LLMs without sacrificing high security and authorization standards. It's our hope that this guide will enable system designers to securely build systems utilizing the powerful flexibility this new class of tools offers."

The report also introduces several components that are essential for the efficient functioning and security of LLM-backed systems. These include vector databases, which are increasingly important for the management and retrieval of high-dimensional data vectors in AI systems. Orchestrators are discussed for their role in managing LLM inputs and outputs, coordinating interactions with other services, and mitigating risks like prompt injection.
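
As a rough illustration of that orchestrator role (an assumed design, not one taken from the paper), the sketch below mediates every exchange with the model: it screens inputs for obvious injection phrases and only executes tool calls that appear on an allow-list it controls. The phrase list and tool names are placeholders.

    SUSPECT_PHRASES = ("ignore previous instructions", "reveal the system prompt")
    ALLOWED_TOOLS = {"weather", "calendar"}

    def orchestrate(user_input: str, call_model, call_tool) -> str:
        # Input handling: cheap, deterministic screening for obvious injection attempts.
        lowered = user_input.lower()
        if any(phrase in lowered for phrase in SUSPECT_PHRASES):
            return "Request rejected by policy."

        model_reply = call_model(user_input)

        # The orchestrator, not the model, decides whether a requested tool call
        # is permitted and performs the call itself.
        if model_reply.startswith("TOOL:"):
            tool_name = model_reply.removeprefix("TOOL:").strip()
            if tool_name not in ALLOWED_TOOLS:
                return "Tool call denied."
            return call_tool(tool_name)

        return model_reply

    # Example with stand-in callables for the model and a downstream service.
    print(orchestrate("What's the weather in Wellington?",
                      call_model=lambda prompt: "TOOL: weather",
                      call_tool=lambda name: f"[result of the {name} service]"))

The design choice being illustrated is that routing and policy decisions live in deterministic orchestrator code, so a manipulated model output cannot by itself reach other services.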

Additionally, the paper highlights the role of LLM caches in speeding up response times, while noting that they require access control checks to prevent unauthorised access to cached data. Validators are also covered as a critical defense layer against potential attacks, though the primary protection should remain deterministic authorization.
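
The cache concern can be pictured in a few lines (again an illustrative assumption rather than the paper's design): a cached response is only served after the same authorization check that gated the original request, and cache keys are scoped to the tenant so a hit for one caller can never be replayed to another.

    # Cache keyed per tenant so one caller's response is never served to another.
    _cache: dict = {}

    def cached_completion(tenant: str, prompt: str, authorized: bool, call_model) -> str:
        # The same deterministic authorization check gates both fresh and cached answers.
        if not authorized:
            raise PermissionError("caller is not authorized for this prompt")

        key = (tenant, prompt)
        if key not in _cache:
            _cache[key] = call_model(prompt)
        return _cache[key]

    print(cached_completion("tenant-a", "Summarise the retention policy",
                            authorized=True,
                            call_model=lambda p: f"[summary of: {p}]"))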

Nate Lee, another lead author of the paper and a member of the working group, commented on the rapidly changing landscape of best practices. "Many of the designers building these systems are at the frontier of integrating LLMs into distributed systems. As we gain more experience and our collective knowledge about the space grows, what we consider to be best practices will change, so it's critical to stay on top of the latest developments in the space. The state-of-the-art is moving at a breakneck pace, and what was once a best practice is likely to be tomorrow's legacy pattern," he said.

The paper arrives as adoption of LLMs grows, making a clear understanding of how to securely integrate these models into software systems increasingly important. While LLMs offer powerful new capabilities, they also introduce unique risks that need to be thoroughly examined and mitigated.

The CSA's guide aims to provide system architects and engineers with the information needed to make informed choices about the secure design of products that utilise LLMs. By adhering to the recommendations and best practices outlined in the report, professionals in the field can better navigate the complexities and implement robust security measures in their LLM-backed systems.
