SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Endor Labs launches AI model scoring system for security

Fri, 25th Oct 2024

Endor Labs has introduced a new functionality, Endor Scores for AI Models, aimed at providing organisations with a means to evaluate the security, popularity, quality, and activity of open source AI models available on the Hugging Face platform.

This scoring system is intended to assist developers in identifying which open source AI models are safest for their use. The functionality can offer answers to questions posed in natural language, such as determining if a model has known vulnerabilities or if it is corporately sponsored, among other queries.

Open source AI models offer a realm of possibilities for organisations, allowing developers to select from a vast repository of models suited to their needs. However, these opportunities also come with associated risks, such as the potential for malicious code or dependencies that could introduce vulnerabilities into a company's infrastructure. Endor Labs has created this scoring system to help mitigate these risks.

Varun Badhwar, co-founder and CEO of Endor Labs, commented on the importance of this development, stating, "It's always been our mission to secure everything your code depends on, and AI models are the next great frontier in that critical task. Every organization is experimenting with AI models, whether to power particular applications or build entire AI-based businesses. Security has to keep pace, and there's a rare opportunity here to start clean, and avoid risks and high maintenance costs down the road."

This development parallels the trajectory of open source software (OSS), where a wealth of options often hides significant risks. Open source AI models, much like OSS, can encompass numerous indirect dependencies, which may harbour vulnerabilities. Endor Scores for AI Models provides scores that inform developers about the security status of AI models, thereby enabling them to make informed choices.

Potential risks from using open source AI models can be wide-ranging. Pre-trained models from platforms such as Hugging Face might contain embedded malicious code or create complex dependencies that are difficult to manage. Licensing issues may also arise if organisations fail to comply with intellectual property and copyright terms.
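One concrete example of the embedded-code risk: many pre-trained model files are serialised with Python's pickle format, which can execute arbitrary code when loaded, whereas formats such as safetensors store tensor data only. The sketch below is purely illustrative (it is not Endor Labs' scanner) and simply flags repository files whose extensions typically indicate pickle-based serialisation:

```python
import os

# Illustrative sketch, not Endor Labs' implementation: flag model files that
# commonly use pickle serialisation, which can run arbitrary code on load.
# The extension list here is an assumption for demonstration purposes.
PICKLE_EXTENSIONS = {".bin", ".pt", ".pth", ".pkl", ".ckpt"}

def risky_files(filenames):
    """Return repository files that may embed executable code when deserialised."""
    return [f for f in filenames if os.path.splitext(f)[1] in PICKLE_EXTENSIONS]

repo_files = ["model.safetensors", "pytorch_model.bin", "config.json"]
print(risky_files(repo_files))  # → ['pytorch_model.bin']
```

A real review would go further (scanning pickle opcodes, checking dependencies), but even this extension check illustrates why safer formats like safetensors are preferred.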

Endor Scores for AI Models facilitates the task of selecting the most appropriate AI models by assessing them based on pre-set metrics. It provides developers with an uncomplicated way to find suitable models by answering specific questions about model characteristics or performance metrics. Consequently, developers can more reliably choose models that not only align with their technical requirements but also adhere to security protocols.
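To illustrate the general idea of scoring models across the four dimensions the article names (security, popularity, quality, activity), here is a hypothetical sketch. The weights, 0-10 scale, and field names are assumptions for illustration only, not Endor Labs' actual metrics:

```python
from dataclasses import dataclass

@dataclass
class ModelMetrics:
    # Hypothetical 0-10 ratings per dimension; all names and scales are
    # assumptions for illustration, not Endor Labs' scoring methodology.
    security: float    # e.g. no known vulnerabilities, safe file formats
    popularity: float  # e.g. downloads and likes on Hugging Face
    quality: float     # e.g. documentation and model-card completeness
    activity: float    # e.g. recency and frequency of updates

# Assumed weighting that favours security; a real system would tune these.
WEIGHTS = {"security": 0.4, "popularity": 0.2, "quality": 0.2, "activity": 0.2}

def composite_score(m: ModelMetrics) -> float:
    """Weighted average of per-dimension scores on a 0-10 scale."""
    return round(sum(getattr(m, dim) * w for dim, w in WEIGHTS.items()), 2)

print(composite_score(ModelMetrics(security=9, popularity=6, quality=7, activity=8)))
# → 7.8
```

A developer could then filter candidate models by such a composite score, or by a single dimension, depending on which question ("is it maintained?", "is it widely used?") matters most.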

Existing customers can now access Endor Scores for AI Models as part of their service package. The enhanced functionality is immediately available for those seeking to improve their AI model selection and security evaluation processes.
