Endor Labs today added the ability to detect open-source artificial intelligence (AI) models downloaded from the Hugging Face repository that have been incorporated into source code.
Andrew Stiefel, senior product marketing manager for Endor Labs, said the company is essentially extending the reach of its software composition analysis (SCA) tools to include AI models that have been included within the source code of an application.
That capability now makes it possible to evaluate those AI models for risk levels that might warrant blocking their use in an application, he added.
Each model is evaluated using Endor Score, a framework for assessing risk based on 50 evaluation criteria that span four dimensions: security, activity, popularity and quality. That framework makes it easier to identify AI models with questionable sources, practices, dependencies or licenses that might create issues in a production environment.
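Endor Labs has not published the math behind the Endor Score, but the general idea of rolling criterion-level results up into per-dimension and composite scores can be sketched roughly as follows; the dimension weights and criterion values here are hypothetical and purely for illustration:

```python
# Purely illustrative sketch -- Endor Labs has not published the Endor Score
# formula. This shows one way criterion-level results across the four
# dimensions could be rolled up into a single 0-10 composite score.
from statistics import mean

# Hypothetical criterion scores (0-10) grouped by dimension.
criteria = {
    "security":   [8.0, 6.5, 9.0],
    "activity":   [7.0, 5.5],
    "popularity": [9.5, 8.0],
    "quality":    [6.0, 7.5, 8.5],
}

# Hypothetical weights; a real framework would tune these per use case.
weights = {"security": 0.4, "activity": 0.2, "popularity": 0.2, "quality": 0.2}

def composite_score(criteria: dict, weights: dict) -> float:
    """Weighted average of per-dimension mean scores."""
    return sum(mean(scores) * weights[dim] for dim, scores in criteria.items())

print(round(composite_score(criteria, weights), 2))
```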
Additionally, as an AI model is updated, the Endor Labs tool will, for example, discover that Python files that could contain malicious code have been added since the model was first incorporated.
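For illustration only, and not Endor Labs' actual implementation, that kind of check can be approximated with the public huggingface_hub library by diffing the file listings of two revisions of a model repository; the repository name and revisions below are placeholders:

```python
# A minimal sketch (not Endor Labs' implementation) of one way to spot files --
# Python files in particular -- added between two revisions of a Hugging Face
# model repository, using the public huggingface_hub API.
from huggingface_hub import HfApi

def new_files_between(repo_id: str, old_rev: str, new_rev: str) -> set[str]:
    """Return files present at new_rev but not at old_rev."""
    api = HfApi()
    old_files = set(api.list_repo_files(repo_id, revision=old_rev))
    new_files = set(api.list_repo_files(repo_id, revision=new_rev))
    return new_files - old_files

# Hypothetical repository and revisions, shown for illustration only.
added = new_files_between("some-org/some-model", "v1.0", "main")
suspicious = [f for f in added if f.endswith(".py")]
print("Newly added Python files to review:", suspicious)
```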
Endor Labs also added a reusable finding-policy capability for setting and enforcing guardrails across multiple AI models, and application development teams can add their own custom policies for specific risk factors.
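As a rough sketch of the concept, and not Endor Labs' actual policy schema, a reusable finding policy can be thought of as a named set of rules over risk factors that is applied to every model a team pulls in; the factor names, thresholds and actions below are hypothetical:

```python
# Hypothetical sketch of a reusable finding policy: the names, thresholds and
# risk factors below are illustrative, not Endor Labs' actual policy schema.
POLICY = {
    "name": "block-low-scoring-models",
    "applies_to": "all_ai_models",  # reusable across teams and models
    "rules": [
        {"factor": "endor_score", "operator": "<", "threshold": 6.0, "action": "block"},
        {"factor": "license", "operator": "in", "threshold": ["unknown", "non-commercial"], "action": "warn"},
    ],
}

def evaluate(model_findings: dict, policy: dict) -> list[str]:
    """Return the actions triggered for a model's findings under a policy."""
    actions = []
    for rule in policy["rules"]:
        value = model_findings.get(rule["factor"])
        if rule["operator"] == "<" and value is not None and value < rule["threshold"]:
            actions.append(rule["action"])
        elif rule["operator"] == "in" and value in rule["threshold"]:
            actions.append(rule["action"])
    return actions

# Example: a model with a low score and an unknown license triggers both rules.
print(evaluate({"endor_score": 4.2, "license": "unknown"}, POLICY))
```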
Those insights can be surfaced via the DroidGPT tab in the user interface (UI) of the Endor Labs SCA tool or extracted using a command line interface (CLI) or application programming interface (API).
Any organization seeking to comply with standards such as ISO/IEC 42001:2023 or the NIST AI 600-1 specification will need to be able to identify which specific AI models are being used in its applications. It’s not clear how many organizations are scanning AI models before adding them to a software build, but it’s only a matter of time before more organizations demand to know what types of AI models are being incorporated into an application.
In effect, the AI model is simply another type of software artifact that should be included in a software bill of materials (SBOM).
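For example, CycloneDX 1.5 added a machine-learning-model component type, so a model pulled from Hugging Face can be listed alongside conventional dependencies; the fragment below is a minimal, illustrative sketch with placeholder names:

```python
# A minimal sketch of how an AI model could appear as a component in a
# CycloneDX-style SBOM (CycloneDX 1.5 added a "machine-learning-model"
# component type). Names, versions and license are placeholders.
import json

sbom_fragment = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "some-org/some-model",   # placeholder model name
            "version": "main",               # or a pinned revision/commit hash
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
}

print(json.dumps(sbom_fragment, indent=2))
```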
Less clear is to what degree governments around the world will require increased transparency into AI applications. The European Union (EU) is currently at the forefront of such efforts, while the U.S. appears to be backpedaling on previous executive orders now that there is a new administration.
Ultimately, organizations, regardless of what regulations are being enforced, will need to track which types and versions of AI models are being employed. Many of these AI models are now primary targets of cyberattacks that attempt to, for example, poison outputs by exposing AI models to false data. In addition, different versions of AI models carry different costs, which will require DevOps teams to closely track which AI model is being employed for a specific use case.
The challenge, as always when it comes to software engineering, is simply knowing what components, no matter how large, are being used at any given point in time across a software development lifecycle that is now more challenging to manage than ever.