What is an AI Bill of Materials (AIBOM)?
As organizations adopt more AI models and tools with AI capabilities, they need to understand what is under the hood to keep AI secure and its risks managed. Read on to learn what an AIBOM is, how it works, and more.
AIBOM definition
An AI bill of materials (AIBOM) is a comprehensive list of every component used to build an AI model or software that uses AI. AIBOMs enable organizations to accurately track, update, and manage components, licenses, and dependencies. They help to improve an organization’s security posture by creating transparency around AI models in the workplace.
AI models, tools, and software need the same treatment as any other application: organizations need an accurate inventory of the components, libraries, and dependencies used to build them. With an AIBOM, organizations can ensure they have the security tools, policies, and processes in place to keep AI models secure and business-critical data safe.
Many organizations already use software bills of materials (SBOMs) to strengthen the secure software development lifecycle (SSDLC) and reduce supply chain risk. Understanding the components and dependencies used to create software enables security teams to identify and patch vulnerabilities quickly.
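To make that concrete, here is a minimal sketch of what an AIBOM document might look like, loosely modeled on the CycloneDX format (which added machine-learning components in version 1.5). The component names, versions, and exact field layout below are illustrative assumptions, not the authoritative schema:

```python
import json

# A minimal AIBOM sketch, loosely following the CycloneDX ML-BOM shape.
# Component names and versions are illustrative, not real projects.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",  # hypothetical in-house model
            "version": "2.3.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "data",  # training dataset entry
            "name": "reviews-corpus",
            "version": "2024-11",
        },
        {
            "type": "library",  # framework dependency
            "name": "torch",
            "version": "2.2.0",
        },
    ],
}

print(json.dumps(aibom, indent=2))
```

Even a skeleton like this captures the essentials an SBOM would for ordinary software: what the model is, what it was trained on, and which libraries it depends on.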
Why AIBOMs are important
Understanding which components comprise AI tools helps organizations align with industry regulations and follow AI governance frameworks, such as the European Union's AI Act and various U.S. states' legislation. The EU AI Act entered into force in 2024, with obligations phasing in through 2027, and requires data transparency and quality, accountability, and monitoring.
AIBOMs foster transparency around AI models and tools. Organizations should know what's inside an AI model, including its data lineage, data sources, and the algorithms and frameworks it relies on.
With this information, security teams can document and maintain visibility into model lineage, data provenance, and dependency mapping. AIBOMs also make it easier to implement threat detection and security posture management around AI models and infrastructure.
Additionally, AIBOMs establish accountability: who built each component, and who is responsible for keeping it updated, correctly configured, and secure. With AIBOMs, security and DevOps teams can remediate vulnerabilities faster and hold component owners accountable.
AI security risks AIBOMs help mitigate
Just like any other piece of software, AI models have risks or threats to watch out for. Some examples of AI security risks include:
- Data poisoning: Threat actors inject malicious data into training sets to corrupt or manipulate models, causing mistakes, bias, or hidden backdoors.
- Prompt leakage: A form of prompt injection in which threat actors manipulate a model into revealing its system prompt or other hidden instructions, undermining the safeguards built on them.
- LLMjacking: Threat actors use stolen credentials or otherwise gain unauthorized access to an AI model and hijack its resources to train malicious models, mine cryptocurrency, or run other unauthorized workloads.
- Compromised datasets: Threat actors inject adversarial data into public datasets, degrading output reliability for everyone who trains on them (see the integrity-check sketch after this list).
- Framework vulnerabilities: Threat actors exploit vulnerabilities in popular AI development libraries to compromise an organization's systems.
- Model inversion: Threat actors reconstruct proprietary or private information by repeatedly querying an AI model and using its outputs to infer the underlying training data.
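An AIBOM doesn't prevent these attacks by itself, but the provenance it records enables basic defenses. For instance, if the AIBOM stores a cryptographic hash for each training dataset, a pipeline can verify files before training and catch tampering like the dataset compromise described above. This is a minimal sketch; the `aibom_datasets` mapping and file paths are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_datasets(aibom_datasets: dict[str, str]) -> bool:
    """Compare each dataset file on disk against the digest recorded
    in the AIBOM; report any mismatch as suspected tampering."""
    ok = True
    for rel_path, expected in aibom_datasets.items():
        actual = sha256_of(Path(rel_path))
        if actual != expected:
            print(f"tampering suspected: {rel_path} hash mismatch")
            ok = False
    return ok
```

Running a check like this at the start of every training job turns the AIBOM from passive documentation into an enforcement point.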
Benefits of using an AIBOM
AIBOMs provide transparency for organizations, which enables them to develop security policies and controls. Other AIBOM benefits include:
- Compliance and governance: By knowing all the components of an AI model, organizations can ensure they remain compliant with current and future regulations. AIBOMs make auditing a smoother process.
- Accountability: AIBOMs clarify who owns each component and who is responsible for maintaining it.
- AI reproducibility: By recording exact model, dataset, and dependency versions, AIBOMs make it possible to rebuild a model or its training environment later (see the sketch after this list).
- Bias detection: AIBOMs improve traceability into training data, making bias faster to detect and easier to correct.
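As a sketch of the reproducibility point: if an AIBOM pins exact dependency versions, a short script can confirm that the current environment matches those pins before retraining. The `pinned_deps` mapping below is a hypothetical stand-in for values parsed from a real AIBOM:

```python
from importlib.metadata import PackageNotFoundError, version

# Hypothetical dependency pins, as they might be parsed from an AIBOM.
pinned_deps = {"torch": "2.2.0", "numpy": "1.26.4"}

def environment_matches(pins: dict[str, str]) -> bool:
    """Return True only if every installed package version matches
    the AIBOM's pins, so a training run can be reproduced faithfully."""
    ok = True
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            print(f"{name}: not installed (AIBOM pins {expected})")
            ok = False
            continue
        if installed != expected:
            print(f"{name}: installed {installed}, AIBOM pins {expected}")
            ok = False
    return ok

if __name__ == "__main__":
    print("reproducible" if environment_matches(pinned_deps) else "environment drift")
```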
Challenges of an AIBOM
AIBOMs are not without challenges, which include:
- Lack of standardization: There isn't yet a uniform method for inventorying AI models, which makes it harder to build consistent security controls and policies around them.
- Increased complexity: AI models are more complex than conventional software, and their infrastructure and components evolve quickly, making them difficult to inventory and manage effectively.
- Balancing transparency and confidentiality: AIBOMs encourage detailed provenance, but that information can expose proprietary or sensitive intellectual property. An AIBOM may also be incomplete when it describes third-party models whose vendors disclose little about their internals.
AI model components to secure
SBOMs provide a comprehensive look at software libraries, but AIBOMs go much further. AIBOMs can include the physical infrastructure used to run the AI models and tools, as well as the nonphysical layers such as trust boundaries, data sources, dependencies, etc.
How in-depth an AIBOM goes will depend on your organization. As with SBOMs, it's important to determine how much detail is practical to include.
AIBOMs are similar to SBOMs, and NIST recommends including model source and version, performance metrics, dependencies, licensing information, data sources, and data types and classifications.
An AIBOM is more complex than an SBOM, though, and an inventory of AI model components can include the following:
- Physical hardware: Graphics processing units (GPUs), central processing units (CPUs), neural processing units (NPUs), tensor processing units (TPUs), memory, servers, network, and storage devices.
- Containerized components: Base images and libraries like PyTorch or TensorFlow.
- Runtime and frameworks: Training pipelines, inference servers, libraries, and proprietary code.
- Model artifacts: Models, algorithms, parameters, and version.
- Datasets: Dataset name, format, classification, version, and provenance.
- Interfaces and protocols: APIs, model user interface, tokenization processes, and protocol components.
Additionally, organizations may not be able to obtain all of this information, such as runtime and container-level components, when using a third-party AI model; the sketch below shows one way to record those gaps explicitly.
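One approach is to record unknowns explicitly rather than omitting the component, so the AIBOM still shows exactly where visibility ends. A minimal sketch, with categories mirroring the list above and all names illustrative:

```python
from dataclasses import dataclass

@dataclass
class AIBOMComponent:
    """One inventory entry; 'unknown' flags a visibility gap rather
    than silently dropping a third-party component from the AIBOM."""
    name: str
    category: str  # e.g. "model-artifact", "dataset", "runtime"
    version: str = "unknown"
    provenance: str = "unknown"

inventory = [
    AIBOMComponent("sentiment-classifier", "model-artifact", "2.3.1", "in-house"),
    AIBOMComponent("reviews-corpus", "dataset", "2024-11", "in-house"),
    # Vendor-hosted model: runtime and container details unavailable.
    AIBOMComponent("vendor-llm-api", "model-artifact"),
]

gaps = [c.name for c in inventory if c.provenance == "unknown"]
print(f"components with unknown provenance: {gaps}")
```

Flagging gaps this way gives auditors and security teams an honest picture: what the organization knows, and what its vendors have not disclosed.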
Get visibility into AI with AIBOMs
AI shouldn’t be a black box. Use AIBOMs to inventory every dependency and library in your AI stack so you can accurately identify where risks live and keep assets secure. AI introduces new threats, such as model drift and poisoned datasets, so it’s important to extend SBOM practices to AI models.
Read our full breakdown about AIBOMs to learn about AI visibility and governance best practices, AI-specific risks, and the cloud-native, containerized stack AI models run on.
