Model Cards Explained
Simple, structured model information
Making informed decisions on AI
Model cards are simple, structured overviews of how an advanced AI model was designed and evaluated, and serve as key artifacts supporting Google’s approach to responsible AI.
Google first proposed model cards in a 2018 research paper (updated in 2019), defining many of the elements that can still be found in model cards today. By making this information easy to access, model cards support responsible AI development and the adoption of robust, industry-wide standards for broad transparency and evaluation practices.
It can be helpful to think of a model card as a "nutrition label" for the underlying models that power AI applications.
Nutrition labels provide essential information about food products. Model cards do the same for AI by outlining the key facts about a model in a clear, digestible format.
By explaining how a model was built, tested, and performs, model cards make it easier to understand and compare models.
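As a concrete illustration, the kinds of facts a model card covers can be sketched as a simple data structure. The section names below loosely follow those proposed in the original model cards research paper; the field values and the `summarize` helper are purely hypothetical, not an official schema.

```python
# Illustrative sketch of a model card as a plain data structure.
# Section names loosely follow those proposed in the original model
# cards paper; all values here are hypothetical examples.
model_card = {
    "model_details": {
        "name": "example-text-classifier",  # hypothetical model
        "version": "1.0",
        "date": "2024-01-01",
    },
    "intended_use": "Classifying short English product reviews by sentiment.",
    "limitations": "Not evaluated on non-English text or long documents.",
    "training_data": "Public product-review corpora (details omitted).",
    "evaluation_results": {
        # hypothetical metric on a held-out test set
        "accuracy": 0.91,
    },
    "ethical_considerations": "May underperform on dialects absent from training data.",
}

def summarize(card: dict) -> str:
    """Render a short, human-readable summary of the card."""
    details = card["model_details"]
    return f"{details['name']} v{details['version']}: {card['intended_use']}"

print(summarize(model_card))
```

A developer comparing two candidate models could scan exactly these fields side by side, much as a shopper compares two nutrition labels.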
Model cards can be thought of as a summarized, more digestible version of detailed, academic-style technical reports, such as the Gemini 1.5 technical report. A model card is not a replacement for these deeper technical reports but a complement to them.
Some companies call these reports by other names, such as a “system card” or a “safety report.”
Google is working with multi-stakeholder organizations to help standardize model documentation.
Model cards are principally designed for people developing AI applications. They can help developers build applications that play to a model’s strengths and that avoid, or provide mitigations for, its limitations.
However, model cards can help anyone with sufficient technical knowledge understand more about how a model works.
Details such as user benefits or safety testing results can also inform policymakers, researchers, and other audiences.