The OMB use case inventory requirement explained. A complete template you can use immediately. Risk assessment framework. Human oversight requirements. You will leave this lesson with 3 draft use case entries ready for review.
OMB M-25-21 requires every federal agency to maintain a public inventory of AI use cases. This is not optional, and it is not just a paperwork exercise. The inventory serves three purposes: transparency to the public, internal governance oversight, and accountability for how AI is actually being used in government operations.
The inventory must be submitted to OMB and published publicly. Most agencies post theirs on their website under an "AI Inventory" or "AI Use Cases" section. If you want to see what a completed inventory looks like, the GSA, HHS, and DOD all have published examples you can reference.
Who contributes to the inventory? Program offices, not just IT. If your office is using or planning to use an AI tool to support your mission, you are responsible for providing the use case entry. Your CAIO or CIO shop will compile and submit the inventory, but the content has to come from people who actually know the programs.
This is why you need to know how to write one.
OMB specifies the minimum required fields for a use case inventory entry. Here is what you need to document:
| Field | What to Include |
|---|---|
| Use Case Name | A plain-English name for the AI application. Not a vendor name — describe what the AI does. |
| Summary Description | 2-3 sentences describing what the AI system does, what data it uses, and what it produces as an output. |
| Business Owner | The office or program that owns and operates this use case. Not the vendor. Not IT. |
| AI Technology Used | The specific tool, model, or platform. Include whether it is FedRAMP authorized and at what level. |
| Data Used | What data feeds the AI system. Classification level. Whether it includes PII or other sensitive categories. |
| Stage of Development | Initiated, In Development, Deployed, or Retired. |
| Impact Assessment | Does this AI system affect rights, safety, or significant government decisions? High / Medium / Low. |
| Human Oversight Mechanism | Specifically how a human reviews, validates, or can override AI outputs. Required for all use cases. |
| Benefits and Risks | Specific efficiency gains or mission improvements expected. Known risks and how they are mitigated. |
Use this template to draft your entries. You can adapt the language but keep all required fields present:
```
USE CASE NAME: [Plain English description of what the AI does]
  Example: "Meeting Summary Generation for Policy Staff"
  NOT: "ChatGPT" or "AI Assistant"

SUMMARY DESCRIPTION:
  [Agency/Office] uses [AI tool/platform] to [specific function].
  The system takes [input data type] and produces [output type].
  This capability supports [mission function] by [specific benefit].

BUSINESS OWNER: [Office name, GS-level of responsible official]

AI TECHNOLOGY: [Tool name + version/model]
  FedRAMP Status: [Authorized / In Process / Not Applicable]
  Authorization Level: [Moderate / High / IL4 / etc.]

DATA USED:
  Data Type: [Description of data inputs]
  Classification: [Unclassified / CUI / IL4 / etc.]
  Contains PII: [Yes/No — if yes, specify type]
  Data Source: [Internal system / External / Both]

STAGE: [Initiated / In Development / Deployed / Retired]

IMPACT ASSESSMENT: [High / Medium / Low]
  Rights or Safety Impact: [Yes/No — explain if Yes]
  Significant Government Decision: [Yes/No — explain if Yes]

HUMAN OVERSIGHT:
  Review Mechanism: [How humans review AI outputs before use]
  Override Capability: [How humans can override AI recommendation]
  Responsible Role: [Title of person responsible for oversight]

BENEFITS: [Specific, quantified where possible: "Reduces meeting
  summary drafting time from 45 minutes to 5 minutes per meeting,
  freeing analyst time for substantive policy work."]

KNOWN RISKS AND MITIGATIONS:
  Risk: [Specific risk]
  Mitigation: [Specific control or procedure]
```
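If you are collecting drafts from multiple program offices, a short script can catch incomplete entries before they reach your CAIO shop. A minimal sketch in Python; the snake_case keys mirror the template's field names, but this checker and its data layout are illustrative, not an OMB-prescribed format:

```python
# Required fields drawn from the use case template; the snake_case
# keys and this checker are illustrative, not an official OMB schema.
REQUIRED_FIELDS = {
    "use_case_name", "summary_description", "business_owner",
    "ai_technology", "data_used", "stage", "impact_assessment",
    "human_oversight", "benefits", "risks_and_mitigations",
}

VALID_STAGES = {"Initiated", "In Development", "Deployed", "Retired"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is complete."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - entry.keys())]
    stage = entry.get("stage")
    if stage is not None and stage not in VALID_STAGES:
        problems.append(f"invalid stage: {stage!r}")
    return problems

draft = {
    "use_case_name": "Meeting Summary Generation for Policy Staff",
    "stage": "Deployed",
}
for problem in validate_entry(draft):
    print(problem)  # flags the eight fields still missing from this draft
```

A complete entry returns an empty problem list, so the same check can gate a pull request or a shared tracking spreadsheet export.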
Not all AI use cases carry the same risk. OMB requires that you assess the impact level of your use case — specifically whether it affects individual rights, public safety, or significant government decisions. Here is a practical framework for making that determination:
**High impact.** Any AI system that directly influences decisions about benefits eligibility, law enforcement actions, immigration status, credit or housing, employment, education, healthcare, criminal justice, or other rights-affecting determinations. These require mandatory human review before any action, detailed documentation, and regular audits.
**Medium impact.** AI systems that support internal government operations, process internal data, or assist with drafting and analysis — but where a human makes all final decisions and the AI output has no direct external effect. Most document processing, internal communications, and knowledge management tools fall here.
**Low impact.** Administrative tools, productivity assistance, internal search and retrieval, meeting notes, scheduling assistance. These carry limited governance requirements but still need a human oversight description in the inventory.
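The three tiers reduce to a handful of yes/no questions. A minimal sketch of that triage logic in Python; the function name and boolean flags are my own shorthand for the questions above, and the final determination belongs to your CAIO and governance process, not a script:

```python
def impact_tier(affects_rights_or_safety: bool,
                drives_significant_decision: bool,
                supports_drafting_or_analysis: bool) -> str:
    """Rough triage following the framework above; not an official OMB test."""
    if affects_rights_or_safety or drives_significant_decision:
        return "High"    # mandatory human review, documentation, audits
    if supports_drafting_or_analysis:
        return "Medium"  # human makes all final decisions, no external effect
    return "Low"         # administrative and productivity tooling

# Benefits-eligibility screening touches individual rights:
print(impact_tier(True, True, False))    # High
# Internal document summarization with human sign-off:
print(impact_tier(False, False, True))   # Medium
# Meeting scheduling assistance:
print(impact_tier(False, False, False))  # Low
```

Note the ordering: a rights or safety impact forces High regardless of how routine the tool feels internally, which matches the mandatory-review rule in the framework.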
Every use case in the federal AI inventory must describe how humans maintain oversight of the AI system. "A human reviews the output" is not sufficient. You need to specify the review mechanism (how and when a human checks AI outputs before they are used), the override capability (how a human can reject or change an AI recommendation), and the responsible role (the title of the person accountable for that oversight).
Take the 3 task candidates you identified in Day 1 and turn them into full use case entries using the template above, working through each one field by field.
When you are done, you will have 3 draft use case entries that your CAIO or CIO could include in your agency's inventory. That is a real deliverable — not a practice exercise.
Our 3-day bootcamp includes a hands-on federal AI governance workshop — use case drafting, risk assessment, and oversight framework design. Section 127 eligible.
Reserve Your Seat →