AI for Federal Employees / Day 3 of 5

Writing AI Use Cases for Your Agency

The OMB use case inventory requirement explained. A complete template you can use immediately. Risk assessment framework. Human oversight requirements. You will leave this lesson with 3 draft use case entries ready for review.

50 min read · Template included · 3 use cases produced

The Use Case Inventory Requirement

OMB M-25-21 requires every federal agency to maintain a public inventory of AI use cases. This is not optional, and it is not just a paperwork exercise. The inventory serves three purposes: transparency to the public, internal governance oversight, and accountability for how AI is actually being used in government operations.

The inventory must be submitted to OMB and published publicly. Most agencies post theirs on their website under an "AI Inventory" or "AI Use Cases" section. If you want to see what a completed inventory looks like, the GSA, HHS, and DOD all have published examples you can reference.

Who contributes to the inventory? Program offices, not just IT. If your office is using or planning to use an AI tool to support your mission, you are responsible for providing the use case entry. Your CAIO or CIO shop will compile and submit the inventory, but the content has to come from people who actually know the programs.

This is why you need to know how to write one.

What a Use Case Entry Must Contain

OMB specifies the minimum required fields for a use case inventory entry. Here is what you need to document:

Use Case Name: A plain-English name for the AI application. Not a vendor name; describe what the AI does.
Summary Description: 2-3 sentences describing what the AI system does, what data it uses, and what it produces as an output.
Business Owner: The office or program that owns and operates this use case. Not the vendor. Not IT.
AI Technology Used: The specific tool, model, or platform, including whether it is FedRAMP authorized and at what level.
Data Used: What data feeds the AI system, its classification level, and whether it includes PII or other sensitive categories.
Stage of Development: Initiated, In Development, Deployed, or Retired.
Impact Assessment: Does this AI system affect rights, safety, or significant government decisions? High / Medium / Low.
Human Oversight Mechanism: Specifically how a human reviews, validates, or can override AI outputs. Required for all use cases.
Benefits and Risks: Specific efficiency gains or mission improvements expected, plus known risks and how they are mitigated.
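If you track draft entries before handing them to your CAIO or CIO shop, it can help to hold them in a simple structured form. Below is a minimal sketch of an entry as a Python dataclass; the field names mirror the table above and are illustrative only, since OMB's actual submission schema may use different names.

```python
from dataclasses import dataclass, fields

@dataclass
class AIUseCaseEntry:
    """Illustrative draft entry; field names follow the table above,
    not any official OMB submission schema."""
    name: str                # plain-English name, not a vendor name
    summary: str             # 2-3 sentences: function, data, output
    business_owner: str      # owning program office, not the vendor
    ai_technology: str       # tool/model plus FedRAMP status and level
    data_used: str           # inputs, classification, PII status
    stage: str               # Initiated / In Development / Deployed / Retired
    impact: str              # High / Medium / Low
    human_oversight: str     # review mechanism, override, responsible role
    benefits_and_risks: str  # expected gains plus known risks and mitigations

    def missing_fields(self) -> list[str]:
        """Return the names of any required fields left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]
```

A drafting aid like `missing_fields()` makes it easy to spot entries that are not yet ready for compilation into the agency inventory.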

The Use Case Template

Use this template to draft your entries. You can adapt the language but keep all required fields present:

Federal AI Use Case Template
USE CASE NAME: [Plain English description of what the AI does]
  Example: "Meeting Summary Generation for Policy Staff"
  NOT: "ChatGPT" or "AI Assistant"

SUMMARY DESCRIPTION:
  [Agency/Office] uses [AI tool/platform] to [specific function].
  The system takes [input data type] and produces [output type].
  This capability supports [mission function] by [specific benefit].

BUSINESS OWNER: [Office name, GS-level of responsible official]

AI TECHNOLOGY: [Tool name + version/model]
  FedRAMP Status: [Authorized / In Process / Not Applicable]
  Authorization Level: [Moderate / High / IL4 / etc.]

DATA USED:
  Data Type: [Description of data inputs]
  Classification: [Unclassified / CUI / IL4 / etc.]
  Contains PII: [Yes/No — if yes, specify type]
  Data Source: [Internal system / External / Both]

STAGE: [Initiated / In Development / Deployed / Retired]

IMPACT ASSESSMENT: [High / Medium / Low]
  Rights or Safety Impact: [Yes/No — explain if Yes]
  Significant Government Decision: [Yes/No — explain if Yes]

HUMAN OVERSIGHT:
  Review Mechanism: [How humans review AI outputs before use]
  Override Capability: [How humans can override AI recommendation]
  Responsible Role: [Title of person responsible for oversight]

BENEFITS:
  [Specific, quantified where possible: "Reduces meeting summary
  drafting time from 45 minutes to 5 minutes per meeting,
  freeing analyst time for substantive policy work."]

KNOWN RISKS AND MITIGATIONS:
  Risk: [Specific risk]
  Mitigation: [Specific control or procedure]
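If you draft entries in this plain-text format, a short script can flag incomplete drafts before review. This is purely an illustrative drafting aid: the heading strings and the bracketed-placeholder convention come from the template above, not from any official tooling.

```python
import re

# Top-level headings from the template above (illustrative, not official).
REQUIRED_HEADINGS = [
    "USE CASE NAME:", "SUMMARY DESCRIPTION:", "BUSINESS OWNER:",
    "AI TECHNOLOGY:", "DATA USED:", "STAGE:", "IMPACT ASSESSMENT:",
    "HUMAN OVERSIGHT:", "BENEFITS:", "KNOWN RISKS AND MITIGATIONS:",
]

def check_entry(text: str) -> list[str]:
    """Return a list of problems found in a drafted entry:
    missing sections and bracketed placeholders left unfilled."""
    problems = [f"missing section: {h}" for h in REQUIRED_HEADINGS if h not in text]
    problems += [f"unfilled placeholder: {m}" for m in re.findall(r"\[[^\]]*\]", text)]
    return problems
```

An empty result means every section is present and no `[...]` placeholder remains; it does not, of course, mean the content is substantively correct.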

Risk Assessment Framework for Government AI

Not all AI use cases carry the same risk. OMB requires that you assess the impact level of your use case — specifically whether it affects individual rights, public safety, or significant government decisions. Here is a practical framework for making that determination:

High Impact Use Cases (require enhanced review)

Any AI system that directly influences decisions about: benefits eligibility, law enforcement actions, immigration status, credit or housing, employment, education, healthcare, criminal justice, or other rights-affecting determinations. These require mandatory human review before any action, detailed documentation, and regular audits.

Medium Impact Use Cases

AI systems that support internal government operations, process internal data, or assist with drafting and analysis — but where a human makes all final decisions and the AI output has no direct external effect. Most document processing, internal communications, and knowledge management tools fall here.

Low Impact Use Cases

Administrative tools, productivity assistance, internal search and retrieval, meeting notes, scheduling assistance. These have limited governance requirements but still need a human oversight description in the inventory.
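The three-tier framework above reduces to two questions: does the system influence rights-affecting or safety-affecting determinations, and does it feed into decisions at all? The helper below is a rough sketch of that logic; it is a simplification for triage, not a substitute for your agency's formal impact determination process.

```python
def impact_level(rights_or_safety: bool, supports_decisions: bool) -> str:
    """Rough triage of a use case as High, Medium, or Low impact.

    rights_or_safety: the AI directly influences rights- or
        safety-affecting determinations (benefits eligibility,
        law enforcement, immigration, housing, etc.).
    supports_decisions: the AI assists drafting or analysis that
        feeds into decisions, but a human makes every final call.
    """
    if rights_or_safety:
        return "High"    # enhanced review, mandatory human review, audits
    if supports_decisions:
        return "Medium"  # internal operations, no direct external effect
    return "Low"         # administrative and productivity tools
```

For example, a benefits-eligibility screening aid is High no matter how good its oversight is, while a meeting-notes tool that touches no decisions is Low.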

Human Oversight: The Non-Negotiable Requirement

Every use case in the federal AI inventory must describe how humans maintain oversight of the AI system. "A human reviews the output" is not sufficient. You need to specify:

  1. Who reviews: the role or title of the person responsible for oversight.
  2. What they review: which AI outputs are checked, and against what standard.
  3. When they review: before the output is used, submitted, or acted on.
  4. How they can override: the mechanism for modifying or rejecting any AI output.

Practical tip: For most medium-impact use cases, a reasonable oversight description reads like: "The [role] reviews all AI-generated [output type] for accuracy and completeness before submission or action. The reviewer has full authority to modify or reject any AI output. Final decisions are made by [role] based on their professional judgment."

Day 3 Exercise

Draft 3 AI Use Cases for Your Office

Take the 3 task candidates you identified in Day 1 and turn them into full use case entries using the template above. Work through each one:

  1. Write the use case name. Make it descriptive and functional: "Contract Requirements Summarization for Acquisition Staff" rather than "AI Tool."
  2. Complete the summary description using the three-sentence structure: what the AI does, what data it uses, what it produces.
  3. Determine the impact level using the framework above. Most tasks identified from Day 1 will be Low or Medium. If any are High, note that enhanced review procedures will be required.
  4. Write the human oversight description using the four-element structure: who, what, when, how.
  5. Identify at least one specific risk and its mitigation for each use case.

When you are done, you will have 3 draft use case entries that your CAIO or CIO shop could include in your agency's inventory. That is a real deliverable, not a practice exercise.

Key Takeaways from Day 3

  1. OMB M-25-21 requires every agency to maintain a public AI use case inventory, and the entries come from program offices, not just IT.
  2. Every entry must cover the required fields: name, description, business owner, technology, data, stage, impact level, human oversight, and benefits and risks.
  3. Impact level (High / Medium / Low) turns on whether the system affects rights, safety, or significant government decisions; High-impact use cases require enhanced review.
  4. Human oversight descriptions must be specific: who reviews, what they review, when, and how they can override. "A human reviews the output" is not sufficient.

Want to go deeper on federal AI governance?

Our 3-day bootcamp includes a hands-on federal AI governance workshop — use case drafting, risk assessment, and oversight framework design. Section 127 eligible.

Reserve Your Seat →