Everything a federal employee, contractor, or small business needs to know about AI policy, compliance frameworks, authorized tools, and SBIR funding pathways in 2026.
Issued April 3, 2025, OMB Memorandum M-25-21 is the primary federal AI governance directive. It replaced M-24-10 and set binding requirements for all executive branch agencies.
Agencies must accelerate responsible AI adoption across operations, remove bureaucratic barriers to AI deployment, and prioritize use cases that improve government efficiency and mission outcomes. The memo emphasizes innovation first, with risk management proportional to risk.
Every covered agency must designate a CAIO responsible for AI governance, inventory maintenance, risk management, and workforce AI literacy. CAIOs report to agency leadership and coordinate with OMB's AI Council.
Agencies must maintain a public inventory of AI use cases at ai.gov. High-impact and rights-impacting uses require enhanced documentation, testing, and human oversight. Low-risk uses face lighter requirements to avoid over-regulation.
High-impact AI — affects rights, benefits, safety, or critical infrastructure. Requires human review, explainability documentation, bias testing, and appeals mechanisms.
Low-risk AI — administrative tasks, scheduling, internal productivity. Streamlined approval with standard security controls sufficient.
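The two tiers above can be sketched as a simple triage helper. This is an illustrative sketch only: the field names, criteria, and control lists are assumptions drawn from the descriptions above, not an official M-25-21 classification algorithm.

```python
# Hypothetical triage helper for the two M-25-21 risk tiers described above.
# Field names and control lists are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_rights_or_benefits: bool = False
    affects_safety: bool = False
    touches_critical_infrastructure: bool = False

REQUIRED_CONTROLS_HIGH_IMPACT = [
    "human review", "explainability documentation",
    "bias testing", "appeals mechanism",
]

def triage(use_case: AIUseCase) -> tuple[str, list[str]]:
    """Return (tier, required controls) for a use case."""
    if (use_case.affects_rights_or_benefits
            or use_case.affects_safety
            or use_case.touches_critical_infrastructure):
        return "high-impact", REQUIRED_CONTROLS_HIGH_IMPACT
    # Everything else gets the streamlined, low-risk path.
    return "low-risk", ["standard security controls"]

tier, controls = triage(
    AIUseCase("benefits eligibility scoring", affects_rights_or_benefits=True))
# tier == "high-impact"
```

In practice the tier determination is a governance decision made with your CAIO, not a code path, but encoding the criteria this way is a useful starting point for an internal intake form.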
Agencies may request waivers from specific M-25-21 requirements for national security or operational necessity. Waivers require CAIO and CISO sign-off plus OMB notification. Not intended as a loophole — intended for genuine mission exceptions.
90 days: Designate CAIO and update use case inventory
180 days: Publish AI strategy and workforce plan
Annual: Report AI use cases and outcomes to OMB
Ongoing: Continuous monitoring of high-impact AI systems
M-25-21 supersedes the guidance components of Biden's EO 14110 (Oct 2023), which the Trump administration rescinded in January 2025. However, underlying statutes (FISMA, Section 508, NIST mandates) remain in force independent of executive direction. Agencies should verify the current status of specific EO 14110 provisions with their General Counsel. See whitehouse.gov/ai for current policy.
Real deployments across the federal enterprise — from warfighting to welfare benefits. These represent the landscape of where AI is actively delivering mission value in 2025-2026.
Uses ML to analyze drone and satellite imagery for object detection and targeting assistance. Managed by the Chief Digital and AI Office (CDAO), formerly JAIC. One of DoD's longest-running operational AI programs.
The REACH VET program uses ML on EHR data to identify veterans at highest risk of suicide and proactively connect them with care. It screens roughly 6 million veterans annually.
Machine learning models score tax returns for fraud risk before refunds are issued, flagging suspicious patterns in real time. Estimated to have prevented billions in fraudulent refunds annually.
NLP models extract and classify data from disability claim documents, medical records, and forms — reducing manual adjudicator workload and accelerating decisions for claimants.
Computer vision on satellite imagery tracks crop health, yield forecasts, and drought impacts. Used for crop insurance programs and commodity reporting across 900M+ monitored acres.
Traveler Verification System uses facial biometrics to match arriving passengers against passport and visa photos. Deployed at 33+ major airports; processes millions of passengers monthly.
NLP and knowledge graph tools surface connections across case files, wiretap transcripts, and public records. Used by analysts to accelerate lead development in complex investigations.
AI tools review Medicare/Medicaid prior authorization requests and clinical notes for coding accuracy, fraud indicators, and medical necessity — reducing manual review backlogs.
ML models predict weather-driven capacity constraints, optimize traffic flow programs, and route aircraft around congestion. Reduces delays and fuel burn across the National Airspace System.
AI systems on the Mars rovers execute terrain navigation and science targeting autonomously due to communication delays. JPL also uses ML for spacecraft anomaly detection and downlink prioritization.
ML models forecast renewable energy output variability, optimize grid dispatch decisions, and detect anomalies in critical infrastructure control systems across the national grid.
AI analyzes import data, pricing patterns, and supply chain records to identify potential dumping, sanctions evasion, and counterfeit goods — flagging cases for trade enforcement action.
Neural machine translation processes diplomatic cables and foreign media in 70+ languages to support intelligence reporting and diplomatic preparation. Reviewed and refined by human analysts.
ML models simulate economic outcomes of sanctions regimes, tariffs, and monetary policy. FinCEN also uses AI to identify suspicious transaction patterns across financial institution reports.
Deep learning models screen molecular candidates for therapeutic potential, reducing early-stage drug discovery timelines. AI pathology tools assist researchers in analyzing tissue samples at scale.
AI-driven network monitoring identifies anomalous behavior across .gov networks in near real-time. CISA's Continuous Diagnostics and Mitigation (CDM) program increasingly incorporates ML-based threat detection.
ML models optimize last-mile delivery routes across 160M+ delivery points daily, reducing fuel costs and improving on-time delivery rates. Computer vision sorts packages automatically at processing facilities.
NLP models process unemployment insurance claims, flag fraud patterns, and route complex cases to appropriate adjudicators. Accelerated processing during COVID-19 surge demonstrated scale potential.
AI chatbots guide students through FAFSA, loan repayment options, and forgiveness programs. Analytics identify students at risk of default for proactive outreach before delinquency.
Satellite imagery analysis and sensor network ML detect pollution events, track air quality exceedances, and prioritize facility inspections based on risk scores derived from historical compliance data.
Eight frameworks every federal AI practitioner must understand. These are not optional — they're law, policy, or contractual requirements depending on your agency and system type.
| Framework | What It Covers | Who It Applies To | Status |
|---|---|---|---|
| OMB M-25-21 | Federal AI governance, CAIO requirements, use case inventory, risk classification | All executive branch agencies | Active (Apr 2025) |
| NIST AI RMF 1.0 | AI risk management across four functions: Govern, Map, Measure, Manage | Federal agencies (per M-25-21); broadly adopted by contractors | Active (Jan 2023) |
| FedRAMP | Cloud service security authorization for federal use. Moderate = most agencies; High = sensitive/law enforcement | All cloud services used by federal agencies | Active |
| DoD IL4 / IL5 / IL6 | Impact levels for DoD cloud: IL4 = CUI, IL5 = CUI + national security, IL6 = Secret | DoD and DoD contractors | DoD Only |
| Section 508 | Accessibility requirements for federal IT; AI tools must be usable by people with disabilities | All federal agencies and funded recipients | Active |
| EO 14110 (Biden) | Sweeping AI safety, security, and equity requirements; many provisions rescinded Jan 2025 | Was: all agencies. Most provisions rescinded by EO 14179 (Trump) | Largely Rescinded |
| FISMA | Federal Information Security Modernization Act. Requires risk-based security programs for all federal systems including AI | All federal agencies and contractors | Active (statutory) |
| ATO (Authority to Operate) | FISMA-mandated formal authorization process before deploying any federal IT/AI system | All federal systems | Required |
GOVERN: Establish AI risk management policies, roles, and culture.
MAP: Identify AI risks and context for each use case.
MEASURE: Quantify, assess, and test AI risks using metrics.
MANAGE: Treat, respond, and monitor identified risks.
Download the full framework at airmf.nist.gov.
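The four functions above can be kept on hand as a structured checklist. A minimal sketch follows; the example activities under each function are illustrative paraphrases of the summaries above, not text from the framework itself.

```python
# Minimal sketch of the four NIST AI RMF 1.0 functions as a lookup.
# Activity lists are illustrative assumptions, not framework text.
AI_RMF_FUNCTIONS = {
    "GOVERN":  ["establish AI risk policies", "assign roles", "build risk culture"],
    "MAP":     ["identify risks per use case", "document context and intended use"],
    "MEASURE": ["define metrics", "test and quantify identified risks"],
    "MANAGE":  ["treat and respond to risks", "monitor deployed systems"],
}

def checklist(function: str) -> list[str]:
    """Look up example activities for an RMF function (case-insensitive)."""
    return AI_RMF_FUNCTIONS[function.upper()]
```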
1. Categorize — FIPS 199 system categorization (Low/Mod/High)
2. Select — Choose NIST SP 800-53 security controls
3. Implement — Deploy controls including AI-specific ones (model cards, bias testing)
4. Assess — Independent Security Assessment (3PAO for FedRAMP)
5. Authorize — Authorizing Official (AO) signs ATO
6. Monitor — Continuous monitoring and annual assessment
AI-specific additions: document training data provenance, test for demographic bias, maintain explainability artifacts.
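The six steps above can be sketched as an ordered pipeline, with the AI-specific artifacts folded in. The step names follow the NIST Risk Management Framework as summarized above; the detail strings and helper function are illustrative assumptions.

```python
# Sketch of the six-step ATO process as an ordered checklist.
# Detail strings are paraphrased summaries, not authoritative requirements.
ATO_STEPS = [
    ("Categorize", "FIPS 199 categorization (Low/Moderate/High)"),
    ("Select",     "choose NIST SP 800-53 controls"),
    ("Implement",  "deploy controls, incl. model cards and bias testing"),
    ("Assess",     "independent assessment (3PAO for FedRAMP)"),
    ("Authorize",  "Authorizing Official signs the ATO"),
    ("Monitor",    "continuous monitoring, annual assessment"),
]

AI_ARTIFACTS = [
    "training data provenance",
    "demographic bias test results",
    "explainability artifacts",
]

def next_step(completed: int) -> str:
    """Given how many steps are done, name the next one."""
    if completed >= len(ATO_STEPS):
        return "operational (continuous monitoring)"
    name, detail = ATO_STEPS[completed]
    return f"{completed + 1}. {name}: {detail}"
```

Tracking progress this way makes it easy to report ATO status to your CAIO alongside the AI-specific artifact list.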
FedRAMP Low — public-facing, no PII. Very few AI tools operate here.
FedRAMP Moderate — most civilian agency use; covers CUI categories. ChatGPT Enterprise and Microsoft 365 Copilot (GCC) are authorized at this level.
FedRAMP High — law enforcement, emergency services, health. AWS Bedrock GovCloud, Azure Government, and Google Cloud for Government achieve High.
Check current authorizations at marketplace.fedramp.gov.
IL2 — Publicly releasable, no CUI. Commercial clouds.
IL4 — Controlled Unclassified Information (CUI). Non-national security. Azure Government, AWS GovCloud (US) achieve IL4.
IL5 — CUI + national security systems. Higher bar. Microsoft Azure Government (IL5 authorized), AWS GovCloud IL5.
IL6 — Classified SECRET. Requires dedicated Secret cloud (e.g., Microsoft Azure Government Secret, AWS C2S).
Most commercial AI tools are IL2 or IL4 at best. IL5 AI requires special deployment architecture.
The Small Business Innovation Research program is the most accessible federal funding path for AI startups and small businesses. $3-4B+ distributed annually. Here is the full landscape.
Phase I — Prove your technical concept is feasible. Short performance period (6-12 months). Competitive application with technical proposal. No prototype required, but strong preliminary data helps.
Phase II — Build and validate your prototype. Much larger award. Requires Phase I completion (usually). This is where serious product development happens. Direct-to-Phase-II (D2P2) is available at some agencies.
Phase III — Transition to procurement; no SBIR funds are awarded. Agencies can sole-source Phase III contracts to Phase II winners without competition. This is where SBIR becomes revenue.
STTR — Like SBIR, but requires a formal partnership with a university or federally funded R&D center (FFRDC). At least 30% of the effort must be performed at the research institution. Ideal for academic AI spinouts.
DoD / OSD SBIR — largest AI SBIR volume. CDAO, DARPA, Army DEVCOM, Navy NAVAIR, Air Force AFRL all run separate solicitations. Apply via dodsbirsttr.mil.
DARPA BAAs — Broad Agency Announcements for high-risk, high-reward research. Not SBIR, but open to small businesses. Visit darpa.mil.
NSF SBIR — strong for foundational AI, ML tools, and AI-enabled products. Apply via seedfund.nsf.gov. America's Seed Fund makes 400+ Phase I awards annually.
NIH SBIR — health AI, clinical ML, digital therapeutics. Apply via grants.nih.gov. 3 cycles per year.
Other Agencies — DHS, DOE, NASA, USDA, and Commerce (NIST) all participate. Track all open solicitations at sbir.gov.
OTA Consortia — Other Transaction Authority agreements bypass FAR. NSTXL, NTSA, AFWERX, DIU all offer non-SBIR AI contracts open to small businesses. Faster cycle times than traditional procurement.
You must be registered in SAM.gov (System for Award Management) before submitting any SBIR proposal or receiving federal funds. Registration is free and takes 1-3 weeks. You will receive a UEI (Unique Entity Identifier) and eventually a CAGE code. Renew annually. Register at sam.gov. For DoD SBIR, also register in the Defense SBIR/STTR Innovation Portal (DSIP) at dodsbirsttr.mil.
Not all AI tools are created equal for government use. Below is the current authorization status of major AI platforms as of April 2026. Always verify with your agency CISO and check marketplace.fedramp.gov before deploying.
| Tool / Platform | Vendor | FedRAMP Level | DoD IL | Notes for Federal Users |
|---|---|---|---|---|
| Claude via AWS Bedrock GovCloud | Anthropic / Amazon Web Services | FedRAMP High | IL5 | Accessed through the AWS GovCloud (US) Bedrock service. Suitable for CUI and sensitive agency workloads. Data stays within the GovCloud boundary. |
| Microsoft Copilot (GCC High) | Microsoft | FedRAMP High | IL5 | Government Community Cloud High environment. Integrates with M365 GCC High. Suitable for DoD and sensitive agency use. Requires a GCC High tenant. |
| Azure OpenAI Service (Government) | Microsoft | FedRAMP High | IL5 | GPT-4 models in the Azure Government region. FedRAMP High authorized. Many DoD programs use this for AI workloads requiring IL4/IL5 compliance. |
| ChatGPT Enterprise | OpenAI | FedRAMP Moderate | IL2 only | Suitable for unclassified, non-CUI civilian agency workloads. Not approved for DoD sensitive work. Consumer chatgpt.com is NOT authorized for any federal use. |
| Gemini for Google Workspace (Government) | Google | FedRAMP Moderate | IL2-IL4 | Available in Google Workspace for Government. Google Cloud has FedRAMP High; Gemini integration for High workloads varies by product. Verify current status. |
| Amazon Q (Business / Developer) | Amazon Web Services | FedRAMP High | IL5 | Available in AWS GovCloud. Enterprise AI assistant for agencies already on AWS infrastructure. Supports RAG over agency document libraries. |
| DeepSeek (any model) | DeepSeek AI (China) | Not Authorized | Banned | Banned on government devices by the Navy, NASA, Congress, and multiple agencies. Chinese origin presents a national security risk. Do not use for any federal work. |
| Consumer AI tools (Claude.ai, Gemini, Perplexity, etc.) | Various | Not Authorized | Not Authorized | Consumer-tier AI products do not meet federal data handling requirements. Do not input any CUI, PII, or sensitive agency data. Personal use only, on personal devices. |
If your data is public / unclassified / no CUI — FedRAMP Moderate tools are generally acceptable (with agency approval).
If your data is CUI / FOUO / sensitive but unclassified — require FedRAMP High or IL4 at minimum.
If your data is national security / DoD sensitive — IL5 required. Very few commercial AI tools qualify.
Always get written CISO approval before using any AI tool with government data.
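The three rules above reduce to a small lookup. A hedged sketch follows: the data-class labels and wording are simplified assumptions for illustration, and the actual determination always rests with your CISO and Authorizing Official.

```python
# Sketch of the data-sensitivity rules above as a lookup table.
# Labels are simplified assumptions; final calls belong to your CISO/AO.
MIN_AUTHORIZATION = {
    "public":            "FedRAMP Moderate (with agency approval)",
    "cui":               "FedRAMP High or DoD IL4 minimum",
    "national_security": "DoD IL5 required",
}

def minimum_authorization(data_class: str) -> str:
    """Map a coarse data classification to the minimum tool authorization."""
    try:
        return MIN_AUTHORIZATION[data_class]
    except KeyError:
        # Anything unrecognized should stop the workflow, not default down.
        raise ValueError(
            f"unknown data class: {data_class!r}; consult your CISO")
```

Note the design choice: an unknown classification raises rather than falling back to a permissive default, mirroring the written-approval rule above.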
Whether you're a program manager, analyst, contractor, or agency leader — here is the fastest compliant path from AI novice to AI practitioner in federal government.
Find your CAIO. Read your agency's AI strategy (required by M-25-21). Check what tools your agency has already approved — many have a pre-approved software list. Don't assume commercial tools are available just because colleagues use them personally.
OMB M-25-21 requires agencies to build AI literacy across the workforce. GSA's AI training, OPM guidance, and agency-specific programs are free. For hands-on applied skills, the Precision AI Academy bootcamp provides federal-context AI training with real use cases — including prompt engineering, AI policy navigation, and deploying AI in government workflows.
Start with a low-risk, high-frequency task: drafting communications, summarizing lengthy reports, coding routine data analysis, or searching policy documents. Document the current state (time spent, error rate) so you can demonstrate ROI after implementing AI.
Work with your CISO and IT team to identify which FedRAMP-authorized AI tools are approved for your data sensitivity level. Submit a formal request if needed. Document your use case and data types. Never skip this step — using unauthorized tools with government data is a FISMA violation.
Run a 30-day pilot. Measure outcomes: time saved, quality improvement, error reduction. Document your methodology per NIST AI RMF (Govern/Map/Measure). If results are positive, write up a brief for your CAIO — this is how AI use cases get added to agency inventories and scaled.
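The baseline-versus-pilot comparison described above can be captured in a few lines. This is an illustrative sketch: the metric names and the summary formula are assumptions, not a prescribed federal methodology.

```python
# Illustrative sketch of the 30-day pilot measurement described above.
# Metric names and formulas are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TaskMetrics:
    minutes_per_task: float
    error_rate: float        # fraction of outputs needing rework
    tasks_per_month: int

def pilot_summary(baseline: TaskMetrics, pilot: TaskMetrics) -> dict:
    """Compare pilot results against the documented baseline."""
    hours_saved = ((baseline.minutes_per_task - pilot.minutes_per_task)
                   * pilot.tasks_per_month / 60)
    return {
        "hours_saved_per_month": round(hours_saved, 1),
        "error_rate_change": round(pilot.error_rate - baseline.error_rate, 3),
    }

summary = pilot_summary(
    TaskMetrics(minutes_per_task=45, error_rate=0.08, tasks_per_month=120),
    TaskMetrics(minutes_per_task=15, error_rate=0.05, tasks_per_month=120),
)
# (45 - 15) min * 120 tasks / 60 = 60.0 hours saved per month
```

Numbers like these, recorded before and after the pilot, are exactly what a CAIO brief and the NIST AI RMF Measure function call for.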
If your AI solution solves a problem others in government have, it may qualify for SBIR funding or an OTA contract. Work with your contracting officer to determine if an AI procurement or R&D solicitation is appropriate. Small businesses can respond to agency SBIR solicitations — or agencies can issue BAAs for novel problems they need solved.
The eight questions we hear most often from federal employees, contractors, and SBIR applicants.
Primary sources every federal AI practitioner should bookmark.