AI for Managers: Day 1 of 5

What AI Can and Can't Do — The Manager's Mental Model

Before you evaluate tools, build business cases, or lead pilots, you need a clear mental model of what AI actually does. This lesson gives you a framework that cuts through the hype and lets you make sharper decisions about when AI is the right answer — and when it isn't.

45 min read · 1 exercise · Reusable framework

The Problem with How AI Gets Presented to Managers

Every vendor pitching you an AI tool wants you to believe it can do everything. Every breathless news article makes AI sound like it will either save civilization or end it. Neither is useful to you as a manager trying to run a team and hit quarterly goals.

What you need is not enthusiasm or anxiety — you need a working mental model. A way of thinking about what AI does that is accurate enough to make good decisions, practical enough to apply quickly, and durable enough to use as AI continues to evolve.

Here is one that holds up: AI is a pattern-completion engine. It learns patterns from enormous amounts of existing data and uses those patterns to complete new inputs. That's it. When you type a prompt into ChatGPT, you are providing the beginning of a pattern, and the AI is completing it based on what it has seen in training data.

This simple model explains both AI's capabilities and its failures — and it will help you evaluate any AI claim in about 30 seconds.

The 4 Types of AI Tasks

Everything you will ever be pitched by an AI vendor, everything your team will ever ask about, falls into one of four categories. Understanding these categories is the single most useful framework a non-technical manager can have.

Type 1: Generation
AI creates content from a prompt or set of inputs. This is where most consumer AI use cases live. AI is very good at generation tasks — but the output always needs human review.
Examples: drafting emails, writing meeting summaries, creating first-draft reports, generating social content, writing job descriptions

Type 2: Analysis
AI processes existing content and produces structured insight. Analysis tasks include summarization, extraction, comparison, and synthesis. AI is strong here when given clear instructions.
Examples: summarizing long documents, extracting key data from reports, comparing contract terms, synthesizing customer feedback, analyzing financial data

Type 3: Classification
AI assigns inputs to categories based on learned patterns. This is where AI earns its keep in operations — sorting, routing, flagging, and prioritizing at scale. Very high ROI when well-configured.
Examples: routing support tickets, categorizing incoming requests, flagging anomalies in data, scoring lead quality, tagging content

Type 4: Conversation
AI engages in natural language dialogue to answer questions, guide processes, or assist users. The most visible AI category — chatbots, assistants, and customer-facing AI. Quality varies enormously by implementation.
Examples: internal help desks, customer service bots, employee onboarding assistants, knowledge base Q&A, meeting assistants
How to use this in practice: The next time someone presents you with an AI tool, ask: "What type of AI task is this?" If they can't answer in one sentence — generation, analysis, classification, or conversation — that is a signal they don't understand the product they're selling.
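To make the classification category (Type 3) concrete, here is a deliberately simple sketch of ticket routing. Real systems use trained models rather than keyword lists, and the queue names and keywords below are invented for illustration — but the shape of the task is the same: input in, category out, and low-confidence cases escalated to a human rather than guessed.

```python
# Toy Type 3 (classification) example: route support tickets by keyword.
# Illustrative only -- production systems use trained models, but the
# task shape (input -> category, with a human-review fallback) is identical.

ROUTES = {
    "billing": ["invoice", "refund", "charge", "payment"],
    "technical": ["error", "crash", "bug", "login"],
    "sales": ["pricing", "upgrade", "demo", "quote"],
}

def route_ticket(text: str) -> str:
    """Return the best-matching queue, or 'human_review' when unsure."""
    text = text.lower()
    scores = {queue: sum(word in text for word in words)
              for queue, words in ROUTES.items()}
    best = max(scores, key=scores.get)
    # No keyword matched: escalate instead of guessing.
    return best if scores[best] > 0 else "human_review"

print(route_ticket("I was charged twice on my last invoice"))  # billing
print(route_ticket("The app crashes on login"))                # technical
print(route_ticket("Can someone call me back?"))               # human_review
```

Note the fallback: the most important design decision in any classification deployment is not the model, it is what happens to the inputs the system cannot confidently categorize.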

Where AI Reliably Excels

Within those four task types, AI performs best when the following conditions are present:

  1. The task follows well-established patterns: the work resembles things that have been done many times before.
  2. A human can verify the output quickly and cheaply.
  3. Occasional errors are tolerable, or are easy to catch before they cause harm.
  4. All the context the task needs can be provided in the input.

Where AI Reliably Fails

Understanding AI's failure modes is as important as understanding its strengths. Here is where managers consistently get burned:

Novel reasoning about complex situations

AI is pattern completion. It is not reasoning in the human sense. Ask it to analyze a situation it has not seen in training data — a truly novel business problem, an unusual regulatory situation, an unprecedented organizational challenge — and it will produce something that sounds confident but may be nonsense. It completes the pattern of "what a confident answer looks like" rather than actually reasoning through the problem.

Factual accuracy with specific numbers, dates, and citations

AI hallucinates. This is not a bug that will be fixed — it is a property of how the technology works. AI will invent specific facts, fabricate citations to papers that do not exist, and state incorrect statistics with complete confidence. Any AI output that includes specific numbers, named sources, or specific dates must be independently verified.

Judgment calls that require context you haven't provided

AI only knows what you tell it. It does not know your organization's history, your team's dynamics, the political sensitivities of a particular stakeholder, or the unwritten rules that govern how decisions get made in your context. When AI gives you advice on complex organizational situations, it is working without this context — and the gaps show.

Highly creative or genuinely original work

AI is very good at producing competent, conventional work quickly. It is not good at genuine originality. The insights that come from combining two things nobody has combined before, the creative leaps that change how an industry thinks, the contrarian position that turns out to be right — these require human judgment. AI can assist with execution but rarely with the generative insight.

The Manager's BS Detector for AI Claims

You will be pitched a lot of AI tools. Here is a five-question filter that surfaces the most important information in ten minutes:

  1. What specific task does this do?
     Good answer: "It classifies incoming support tickets into 12 categories and routes them to the right team."
     Red flag: "It uses AI to transform your entire operations." (No specifics.)

  2. How accurate is it?
     Good answer: A specific accuracy rate with test methodology: "92% accuracy on a held-out test set of 10,000 tickets."
     Red flag: "It's very accurate" or "it gets better over time." (No numbers.)

  3. What happens when it's wrong?
     Good answer: A clear description of failure modes and how humans catch and correct errors.
     Red flag: "It rarely makes mistakes." (Evasion — all AI makes mistakes.)

  4. Can you show me a live demo on our actual data?
     Good answer: Yes, with a demo on realistic data similar to yours.
     Red flag: "We'll set that up in phase 2." (They know it won't perform on real data.)

  5. What do customers who have been using this for a year say?
     Good answer: Specific reference customers you can call, with measurable outcomes they've achieved.
     Red flag: "We're still early in rollout." (No proven track record.)

Day 1 Exercise

Categorize 10 Tasks in Your Team

Think about the work your team does on a weekly basis. List 10 recurring tasks — the things that happen every week or month. Then, for each one, make two assessments:

  1. AI task type: Is this primarily generation, analysis, classification, or conversation? Some tasks are hybrid — pick the dominant type.
  2. AI suitability: Given what you know about where AI excels and where it fails, rate each task: Strong candidate / Possible with human review / Not suitable.

For each task you rate "Strong candidate" or "Possible," note the one specific condition that makes it suitable — or the one concern that needs to be addressed before deployment.

Keep this list. You will use it in Day 2 to frame your tool evaluation and in Day 3 to build a business case.
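If you want to keep the exercise output in a form you can filter and carry into Day 2 and Day 3, a minimal sketch follows. The task names, ratings, and notes are placeholders, not examples from the lesson:

```python
# Structured version of the Day 1 exercise: each task gets an AI task
# type, a suitability rating, and the one condition or concern that
# drives the rating. Entries here are illustrative placeholders.

tasks = [
    {"task": "Draft weekly status email", "type": "generation",
     "rating": "strong", "note": "output is easy to review before sending"},
    {"task": "Summarize customer calls", "type": "analysis",
     "rating": "possible", "note": "verify any quoted numbers"},
    {"task": "Set quarterly team goals", "type": "analysis",
     "rating": "not_suitable", "note": "requires org context AI lacks"},
]

# Pull out the candidates to carry forward into tool evaluation (Day 2)
# and the business case (Day 3).
candidates = [t for t in tasks if t["rating"] in ("strong", "possible")]
for t in candidates:
    print(f"{t['task']} ({t['type']}): {t['note']}")
```

The "note" field matters most: it captures the condition or concern the exercise asks for, which is exactly what a business case will need to address.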

Key Takeaways from Day 1

  1. AI is a pattern-completion engine: it completes new inputs based on patterns learned from training data. This one model explains both its strengths and its failures.
  2. Every AI task falls into one of four types: generation, analysis, classification, or conversation. If a vendor can't name the type in one sentence, they don't understand their own product.
  3. AI reliably fails at novel reasoning, specific facts and citations, judgment calls that need context you haven't provided, and genuinely original work. Verify accordingly.
  4. Five specific questions (task, accuracy, failure handling, live demo, reference customers) surface what matters in any vendor pitch in about ten minutes.

Want to bring this framework to your whole leadership team?

Our 3-day AI bootcamp includes a full-day leadership module with live vendor evaluation workshops. Five cities. $1,490 per seat.
