Free Course · Day 1 of 5 · ~45 minutes

Day 1: Your First AI API Call

In the next 45 minutes, you'll connect to the most powerful AI model on earth and make it do real work. Not through a chat window — through code you write yourself.

What You'll Build

By the end of this lesson, you'll have Python scripts that send prompts to Claude's API and get intelligent responses back. You'll understand how every AI application — from ChatGPT to GitHub Copilot to autonomous agents — actually works under the hood. And you'll have two tools you can use at work tomorrow.

Section 1 · 10 min

Set Up Your Environment

You need two things: Python 3.8 or newer (3.12 recommended) and the Anthropic SDK. No Docker, no cloud environment, no special hardware. This runs on any laptop made in the last five years.

Step 1: Check Python

Open a terminal (Terminal on Mac, Command Prompt or PowerShell on Windows) and run:

bash
$ python --version

You should see something like Python 3.12.3. Anything 3.8 or higher is fine. (On Windows, try py --version if python isn't recognized.) If Python isn't installed or your version is below 3.8, go to python.org, download 3.12+, and run the installer. Accept all defaults.

Windows note: During installation, check the box that says "Add Python to PATH." If you skip this, your terminal won't find Python. If you already installed without it, reinstall and check the box.

Step 2: Install the Anthropic SDK

With Python working, install the SDK in one command:

bash
$ pip install anthropic

This downloads the official Python library that handles authentication, request formatting, and error handling for Claude's API. Without it, you'd need to write raw HTTP requests — possible, but unnecessary.
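To see what the SDK saves you, here's a sketch of the kind of raw request it assembles on every call — endpoint, headers, and payload only, nothing is actually sent. The header names and version string reflect Anthropic's public HTTP API at the time of writing; check the official docs before relying on them.

```python
import json
import os

# The endpoint the SDK talks to (sketch -- we only build the request here).
API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str) -> tuple[dict, bytes]:
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",  # API version header the service expects
        "content-type": "application/json",
    }
    payload = {
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(payload).encode()
```

The SDK also retries transient failures and raises typed errors — all things you'd otherwise write yourself on top of this.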

Step 3: Get Your API Key

Your API key is the credential that lets Claude's servers know who you are and what account to bill. Getting one takes about two minutes:

1. Go to console.anthropic.com and create a free account.

2. Navigate to API Keys in the left sidebar and click Create Key.

3. Copy the key immediately — it's only shown once. It looks like: sk-ant-api03-xxxxxxxxxxxxx

Never commit your API key to GitHub. If you accidentally push a key to a public repo, revoke it immediately in the console and generate a new one. GitHub scanners find exposed API keys within minutes, and Anthropic will see the usage on your bill.

Step 4: Store Your Key as an Environment Variable

Rather than typing your key directly into code, you store it as an environment variable. The Anthropic SDK automatically reads it. This pattern — credentials in the environment, not the code — is standard practice in every professional Python codebase.

bash Mac / Linux
$ export ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"
powershell Windows PowerShell
$env:ANTHROPIC_API_KEY = "sk-ant-api03-your-key-here"

This sets the variable for your current terminal session. To make it permanent, add the export line to your ~/.zshrc or ~/.bashrc file (Mac/Linux), or set it in System Environment Variables (Windows). Environment variables are one of those things you set once and forget — the SDK finds it automatically every time.
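Before moving on, it's worth a ten-second sanity check that the variable is actually visible to Python. A minimal sketch — the sk-ant- prefix check assumes the current key format:

```python
import os

def api_key_is_set() -> bool:
    # The SDK reads this exact variable name; keys currently start with "sk-ant-".
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    return key.startswith("sk-ant-")

if __name__ == "__main__":
    print("Key found" if api_key_is_set() else "ANTHROPIC_API_KEY is missing or malformed")
```

Remember that the variable only exists in terminals opened after you set it permanently — a stale terminal is the most common cause of "key not found" errors.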

Section 2 · 10 min

Your First API Call

Create a new file called first_call.py in any folder. Write this exactly:

python first_call.py
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an API is in exactly 3 sentences."}
    ]
)

print(message.content[0].text)

Run it:

bash
$ python first_call.py

You just made your first AI API call. In about one second, Claude processed your request, generated a response, and sent it back across the internet to your terminal. Every AI application in the world — ChatGPT, GitHub Copilot, autonomous agents, AI-powered customer support — does exactly what you just did: sends text to a model and gets text back. The difference is what you do with that text afterward.

Let's break down every line so you know exactly what you wrote:

Line-by-Line Breakdown
import anthropic
Loads the SDK you installed. This gives you the Anthropic class and everything it knows about talking to Claude's servers.
client = anthropic.Anthropic()
Creates a client object. It automatically reads your ANTHROPIC_API_KEY from the environment and sets up authenticated connections to Claude's API.
model="claude-sonnet-4-20250514"
Specifies which Claude model to use. Sonnet is the best balance of speed and intelligence — fast enough for interactive use, smart enough for complex tasks.
max_tokens=1024
The maximum length of Claude's response. 1 token ≈ 4 characters. 1024 tokens ≈ 750 words. If a response would exceed this, it stops. Set higher for longer outputs.
messages=[{"role": "user"...
The conversation history as a list of message objects. Each message has a role (user or assistant) and content. You can pass multi-turn conversations here — we'll cover that in Day 3.
message.content[0].text
Extracts the actual text from the response. The API returns a structured object — content is a list (Claude can theoretically return multiple content blocks), and .text gets the string.

Why content[0]? The API is designed to be extensible. In the future, a single response could contain text, tool calls, images, and other content types — all in the same list. Right now you almost always want content[0].text, but the list structure is intentional.
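If you want to be defensive about that list structure, you can filter for text blocks instead of assuming index 0. A sketch using a stand-in class rather than the real SDK response types:

```python
def extract_text(content_blocks) -> str:
    # Concatenate every text block, silently skipping any non-text blocks
    # (e.g. future tool calls) instead of assuming content[0] is text.
    return "".join(
        block.text for block in content_blocks if getattr(block, "type", "") == "text"
    )

# Stand-in for the SDK's response blocks, just to show the shape:
class Block:
    def __init__(self, type, text=""):
        self.type = type
        self.text = text

reply = [Block("text", "An API is a contract between programs.")]
print(extract_text(reply))
```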

Section 3 · 5 min

Understanding Tokens and Pricing

You've seen max_tokens twice now. You need to understand what tokens actually are, because they determine both what Claude can process and what you pay.

A token is roughly a word fragment. More precisely: common short words like "the" or "is" are one token each. Longer words like "Anthropic" might be 2-3 tokens. "Hello world" is about 2 tokens. "Hello, world!" is 4 tokens (punctuation counts). The rule of thumb: 1 token ≈ 4 characters, or 1 token ≈ 0.75 words.
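That rule of thumb is easy to turn into a quick estimator. Approximate by design — the real tokenizer splits on learned subword boundaries, not character counts:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic from above: ~4 characters per token.
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello, world!"))
```

Useful for ballpark budgeting before you send a large document; don't use it for billing math.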

Why does this matter? Because you pay per token — input tokens (what you send Claude) and output tokens (what Claude sends back). The pricing for Claude Sonnet as of 2025:

text Claude Sonnet 4 Pricing
# Input tokens (what you send)
Input     $3.00 per million tokens

# Output tokens (what Claude sends back)
Output    $15.00 per million tokens

# Your first call cost approximately:
~50 input tokens + ~80 output tokens ≈ $0.0014

That first call cost you about a tenth of a penny. A million tokens is approximately 750,000 words — roughly 10 full novels. New Anthropic accounts get free credits that cover thousands of API calls before you ever pay anything. For the first week of learning, don't think about cost at all.

When cost matters: If you process 10,000 customer emails per day at roughly 500 tokens each, that's 5 million input tokens a day — around 150 million per month, or about $450/month at $3/million before you count output tokens, which cost five times as much per token. Scale up the volume or the prompt length and the bill grows just as fast, which is why production AI applications optimize their prompts obsessively. For personal tools and prototypes, cost is irrelevant.
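A back-of-the-envelope calculator makes these estimates repeatable. The default rates are the Sonnet prices quoted above — update them if pricing changes:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float = 3.00, out_rate: float = 15.00) -> float:
    """Dollar cost of one call, with rates in dollars per million tokens."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# The first call from Section 2 (~50 in, ~80 out):
print(f"${estimate_cost(50, 80):.5f}")
```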

Section 4 · 15 min

Make It Useful — Build a Real Tool

The first script was a proof of concept. Now let's build something you'd actually use. Everyone sends emails. Everyone has sent an email they wish they'd phrased better. Let's fix that.

Create a new file called email_rewriter.py:

python email_rewriter.py
import anthropic

client = anthropic.Anthropic()

email = input("Paste an email you want to improve:\n> ")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    system="You are a professional writing editor. Rewrite the email to be clearer, more concise, and more professional. Keep the same meaning but improve the tone and structure. Output ONLY the rewritten email, nothing else.",
    messages=[
        {"role": "user", "content": email}
    ]
)

print("\n--- Rewritten Email ---\n")
print(message.content[0].text)

Run it, paste a real email from your inbox, and watch it get transformed. This isn't a demo anymore — this is a tool you can open tomorrow before sending a difficult email to your manager.

Two new concepts appeared here that are important:

The System Prompt

The system parameter is the most powerful tool in prompt engineering. It's the instruction you give Claude before the conversation starts — like handing someone a job description before asking them to work. A few things to notice about how I wrote it:

"You are a professional writing editor." — Personas focus the model's behavior. Claude performs differently when it believes it's an editor vs. a general assistant.

"Output ONLY the rewritten email, nothing else." — Explicit output constraints prevent Claude from adding preamble like "Sure! Here's the rewritten email:". In production tools, you almost always want clean output, not conversational wrapper text.

A weak system prompt produces mediocre output. A precise system prompt produces output you can use directly. We'll spend most of Day 2 on this.

User Input from Terminal

The input() function pauses execution and waits for the user to type something (or paste text) and hit Enter. It returns whatever was typed as a string. That string goes directly into the content field of the message. Simple, but powerful — this is how you connect any external data to Claude.
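input() only covers interactive use. Here's a sketch that also accepts piped text, so cat draft.txt | python email_rewriter.py works too — the stream parameter exists purely to make the function easy to test:

```python
import sys

def read_email(stream=None) -> str:
    # Interactive terminal: prompt the user. Piped input: read the whole stream.
    stream = stream or sys.stdin
    if stream.isatty():
        return input("Paste an email you want to improve:\n> ")
    return stream.read().strip()
```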

Section 5 · 10 min

Structured Output — Make AI Give You Data

Free-form text is useful for reading. But most real applications need data — something they can store in a database, pass to another API, or display in a dashboard. The trick is telling Claude to output JSON.

Create analyze_text.py:

python analyze_text.py
import anthropic
import json

client = anthropic.Anthropic()

text = input("Paste any text to analyze:\n> ")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"""Analyze this text and return a JSON object with these exact fields:
- "sentiment": positive, negative, or neutral
- "summary": one sentence summary
- "key_points": array of 3 main points
- "reading_level": estimated grade level
- "word_count": number of words

Text: {text}

Return ONLY valid JSON, no other text."""
        }
    ]
)

result = json.loads(message.content[0].text)
print(json.dumps(result, indent=2))

Paste in a news article, a report excerpt, or anything longer than a paragraph. You get back something like this:

json output
{
  "sentiment": "neutral",
  "summary": "The article examines quarterly earnings trends across the semiconductor sector.",
  "key_points": [
    "NVIDIA revenue increased 94% year-over-year driven by data center demand",
    "AMD gained market share in enterprise CPU segment",
    "Supply chain constraints remain a risk through Q3 2026"
  ],
  "reading_level": "Grade 14 (college)",
  "word_count": 847
}

Now you have real data. You can read result["sentiment"] to get the sentiment string. You can iterate over result["key_points"] to build a bullet list. You can write result to a database, send it to a frontend, or pipe it into another process.

This is the pattern that makes AI useful in production. Instead of getting free-form text that a human reads, you get structured data that software processes. Every serious AI application — document processors, content moderation systems, data extraction pipelines — uses this pattern. You just built the foundation of all of them.

What if json.loads() raises an error? Sometimes Claude adds a markdown code fence around the JSON (```json). In production you'd strip it first: text.strip().removeprefix("```json").removesuffix("```").strip(). Anthropic is also building native JSON mode into the API — by the time you're building production apps, this will be even cleaner.
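That stripping logic is worth wrapping in one helper so every script can share it. A sketch that handles both fenced and clean replies:

```python
import json

def parse_json_reply(raw: str) -> dict:
    """Parse Claude's reply as JSON, tolerating a markdown code fence."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (``` or ```json), then the closing fence.
        cleaned = cleaned.split("\n", 1)[1] if "\n" in cleaned else ""
        cleaned = cleaned.removesuffix("```").strip()
    return json.loads(cleaned)
```

If json.loads() still fails, print the raw reply — nine times out of ten the prompt needs a firmer "Return ONLY valid JSON" instruction.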

What You Just Learned

  • How to install and configure the Anthropic SDK in Python
  • How to make API calls with model, max_tokens, and messages
  • What tokens are, and why they determine both capability and cost
  • How system prompts shape Claude's behavior — the core tool of prompt engineering
  • How to get structured JSON output that software can process directly
  • Two tools you can actually use at work
Optional Challenge · Go Further

Finished early and want to push further? Try these before Day 2:

  • Modify the email rewriter to accept a tone parameter (formal, casual, friendly, urgent) and rewrite the same email in each tone. Compare the outputs.
  • Build a script that reads a .txt file using open(), sends the content to Claude, and writes a bullet-point summary back to a new file. You now have a document summarizer.
Day 1 Complete

You're writing real AI code.

Tomorrow you'll make Claude read and understand actual documents — PDFs, emails, reports — and extract exactly what you need from them.

Continue to Day 2: Make AI Read Your Documents
This is Day 1 of 5. If you're enjoying building real things, you'll love the 3-day in-person bootcamp.