Day 2 of 5
⏱ ~50 minutes
React + AI — Day 2

Calling the Claude API: Real AI Responses

The shell from Day 1 sends fake responses. Today you'll connect it to the Claude API, handle async state properly, and deal with errors and loading states in a way that feels smooth to users.

API Architecture: Never Call AI APIs Directly From the Browser

If you call the API from the browser, your API key is visible to anyone who opens DevTools. Always route AI API calls through a backend proxy — even a simple one.

bash
# Install the server dependencies (including the Anthropic SDK)
npm install express cors @anthropic-ai/sdk
npm install -D nodemon
javascript — server.js
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors({ origin: 'http://localhost:5173' }));
app.use(express.json());

const Anthropic = require('@anthropic-ai/sdk');
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

app.post('/api/chat', async (req, res) => {
  const { messages } = req.body;

  try {
    const response = await client.messages.create({
      model: 'claude-3-haiku-20240307',
      max_tokens: 1024,
      // pass through role/content only — the API rejects extra fields like id
      messages: messages.map(({ role, content }) => ({ role, content })),
    });
    res.json({ content: response.content[0].text });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3001, () => console.log('API server on :3001'));
bash
# In a .env file (git-ignored)
ANTHROPIC_API_KEY=sk-ant-...

# Run the server (Node 20.6+ loads the .env file natively)
node --env-file=.env server.js

Calling the API From React

jsx
async function sendMessage(userText) {
  const userMsg = { id: Date.now(), role: 'user', content: userText };
  setMessages(prev => [...prev, userMsg]);
  setIsLoading(true);
  setError(null);

  try {
    const response = await fetch('http://localhost:3001/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        messages: [...messages, userMsg]  // send full history
      })
    });

    if (!response.ok) throw new Error(`HTTP ${response.status}`);

    const data = await response.json();
    setMessages(prev => [...prev, {
      id: Date.now(),
      role: 'assistant',
      content: data.content
    }]);
  } catch (err) {
    setError('Failed to get response. Try again.');
  } finally {
    setIsLoading(false);  // always clear loading, even on error
  }
}
⚠️
Pass the full message history. Claude is stateless — it doesn't remember previous messages unless you send them. Always include the full messages array including the new user message.
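
To make the full-history point concrete, here's a minimal sketch of the request body for a third turn — the history contents are invented for illustration, but the shape matches what the proxy expects:

```javascript
// Two earlier turns already in state, plus the new user message.
const history = [
  { role: 'user', content: 'What is JSX?' },
  { role: 'assistant', content: 'JSX is a syntax extension for JavaScript...' },
];
const userMsg = { role: 'user', content: 'Show me an example' };

// Every prior message rides along on every request.
const body = JSON.stringify({ messages: [...history, userMsg] });

console.log(JSON.parse(body).messages.length); // 3 — both earlier turns plus the new one
```

Forgetting this is the most common first bug: the chat "works" but Claude answers every message as if it were the first.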

Loading State UI

jsx
function TypingIndicator() {
  return (
    <div className="typing-indicator">
      <span></span><span></span><span></span>
    </div>
  );
}

// In the messages render:
{/* Message is whatever per-message component you built on Day 1 */}
{messages.map(msg => <Message key={msg.id} msg={msg} />)}
{isLoading && <TypingIndicator />}
{error && <div className="error">{error}</div>}

useCallback and Performance

jsx
import { useState, useCallback } from 'react';

// Without useCallback, sendMessage is recreated on every render, so a
// child like ChatInput re-renders even when wrapped in React.memo
const sendMessage = useCallback(async (userText) => {
  // ... the function body
}, [messages]);  // only recreate when messages changes
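
A plain-JavaScript sketch of why this matters: React.memo skips a re-render only when props compare equal by reference, and an inline function is a new reference on every render. (`render` here is a stand-in for a component render, not React's actual renderer.)

```javascript
// Simulate two renders passing an onSend prop to a child.
function render(handler) {
  return { onSend: handler };
}

// Without useCallback: a fresh function each render — props never compare equal.
const a = render(() => {});
const b = render(() => {});
console.log(a.onSend === b.onSend); // false → a memoized child re-renders anyway

// With useCallback: the same function reference is reused across renders.
const stable = () => {};
console.log(render(stable).onSend === render(stable).onSend); // true → re-render skipped
```

This is also why useCallback alone isn't enough — the child must be memoized (React.memo) for the stable reference to pay off.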
📝 Exercise
Connect to the Real Claude API
  1. Create server.js with the Express proxy and install @anthropic-ai/sdk.
  2. Add your API key to a .env file and add .env to .gitignore.
  3. Update App.jsx to call your proxy instead of the fake response.
  4. Add isLoading state and disable the input while waiting.
  5. Add a typing indicator component that shows while loading.
  6. Add error handling with a dismissible error message.

Lesson Summary

  • Never call AI APIs from the browser — your key would be exposed. Always use a server-side proxy.
  • Pass the full message history on every request — AI APIs are stateless.
  • Always use finally to clear loading state — it runs whether the request succeeds or fails.
  • Disable inputs and show a loading indicator during API calls to prevent double-sends.
Challenge

Add a system prompt input at the top of the UI that lets users customize Claude's behavior for the conversation. When set, include it as a system parameter in the API call.
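
One hint for the server side: in Anthropic's Messages API, the system prompt goes in a top-level `system` parameter, not as a message in the array. A minimal sketch — `buildRequest` is a hypothetical helper, not part of the SDK:

```javascript
// Builds the options object you'd pass to client.messages.create().
// Only includes `system` when the user actually set one.
function buildRequest(systemPrompt, messages) {
  const req = { model: 'claude-3-haiku-20240307', max_tokens: 1024, messages };
  if (systemPrompt) req.system = systemPrompt;
  return req;
}

const req = buildRequest('You are a pirate.', [{ role: 'user', content: 'Hi' }]);
console.log(req.system);          // You are a pirate.
console.log(req.messages.length); // 1
```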
