LangChain · Day 1 of 5 · ~60 minutes

LangChain Basics — Chains, Prompts, Output Parsers

Install LangChain, understand the core abstractions, and build your first chains with LCEL — the modern LangChain syntax used in production.

What You'll Build Today

A content generation pipeline — a chain that takes a topic, generates a blog post title, then expands it into an outline. Two chained LLM calls, properly composed with LCEL.

1
Setup

Installing LangChain

LangChain has a core package and separate model integrations. Install what you need:

bash
# Core LangChain + OpenAI integration
pip install langchain langchain-openai

# Or for Anthropic/Claude
pip install langchain langchain-anthropic

# Set your API key
export OPENAI_API_KEY="sk-..."
# or
export ANTHROPIC_API_KEY="sk-ant-..."

Which model to use: This course uses OpenAI's gpt-4o-mini and Anthropic's Claude Haiku interchangeably; both are fast and cheap. The LangChain code is identical except for the import and the model name.

2
Core Concept

LCEL — LangChain Expression Language

LCEL is the modern way to compose LangChain components. It uses the pipe operator | to chain components together. Each component's output feeds into the next.

python · first_chain.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. The model
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

# 2. A prompt template
prompt = ChatPromptTemplate.from_template(
    "Write a compelling blog post title about: {topic}"
)

# 3. An output parser (extracts text from the response)
parser = StrOutputParser()

# 4. Chain them together with LCEL pipe operator
chain = prompt | model | parser

# 5. Invoke the chain
result = chain.invoke({"topic": "building AI apps with LangChain"})
print(result)

The | operator connects components. The chain reads left to right: prompt formats the input → model generates a response → parser extracts the string.
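Under the hood, | is ordinary Python operator overloading: each component implements __or__ so that a | b returns a new composed component. Here's a minimal pure-Python sketch of the idea — not LangChain's actual implementation, and the class and names are illustrative:

```python
class Runnable:
    """Minimal stand-in for a chainable LCEL-style component (illustrative)."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b -> a new Runnable that runs a, then feeds the result to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# A fake "prompt" and "model" to show the flow without any API calls
format_prompt = Runnable(lambda d: f"Write a title about: {d['topic']}")
fake_model = Runnable(lambda text: text.upper())  # stands in for an LLM call

chain = format_prompt | fake_model
print(chain.invoke({"topic": "LCEL"}))
# → WRITE A TITLE ABOUT: LCEL
```

The real Runnable interface does more (batching, streaming, async), but the composition mechanics are the same: invoke the left component, pass its output to the right.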

3
Prompt Templates

Prompt Templates — Dynamic, Reusable Prompts

Hard-coding prompts as strings doesn't scale. Prompt templates let you define the structure once and fill in variables at runtime.

python
from langchain_core.prompts import ChatPromptTemplate

# System + human message template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {role}. Be {tone}."),
    ("human", "{question}")
])

# The template needs all variables when invoked
chain = prompt | model | StrOutputParser()

result = chain.invoke({
    "role": "senior software engineer",
    "tone": "direct and concise",
    "question": "What are the biggest mistakes in API design?"
})
print(result)
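Conceptually, invoking a chat prompt template is per-message string formatting: each (role, template) pair gets its placeholders filled from the input dict. A pure-Python sketch of what happens (the real ChatPromptTemplate returns message objects, not tuples — this is illustrative only):

```python
def format_messages(messages, variables):
    """Fill each (role, template) pair with the given variables."""
    return [(role, template.format(**variables)) for role, template in messages]

messages = [
    ("system", "You are a {role}. Be {tone}."),
    ("human", "{question}"),
]

formatted = format_messages(messages, {
    "role": "senior software engineer",
    "tone": "direct and concise",
    "question": "What are the biggest mistakes in API design?",
})
print(formatted[0])
# → ('system', 'You are a senior software engineer. Be direct and concise.')
```

This is also why invoking with a missing variable raises an error: the template can't format until every placeholder has a value.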
4
Chaining

Chaining Multiple LLM Calls

The power of LCEL is composing complex multi-step workflows. Here's the content pipeline — title generation feeds into outline generation:

python · content_pipeline.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Chain 1: topic → title
title_prompt = ChatPromptTemplate.from_template(
    "Write ONE compelling blog post title about: {topic}"
)
title_chain = title_prompt | model | parser

# Chain 2: title → outline
outline_prompt = ChatPromptTemplate.from_template(
    "Create a 5-section blog post outline for this title: {title}"
)
outline_chain = outline_prompt | model | parser

# Combine: topic → title → outline
# (a dict piped into a runnable becomes a RunnableParallel:
#  here it produces {"title": <generated title>} for the next step)
full_pipeline = (
    {"title": title_chain}
    | outline_chain
)

result = full_pipeline.invoke({"topic": "LangChain for production AI"})
print(result)

What's RunnablePassthrough? It's an LCEL component that passes its input through unchanged — useful in a parallel dict step when you need to forward the original input (say, the topic) alongside transformed values like the generated title.
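The same two ideas in plain Python: a dict step runs each value on the same input, and a passthrough simply returns its input. A minimal sketch — illustrative names, not LangChain internals:

```python
def passthrough(value):
    # Like RunnablePassthrough: return the input unchanged
    return value

def run_parallel_step(step, value):
    """A dict step in LCEL: each key's runnable gets the same input."""
    return {key: func(value) for key, func in step.items()}

# A fake title generator standing in for title_chain
make_title = lambda d: f"Title for {d['topic']}"

step = {"title": make_title, "original": passthrough}
result = run_parallel_step(step, {"topic": "LCEL"})
print(result)
# → {'title': 'Title for LCEL', 'original': {'topic': 'LCEL'}}
```

The next component in the chain then receives both the transformed value and the untouched original — which is exactly when you reach for RunnablePassthrough in a real pipeline.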

Day 1 Complete — What You Learned

  • Installed LangChain and connected to OpenAI or Anthropic
  • Built chains with LCEL using the | pipe operator
  • Created prompt templates with dynamic variables
  • Chained multiple LLM calls together
  • Used StrOutputParser to extract clean text responses

Tomorrow: memory and conversation

Day 2 shows how to build chatbots that remember context across multiple turns.
