DSPy

Programming LLMs instead of prompting

Agent Framework · Free (OSS)

What It Is

DSPy is a framework from Stanford NLP that takes a radically different approach to LLM programming. Instead of hand-crafting prompts, you define modules with typed signatures (declared inputs and outputs), and DSPy compiles them into optimized prompts using optimizers such as bootstrapped few-shot selection, random search, and MIPRO. It treats prompting as a programming problem, with optimization playing a role analogous to gradient descent.

How It Works

You write a Python program using DSPy modules (Predict, ChainOfThought, ReAct, etc.) with signatures that declare what each module takes and produces. You provide a small training set (as few as 5-20 examples) and a metric function. DSPy's optimizers then run search algorithms to find effective prompts, few-shot examples, and instructions for each module. The result is a compiled program that typically outperforms manually tuned prompts on the chosen metric.
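The workflow above can be sketched roughly as follows. This is a minimal illustration, assuming DSPy 2.5+ and an OpenAI API key in the environment; the `QA` signature, training examples, and `exact_match` metric are invented for the example, not taken from the DSPy docs.

```python
import dspy

# Configure a backend; dspy.LM takes LiteLLM-style model strings.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# A signature declares what the module takes and produces.
class QA(dspy.Signature):
    """Answer the question in one short sentence."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

program = dspy.ChainOfThought(QA)

# A small training set: dspy.Example objects with declared input fields.
trainset = [
    dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
    dspy.Example(question="What is the capital of France?", answer="Paris").with_inputs("question"),
]

# A metric is any function scoring an (example, prediction) pair.
def exact_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

# The optimizer bootstraps few-shot demos that maximize the metric.
optimizer = dspy.BootstrapFewShot(metric=exact_match, max_bootstrapped_demos=2)
compiled = optimizer.compile(program, trainset=trainset)

print(compiled(question="What is 3 + 3?").answer)
```

Note that the compiled program is still an ordinary callable Python object; the optimization changed its internal prompts and demonstrations, not your code.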

Pricing Breakdown

Completely free and open source. Works with any LLM backend (OpenAI, Claude, local models via Ollama/vLLM). Optimization runs cost only the LLM calls needed for the bootstrapping process.
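Swapping backends is a one-line configuration change; the same compiled program runs against any of them. A sketch, assuming DSPy 2.5+ (the specific model identifiers below are illustrative):

```python
import dspy

# Hosted APIs: keys are read from the usual environment variables.
openai_lm = dspy.LM("openai/gpt-4o-mini")
claude_lm = dspy.LM("anthropic/claude-3-5-sonnet-20241022")

# A local model served by Ollama.
local_lm = dspy.LM("ollama_chat/llama3.1", api_base="http://localhost:11434")

dspy.configure(lm=local_lm)  # everything downstream now uses this backend
```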

Who Uses It

Research labs, ML engineers, and teams tired of manual prompt engineering. Particularly popular in academia and in companies with clear task metrics.

Strengths & Weaknesses

✓ Strengths

  • Automatic prompt optimization
  • Composable modules
  • Strong research backing
  • Reproducible improvements

× Weaknesses

  • Steeper learning curve
  • Smaller community
  • Requires metric function for optimization
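The metric requirement is lighter than it sounds: a metric is just a Python function that scores an (example, prediction) pair. A hypothetical sketch of a token-overlap F1 metric, using plain dicts in place of DSPy's Example/Prediction objects (which use attribute access) so it has no dependencies:

```python
def f1_over_tokens(example, prediction, trace=None):
    """Token-level F1 between the gold answer and the predicted answer."""
    gold = example["answer"].lower().split()
    pred = prediction["answer"].lower().split()
    if not gold or not pred:
        return 0.0
    overlap = len(set(gold) & set(pred))  # shared unique tokens
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(f1_over_tokens({"answer": "Paris France"}, {"answer": "paris"}))
```

Any function with this shape works: optimizers simply try candidate prompts and keep whichever scores highest on the training set.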

Best Use Cases

  • Complex reasoning pipelines
  • Research
  • Production optimization
  • Multi-step programs

Alternatives

LangChain
Most-used LLM application framework
LlamaIndex
Data framework for LLM applications