LlamaIndex is a data framework purpose-built for RAG and LLM applications. Where LangChain is broad (chains, agents, memory, tools), LlamaIndex focuses specifically on the data layer: ingestion, indexing, retrieval, query engines, and response synthesis. For teams whose primary use case is 'connect an LLM to private data', LlamaIndex is usually the cleaner abstraction.
LlamaIndex provides three main abstractions: Data Connectors (300+ integrations for PDFs, Notion, Slack, databases, and APIs), Indexes (vector, keyword, tree, knowledge graph), and Query Engines (which combine retrieval with response synthesis). For advanced use cases, it also ships a full agent framework built on async, event-driven Workflows. Query transformations (step-back prompting, HyDE, sub-question decomposition) are first-class citizens.
Open source: free. LlamaCloud (hosted): LlamaParse document parsing at $0.003/page, managed indexing on pay-as-you-go tiers. LlamaIndex Premium available for enterprise.
Salesforce, KPMG, Zerve, Barclays, and many enterprises building RAG over private data. Particularly popular in the financial services and legal industries.