10 Questions · Concrete Recommendation · No Hedging

RAG vs Fine-Tuning
Decision Tool

LLMs will hedge on this. A scoring framework won't. Answer 10 questions about your project and get a ranked, defensible recommendation you can take into any meeting.


Estimated Costs by Approach

Ranges reflect self-hosted vs. managed API usage. Setup = one-time; Monthly = ongoing inference + storage.

Approach Reference Cards

Option 1
Prompt Engineering
Zero training cost. Works today. Wins when base model is already capable. Setup: hours to days. Monthly: API tokens only. Failure mode: model lacks domain knowledge entirely. Accuracy lift: 10–40% over naive prompting with good system prompts + few-shot examples.
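The "good system prompts + few-shot examples" lift above comes down to message structure. A minimal sketch, assuming an OpenAI-style chat message list; the helper name and example strings are ours, not part of any particular API:

```python
def build_prompt(system_rules, examples, question):
    """Assemble a chat-style message list: system prompt first,
    then few-shot (user, ideal-answer) pairs, then the real query."""
    messages = [{"role": "system", "content": system_rules}]
    for user_msg, ideal_answer in examples:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": ideal_answer})
    messages.append({"role": "user", "content": question})
    return messages

# Hypothetical refund-policy assistant:
prompt = build_prompt(
    "Answer in one sentence, citing the policy section.",
    [("What is the refund window?", "30 days, per policy section 4.2.")],
    "Can I return an opened item?",
)
```

The few-shot pairs teach format and tone by example, which is exactly the lever that often makes fine-tuning unnecessary.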
Option 2
RAG (Retrieval-Augmented Generation)
Adds a vector DB + retrieval layer to ground answers in your documents. Wins when knowledge is the bottleneck — model doesn't know your data. Setup: $500–$5K depending on doc volume. Monthly: $50–$500 (vector DB + embeddings). Failure modes: retrieval quality, chunking strategy, context window overflow. Accuracy lift: 40–80% on factual Q&A tasks versus base prompting.
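The retrieval layer the card describes can be sketched end to end: embed the documents, score each against the query, and pass the top hits to the model as context. This toy version uses bag-of-words counts in place of a real embedding model, and all names and example documents are ours:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG uses a neural embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Refunds are processed within 30 days of purchase.",
    "Our office is closed on public holidays.",
    "Shipping takes 5 to 7 business days.",
]
# Retrieved chunks get prepended to the prompt so the model answers from them.
context = retrieve("how long do refunds take", docs, k=1)
```

The listed failure modes live in exactly these steps: `embed` (retrieval quality), how `docs` were split (chunking strategy), and how much of `context` fits in the prompt (context window overflow).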
Option 3
LoRA / QLoRA Fine-Tuning
Parameter-efficient fine-tuning. Trains ~1% of weights. Cheap relative to full FT. Wins when style, tone, or task format is the bottleneck. Setup: $100–$2K training on cloud GPU. Monthly: self-host $50–$300/mo or managed API. Needs 500–50K clean labeled examples minimum. Accuracy lift: 20–60% on style/format tasks; modest on factual tasks.
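The "~1% of weights" figure follows from LoRA's low-rank factorization: the base matrix W stays frozen, and the update is learned as the product of two thin matrices B @ A of rank r. A quick sanity check of the arithmetic (the 4096-dimension, rank-8 numbers are illustrative, not a recommendation):

```python
def lora_param_fraction(d_in, d_out, rank):
    """Fraction of a weight matrix's parameters a LoRA adapter trains.

    LoRA freezes W (d_out x d_in) and learns the update B @ A,
    where A is (rank x d_in) and B is (d_out x rank).
    """
    full = d_in * d_out
    adapter = rank * d_in + d_out * rank
    return adapter / full

# Hypothetical 4096x4096 attention projection, adapter rank 8:
frac = lora_param_fraction(4096, 4096, 8)
print(f"{frac:.2%}")  # -> 0.39%: well under 1% of this matrix's weights
```

This is also why training is cheap: optimizer state and gradients are only kept for the adapter, not for the frozen base weights.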
Option 4
Full Fine-Tuning
Updates all weights. Requires serious compute and a large clean dataset. Wins when the task requires fundamentally new capabilities not in the base model. Setup: $5K–$100K+ (GPU hours). Monthly: high-end hosting or managed fine-tune API. Needs 50K–1M+ labeled examples. Risk: catastrophic forgetting. Rarely the right answer for most production teams in 2026. LoRA first.
Option 5
Hybrid: RAG + Fine-Tuning
RAG handles knowledge retrieval. Fine-tuning handles behavior and style. Wins when you need both: proprietary knowledge AND specific output format/tone. Setup: $2K–$20K combined. Monthly: $200–$1K. Failure mode: complexity — two systems to debug and maintain. Best for: enterprise Q&A assistants, domain-specific coding agents, regulated industries.

Bo's Practitioner Rule of Thumb

Default to prompt engineering. It's free, fast, and underrated — most teams skip it and jump to RAG or fine-tuning because it sounds more impressive.

Upgrade to RAG when knowledge is the bottleneck — when the model doesn't know your documents, policies, or proprietary data, and hallucination is a real risk.

Fine-tune only when style or behavior is the bottleneck — when the model knows how to do the task but its tone, format, or reasoning pattern consistently misses the mark, and you've verified that prompting + RAG aren't enough.

The question to ask every time: "Is this a knowledge problem or a behavior problem?" Knowledge → RAG. Behavior → fine-tune. Neither → fix your prompt.
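The rule of thumb above reduces to a two-bit decision table, which is easy to encode. A sketch; the function name and labels are ours:

```python
def recommend(knowledge_gap, behavior_gap):
    """Bo's rule of thumb as code: knowledge problem -> RAG,
    behavior problem -> fine-tune, both -> hybrid, neither -> fix the prompt."""
    if knowledge_gap and behavior_gap:
        return "Hybrid: RAG + fine-tuning"
    if knowledge_gap:
        return "RAG"
    if behavior_gap:
        return "LoRA fine-tuning"
    return "Prompt engineering"

# Model hallucinates your policies but formats answers fine:
choice = recommend(knowledge_gap=True, behavior_gap=False)  # -> "RAG"
```

Note the default branch: with neither gap verified, the recommendation is the free option, prompt engineering.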