AWS for AI · Day 3 of 5 ~80 minutes

S3 + Lambda — Serverless AI Pipelines

Upload a file to S3, Lambda triggers automatically, Bedrock analyzes it, result is saved back to S3. Zero servers to manage.

What You'll Build Today

A serverless document analyzer: upload a .txt file to an S3 bucket, Lambda fires automatically, reads the file, sends it to Claude via Bedrock for a summary, and saves the result as filename_summary.txt back to S3.

1
Create S3 Buckets

Set up input and output buckets

In the AWS console, go to S3 → Create bucket. Create two buckets:

ai-pipeline-input-[your-name] — where you'll drop documents
ai-pipeline-output-[your-name] — where summaries get saved

Keep all defaults. Don't enable public access.

Bucket names must be globally unique. Add your initials or a random number at the end. If it says "bucket already exists," just pick a different name.
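If you'd rather script the naming than guess, a small sketch like this generates a unique bucket name and checks it against the basic S3 naming rules (3–63 characters; lowercase letters, digits, and hyphens; must start and end with a letter or digit). The helper name `unique_bucket_name` is just for illustration:

```python
import re
import uuid

def unique_bucket_name(prefix: str) -> str:
    """Append a short random suffix so the name is globally unique."""
    name = f"{prefix}-{uuid.uuid4().hex[:8]}"
    # S3 bucket names: 3-63 chars, lowercase letters, digits, hyphens,
    # starting and ending with a letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name):
        raise ValueError(f"invalid bucket name: {name}")
    return name

print(unique_bucket_name("ai-pipeline-input"))
```

Create the bucket with whatever name this prints, and use the same approach for the output bucket.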

2
Lambda Function

Write the Lambda function

In the console, go to Lambda → Create function. Choose "Author from scratch," Python 3.12, and attach the bedrock-lambda-role you created on Day 1.

Paste this code into the function editor:

lambda_function.py
import boto3
import json
import os
from urllib.parse import unquote_plus

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
OUTPUT_BUCKET = os.environ["OUTPUT_BUCKET"]

def lambda_handler(event, context):
    # Get the uploaded file info from the S3 event.
    # Object keys arrive URL-encoded (spaces become "+"), so decode them.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = unquote_plus(event["Records"][0]["s3"]["object"]["key"])

    # Read the file content from S3
    obj = s3.get_object(Bucket=bucket, Key=key)
    text = obj["Body"].read().decode("utf-8")[:4000]  # cap at 4,000 chars

    # Send to Claude via Bedrock
    response = bedrock.invoke_model(
        modelId="anthropic.claude-opus-4-5-20251101-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": f"Summarize this document in 3 bullet points:\n\n{text}"
            }]
        }),
        contentType="application/json",
        accept="application/json"
    )
    summary = json.loads(response["body"].read())["content"][0]["text"]

    # Save summary to output bucket
    output_key = key.replace(".txt", "_summary.txt")
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key=output_key,
        Body=summary.encode("utf-8")
    )
    return {"statusCode": 200, "body": f"Saved summary to {output_key}"}
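The trickiest line in the handler is the one that digs the summary out of the Bedrock response. You can convince yourself it's right without calling Bedrock: build a hand-made response body in the Messages API shape (`content` is a list of blocks; the handler takes the first block's `text`) and parse it the same way. The sample text here is made up:

```python
import json

# A hand-built sample matching the Messages API response shape the
# handler parses: "content" is a list of blocks, each with a "text" field.
sample_body = json.dumps({
    "content": [{"type": "text", "text": "- point one\n- point two\n- point three"}],
    "stop_reason": "end_turn",
})

# Same extraction as the handler performs on response["body"]
summary = json.loads(sample_body)["content"][0]["text"]
print(summary)
```

If the model's response shape ever changes, this is the line that breaks, so it's worth understanding in isolation.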

In the Lambda configuration, add an environment variable: OUTPUT_BUCKET = your output bucket name.

Increase the timeout to 30 seconds: Bedrock calls can take several seconds, and the default 3-second timeout will cause the function to fail before the model responds.
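Before wiring the trigger, you can smoke-test the event-parsing step locally with a hand-built event shaped like an S3 notification record (the bucket name here is just an example). Note the URL-encoded key, which is why the handler decodes with `unquote_plus`:

```python
from urllib.parse import unquote_plus

# A minimal fake S3 PUT event, shaped like the records Lambda receives.
fake_event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "ai-pipeline-input-demo"},
            "object": {"key": "my+report.txt"},  # keys arrive URL-encoded
        }
    }]
}

# Same extraction the handler performs on its first two lines
bucket = fake_event["Records"][0]["s3"]["bucket"]["name"]
key = unquote_plus(fake_event["Records"][0]["s3"]["object"]["key"])
print(bucket, key)  # → ai-pipeline-input-demo my report.txt
```

You can also paste an event like this into the Lambda console's Test tab to run the full function without uploading anything.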

3
S3 Trigger

Wire the S3 trigger

In Lambda, click Add trigger → S3. Select your input bucket. Event type: PUT. Suffix: .txt (so only text files trigger it).

Now upload any .txt file to your input bucket and watch it work. Check your output bucket for the _summary.txt file. Check CloudWatch Logs for the Lambda execution log.
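So you know exactly which object to look for, this tiny sketch mirrors the handler's output-key derivation (the helper name `summary_key` is just for illustration):

```python
def summary_key(key: str) -> str:
    # Mirrors the handler: "report.txt" -> "report_summary.txt".
    # Safe here because the trigger's .txt suffix filter guarantees the extension.
    return key.replace(".txt", "_summary.txt")

print(summary_key("quarterly-report.txt"))  # → quarterly-report_summary.txt
```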

This is the core serverless pattern for AI processing: event triggers function, function calls AI, result is stored. Swap S3 for SQS, SNS, API Gateway, or EventBridge and you have every major AWS AI pipeline pattern.

Day 3 Complete

  • Created S3 input and output buckets
  • Built a Lambda function that reads from S3 and calls Bedrock
  • Wired S3 PUT event as a Lambda trigger
  • Tested the full pipeline with a real document

Next: Knowledge Bases — Managed RAG

Day 4 builds a document Q&A system using Bedrock Knowledge Bases. Upload PDFs, ask questions, get answers grounded in your documents.

Go to Day 4