Day 1 of 5
⏱ ~60 minutes
Edge Computing in 5 Days — Day 1

Edge vs Cloud

Edge computing isn't a replacement for cloud — it's a layer. Today you'll understand when each is right and how they work together.

Why Edge Exists

Cloud is effectively unlimited compute with unbounded latency; edge is constrained compute with bounded latency. Four drivers:

  • Latency — a factory robot can't wait 50ms for a cloud round-trip to decide whether a part is defective.
  • Bandwidth — a factory with 1,000 cameras generating 500GB/hour can't send it all to the cloud.
  • Privacy — medical images or financial data cannot leave the facility.
  • Reliability — a remote wind turbine needs control even when the internet is down.

The Edge Spectrum

Not all edge is the same:

  • Micro-edge / TinyML: microcontrollers (Arduino Nano 33, STM32), 256KB–1MB, µW power; keyword detection, anomaly detection.
  • Device edge: Raspberry Pi, NVIDIA Jetson Nano, Google Coral, 1–10W; basic vision/NLP inference.
  • Edge server: Intel NUC, NVIDIA Jetson Orin, 15–100W; complex models, local database, multiple camera streams.
  • Edge data center: telco MEC (Multi-access Edge Computing), colocated with 5G base stations; near-real-time at scale.
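As a rough illustration, the tiers above can be encoded as a lookup that picks the smallest tier whose power envelope fits a workload. The `EDGE_TIERS` table and `pick_tier` helper are hypothetical; the wattage bounds come from the list above:

```python
# Hypothetical sketch: choose the smallest edge tier that fits a power budget.
# Tier data mirrors the spectrum described above (approximate upper bounds).
EDGE_TIERS = [
    # (name, max power in watts, example hardware)
    ("micro-edge/TinyML", 0.001, "Arduino Nano 33, STM32"),
    ("device edge",       10,    "Raspberry Pi, Jetson Nano, Google Coral"),
    ("edge server",       100,   "Intel NUC, Jetson Orin"),
]

def pick_tier(power_budget_w):
    """Return the first (smallest) tier whose power envelope covers the budget."""
    for name, max_watts, examples in EDGE_TIERS:
        if power_budget_w <= max_watts:
            return name, examples
    return "edge data center", "telco MEC, 5G base station colocation"

print(pick_tier(0.0005))  # µW-class workload → micro-edge/TinyML
print(pick_tier(5))       # a few watts → device edge
print(pick_tier(60))      # tens of watts → edge server
```

The point of the sketch: tier choice is mostly a hardware-constraint question (power, memory) before it is a software question.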

Edge-Cloud Collaboration Pattern

The practical pattern: edge handles real-time decisions (< 10ms), aggregates data, and sends summarized events to cloud. Cloud handles training, complex analytics, fleet management, and model updates. Example: edge camera detects motion → runs face detection locally → if unknown face, send image to cloud for identification → cloud returns ID → edge logs event. Real-time decision at edge, expensive computation in cloud.

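The motion-detection example above can be sketched as an edge loop with a cloud fallback. Everything here is a hypothetical stand-in — `detect_faces_locally` and `identify_in_cloud` stub out a real local model and a real cloud API:

```python
# Hypothetical sketch of the edge-cloud collaboration pattern described above.
# detect_faces_locally / identify_in_cloud are stand-ins for real models/APIs.

KNOWN_FACES = {"face-001": "alice", "face-002": "bob"}  # cached at the edge

def detect_faces_locally(frame):
    """Fast edge inference: return face IDs found in the frame (stubbed)."""
    return frame.get("faces", [])

def identify_in_cloud(face_id):
    """Expensive cloud lookup, invoked only for unknown faces (stubbed)."""
    return {"face-003": "visitor: carol"}.get(face_id, "unknown")

def handle_frame(frame, event_log):
    for face_id in detect_faces_locally(frame):   # real-time, at the edge
        if face_id in KNOWN_FACES:                # local decision, no round-trip
            event_log.append((face_id, KNOWN_FACES[face_id]))
        else:                                     # escalate to cloud
            event_log.append((face_id, identify_in_cloud(face_id)))

log = []
handle_frame({"faces": ["face-001", "face-003"]}, log)
print(log)  # known face resolved locally, unknown face resolved via cloud
```

Note the shape of the design: the hot path never blocks on the network; the cloud is only consulted for the rare, expensive case.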
```python
# Edge vs cloud decision framework
import random

def simulate_latency(location):
    """Simulated round-trip latency in milliseconds, with jitter."""
    latencies = {
        'local_cpu':    0.5,  # 0.5ms — local inference
        'edge_device':  2,    # 2ms — Raspberry Pi on LAN
        'edge_server':  5,    # 5ms — edge server in building
        'cloud_nearby': 50,   # 50ms — cloud same region
        'cloud_far':    200,  # 200ms — cross-region cloud
    }
    # Add ~10% Gaussian jitter around the base latency
    base = latencies[location]
    return base + random.gauss(0, base * 0.1)

def should_process_at_edge(task):
    """Recommend EDGE or CLOUD based on latency, bandwidth, and privacy."""
    latency_req = task.get('latency_ms')
    data_size_mb = task.get('data_size_mb', 0)
    privacy_sensitive = task.get('private', False)

    reasons = []

    if latency_req and latency_req < 20:
        reasons.append(f"latency={latency_req}ms < 20ms threshold")
    if data_size_mb > 10:
        cost_per_gb = 0.09  # typical AWS data-transfer-out rate
        # Assume data_size_mb is generated once per minute, all month long
        monthly_gb = data_size_mb * 60 * 24 * 30 / 1024
        reasons.append(f"bandwidth: ${monthly_gb*cost_per_gb:.0f}/month to cloud")
    if privacy_sensitive:
        reasons.append("privacy: cannot leave facility")

    decision = 'EDGE' if reasons else 'CLOUD'
    return decision, reasons

# Evaluate different workloads
tasks = [
    {"name": "Conveyor defect detection", "latency_ms": 5,    "data_size_mb": 100, "private": False},
    {"name": "Patient vital signs",       "latency_ms": 100,  "data_size_mb": 0.1, "private": True},
    {"name": "Monthly report generation", "latency_ms": None, "data_size_mb": 0.5, "private": False},
    {"name": "Security camera stream",    "latency_ms": 50,   "data_size_mb": 500, "private": True},
    {"name": "Monthly ML model training", "latency_ms": None, "data_size_mb": 10,  "private": False},
]

for task in tasks:
    decision, reasons = should_process_at_edge(task)
    print(f"{decision}: {task['name']}")
    for r in reasons:
        print(f"  - {r}")
    print()
```
💡 Tip
Bandwidth cost is often the strongest argument for edge processing. At AWS data transfer rates ($0.09/GB), a factory with 100 cameras at 720p generates ~50TB/month of video — $4,500/month in transfer costs alone. Running inference at the edge and sending only events (a few KB each) reduces this to near zero.
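The tip's figures can be checked with a quick back-of-the-envelope calculation. The ~1.5 Mbps stream rate per 720p camera is an assumption (real encoding rates vary widely with codec and scene):

```python
# Back-of-the-envelope check of the bandwidth figures above.
# Assumption: ~1.5 Mbps per 720p camera, recording continuously.
cameras = 100
mbps_per_camera = 1.5
seconds_per_month = 3600 * 24 * 30
cost_per_gb = 0.09  # typical AWS data-transfer-out rate

gb_per_camera = mbps_per_camera / 8 * seconds_per_month / 1000  # Mb → MB → GB
total_gb = gb_per_camera * cameras
print(f"{total_gb / 1000:.1f} TB/month, ${total_gb * cost_per_gb:,.0f}/month")
# → 48.6 TB/month, $4,374/month — matching the ~50TB / ~$4,500 figures above
```

The same arithmetic, with your own camera count and bitrate, is what exercise 3 below asks for.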
📝 Day 1 Exercise
Analyze an Edge Architecture
  1. Run the decision framework script. Add two more workloads of your own: what decision does the framework recommend?
  2. Pick a real IoT product (Nest thermostat, Tesla FSD, Peloton, industrial robot). Research what it processes locally vs in the cloud.
  3. Calculate the bandwidth cost for your use case: N cameras × bitrate × hours/day × days/month × $0.09/GB.
  4. Research AWS Greengrass, Azure IoT Edge, and GCP Cloud IoT Edge. What components do they provide at the edge? What stays in the cloud?
  5. Draw a data flow diagram for a smart manufacturing line: sensors → edge inference → event stream → cloud dashboard → model retraining → edge update.

Day 1 Summary

  • Edge = constrained compute, bounded latency; cloud = unlimited compute, variable latency
  • Four drivers for edge: latency (real-time decisions), bandwidth (cost), privacy (regulatory), reliability (offline)
  • The practical pattern: edge decides in real-time, cloud trains, analyzes, and manages the fleet
  • Bandwidth cost is often the decisive factor — calculate it before choosing cloud-only architecture
Challenge

Research 5G Multi-access Edge Computing (MEC). Telecom operators colocate servers in 5G base stations — effectively moving cloud compute to within 1km of the device. What latency does MEC achieve? What use cases does it enable that regular cloud cannot? Compare to a Raspberry Pi approach: cost, latency, scalability. Write a 1-page technical brief.
