Edge computing isn't a replacement for cloud — it's a layer. Today you'll understand when each is right and how they work together.
Cloud is unlimited compute with unbounded latency. Edge is constrained compute with bounded latency. Four drivers push work to the edge:
- Latency — a factory robot can't wait 50ms for a cloud round trip to decide whether a part is defective.
- Bandwidth — a factory with 1000 cameras generating 500GB/hour can't send it all to the cloud.
- Privacy — medical images or financial data cannot leave the facility.
- Reliability — a remote wind turbine needs control even when the internet is down.
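To make the bandwidth driver concrete, here is a back-of-the-envelope sketch for the 1000-camera factory. The per-GB rate is illustrative; real cloud transfer pricing varies by provider and direction.

```python
# Back-of-the-envelope check of the bandwidth driver above: 1000 cameras
# producing 500GB/hour in aggregate. The $/GB rate is illustrative.
GB_PER_HOUR = 500        # aggregate footage from all cameras
COST_PER_GB = 0.09       # illustrative cloud transfer rate, USD

monthly_gb = GB_PER_HOUR * 24 * 30          # GB generated per month
monthly_cost = monthly_gb * COST_PER_GB     # transfer cost per month

print(f"Monthly transfer: {monthly_gb:,} GB (~{monthly_gb / 1024:.0f} TB)")
print(f"Monthly cost at ${COST_PER_GB}/GB: ${monthly_cost:,.0f}")
```

At roughly 352 TB and tens of thousands of dollars per month just to move the raw footage, filtering at the edge pays for itself quickly.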
Not all edge is the same:
- Micro-edge/TinyML: microcontrollers (Arduino Nano 33, STM32), 256KB–1MB of memory, µW power; keyword detection, anomaly detection.
- Device edge: Raspberry Pi, NVIDIA Jetson Nano, Google Coral, 1–10W; basic vision/NLP inference.
- Edge server: Intel NUC, NVIDIA Jetson Orin, 15–100W; complex models, local database, multiple camera streams.
- Edge data center: telco MEC (Multi-access Edge Computing), colocated at 5G base stations; near-real-time at scale.
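A small helper can make the taxonomy actionable. `pick_edge_tier` is a hypothetical function with rough illustrative thresholds (not vendor specs), mapping a workload's model size and power budget to one of the tiers above:

```python
# Hypothetical helper: map a workload's constraints to the edge tiers above.
# Thresholds are rough illustrations, not vendor specifications.
def pick_edge_tier(model_size_mb, power_budget_w):
    if model_size_mb <= 1 and power_budget_w < 0.1:
        return "micro-edge / TinyML"   # MCU-class, µW-mW budgets
    if model_size_mb <= 100 and power_budget_w <= 10:
        return "device edge"           # Pi / Jetson Nano / Coral class
    if power_budget_w <= 100:
        return "edge server"           # NUC / Jetson Orin class
    return "edge data center"          # telco MEC

print(pick_edge_tier(0.5, 0.001))  # keyword spotting on a microcontroller
print(pick_edge_tier(50, 5))       # vision model on a Jetson Nano
print(pick_edge_tier(500, 60))     # multi-stream analytics on an edge server
```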
The practical pattern: edge handles real-time decisions (< 10ms), aggregates data, and sends summarized events to the cloud; cloud handles training, complex analytics, fleet management, and model updates. Example: an edge camera detects motion → runs face detection locally → if the face is unknown, sends the image to the cloud for identification → the cloud returns an ID → the edge logs the event. Real-time decisions at the edge, expensive computation in the cloud.
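The camera handoff can be sketched in a few lines. Here `detect_motion`, `detect_faces`, and `cloud_identify` are hypothetical stand-ins for real models and APIs, with dictionary-backed fakes so the flow is runnable:

```python
# Minimal sketch of the camera example above. detect_motion, detect_faces,
# and cloud_identify are hypothetical stand-ins for real models and APIs.
def detect_motion(frame):
    # Edge: cheap check run on every frame
    return frame.get("motion", False)

def detect_faces(frame):
    # Edge: local face-detection model, millisecond-scale
    return frame.get("faces", [])

def cloud_identify(face):
    # Cloud: expensive identification lookup, ~100ms round trip
    directory = {"face_42": "alice"}
    return directory.get(face, "unknown")

def handle_frame(frame, log, local_known=frozenset({"face_7"})):
    if not detect_motion(frame):
        return  # real-time decision stays at the edge
    for face in detect_faces(frame):
        if face in local_known:
            identity = face  # recognized locally, no round trip
        else:
            identity = cloud_identify(face)  # only unknown faces go upstream
        log.append((face, identity))

events = []
handle_frame({"motion": True, "faces": ["face_42", "face_99"]}, events)
print(events)  # [('face_42', 'alice'), ('face_99', 'unknown')]
```

Note that the cloud round trip happens only for unrecognized faces; every motion decision and every known face is resolved locally.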
```python
# Edge vs cloud decision framework
import random

def simulate_latency(location):
    """Simulated round-trip latency in ms for a given processing location."""
    latencies = {
        'local_cpu': 0.5,    # 0.5ms — local inference
        'edge_device': 2,    # 2ms — Raspberry Pi on LAN
        'edge_server': 5,    # 5ms — edge server in building
        'cloud_nearby': 50,  # 50ms — cloud same region
        'cloud_far': 200,    # 200ms — cross-region cloud
    }
    # Add ~10% Gaussian jitter around the base latency
    base = latencies[location]
    return base + random.gauss(0, base * 0.1)

def should_process_at_edge(task):
    """Decision framework: returns ('EDGE' or 'CLOUD', list of reasons)."""
    latency_req = task.get('latency_ms')
    data_size_mb = task.get('data_size_mb', 0)  # MB generated per minute
    privacy_sensitive = task.get('private', False)
    reasons = []
    if latency_req and latency_req < 20:
        reasons.append(f"latency={latency_req}ms < 20ms threshold")
    if data_size_mb > 10:
        cost_per_gb = 0.09  # illustrative cloud data-transfer rate
        monthly_gb = data_size_mb * 60 * 24 * 30 / 1024  # MB/min -> GB/month
        reasons.append(f"bandwidth: ${monthly_gb * cost_per_gb:.0f}/month to cloud")
    if privacy_sensitive:
        reasons.append("privacy: cannot leave facility")
    decision = 'EDGE' if reasons else 'CLOUD'
    return decision, reasons

# Evaluate different workloads
tasks = [
    {"name": "Conveyor defect detection", "latency_ms": 5, "data_size_mb": 100, "private": False},
    {"name": "Patient vital signs", "latency_ms": 100, "data_size_mb": 0.1, "private": True},
    {"name": "Monthly report generation", "latency_ms": None, "data_size_mb": 0.5, "private": False},
    {"name": "Security camera stream", "latency_ms": 50, "data_size_mb": 500, "private": True},
    {"name": "Monthly ML model training", "latency_ms": None, "data_size_mb": 10, "private": False},
]
for task in tasks:
    decision, reasons = should_process_at_edge(task)
    print(f"{decision}: {task['name']}")
    for r in reasons:
        print(f"  - {r}")
    print()
```
Research 5G Multi-access Edge Computing (MEC). Telecom operators colocate servers in 5G base stations — effectively moving cloud compute to within 1km of the device. What latency does MEC achieve? What use cases does it enable that regular cloud cannot? Compare to a Raspberry Pi approach: cost, latency, scalability. Write a 1-page technical brief.