Platform Observability
Real-time metrics for orchestration, compute, and LLM inference.
Status: Operational
Last deployed: 14 mins ago
Active Execution Runs
1,420
↑ 12% vs last hour
Live Micro-VMs (Sandboxes)
1,455
Avg Boot: 450 ms
Tool Error Rate (Global)
2.4%
Elevated: timeouts on Execute/GitHub tools
LLM Inference Spend (Daily)
$4,210
Avg Cost / Run: $0.14
Temporal Orchestrator Load
Job Queue Depth
34 pending
Worker Node CPU
64%
KMS Decryption Latency
45 ms
System Alerts
⚠
Anthropic API Rate Limiting
Cluster US-East-1 is receiving HTTP 429 responses. Automatically failing over to the fallback tier.
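The failover behavior described in this alert can be sketched as a small retry-then-fallback helper. This is a minimal illustration, not the platform's actual implementation: `RateLimited`, `call_with_failover`, and both callables are hypothetical names standing in for the real request wrappers.

```python
import time


class RateLimited(Exception):
    """Raised when the upstream API answers with HTTP 429 (illustrative)."""


def call_with_failover(primary, fallback, retries=2, backoff=0.0):
    """Try `primary` up to `retries` times; on persistent 429s, use `fallback`.

    `primary` and `fallback` are zero-argument callables wrapping the actual
    API requests to the primary and fallback tiers (names are assumptions).
    """
    for _ in range(retries):
        try:
            return primary()
        except RateLimited:
            time.sleep(backoff)  # brief pause before retrying the primary tier
    # Primary tier is still rate limited: route the request to the fallback.
    return fallback()
```

A real version would also honor the `Retry-After` header and record a metric each time the fallback path is taken, so spikes like this one surface on the dashboard.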
✖
Pod Eviction Spikes
Node group `sandbox-workers-b` hit max memory capacity. Auto-scaler triggered.