# Why ORB

A deployment platform built specifically for AI agents that need persistent environments, at a cost that doesn't scale linearly with the number of agents.

## What ORB is for

You're building a product that runs AI agents for your users. Each user gets an agent. Each agent needs its own environment — files, packages, network. ORB gives each agent a computer.

Your backend creates computers on ORB via the API and deploys your agent code to them. Your users interact with the agent through your product. ORB runs the agent and handles the infrastructure.
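A minimal sketch of that flow, in Python. The base URL, endpoint path, image name, and field names below are all assumptions for illustration — ORB's real API may differ, so treat this as the shape of the call, not its exact contract:

```python
import json
import urllib.request

ORB_API = "https://api.orbcloud.dev/v1"  # hypothetical base URL
ORB_TOKEN = "orb_sk_..."                 # your ORB API token

def create_computer_payload(user_id: str) -> dict:
    """Build a (hypothetical) create-computer request body:
    one isolated computer per user of your product."""
    return {
        "name": f"agent-{user_id}",
        "image": "ubuntu-22.04",  # assumed base image identifier
        "cpu": 2,
        "memory_gb": 4,
    }

def create_computer(user_id: str) -> dict:
    """POST the payload to ORB and return the new computer's metadata."""
    req = urllib.request.Request(
        f"{ORB_API}/computers",
        data=json.dumps(create_computer_payload(user_id)).encode(),
        headers={
            "Authorization": f"Bearer {ORB_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # On signup: give the new user their own computer, then deploy
    # your agent code to it (deploy step omitted in this sketch).
    computer = create_computer("user-a")
    print(computer["id"])
```

Your signup handler would run this once per new user, then deploy the agent code to the returned computer.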

## How it fits into your product

```
Your product (your backend, your frontend)
  │
  ├── User A signs up → your backend creates Computer A on ORB → deploys agent
  ├── User B signs up → your backend creates Computer B on ORB → deploys agent
  └── User C signs up → your backend creates Computer C on ORB → deploys agent
```

Each computer is isolated. Each agent has its own files, packages, and network. Your users never see ORB — they use your product.

## Use ORB when

You need to deploy AI agents that run for minutes, hours, or days. Each agent needs its own environment — files, packages, running processes, network. You need many of them at a cost that doesn't scale linearly with the number of agents.

Good fits:
- A coding agent that works on a repo for hours, calling LLMs between edits
- A sales agent managing prospect pipelines over days
- A support agent handling conversations with pauses between messages
- An orchestrator coordinating multiple sub-agents on a complex task
- Any agent that calls LLM APIs and needs a persistent environment

## Don't use ORB when

Your workload doesn't need a persistent environment.

| Workload | Better option | Why |
|---|---|---|
| Traditional web app (Next.js, Rails, Django) with no LLM calls | Vercel, Railway, Fly | Purpose-built for web serving. ORB's efficiency comes from checkpointing agents *while they wait on LLM responses* — a web app that doesn't call LLMs gets none of that benefit. |
| Run a script, return output | Lambda, Modal | No state to preserve |
| Stateless API call | Cloud function | No persistent process needed |
| One-shot code execution | E2B | Sandbox spins up, runs, tears down |
| CI/CD pipeline | GitHub Actions | Purpose-built for builds |

**The key question: does your agent call LLMs and need to stay alive between calls?**

If yes — deploy to ORB.

If no — use something simpler. ORB's whole reason for existing is checkpointing idle agents while they wait on LLM responses. Workloads that don't wait on LLMs don't benefit.
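To make the key question concrete, here is a toy agent loop (all names are illustrative, and `call_llm` is a stub standing in for a real LLM API call). The point is where the time goes — the process spends most of its wall-clock life blocked inside `call_llm`, and that idle wait is the window ORB checkpoints:

```python
import time

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call. In production this is a network
    round trip that can take seconds to minutes; the agent process is
    idle the whole time — exactly the window ORB checkpoints."""
    time.sleep(0.01)  # simulate waiting on the model
    return f"step for: {prompt}"

def agent_loop(task: str, max_steps: int = 3) -> list:
    """Minimal agent loop: plan with the LLM, act locally, repeat.
    Wall-clock time is dominated by the call_llm waits, not the work."""
    history = []
    for _ in range(max_steps):
        plan = call_llm(task)  # agent blocks here — checkpointable
        history.append(plan)   # "act" on the plan (trivially, here)
    return history
```

If your workload has no loop like this — no long blocking waits with live state worth preserving — a simpler platform wins.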

## Who uses this

**Coding agent platforms** — Cognition (Devin), Factory AI, Cosine (Genie), Replit Agent. Each developer gets a coding agent in its own environment with the repo, build tools, and running servers.

**Agent orchestrators** — Composio, CrewAI, LangChain. Multi-agent workflows where agents need a persistent shared environment.

**Sales and GTM agents** — 11x.ai, Artisan, Relevance AI. Each customer gets an always-on sales agent managing pipelines.

**Legal and research agents** — Harvey AI. Agents that process thousands of documents over hours.

**Customer support agents** — Decagon. Agents handling live conversations, maintaining context.

**Browser-use / web automation agents** — Agents that browse the web, fill forms, scrape data, take screenshots. Python (browser-use, Playwright) or Node.js (Puppeteer) launches Chrome inside the ORB computer. CDP stays local — no fragile WebSocket over the internet. The entire browser process tree (Chrome + tabs + extensions) checkpoints to NVMe and wakes in ~500ms with full state.

```
Your product
  │
  ├── POST https://{id}.orbcloud.dev/browse  →  agent launches Chrome, navigates, screenshots
  ├── POST https://{id}.orbcloud.dev/task    →  LLM plans what to browse, agent executes
  └── Agent sleeps between tasks             →  Chrome frozen to NVMe, zero RAM, zero cost
```
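The agent-side handler behind an endpoint like `/browse` can be sketched with Playwright's sync API. The function name and screenshot path are mine; the Playwright calls are standard. Because Chrome is launched by the same process inside the computer, CDP traffic never crosses the internet:

```python
def browse_and_screenshot(url: str, out_path: str = "page.png") -> str:
    """Launch Chrome locally inside the computer, navigate, screenshot.
    CDP stays on-machine — Playwright talks to a Chrome it started itself."""
    # Imported lazily so this module loads even where Playwright isn't
    # installed; inside the ORB computer it would be a top-level import.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        page.screenshot(path=out_path)
        browser.close()
    return out_path

if __name__ == "__main__":
    browse_and_screenshot("https://example.com")
```

When the agent sleeps between tasks, this whole process tree — Python, Chrome, open tabs — is what gets frozen to NVMe and restored on the next request.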

## ORB vs alternatives

| | ORB | VPS | Docker | Lambda | E2B |
|---|---|---|---|---|---|
| Deploy your agent | Yes | Manual | Manual | No (functions only) | No (sandboxes only) |
| Persistent environment | Yes | Yes | Yes | No | No |
| Full Linux | Yes | Yes | Yes | No | Limited |
| API-managed | Yes | No (SSH) | Docker API | Yes | Yes |
| Accessible via URL | Yes | Manual | Manual | Yes | No |
| Checkpoints idle agents | Yes | No | No | N/A | No |
