For agents
Compute Grid is an open, MCP-native price feed for the GPU spot market. Agents can find, compare, and delegate to the cheapest available GPU as easily as they list files. Everything below is free and unauthenticated.
Connect via MCP
One command. No keys, no accounts. Streamable HTTP transport, session-less, JSON responses by default.
claude mcp add --transport http computegrid https://[host]/mcp
The same endpoint works in Cursor, custom Anthropic SDK clients, and any MCP-compatible host.
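For hosts without built-in MCP support, the same call can be made as a plain JSON-RPC 2.0 POST. A minimal sketch, assuming the standard MCP `tools/call` request shape; `build_tool_call` is an illustrative helper, not part of the feed, and the initialization handshake is omitted since the transport here is session-less:

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

body = build_tool_call("find_cheapest", {"gpu_model": "H100 SXM", "gpu_count": 8})
# POST `body` to https://[host]/mcp with Content-Type: application/json
```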
Tools
list_gpus(filters)
Returns matching GPU rows sorted by price ascending. Every filter is optional.
{
gpu_model?: string, // substring match
provider?: string[], // e.g. ["runpod", "lambda-labs"]
tier?: string[], // ["secure","community","verified",...]
max_price_per_gpu_hour?: number,
min_vram_gb?: number,
available_only?: boolean, // default true
region?: string, // substring match
limit?: number // default 100
}
find_cheapest(input)
One call, full context. Returns the cheapest matching offer, five alternatives, and market context (median price, total available count, and the cheapest price 24 hours ago).
{
gpu_model: string, // required
gpu_count?: number, // default 1
min_vram_gb?: number,
tier?: string[],
region?: string
} -> {
cheapest: GpuRow,
alternatives: GpuRow[],
market_context: {
median_price_per_gpu_hour_usd: number,
total_available_count: number,
cheapest_24h_ago: number | null
}
}
HTTP endpoints
For agents that don't speak MCP, the same data is open over plain HTTP.
GET /api/snapshot.json · full current snapshot, { fetched_at, rows: GpuRow[] }, cached 60s
GET /api/stream · Server-Sent Events; snapshot on connect, diff on every refresh (~120s)
GET /api/health · liveness + last snapshot age
GET /llms.txt · agent-discoverable site description (per llmstxt.org)
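Consuming the snapshot is a one-liner plus a filter. A minimal sketch of picking the cheapest available offer for a model; the two sample rows below are made up, and real rows come from GET /api/snapshot.json and follow the GpuRow schema:

```python
def cheapest(rows, gpu_model):
    """Return the cheapest available row whose gpu_model matches (substring), or None."""
    candidates = [
        r for r in rows
        if r["available"] and gpu_model.lower() in r["gpu_model"].lower()
    ]
    return min(candidates, key=lambda r: r["price_per_gpu_hour_usd"], default=None)

# Hypothetical rows standing in for snapshot["rows"]:
sample_rows = [
    {"gpu_model": "H100 SXM", "price_per_gpu_hour_usd": 2.99,
     "available": True, "provider": "example-a"},
    {"gpu_model": "H100 SXM", "price_per_gpu_hour_usd": 1.33,
     "available": True, "provider": "example-b"},
]
best = cheapest(sample_rows, "H100")
```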
Schema (GpuRow)
{
id: string, // stable across refreshes
provider: string, // "runpod", "vast", "lambda-labs", ...
tier: "secure" | "community" | "verified" | "unverified" | "standard",
gpu_model: string, // canonical: "H100 SXM", "A100 80GB"
vram_gb: number, // per single GPU
gpu_count: number, // 1, 2, 4, 8
price_per_gpu_hour_usd: number,
price_per_instance_hour_usd: number,
available: boolean,
offer_count: number, // > 1 for marketplace dedup groups
regions: string[],
metadata: {
raw_provider_id: string,
source?: "getdeploying", // present when sourced via aggregator
source_url?: string,
... // provider-specific extras
},
fetched_at: string // ISO 8601
}
Data sources
Three direct integrations refreshed every 120 seconds: RunPod (GraphQL), Vast.ai (REST bundles), Vultr (REST plans).
Plus 30+ additional providers · Lambda Labs, CoreWeave, AWS, GCP, Azure, OVH, Scaleway, Crusoe, Hyperstack, Cudo Compute, TensorDock, Paperspace, Fluidstack, and more · sourced via getdeploying.com, which updates daily upstream. Aggregator-sourced rows are tagged with metadata.source = "getdeploying".
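Because aggregator rows refresh daily while direct rows refresh every 120 seconds, an agent may want to treat them differently. A small sketch of splitting rows by provenance using the metadata.source tag; the sample rows are made up:

```python
def by_provenance(rows):
    """Split rows into (direct, aggregated) based on metadata.source."""
    direct, aggregated = [], []
    for r in rows:
        bucket = aggregated if r.get("metadata", {}).get("source") == "getdeploying" else direct
        bucket.append(r)
    return direct, aggregated

rows = [
    {"provider": "runpod", "metadata": {"raw_provider_id": "x"}},
    {"provider": "lambda-labs",
     "metadata": {"raw_provider_id": "y", "source": "getdeploying"}},
]
direct, aggregated = by_provenance(rows)
```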
Tier semantics
- Managed cloud (secure, standard): listed by managed cloud providers with SLAs and support. Higher prices, predictable.
- Marketplace (community, verified, unverified): peer-to-peer listings (Vast.ai, RunPod Community). Lower prices, more variance, no SLA.
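The tier split above maps directly onto prices. A sketch of comparing median $/GPU-hr across the two groups, using the tier names from this page; the rows and the resulting medians are illustrative, not live market data:

```python
from statistics import median

MANAGED = {"secure", "standard"}  # per the tier semantics above

def tier_medians(rows):
    """Median price_per_gpu_hour_usd for managed-cloud vs marketplace rows."""
    managed = [r["price_per_gpu_hour_usd"] for r in rows if r["tier"] in MANAGED]
    marketplace = [r["price_per_gpu_hour_usd"] for r in rows if r["tier"] not in MANAGED]
    return {
        "managed": median(managed) if managed else None,
        "marketplace": median(marketplace) if marketplace else None,
    }

rows = [
    {"tier": "secure", "price_per_gpu_hour_usd": 2.99},
    {"tier": "standard", "price_per_gpu_hour_usd": 2.49},
    {"tier": "community", "price_per_gpu_hour_usd": 1.49},
    {"tier": "unverified", "price_per_gpu_hour_usd": 1.33},
]
m = tier_medians(rows)
```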
Example: agent shopping for compute
agent → find_cheapest({ gpu_model: "H100 SXM", gpu_count: 8 })
→ cheapest: $1.33/hr/GPU · Vast.ai Unverified
alternatives: 5 within +20%
cloud price: $2.99/hr/GPU · RunPod Secure
market_context: 32 listings, median $2.49, no 24h history yet
Open and verifiable
Every row links back to its source. No markup, no platform fee, no API key. Source code is on GitHub.