Tensorlake provides dynamic, persistent sandboxes for AI agents. Run untrusted code securely, fan out parallel execution across clusters, and checkpoint agents to resume them on demand — from the CLI or the Python SDK.
```shell
pip install tensorlake
tl login
tl sbx new my-sandbox --cpus 1.0 --memory 1024
tl sbx exec <sandbox-id> python -c 'print("Hello from Tensorlake")'
tl sbx snapshot <sandbox-id>
```

```python
from tensorlake.sandbox import SandboxClient

client = SandboxClient()
sandbox = client.create_and_connect(name="my-sandbox", cpus=1.0, memory_mb=1024)
result = sandbox.run("python", ["-c", "print('Hello from Tensorlake')"])
print(result.stdout)

# Checkpoint and snapshot
snapshot = client.snapshot_and_wait(sandbox.sandbox_id)
```

- Distributed Fan-out — Go from one sandbox to hundreds per second for deep research and parallel tool use
- Stateful Suspend & Resume — Checkpoint agents to cold storage, wake them up instantly when needed
- Dynamic Resources — Allocate CPU, memory, and GPU on the fly — no static templates
- SSD-native I/O — Built for workloads that read and write heavily, not just fast boot
- BYOC / On-Prem — Run on your infrastructure for cost control, security, and compliance
- Install Anything — Docker, Kubernetes, systemd — full Linux environments, not stripped-down containers
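The fan-out pattern above can be sketched with standard-library concurrency. This is a minimal illustration, not Tensorlake API: `run_task` is a hypothetical stand-in for the per-sandbox work you would do with the SDK (e.g. `client.create_and_connect` followed by `sandbox.run`).

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(task_id: int) -> str:
    # Hypothetical stand-in for creating a sandbox and running a
    # command in it via the Tensorlake SDK.
    return f"result-{task_id}"

def fan_out(n: int) -> list[str]:
    # Launch n tasks concurrently; in the real pattern, each task
    # would own its own sandbox. pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=32) as pool:
        return list(pool.map(run_task, range(n)))

print(fan_out(100)[:3])
```

The same structure applies whether you fan out to ten sandboxes or hundreds; only the executor's `max_workers` and the task list change.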
| Repository | Description |
|---|---|
| tensorlake/sdk | CLI and Python SDK |
| tensorlake/cookbooks | Example projects and integrations |
| tensorlake/skills | Skills for coding agents to use Tensorlake |
| benchmarks | Sandbox file system I/O benchmarks |
