# Files API Guide
The Files API gives you direct, authenticated access to a container’s workspace. It is backed by Google Cloud Storage, so every change is persistent and versionable outside the container lifecycle.
## Authentication

All requests require a TestBase API key in an `Authorization: Bearer tb_{env}_{id}` header. The same key you use to create containers grants access to that account’s container files.
## REST endpoints

### List files
```http
GET /api/v1/containers/:id/files?path=/workspace
Authorization: Bearer tb_prod_xxx
```

Returns directory metadata pulled from GCS:

```json
{
  "path": "/workspace",
  "files": [
    { "name": "README.md", "type": "file", "size": 4287 },
    { "name": "src", "type": "directory", "size": 0 }
  ]
}
```

Notes:

- Non-existent paths return an empty `files` array.
- Size is reported in bytes.
- Directories are inferred from object prefixes in GCS.
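
If you are calling the REST endpoint directly rather than through the SDK, a minimal sketch with `fetch` looks like the following. The `TESTBASE_API_URL` environment variable and the `ctr_123` container id are illustrative assumptions; the path and response shape come from the request above.

```ts
// Minimal sketch: list /workspace over raw HTTP.
// TESTBASE_API_URL and containerId are illustrative assumptions.
const containerId = 'ctr_123';
const res = await fetch(
  `${process.env.TESTBASE_API_URL}/api/v1/containers/${containerId}/files?path=/workspace`,
  { headers: { Authorization: `Bearer ${process.env.TESTBASE_API_KEY}` } },
);
const { files } = (await res.json()) as {
  files: { name: string; type: string; size: number }[];
};
for (const f of files) {
  // size is in bytes; directories report 0
  console.log(`${f.type.padEnd(9)} ${f.name} (${f.size} bytes)`);
}
```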
### Download a file
```http
GET /api/v1/containers/:id/files/download?path=/workspace/output.log
Authorization: Bearer tb_prod_xxx
```

- The response body contains the raw file bytes.
- `Content-Disposition: attachment; filename="output.log"` ensures predictable saves.
- Returns `404` if the object does not exist.
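
Over raw HTTP, the response can be buffered and written to disk; a minimal sketch, with the same illustrative `TESTBASE_API_URL` and container id assumptions as above:

```ts
// Minimal sketch: download raw bytes and save them locally.
import { writeFile } from 'node:fs/promises';

const containerId = 'ctr_123'; // illustrative
const res = await fetch(
  `${process.env.TESTBASE_API_URL}/api/v1/containers/${containerId}/files/download` +
    `?path=${encodeURIComponent('/workspace/output.log')}`,
  { headers: { Authorization: `Bearer ${process.env.TESTBASE_API_KEY}` } },
);
if (res.status === 404) throw new Error('no such object');
// The body is raw file bytes, not JSON, so buffer it as-is.
await writeFile('output.log', Buffer.from(await res.arrayBuffer()));
```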
### Upload or overwrite
```http
PUT /api/v1/containers/:id/files
Authorization: Bearer tb_prod_xxx
Content-Type: application/json

{
  "path": "/workspace/config/app.json",
  "content": "{ \"enabled\": true }"
}
```

- Uploads write to GCS immediately.
- Binary payloads should be base64-encoded on the client; if you go through the SDK instead, decode them to a Buffer first (the SDK accepts Buffers). See the sketch after this list.
- Parent directories are created automatically as GCS prefixes.
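
As a sketch of the base64 route over raw HTTP. Whether the endpoint decodes base64 `content` automatically is not specified here, so treat this as the client-side encoding pattern rather than a confirmed contract; the file name and container id are illustrative:

```ts
// Minimal sketch: base64-encode binary bytes so they survive the JSON body.
import { readFile } from 'node:fs/promises';

const containerId = 'ctr_123'; // illustrative
const bytes = await readFile('logo.png'); // Buffer of raw binary data
await fetch(`${process.env.TESTBASE_API_URL}/api/v1/containers/${containerId}/files`, {
  method: 'PUT',
  headers: {
    Authorization: `Bearer ${process.env.TESTBASE_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    path: '/workspace/assets/logo.png',
    content: bytes.toString('base64'), // raw bytes would corrupt inside a JSON string
  }),
});
```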
### Delete a file
```http
DELETE /api/v1/containers/:id/files?path=/workspace/output.log
Authorization: Bearer tb_prod_xxx
```

- A successful deletion returns `204 No Content`.
- Attempting to delete a non-existent file yields `404 Not Found`.
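
A minimal sketch distinguishing the two documented outcomes, under the same illustrative assumptions as the earlier snippets:

```ts
// Minimal sketch: delete a file and branch on the documented status codes.
const containerId = 'ctr_123'; // illustrative
const res = await fetch(
  `${process.env.TESTBASE_API_URL}/api/v1/containers/${containerId}/files` +
    `?path=${encodeURIComponent('/workspace/output.log')}`,
  {
    method: 'DELETE',
    headers: { Authorization: `Bearer ${process.env.TESTBASE_API_KEY}` },
  },
);
if (res.status === 204) console.log('deleted');
else if (res.status === 404) console.log('nothing existed at that path');
```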
## Cloud SDK helpers

`@testbase/cloud` wraps these endpoints with ergonomic methods:
```ts
import { TestbaseCloud } from '@testbase/cloud';

const cloud = new TestbaseCloud({ apiKey: process.env.TESTBASE_API_KEY });

const agent = await cloud.createCloudAgent({
  name: 'Cloud Worker',
  agentType: 'worker',
  workspace: './repo'
});

await cloud.uploadFile(agent.containerId, '/workspace/.env', 'SECRET=1');
const tree = await cloud.listFiles(agent.containerId, '/workspace');
const { content } = await cloud.downloadFile(agent.containerId, '/workspace/output.txt');
await cloud.deleteFile(agent.containerId, '/workspace/tmp.log');
```

Helpers return typed results (Buffers for downloads, structured metadata for listings) and handle retries, timeouts, and authentication internally.
## Working with large files

- The Cloud SDK streams downloads; if you hit memory limits, write the returned Buffer straight to disk with `fs.createWriteStream` instead of holding it in memory (see the sketch below).
- For uploads over a few megabytes, consider chunking them, or store build artefacts elsewhere (an artifacts bucket) and reference them via MCP rather than placing them in the workspace.
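
A sketch of the first point, reusing the `cloud` and `agent` objects from the SDK example above and assuming `downloadFile` resolves to `{ content: Buffer }` as shown there; the archive name is illustrative:

```ts
// Minimal sketch: write a downloaded Buffer straight to disk.
// cloud and agent come from the SDK example above.
import { createWriteStream } from 'node:fs';
import { once } from 'node:events';

const { content } = await cloud.downloadFile(agent.containerId, '/workspace/build.tar.gz');
const out = createWriteStream('build.tar.gz');
out.end(content);          // write the bytes and close the stream
await once(out, 'finish'); // resolve only after they reach disk
```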
## Concurrency and consistency
- File operations are atomic at the object level. Uploads replace an entire file; there is no PATCH API.
- Multiple clients acting on the same path should coordinate externally to avoid racing writes.
- After an upload, agents running inside the container will see the new content once the sync loop flushes. For time-sensitive updates, upload before starting the task (see the sketch below).
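
To make the last point concrete, a sketch of the safe ordering. `runTask` is a hypothetical stand-in for however tasks are started in your setup; the point is only that the upload resolves first:

```ts
// Sketch: flush workspace changes before any work starts.
// cloud and agent come from the SDK example above; runTask is hypothetical.
await cloud.uploadFile(
  agent.containerId,
  '/workspace/config/app.json',
  JSON.stringify({ enabled: true }),
);
// Start the task only after the upload resolves, so the agent's next
// sync already includes the new config.
await cloud.runTask(agent.containerId, 'rebuild using the new config'); // hypothetical
```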
## Troubleshooting

| Symptom | Likely cause | Fix |
|---|---|---|
| `401 Unauthorized` | Missing or malformed API key | Verify the `Authorization: Bearer tb_*` header |
| `404 Not Found` during download | File never created, or wrong path (remember `/workspace/…`) | Call `listFiles` first to confirm the path |
| Upload accepted but agent doesn’t see changes | Container cached old workspace | Restart the container or trigger a new sync by pausing/retrying the task |
| Large binary files corrupted | Sent as a UTF-8 JSON string | Base64-encode binaries or use the SDK Buffer signature |
Use the Files API to seed dependencies, fetch artefacts, or prune workspaces—whether the agent lives locally or in the cloud, the workflow stays consistent.