v1.2.0
Release Date
February 3, 2026
SDK
New Features
Runtime Environment
References: Documentation
- NEW: Executable symlink support via `extra_symlink_dir` and `extra_symlink_executables` for exposing runtime executables to system paths
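A rough illustration of how these two fields might be set is shown below. Only the key names come from this release; where they live, the surrounding structure, and the exact value semantics are assumptions to be checked against the Runtime Environment documentation.

```python
# Hypothetical runtime environment settings. Only the key names
# extra_symlink_dir and extra_symlink_executables come from this release;
# their placement and exact semantics are assumptions.
runtime_env_settings = {
    "extra_symlink_dir": "/usr/local/bin",           # assumed: directory used for the symlinks
    "extra_symlink_executables": ["node", "iflow"],  # assumed: executables to expose on the system path
}
```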
Model Service (Experimental)
References: Documentation
Handles AI model call communications between agents and LLM inference services:
- Local mode: File-based IPC between Agent and Roll runtime, implementing request-response via file protocol
- Proxy mode: Request forwarding to external LLM services with routing and retry support
- SDK supports `anti_call_llm` for proxying Agent requests
- NEW: Complete CLI start command parameters (`--config-file`, `--host`, `--port`, `--proxy-base-url`, `--retryable-status-codes`, `--request-timeout`)
- NEW: Trajectory (traj) logging - records LLM request/response to JSONL files (`ROCK_MODEL_SERVICE_DATA_DIR`, `ROCK_MODEL_SERVICE_TRAJ_APPEND_MODE`); see the reading sketch after this list
- NEW: Server-side `ModelServiceConfig` configuration (`host`, `port`, `proxy_base_url`, `proxy_rules`, `retryable_status_codes`, `request_timeout`)
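The sketch below shows one way to read back trajectory records after a run. Only the `ROCK_MODEL_SERVICE_DATA_DIR` variable and the JSONL format come from this release; the file layout inside the directory and the per-record field names are assumptions.

```python
# Read back trajectory (traj) records written by the Model Service as JSONL.
# Assumptions: files under ROCK_MODEL_SERVICE_DATA_DIR end in .jsonl, and each
# line is a self-contained JSON object describing one LLM request/response.
import json
import os
from pathlib import Path

data_dir = Path(os.environ.get("ROCK_MODEL_SERVICE_DATA_DIR", "./model_service_data"))

for traj_file in sorted(data_dir.glob("**/*.jsonl")):
    with traj_file.open() as f:
        for line in f:
            record = json.loads(line)
            print(traj_file.name, list(record.keys()))
```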
Agent Examples
Practical examples demonstrating ROCK Agent integration with various AI frameworks:
- claude_code/: Claude Code Agent integration example using Node.js runtime
  - Command: `claude -p ${prompt}`
  - Requires: `npm install -g @anthropic-ai/claude-code`
  - Configuration via `rock_agent_config.yaml`
- iflow_cli/: iFlow CLI Agent integration example using Node.js runtime
  - Command: `iflow -p ${prompt} --yolo`
  - Requires: `npm i -g @iflow-ai/iflow-cli@latest`
- swe_agent/: SWE Agent integration example
  - Demonstrates standard ROCK Agent setup pattern
- iflow_cli/integration_with_model_service/: Model Service integration examples
  - local/: Local mode example with custom LLM backend
    - Shows `anti_call_llm` usage and model service loop pattern
  - proxy/: Proxy mode example with iFlow CLI
    - Demonstrates `rock model-service start --type proxy --proxy-base-url` usage
Admin
New Features
Task Scheduler
A flexible task scheduling system for managing periodic maintenance tasks across Ray workers.
Built-in Tasks:
- `ImageCleanupTask`: Docker image cleanup using docuum with a configurable disk threshold
Configuration Example:
```yaml
scheduler:
  enabled: true
  worker_cache_ttl: 3600
  tasks:
    - task_class: "rock.admin.scheduler.tasks.image_cleanup_task.ImageCleanupTask"
      enabled: true
      interval_seconds: 3600
      params:
        threshold: "1T"
```
Extensibility:
Create custom tasks by extending `BaseTask` and implementing the `run_action(runtime: RemoteSandboxRuntime)` method.
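A minimal sketch of a custom task follows. The import paths are guesses modeled on the `ImageCleanupTask` module path above, and the `runtime.run(...)` call stands in for whatever command-execution helper `RemoteSandboxRuntime` actually exposes.

```python
# Hypothetical custom scheduler task. Import paths and the runtime.run(...)
# helper are assumptions; only the BaseTask / run_action contract comes from
# the release notes above.
from rock.admin.scheduler.tasks.base_task import BaseTask     # assumed module path
from rock.sandbox.remote_runtime import RemoteSandboxRuntime  # assumed module path


class TempFileCleanupTask(BaseTask):
    """Periodically remove stale files from /tmp on each worker."""

    def run_action(self, runtime: RemoteSandboxRuntime) -> None:
        # Assumes the runtime exposes a shell-execution helper; the real
        # method name may differ.
        runtime.run("find /tmp -type f -mtime +1 -delete")
```

The custom class is then referenced by its dotted path in the scheduler configuration's `task_class` field, just like the built-in `ImageCleanupTask` above.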
Entrypoints
- NEW: Batch sandbox status query API (`POST /sandboxes/batch`) - efficiently retrieve status information for multiple sandboxes in a single request (see the client sketch after this list)
  - Accepts `BatchSandboxStatusRequest` with a `sandbox_ids` list
  - Returns `BatchSandboxStatusResponse` containing status details for all requested sandboxes
  - Supports up to a configurable maximum count (default from `batch_get_status_max_count`)
- NEW: Sandbox listing and filtering API (`GET /sandboxes`) - query and filter sandboxes with flexible query parameters (also shown in the sketch after this list)
  - Supports pagination via `page` and `page_size` parameters
  - Returns `SandboxListResponse` with items, total count, and a `has_more` indicator
  - Enables filtering by sandbox attributes (e.g., deployment type, status, custom metadata)
  - Maximum page size configurable via `batch_get_status_max_count`
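A minimal client-side sketch of both endpoints follows, assuming the Admin API is reachable over plain HTTP at a local address. The paths, the `sandbox_ids` field, the `page`/`page_size` parameters, and the `items`/`has_more` response fields come from the notes above; the base URL and the `status` filter name are assumptions.

```python
# Hypothetical client calls to the new sandbox endpoints. The base URL and the
# 'status' filter parameter are assumptions; paths and field names follow the
# release notes above.
import requests

ADMIN_BASE_URL = "http://localhost:8000"  # assumed Admin service address

# Batch status query (POST /sandboxes/batch) with a BatchSandboxStatusRequest body.
batch = requests.post(
    f"{ADMIN_BASE_URL}/sandboxes/batch",
    json={"sandbox_ids": ["sbx-123", "sbx-456"]},
    timeout=10,
)
batch.raise_for_status()
print(batch.json())  # BatchSandboxStatusResponse with per-sandbox status details

# Paginated listing (GET /sandboxes), walking pages until has_more is false.
page = 1
while True:
    resp = requests.get(
        f"{ADMIN_BASE_URL}/sandboxes",
        params={"page": page, "page_size": 50, "status": "running"},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()  # SandboxListResponse: items, total, has_more
    for item in body["items"]:
        print(item)
    if not body.get("has_more"):
        break
    page += 1
```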
Sandbox
- Modify sandbox log format: use ISO 8601 format for timestamps; the default time zone is Asia/Shanghai (format illustrated below)
- Enrich SandboxInfo: add create_time, start_time, and stop_time for metrics
- Add a billing log when a sandbox is closed
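For reference, the snippet below shows what an ISO 8601 timestamp in the Asia/Shanghai zone looks like when produced with Python's standard library; it illustrates the format only and is not the sandbox's logging code.

```python
# Illustration of an ISO 8601 timestamp in the Asia/Shanghai time zone;
# not the actual sandbox logging implementation.
from datetime import datetime
from zoneinfo import ZoneInfo

print(datetime.now(ZoneInfo("Asia/Shanghai")).isoformat())
# e.g. 2026-02-03T10:15:30.123456+08:00
```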
Enhancements
Performance Optimizations
- Decouple the `get_status` API logic from Ray, reducing API latency from 1s+ to 100ms+; support dynamically toggling between the new and legacy logic.
- Sink Ray operations from the sandbox manager into the Ray service.