53 changes: 53 additions & 0 deletions openhands/usage/advanced/configuration-options.mdx
@@ -413,6 +413,59 @@ All sandbox configuration options can be set as environment variables by prefixi
- Default: `""`
- Description: BrowserGym environment to use for evaluation

### GPU Support
- `enable_gpu`
- Type: `bool`
- Default: `false`
- Description: Enable GPU support in the runtime container
- Note: Requires NVIDIA Container Toolkit (nvidia-docker2) installed on the host

- `cuda_visible_devices`
- Type: `str`
- Default: `""`
- Description: Specify which GPU devices to make available to the container
- Examples:
- `""` (empty) - Mounts all available GPUs
- `"0"` - Mounts only GPU 0
- `"0,1"` - Mounts GPUs 0 and 1
- `"2,3,4"` - Mounts GPUs 2, 3, and 4
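
The interpretation above (empty string means all GPUs, otherwise a comma-separated ID list) can be sketched as a small helper. This is a hypothetical illustration of the documented semantics, not code from OpenHands:

```python
from typing import List, Optional

def parse_cuda_visible_devices(value: str) -> Optional[List[int]]:
    """Interpret a cuda_visible_devices-style string.

    Returns None for an empty string (meaning "mount all GPUs"),
    otherwise the list of requested GPU IDs.
    """
    if not value.strip():
        return None
    return [int(part.strip()) for part in value.split(",")]
```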

**Example GPU Configuration:**
```toml
[sandbox]
# Enable GPU support; with cuda_visible_devices unset, all GPUs are mounted
enable_gpu = true

# Optionally restrict the container to specific GPU IDs
cuda_visible_devices = "0,1"

# Use a CUDA-enabled base image for GPU workloads
base_container_image = "nvidia/cuda:12.2.0-devel-ubuntu22.04"
```

**Prerequisites for GPU Support:**
1. NVIDIA GPU with drivers installed on the host
2. NVIDIA Container Toolkit (nvidia-docker2) installed:
```bash
# For Ubuntu/Debian
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```

**Verifying GPU Access:**

After enabling GPU support, you can verify GPU access in OpenHands by asking the agent to run:
```bash
nvidia-smi
```

This should display your GPU information if GPU support is properly configured.

## Security Configuration

The security configuration options are defined in the `[security]` section of the `config.toml` file.
4 changes: 2 additions & 2 deletions openhands/usage/environment-variables.mdx
@@ -112,8 +112,8 @@ These variables correspond to the `[sandbox]` section in `config.toml`:
| `SANDBOX_PAUSE_CLOSED_RUNTIMES` | boolean | `false` | Pause instead of stopping closed runtimes |
| `SANDBOX_CLOSE_DELAY` | integer | `300` | Delay before closing idle runtimes (seconds) |
| `SANDBOX_RM_ALL_CONTAINERS` | boolean | `false` | Remove all containers when stopping |
| `SANDBOX_ENABLE_GPU` | boolean | `false` | Enable GPU support |
| `SANDBOX_CUDA_VISIBLE_DEVICES` | string | `""` | Specify GPU devices by ID |
| `SANDBOX_ENABLE_GPU` | boolean | `false` | Enable GPU support (requires NVIDIA Container Toolkit) |
| `SANDBOX_CUDA_VISIBLE_DEVICES` | string | `""` | Specify GPU devices by ID (e.g., `"0"`, `"0,1"`, `"2,3,4"`). Empty string mounts all GPUs |
| `SANDBOX_VSCODE_PORT` | integer | auto | Specific port for VSCode server |
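
As a sketch of how the two GPU-related variables and their defaults from the table above combine, the following hypothetical reader (not the application's actual parser) may help:

```python
import os

def sandbox_gpu_settings(environ=os.environ):
    """Read the GPU-related sandbox variables with their documented defaults.

    Returns (enable_gpu, cuda_visible_devices); an empty device string
    means all GPUs are mounted.
    """
    enable = environ.get("SANDBOX_ENABLE_GPU", "false").strip().lower() in ("1", "true", "yes")
    devices = environ.get("SANDBOX_CUDA_VISIBLE_DEVICES", "")
    return enable, devices
```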

### Sandbox Environment Variables
20 changes: 20 additions & 0 deletions openhands/usage/run-openhands/gui-mode.mdx
@@ -57,6 +57,26 @@ openhands serve --gpu --mount-cwd
- NVIDIA GPU drivers must be installed on your host system
- [NVIDIA Container Toolkit (nvidia-docker2)](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) must be installed and configured

**Advanced GPU Configuration:**

For more control over GPU access, you can use environment variables or a `config.toml` file:

```bash
# Enable GPU with specific GPUs via environment variable
export SANDBOX_ENABLE_GPU=true
export SANDBOX_CUDA_VISIBLE_DEVICES="0,1" # Only use GPUs 0 and 1
openhands serve
```

Or in your `config.toml`:
```toml
[sandbox]
enable_gpu = true
cuda_visible_devices = "0,1" # Specify which GPUs to use
```

See the [Configuration Options](/openhands/usage/advanced/configuration-options#gpu-support) page for more details on GPU configuration.

#### Requirements

Before using the `openhands serve` command, ensure that:
140 changes: 140 additions & 0 deletions sdk/guides/agent-server/docker-sandbox.mdx
@@ -605,6 +605,146 @@ http://localhost:8012/vnc.html?autoconnect=1&resize=remote

---

## 4) GPU Support in Docker Sandbox

<Note>
GPU support requires NVIDIA Container Toolkit (nvidia-docker2) to be installed on the host system.
</Note>

The Docker sandbox supports GPU acceleration for compute-intensive tasks like machine learning, data processing, and GPU-accelerated applications. Enable GPU support by setting the `enable_gpu` parameter when creating a `DockerWorkspace` or `DockerDevWorkspace`.

### Basic GPU Configuration

```python
from openhands.workspace import DockerWorkspace

with DockerWorkspace(
server_image="ghcr.io/openhands/agent-server:latest-python",
host_port=8010,
platform="linux/amd64",
enable_gpu=True, # Enable GPU support
) as workspace:
# GPU is now available in the workspace
result = workspace.execute_command("nvidia-smi")
print(result.stdout)
```

### GPU with Custom Base Image

When using GPU-accelerated workloads, you may want to use a CUDA-enabled base image:

```python
from openhands.workspace import DockerDevWorkspace

with DockerDevWorkspace(
base_image="nvidia/cuda:12.2.0-devel-ubuntu22.04",
host_port=8010,
platform="linux/amd64",
enable_gpu=True,
target="source",
) as workspace:
# Workspace has CUDA toolkit and GPU access
result = workspace.execute_command("nvcc --version && nvidia-smi")
print(result.stdout)
```

### Prerequisites for GPU Support

1. **NVIDIA GPU** with drivers installed on the host system
2. **NVIDIA Container Toolkit** (nvidia-docker2) installed:

```bash
# For Ubuntu/Debian
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```

3. **Docker runtime** configured with NVIDIA runtime support

### Verifying GPU Access

After enabling GPU support, verify GPU access in the workspace:

```python
from openhands.workspace import DockerWorkspace

with DockerWorkspace(
server_image="ghcr.io/openhands/agent-server:latest-python",
enable_gpu=True,
) as workspace:
# Check GPU availability
result = workspace.execute_command("nvidia-smi")

if result.exit_code == 0:
print("✅ GPU is accessible:")
print(result.stdout)
else:
print("❌ GPU not accessible:")
print(result.stderr)
```

### GPU-Enabled Use Cases

**Machine Learning Training:**
```python
from openhands.sdk import LLM, Conversation
from openhands.tools.preset.default import get_default_agent
from openhands.workspace import DockerWorkspace
from pydantic import SecretStr
import os

llm = LLM(
usage_id="agent",
model="anthropic/claude-sonnet-4-5-20250929",
api_key=SecretStr(os.getenv("LLM_API_KEY")),
)

with DockerWorkspace(
server_image="ghcr.io/openhands/agent-server:latest-python",
enable_gpu=True,
) as workspace:
agent = get_default_agent(llm=llm, cli_mode=True)
conversation = Conversation(agent=agent, workspace=workspace)

conversation.send_message(
"Install PyTorch with CUDA support and verify GPU is available. "
"Then create a simple neural network training script that uses GPU."
)
conversation.run()
conversation.close()
```

**GPU-Accelerated Data Processing:**
```python
# Reuses llm and the imports from the previous example
with DockerWorkspace(
server_image="ghcr.io/openhands/agent-server:latest-python",
enable_gpu=True,
) as workspace:
agent = get_default_agent(llm=llm, cli_mode=True)
conversation = Conversation(agent=agent, workspace=workspace)

conversation.send_message(
"Install RAPIDS cuDF and process the CSV file using GPU acceleration. "
"Compare performance with pandas CPU processing."
)
conversation.run()
conversation.close()
```

### Notes

- When `enable_gpu=True`, the workspace mounts **all available GPUs** into the container
- Currently, the SDK does not support selective GPU mounting (e.g., mounting only specific GPU IDs)
- For selective GPU control, consider using the main OpenHands application with `SANDBOX_CUDA_VISIBLE_DEVICES` configuration
- GPU support adds the `--gpus all` flag to the Docker container runtime
- Ensure your Docker daemon has proper permissions to access NVIDIA devices
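
The mapping described in the notes above can be sketched as a small helper. This is a hypothetical illustration (`gpu_docker_args` is not part of the SDK): all GPUs by default, or a device list when specific IDs are given, as with the main application's `SANDBOX_CUDA_VISIBLE_DEVICES`:

```python
from typing import List

def gpu_docker_args(enable_gpu: bool, cuda_visible_devices: str = "") -> List[str]:
    """Sketch the docker CLI flags implied by the GPU settings.

    With no device IDs, all GPUs are exposed (`--gpus all`); a
    comma-separated ID list maps to Docker's device= selection syntax.
    """
    if not enable_gpu:
        return []
    if cuda_visible_devices.strip():
        return ["--gpus", f'"device={cuda_visible_devices}"']
    return ["--gpus", "all"]
```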

---

## Next Steps

- **[Local Agent Server](/sdk/guides/agent-server/local-server)**