dstack-cloud extends dstack to deploy containers on GCP Confidential VMs and AWS Nitro Enclaves. It provisions the VM, manages attestation, and handles networking. You get confidential computing on cloud infrastructure without running your own TDX hardware.
Your containers run with full security infrastructure out of the box: key management, remote attestation, hardened OS, and encrypted storage. Users can cryptographically verify exactly what's running.
| Platform | Status | Attestation |
|---|---|---|
| Phala Cloud | Available | TDX |
| GCP Confidential VMs | Available | TDX + TPM |
| AWS Nitro Enclaves | Available | NSM |
| Bare metal TDX | Available | TDX |
1. Create a project:

   ```bash
   dstack-cloud new my-app
   cd my-app
   ```

2. Edit your `docker-compose.yaml`:

   ```yaml
   services:
     vllm:
       image: vllm/vllm-openai:latest
       runtime: nvidia
       command: --model Qwen/Qwen2.5-7B-Instruct
       ports:
         - "8000:8000"
   ```

3. Deploy:

   ```bash
   dstack-cloud deploy
   ```

4. Check status:

   ```bash
   dstack-cloud status
   dstack-cloud logs --follow
   ```

For the full walkthrough, see the Quickstart Guide.
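Once the deployment is running, the vLLM service from step 2 exposes an OpenAI-compatible API on port 8000. A quick smoke test, assuming a hypothetical public URL for your deployment (substitute your deployment's real endpoint once it is up):

```bash
# Hypothetical endpoint, for illustration only.
APP_URL="https://my-app.example.com"

# List the models served by vLLM (OpenAI-compatible API).
curl "$APP_URL/v1/models"

# Send a test chat completion.
curl "$APP_URL/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct",
       "messages": [{"role": "user", "content": "Say hello"}]}'
```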
### Zero friction onboarding
- Docker Compose native: Bring your docker-compose.yaml as-is. No SDK, no code changes.
- Encrypted by default: Network traffic and disk storage encrypted out of the box.
### Hardware-rooted security
- Private by hardware: Data encrypted in memory, inaccessible even to the host.
- Reproducible OS: Deterministic builds mean anyone can verify the OS image hash; see the sketch after this list.
- Workload identity: Every app gets an attested identity users can verify cryptographically.
- Confidential GPUs: Native support for NVIDIA Confidential Computing (H100, Blackwell).
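Because the OS build is deterministic, verification reduces to a digest comparison. A minimal sketch with hypothetical file and variable names (the actual build and verification steps are covered in the Verification guide):

```bash
# Hypothetical names, for illustration only; see the Verification guide
# for the real reproducible-build instructions.
PUBLISHED_HASH="<digest published alongside the release>"

# Digest of the OS image you rebuilt (or downloaded) locally.
LOCAL_HASH=$(sha256sum dstack-os.img | awk '{print $1}')

# Anyone who reproduces the build should land on the identical digest.
if [ "$LOCAL_HASH" = "$PUBLISHED_HASH" ]; then
  echo "OS image matches the published build"
else
  echo "MISMATCH: image differs from the audited build" >&2
fi
```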
### Trustless operations
- Isolated keys: Per-app keys are derived inside the TEE. They survive hardware failure and are never exposed to operators.
- Code governance: Updates follow predefined rules (e.g., multi-party approval). Operators can't swap code or access secrets.
Your container runs inside a Confidential VM (Intel TDX on GCP, Nitro Enclave on AWS). GPU isolation is optional via NVIDIA Confidential Computing. The CPU TEE protects application logic. The GPU TEE protects model weights and inference data.
Core components:

- Guest Agent: Runs inside each CVM. Generates attestation quotes so users can verify exactly what's running. Provisions per-app cryptographic keys from the KMS. Encrypts local storage. Apps interact via `/var/run/dstack.sock`.
- KMS: Runs in its own TEE. Verifies attestation quotes before releasing keys. Enforces authorization policies that operators cannot bypass. Derives deterministic keys bound to each app's attested identity.
- Gateway: Terminates TLS at the edge. Provisions ACME certificates automatically. Routes traffic to CVMs. Internal communication uses RA-TLS for mutual attestation.
- VMM: Parses docker-compose files directly; no app changes needed. Boots CVMs from a reproducible OS image. Allocates CPU, memory, and confidential GPU resources.
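To make the Guest Agent interaction concrete: from inside a container, an app talks to the agent over plain HTTP on the Unix socket. The sketch below uses curl; the endpoint paths and parameters are illustrative assumptions, not the agent's documented API, so check the SDK READMEs for the real surface:

```bash
# Assumed endpoint paths, for illustration only; consult the SDK docs
# for the guest agent's actual HTTP API.

# Query basic instance information over the Unix socket.
curl --unix-socket /var/run/dstack.sock http://localhost/info

# Request an attestation quote bound to 64 bytes of application data.
curl --unix-socket /var/run/dstack.sock \
  "http://localhost/get_quote?report_data=$(head -c 64 /dev/urandom | xxd -p -c 64)"
```

The language SDKs listed below wrap the same socket API.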
```bash
dstack-cloud new <name>      # Create a new project
dstack-cloud config-edit     # Edit global configuration
dstack-cloud deploy          # Deploy to cloud
dstack-cloud status          # Check deployment status
dstack-cloud logs [--follow] # View console logs
dstack-cloud stop            # Stop the VM
dstack-cloud start           # Start a stopped VM
dstack-cloud remove          # Remove the VM and clean up
dstack-cloud list            # List all deployments
dstack-cloud fw allow <port> # Allow traffic on a port
dstack-cloud fw deny <port>  # Block traffic on a port
dstack-cloud fw list         # List firewall rules
```
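A typical session chains these together; everything below uses only the commands documented above (the project name and port are examples):

```bash
# Create a project and deploy it.
dstack-cloud new my-app
cd my-app
dstack-cloud deploy

# Open the service port and watch the deployment come up.
dstack-cloud fw allow 8000
dstack-cloud status
dstack-cloud logs --follow

# Clean up when finished.
dstack-cloud stop
dstack-cloud remove
```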
Apps communicate with the guest agent via HTTP over `/var/run/dstack.sock`. Use the HTTP API directly with curl, or use a language SDK:
| Language | Install | Docs |
|---|---|---|
| Python | `pip install dstack-sdk` | README |
| TypeScript | `npm install @phala/dstack-sdk` | README |
| Rust | `cargo add dstack-sdk` | README |
| Go | `go get github.com/Dstack-TEE/dstack/sdk/go` | README |
### Getting Started
- Quickstart - Deploy your first app on GCP or AWS
- Usage Guide - Deploying and managing apps
- Verification - How to verify TEE attestation
### Cloud Platforms
- GCP Attestation - TDX + TPM attestation on GCP
- AWS Nitro Attestation - NSM attestation on AWS
### For Developers
- Confidential AI - Inference, agents, and training with hardware privacy
- App Compose Format - Compose file specification
### Self-Hosted / Bare Metal
- Deployment - Self-hosting on TDX hardware
- VMM CLI Guide - VMM command-line reference
- Gateway - Gateway configuration
- On-Chain Governance - Policy-based authorization
### Reference
- Design Decisions - Architecture rationale
- FAQ - Frequently asked questions
- Security Overview - Security documentation and responsible disclosure
- Security Model - Threat model and trust boundaries
- Security Best Practices - Production hardening
- Security Audit - Third-party audit by zkSecurity
- CVM Boundaries - Information exchange and isolation
### Why not use AWS Nitro / Azure Confidential VMs / GCP directly?
You can — but you'll build everything yourself: attestation verification, key management, Docker orchestration, certificate provisioning, and governance. dstack-cloud provides all of this out of the box.
| Approach | Docker native | GPU TEE | Key management | Attestation tooling | Open source |
|---|---|---|---|---|---|
| dstack-cloud | ✓ | ✓ | ✓ | ✓ | ✓ |
| AWS Nitro Enclaves | - | - | Manual | Manual | - |
| Azure Confidential VMs | - | Preview | Manual | Manual | - |
| GCP Confidential Computing | - | - | Manual | Manual | - |
Cloud providers give you the hardware primitive. dstack-cloud gives you the full stack: reproducible OS images, automatic attestation, per-app key derivation, and TLS certificates. No vendor lock-in.
### How is this different from SGX/Gramine?
SGX requires porting applications to enclaves. dstack-cloud uses full-VM isolation (Intel TDX, AWS Nitro) — bring your Docker containers as-is. Plus GPU TEE support that SGX doesn't offer.
### What's the performance overhead?
Minimal. Intel TDX adds ~2-5% overhead for CPU workloads. NVIDIA Confidential Computing has negligible impact on GPU inference. Memory encryption is the main cost, but it's hardware-accelerated on supported CPUs.
### Is this production-ready?
Yes. dstack powers production AI at OpenRouter and NEAR AI. It's been audited by zkSecurity. It's a Linux Foundation Confidential Computing Consortium project.
### Can I run this on my own hardware?
Yes. dstack-cloud runs on any Intel TDX-capable server. See the deployment guide for self-hosting instructions. You can also use Phala Cloud for managed infrastructure.
### What TEE hardware is supported?
- GCP: Intel TDX (Confidential VMs)
- AWS: Nitro Enclaves (NSM attestation)
- Bare metal: Intel TDX (4th/5th Gen Xeon)
- GPUs: NVIDIA Confidential Computing (H100, Blackwell)
AMD SEV-SNP support is planned.
### How do users verify my deployment?
Your app exposes attestation quotes via the SDK. Users verify these quotes using dstack-verifier, dcap-qvl, or the Trust Center. See the verification guide for details.
- OpenRouter - Confidential AI inference providers powered by dstack
- NEAR AI - Private AI infrastructure powered by dstack
dstack is a Linux Foundation Confidential Computing Consortium open source project.
Telegram · GitHub Discussions · Examples
For enterprise support and licensing, book a call or email us at [email protected].
If you use dstack in your research, please cite:
```bibtex
@article{zhou2025dstack,
  title={Dstack: A Zero Trust Framework for Confidential Containers},
  author={Zhou, Shunfan and Wang, Kevin and Yin, Hang},
  journal={arXiv preprint arXiv:2509.11555},
  year={2025}
}
```

Logo and branding assets: dstack-logo-kit
This repository is licensed under the Business Source License 1.1 (BUSL-1.1). Per the terms in LICENSE, the Licensed Work is dstack-cloud.
BUSL-1.1 permits copying, modification, redistribution, and non-production use. Production use requires a commercial license. Book a call or email [email protected].
