
Confidential Computing with TEE

Security · Advanced · 35 min

Use Trusted Execution Environments to run confidential AI inference, seal sensitive data, and prove hardware integrity to remote parties. Tenzro supports four TEE providers — Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, and NVIDIA GPU Confidential Computing — with a unified API and X.509 certificate chain verification.

What You'll Learn

  • Detect TEE hardware at runtime (TDX, SEV-SNP, Nitro, NVIDIA GPU CC)
  • Generate hardware attestation reports bound to a challenge nonce
  • Verify remote attestation with X.509 certificate chain validation
  • Seal and unseal data using hardware-derived keys (AES-256-GCM)
  • Serve AI models inside a TEE enclave for private inference
  • Client-side attestation verification before sending sensitive data

Supported TEE Platforms

  • Intel TDX — /dev/tdx-guest ioctl, TDREPORT to Quote pipeline, Intel PCS certificate chain
  • AMD SEV-SNP — /dev/sev-guest ioctl, SNP_GET_REPORT, AMD KDS VCEK fetching, ARK to ASK to VCEK chain
  • AWS Nitro — /dev/nsm NSM device, CBOR attestation documents, P-384 ECDSA, AWS root CA
  • NVIDIA GPU CC — NRAS HTTP API, GPU evidence collection, JWT verification, SPDM measurements
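At its core, detection on the first three platforms comes down to checking for the guest device nodes listed above. A minimal sketch using only the standard library (the paths come from the platform list; `detect_tee()` in Step 1 does this, plus the NVIDIA NRAS reachability probe, for you):

```rust
use std::path::Path;

/// Illustrative device-node check; paths come from the platform list above.
/// This is only a sketch of what detection looks at, not the SDK's logic.
fn device_present(path: &str) -> bool {
    Path::new(path).exists()
}

fn main() {
    for (name, dev) in [
        ("Intel TDX", "/dev/tdx-guest"),
        ("AMD SEV-SNP", "/dev/sev-guest"),
        ("AWS Nitro", "/dev/nsm"),
    ] {
        println!("{name}: {}", if device_present(dev) { "present" } else { "absent" });
    }
}
```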

Prerequisites

[dependencies]
tenzro-sdk = "0.1"
tenzro-tee = { version = "0.1", features = ["intel-tdx", "nvidia-gpu"] }
tokio = { version = "1", features = ["full"] }

Enable the feature flags for your hardware. For development without TEE hardware, all operations work in simulation mode via environment variables.

Step 1: Detect TEE Hardware

# Detect TEE hardware on your machine
tenzro hardware

# Output (on an Intel TDX-enabled machine):
# Hardware Profile:
#   CPU:    Intel Xeon Platinum 8490H (60 cores)
#   Memory: 512 GB
#   GPU:    NVIDIA H100 80GB (CC-enabled)
#   TEE:    Intel TDX detected
#          NVIDIA GPU CC detected
#
# TEE Capabilities:
#   Intel TDX:  /dev/tdx-guest available
#   NVIDIA CC:  NRAS API reachable
#   Simulation: TENZRO_SIMULATE_TDX=false

Programmatic detection:

use tenzro_tee::{detect_tee, TeeType};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Auto-detect available TEE hardware
    let tee = detect_tee().await?;

    match tee.tee_type() {
        TeeType::IntelTdx => println!("Intel TDX detected"),
        TeeType::AmdSevSnp => println!("AMD SEV-SNP detected"),
        TeeType::AwsNitro => println!("AWS Nitro Enclaves detected"),
        TeeType::NvidiaGpu => println!("NVIDIA GPU CC detected"),
        TeeType::Simulated => println!("No TEE hardware -- using simulation"),
    }

    println!("Provider: {}", tee.provider_name());

Step 2: Generate Attestation Report

An attestation report is a hardware-signed proof that your code is running inside a genuine TEE. The user_data field binds a challenge nonce to the report, preventing replay attacks:

    // Generate an attestation report
    // The report binds user_data (e.g., a challenge nonce) to the hardware measurement
    let user_data = b"challenge-nonce-from-verifier-abc123";
    let report = tee.generate_attestation(user_data).await?;

    println!("Attestation Report:");
    println!("  Platform:    {}", report.platform);
    println!("  Firmware:    {}", report.firmware_version);
    println!("  Report data: {} bytes", report.report_data.len());
    println!("  Signature:   {} bytes", report.signature.len());
    println!("  Cert chain:  {} certificates", report.certificate_chain.len());
Attestation Report:
  Platform:    intel-tdx
  Firmware:    4.0.3
  Report data: 64 bytes
  Signature:   256 bytes
  Cert chain:  3 certificates
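Replay protection hinges on the verifier finding its own fresh nonce inside the report's 64-byte report_data field. A verifier-side sketch, assuming the raw nonce is zero-padded into that field (this binding rule is an assumption; some platforms hash the nonce first, so check your SDK's actual convention):

```rust
/// Verifier-side freshness check: sketch assuming the raw challenge nonce
/// is zero-padded into the 64-byte report_data field. The binding rule is
/// an assumption; some platforms hash the nonce instead.
fn nonce_matches(report_data: &[u8], nonce: &[u8]) -> bool {
    if report_data.len() != 64 || nonce.len() > 64 {
        return false;
    }
    let (bound, padding) = report_data.split_at(nonce.len());
    bound == nonce && padding.iter().all(|&b| b == 0)
}

fn main() {
    let nonce = b"challenge-nonce-from-verifier-abc123";
    let mut report_data = [0u8; 64];
    report_data[..nonce.len()].copy_from_slice(nonce);
    assert!(nonce_matches(&report_data, nonce));
    // A report replayed against a different challenge fails the check
    assert!(!nonce_matches(&report_data, b"different-nonce"));
    println!("nonce binding holds");
}
```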

Step 3: Verify Remote Attestation

Verify that an attestation report is genuine. The verifier checks the platform, cryptographic signature, certificate chain, and report freshness (NVIDIA reports expire after 24 hours):

use tenzro_tee::{AttestationVerifier, verify_certificate_chain};

    // Verify the attestation report
    let verifier = AttestationVerifier::new();
    let result = verifier.verify(&report).await?;

    println!("Verification: {}", if result.valid { "PASSED" } else { "FAILED" });
    println!("  Platform match:  {}", result.platform_verified);
    println!("  Signature valid: {}", result.signature_verified);
    println!("  Cert chain OK:   {}", result.chain_verified);
    println!("  Fresh report:    {}", result.freshness_verified);
Verification: PASSED
  Platform match:  true
  Signature valid: true
  Cert chain OK:   true
  Fresh report:    true

Step 4: Certificate Chain Verification

Each vendor has a pinned root CA. The chain verifier checks validity periods, key usage extensions, and the full chain from leaf to root:

    // Certificate chain verification (X.509)
    // Each vendor has pinned root CAs:
    //   Intel: Intel SGX/TDX Root CA
    //   AMD:   AMD SEV Root Key (ARK -> ASK -> VCEK)
    //   AWS:   AWS Nitro Enclaves Root CA
    //   NVIDIA: NVIDIA GPU Attestation Root CA
    let chain_result = verify_certificate_chain(
        &report.certificate_chain,
        &report.platform,
    )?;

    println!("Certificate Chain:");
    for (i, cert) in chain_result.certificates.iter().enumerate() {
        println!("  [{}] Subject: {}", i, cert.subject);
        println!("      Issuer:  {}", cert.issuer);
        println!("      Valid:   {} to {}", cert.not_before, cert.not_after);
    }
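The leaf-to-root walk the chain verifier performs can be sketched with plain subject/issuer matching (signature and validity-period checks omitted; `CertInfo` is an illustrative type, not the SDK's):

```rust
struct CertInfo {
    subject: String,
    issuer: String,
}

/// Sketch of the leaf-to-root ordering check: each certificate must be
/// issued by the next one in the chain. Real verification also checks
/// signatures, validity periods, and key usage extensions.
fn chain_order_ok(chain: &[CertInfo]) -> bool {
    chain.windows(2).all(|w| w[0].issuer == w[1].subject)
}

fn main() {
    // AMD's chain from the comments above, leaf first: VCEK <- ASK <- ARK
    let chain = vec![
        CertInfo { subject: "VCEK".into(), issuer: "ASK".into() },
        CertInfo { subject: "ASK".into(), issuer: "ARK".into() },
        CertInfo { subject: "ARK".into(), issuer: "ARK".into() }, // self-signed root
    ];
    println!("chain order ok: {}", chain_order_ok(&chain)); // prints "chain order ok: true"
}
```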

Step 5: Seal and Unseal Data

Seal sensitive data so only this specific TEE instance can decrypt it. The key is derived via HKDF-SHA256(key_id, vendor_tag) with domain separation. The wire format is nonce(12) || ciphertext || tag(16):

use tenzro_tee::enclave_crypto::{seal, unseal};

    // Seal sensitive data inside the enclave
    // Key derivation: HKDF-SHA256(key_id, vendor_tag) with domain separation
    // Wire format: nonce(12) || ciphertext || tag(16)
    let secret_data = b"private-model-weights-or-api-keys";
    let key_id = "my-sealed-data-v1";

    let sealed = seal(key_id, secret_data, &tee).await?;
    println!("Sealed: {} bytes (includes 12-byte nonce + 16-byte tag)", sealed.len());

    // Data is now encrypted with a key derived from hardware measurements.
    // Only this specific TEE instance can unseal it.

    // Unseal the data
    let unsealed = unseal(key_id, &sealed, &tee).await?;
    assert_eq!(unsealed, secret_data);
    println!("Unsealed: {} bytes (matches original)", unsealed.len());

    Ok(())
}
Sealed: 61 bytes (includes 12-byte nonce + 16-byte tag)
Unsealed: 33 bytes (matches original)

Key sealing. In production, keys are sealed by the hardware — MKTME for Intel TDX, VMSA for AMD SEV-SNP, KMS for AWS Nitro, and CC memory for NVIDIA GPU. In simulation mode, keys are derived from a UUID. The API is identical in both modes.

Step 6: Private AI Inference in TEE

The full flow: register as a TEE provider, submit attestation to the network, and serve a model inside the enclave. All inference runs in encrypted memory:

use tenzro_sdk::{TenzroClient, config::SdkConfig};
use tenzro_tee::detect_tee;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = SdkConfig::testnet();
    let client = TenzroClient::connect(config).await?;
    let tee = detect_tee().await?;

    // Register as a TEE provider
    let tx_hash = client.provider().register(vec!["qwen3.5-0.8b".to_string()], 2000).await?;
    println!("TEE Provider registered: {}", tx_hash);

    // Generate attestation for the network
    let report = tee.generate_attestation(b"tenzro-provider-registration").await?;

    // Serve a model inside the TEE enclave
    // All inference happens in encrypted memory:
    //   - Model weights are sealed at rest
    //   - Input prompts are encrypted in transit (TLS) and in the enclave
    //   - Output tokens are generated inside the enclave
    //   - Only the encrypted response leaves the TEE boundary
    client.provider().serve_model("qwen3.5-0.8b.Q4_K_M").await?;

    println!("Model serving inside TEE enclave");
    println!("TEE attestation: {}", report.platform);
    println!("Clients can verify attestation before sending sensitive prompts");

    tokio::signal::ctrl_c().await?;
    Ok(())
}

Step 7: Client-Side Attestation Verification

Before sending sensitive data to a provider, clients verify the attestation report to ensure the inference will run inside a genuine TEE:

// Client-side: verify a provider's TEE attestation before sending data
let tee_client = client.tee();
let attestation = tee_client.get_attestation("intel-tdx").await?;

// Verify the attestation report
let verify_result = tee_client.verify_attestation(
    &attestation.report,
    "intel-tdx",
).await?;

if !verify_result.valid {
    panic!("Provider TEE attestation failed verification!");
}

// Now safe to send sensitive inference requests
let inference = client.inference();
let response = inference.request(
    "qwen3.5-0.8b",
    "Analyze this confidential contract...",
    None,
).await?;
println!("Response (from verified TEE): {}", response.output);

Web API Verification

Attestations can also be verified via the REST API at /api/verify/tee-attestation:

# Verify TEE attestation via the Web API
curl -X POST https://api.tenzro.network/verify/tee-attestation \
  -H "Content-Type: application/json" \
  -d '{
    "report": "<base64-encoded-attestation-report>",
    "platform": "intel-tdx",
    "expected_user_data": "<base64-encoded-challenge>"
  }'

# Response:
# {
#   "valid": true,
#   "platform": "intel-tdx",
#   "firmware_version": "4.0.3",
#   "signature_verified": true,
#   "certificate_chain_verified": true,
#   "freshness_verified": true
# }

Development: Simulation Mode

For development without TEE hardware, enable simulation via environment variables:

# For development without TEE hardware, use simulation mode
export TENZRO_SIMULATE_TDX=true
# or
export TENZRO_SIMULATE_SEV=true
# or
export TENZRO_SIMULATE_NITRO=true
# or
export TENZRO_SIMULATE_NVIDIA=true

# All TEE operations work identically in simulation mode.
# Keys are derived from a UUID instead of hardware measurements.
# Attestation reports are generated but marked as simulated.
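To confirm which switch your process actually sees, the variables can be read back at runtime; a minimal sketch using only the standard library (variable names from the list above):

```rust
use std::env;

/// True when the given simulation switch is set to "true".
fn sim_enabled(var: &str) -> bool {
    env::var(var).as_deref() == Ok("true")
}

fn main() {
    for var in [
        "TENZRO_SIMULATE_TDX",
        "TENZRO_SIMULATE_SEV",
        "TENZRO_SIMULATE_NITRO",
        "TENZRO_SIMULATE_NVIDIA",
    ] {
        println!("{var}={}", sim_enabled(var));
    }
}
```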

What You Learned

  • Detecting TEE hardware at runtime, with simulation mode as a fallback
  • Generating attestation reports bound to a challenge nonce
  • Verifying remote attestation, including the X.509 certificate chain
  • Sealing and unsealing data with hardware-derived AES-256-GCM keys
  • Serving AI models inside a TEE enclave and verifying attestation client-side before sending sensitive data

Next Steps