Offering: Azure confidential GPU VM options (exact product name not stated on consulted page)
Provider: Microsoft Azure
TEE technology: AMD SEV-SNP; NVIDIA H100 Tensor Core GPU confidential computing
AI/ML use case: Confidential AI/ML training or inference with secure offload of data, models, and computation to H100 GPUs.
Availability: Specific VM regions; availability via Azure VM products by region; Ubuntu 22.04 LTS qualified for confidential GPU VMs.
Docs: https://learn.microsoft.com/en-us/azure/confidential-computing/gpu-options

Offering: Confidential VM with GPU (a3-highgpu-1g)
Provider: Google Cloud
TEE technology: Intel TDX; NVIDIA H100 Confidential Computing mode
AI/ML use case: AI/ML workloads on H100 GPUs with confidential VM isolation and confidential GPU mode for sensitive training or inference.
Availability: Supported zones only; requires Spot or flex-start provisioning on a3-highgpu-1g and preemptible/global H100 quota.
Docs: https://docs.cloud.google.com/confidential-computing/confidential-vm/docs/create-a-confidential-vm-instance-with-gpu

Offering: Confidential Computing Container Runtime for Red Hat Virtualization Solutions
Provider: IBM
TEE technology: IBM Secure Execution for Linux
AI/ML use case: Confidential containerized services for sensitive AI-adjacent data processing or inference control planes; model training not clearly stated.
Availability: Runs on IBM z17 and IBM LinuxONE V systems; public, private, or hybrid cloud deployment.
Docs: https://www.ibm.com/docs/en/ccrv/1.1.x?topic=solutions-confidential-computing-linuxone

Offering: Remote Attestation Service for confidential computing instances
Provider: Alibaba Cloud
TEE technology: Intel SGX; Intel TDX; enclave; NVIDIA GPU-accelerated instances
AI/ML use case: Sensitive AI workloads on Alibaba confidential instances, including combined confidential-computing GPU instances for protected data processing or inference.
Availability: Attestation endpoint shown for cn-beijing; exact confidential instance regions not clearly listed on consulted page.
Docs: https://www.alibabacloud.com/help/en/ecs/user-guide/remote-attestation-service

Offering: Confidential Services Platform
Provider: IBM Cloud
TEE technology: IBM Secure Execution for Linux
AI/ML use case: Protects AI training, test, and inference on IBM Cloud LinuxONE / Hyper Protect; built for confidential model customization and execution.
Availability: IBM Cloud confidential computing on LinuxONE / Hyper Protect services; specific regions not stated on consulted pages.
Docs: https://cloud.ibm.com/docs/confidential-computing?topic=confidential-computing-conf-ai

Offering: Confidential Computing on Bare Metal Servers
Provider: OVHcloud
TEE technology: Intel SGX; AMD Secure Encrypted Virtualization (SEV); AMD Secure Memory Encryption (SME)
AI/ML use case: Sensitive machine learning and federated learning workloads on enclave-capable bare metal; protected data processing across parties.
Availability: Bare metal servers in Asia Pacific, North America, and Europe; server families include the Advance and Scale ranges.
Docs: https://www.ovhcloud.com/en/bare-metal/uc-confidential-computing/

Offering: Confidential Kubernetes (per docs link)
Provider: STACKIT
TEE technology: Confidential Virtual Machines; remote attestation; Constellation
AI/ML use case: Confidential containerized workloads for sensitive AI data processing, inference services, or regulated ML pipelines on Kubernetes.
Availability: Self-managed clusters in STACKIT availability zones eu01-1, eu01-2, or eu01-3.
Docs: https://docs.stackit.cloud/de/products/confidential-computing/confidential-kubernetes/basics/introduction/

Offering: Confidential GKE Nodes with H100
Provider: Google Cloud
TEE technology: Intel TDX; NVIDIA H100 Confidential Computing
AI/ML use case: Confidential Kubernetes nodes for AI/ML GPU workloads using one H100; suitable for sensitive inference or training jobs on GKE.
Availability: Supported zones only; Spot, preemptible, or flex-start provisioning; one H100 80GB on a3-highgpu-1g.
Docs: https://docs.cloud.google.com/kubernetes-engine/docs/how-to/gpus-confidential-nodes

Offering: QingTian Enclave
Provider: Huawei Cloud
TEE technology: QingTian Enclave; QingTian Security Module (QTSM); QingTian TPM
AI/ML use case: Confidential VM/container isolation for sensitive AI data processing or inference-adjacent services; GPU access is not supported inside the enclave.
Availability: Huawei Cloud ECS / CCE environments; x86 and Arm supported; region availability not clearly stated in consulted docs.
Docs: https://support.huaweicloud.com/intl/en-us/twp-ecs/twp-ecs-pdf.pdf

Offering: Confidential Space with H100
Provider: Google Cloud
TEE technology: Intel TDX; NVIDIA H100 Confidential Computing
AI/ML use case: Confidential containerized GPU workloads on H100 for sensitive AI data processing or inference; training not clearly stated on consulted page.
Availability: Supported Confidential VM zones only; GPU mode uses Spot or flex-start on a3-highgpu-1g with H100.
Docs: https://docs.cloud.google.com/confidential-computing/confidential-space/docs/deploy-workloads
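Several entries above (Alibaba's Remote Attestation Service, STACKIT/Constellation, Confidential Space) revolve around remote attestation: a relying party receives signed evidence, often a JWT-style token, and checks its claims before releasing keys or data. A minimal sketch of the claims-checking step follows; the claim names (`hw_model`, `debug_disabled`, `image_digest`) are hypothetical and match no specific provider's schema, and signature verification against the attestation service's keys, which real deployments must do first, is deliberately omitted.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload of a JWT-style attestation token.
    Real deployments MUST verify the signature first; this only parses claims."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

def claims_acceptable(claims: dict, expected_image_digest: str) -> bool:
    # Illustrative policy: accept only known TEE types, no debug mode,
    # and an exact workload-image digest match.
    return (
        claims.get("hw_model") in {"INTEL_TDX", "AMD_SEV_SNP"}
        and claims.get("debug_disabled") is True
        and claims.get("image_digest") == expected_image_digest
    )

# Build a fake token purely for demonstration (alg "none", no signature part).
payload = {"hw_model": "INTEL_TDX", "debug_disabled": True,
           "image_digest": "sha256:abc123"}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("="),
    "",
])
claims = decode_jwt_payload(fake_token)
print(claims_acceptable(claims, "sha256:abc123"))  # True
```

The useful property of this pattern is that the policy check is pure data inspection: the same `claims_acceptable` logic applies whether the evidence came from an SGX quote, a TDX report, or a GPU attestation, as long as the verifier service normalizes it into claims first.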
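The three Google Cloud entries share one provisioning pattern: an Intel TDX Confidential VM on the a3-highgpu-1g machine type with Spot or flex-start provisioning. A sketch of the corresponding Compute Engine `instances.insert` request body is below; it assumes the public REST field names `confidentialInstanceConfig.confidentialInstanceType` and `scheduling.provisioningModel`, and the project, zone, image, and network values are placeholders, not working defaults.

```python
def confidential_gpu_instance_body(name: str, project: str, zone: str) -> dict:
    """Sketch of a Compute Engine `instances.insert` body requesting an
    Intel TDX Confidential VM on a3-highgpu-1g with Spot provisioning.
    Supported-zone restrictions and H100 quota from the docs still apply."""
    return {
        "name": name,
        "machineType": f"zones/{zone}/machineTypes/a3-highgpu-1g",
        "confidentialInstanceConfig": {
            # TDX is the confidential-computing type the docs above pair
            # with H100 Confidential Computing mode.
            "confidentialInstanceType": "TDX",
        },
        "scheduling": {
            "provisioningModel": "SPOT",       # Spot (or flex-start) required
            "onHostMaintenance": "TERMINATE",  # Confidential VMs cannot live-migrate
        },
        "disks": [{
            "boot": True,
            "initializeParams": {
                # Placeholder; use an image qualified for Confidential VM.
                "sourceImage": f"projects/{project}/global/images/family/my-cvm-image",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }

body = confidential_gpu_instance_body("cvm-h100-1", "my-project", "us-central1-a")
print(body["scheduling"]["provisioningModel"])  # SPOT
```

Keeping the body as plain data like this makes the confidential-specific deltas easy to review: relative to an ordinary a3-highgpu-1g request, only the `confidentialInstanceConfig` and the forced `TERMINATE` maintenance policy change.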