API How-Tos

The Cloud API lets you manage Cloud GPU VMs programmatically using conventional HTTP requests. You can use the API to create, delete, and retrieve information about your Cloud GPU VMs.

Note that you need templates to provision Cloud GPU VMs; templates are not compatible with servers that support full flex configuration.

Cloud GPU VM workflow

1. Request access

To begin using the Cloud GPU VM, send a request to IONOS Cloud Support. For more information about the process, see Cloud GPU VM Access.

2. Discovery and selection

Browse the available GPU offerings and note the ID of the GPU you want to use; the selected GPU ID is required in the creation request.

3. Create a Cloud GPU VM

Initiate a Cloud GPU VM creation request through the API, providing the following:

  • The selected GPU ID

  • Linux-based operating system

Note: At launch, the product supports only Linux-based operating systems.
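The creation request can be sketched as an HTTP call. This is a minimal, hypothetical example: the endpoint path and the payload field names (`gpuId`, `image`) are illustrative assumptions, not confirmed schema; check the Cloud API reference for the exact request body.

```shell
# Hypothetical sketch of a Cloud GPU VM creation request.
# The path and payload fields below are assumptions for illustration.
API="https://api.ionos.com/cloudapi/v6"
DATACENTER_ID="your-datacenter-uuid"    # placeholder
BODY='{
  "properties": {
    "name": "gpu-vm-01",
    "gpuId": "selected-gpu-id",
    "image": "ubuntu-22.04"
  }
}'
# Uncomment to send the request once the placeholders are filled in:
# curl -X POST \
#      -H "Authorization: Bearer $TOKEN" \
#      -H "Content-Type: application/json" \
#      -d "$BODY" \
#      "$API/datacenters/$DATACENTER_ID/servers"
echo "$BODY"
```

The request must reference the GPU ID selected in step 2 and a Linux-based image.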

4. Access and setup

  • Install required framework dependencies. Use a package manager like pip or conda to install your chosen framework, ensuring you select the GPU-enabled version.

Install the version that matches your CUDA toolkit, for example CUDA 12.1:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
  • Use the server for your use case
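The CUDA-matching step above can be sketched as a small script. It assumes `nvcc` is on the PATH to detect the toolkit version, and falls back to the `cu121` example otherwise; the `cuXYZ` tag naming follows PyTorch's wheel index convention.

```shell
# Pick the PyTorch wheel index that matches the installed CUDA toolkit.
if command -v nvcc >/dev/null 2>&1; then
  # e.g. "release 12.1, V12.1.105" -> "121"
  CUDA_TAG="cu$(nvcc --version | sed -n 's/.*release \([0-9]*\)\.\([0-9]*\).*/\1\2/p')"
else
  CUDA_TAG="cu121"   # no nvcc found; fall back to the CUDA 12.1 example above
fi
echo "pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/${CUDA_TAG}"
```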

5. Usage

  • Run GPU workloads, such as model training (fine-tuning), inference, or graphics rendering

  • Monitor GPU utilization and performance metrics
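The monitoring bullet above can be done on the VM itself with standard `nvidia-smi` query flags; the snippet guards against the tool being absent so it can be pasted as-is.

```shell
# Query current GPU utilization and memory usage.
# Add "-l 5" to repeat the query every 5 seconds.
if command -v nvidia-smi >/dev/null 2>&1; then
  STATS="$(nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv)"
else
  STATS="nvidia-smi not found; run this on the Cloud GPU VM itself"
fi
echo "$STATS"
```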

6. Management

  • Start, restart, or delete the Cloud GPU VM as needed.

  • Monitor costs and usage. For more information, see Cost Alert and Cost & Usage.
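The start/restart/delete operations above map to HTTP calls. This is a hypothetical sketch: the `/start` and `/reboot` action paths and the DELETE-on-resource pattern follow common v6-style server actions but should be verified against the Cloud API reference before use; the script only prints the commands.

```shell
# Hypothetical sketch of server lifecycle requests (paths are assumptions).
API="https://api.ionos.com/cloudapi/v6"
DATACENTER_ID="your-datacenter-uuid"   # placeholder
SERVER_ID="your-server-uuid"           # placeholder
for ACTION in start reboot; do
  echo "curl -X POST -H 'Authorization: Bearer \$TOKEN' $API/datacenters/$DATACENTER_ID/servers/$SERVER_ID/$ACTION"
done
echo "curl -X DELETE -H 'Authorization: Bearer \$TOKEN' $API/datacenters/$DATACENTER_ID/servers/$SERVER_ID"
```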

7. Cleanup

Delete the Cloud GPU VM and any associated resources once they are no longer needed so that you are not billed for unused capacity.
