
Before you begin

If you want to manage Pods using the Runpod CLI, you’ll need to install the Runpod CLI and configure it with your API key. Run the following command, replacing RUNPOD_API_KEY with your API key:
runpodctl config --apiKey RUNPOD_API_KEY
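To confirm the key is configured, you can run any authenticated command, such as listing your Pods:
runpodctl get pod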

Deploy a Pod

  • Web
  • Command line
  • REST API
To create a Pod using the Runpod console:
  1. Open the Pods page in the Runpod console and click the Deploy button.
  2. (Optional) Specify a network volume if you need to share data between multiple Pods, or to save data for later use.
  3. Select GPU or CPU using the buttons in the top-left corner of the window, and follow the configuration steps below.
GPU configuration:
  1. Select a graphics card (e.g., A40, RTX 4090, H100 SXM).
  2. Give your Pod a name using the Pod Name field.
  3. (Optional) Choose a Pod Template such as Runpod Pytorch 2.1 or Runpod Stable Diffusion.
  4. Specify your GPU count if you need multiple GPUs.
  5. Click Deploy On-Demand to deploy and start your Pod.
CUDA Version Compatibility
When using templates (especially community templates like runpod/pytorch:2.8.0-py3.11-cuda12.8.1-cudnn-devel-ubuntu22.04), ensure the host machine’s CUDA driver version matches or exceeds the template’s requirements.
If you encounter errors like “OCI runtime create failed” or “unsatisfied condition: cuda>=X.X”, you need to filter for compatible machines:
  1. Click Additional filters in the Pod creation interface.
  2. Click the CUDA Versions filter dropdown.
  3. Select a CUDA version that matches or exceeds your template’s requirements (e.g., if the template requires CUDA 12.8, select 12.8 or higher).
Note: Check the template name or documentation for CUDA requirements. When in doubt, select the latest CUDA version as newer drivers are backward compatible.
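Once a GPU Pod is running, you can confirm the driver's supported CUDA version from a terminal inside the Pod; the CUDA Version field in the nvidia-smi header shows the highest CUDA runtime the installed driver supports:
nvidia-smi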
CPU configuration:
  1. Select a CPU type (e.g., CPU3/CPU5, Compute Optimized, General Purpose, Memory Optimized).
  2. Specify the number of CPUs and quantity of RAM for your Pod by selecting an Instance Configuration.
  3. Give your Pod a name using the Pod Name field.
  4. Click Deploy On-Demand to deploy and start your Pod.
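To create a similar Pod from the command line instead, you can use runpodctl. This is a minimal sketch: the GPU type, image tag, and disk sizes below are placeholders, and the available flags may vary by CLI version, so run runpodctl create pod --help to see the options supported by your installation:
runpodctl create pod \
  --name my-pod \
  --gpuType "NVIDIA A40" \
  --gpuCount 1 \
  --imageName "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04" \
  --containerDiskSize 20 \
  --volumeSize 40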

Custom templates

Runpod supports custom Pod templates that let you define your environment using a Dockerfile. With custom templates, you can:
  • Install specific dependencies and packages.
  • Configure your development environment.
  • Create portable Docker images that work consistently across deployments.
  • Share environments with team members for collaborative work.
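As a rough sketch, a custom template's Dockerfile might extend one of Runpod's public base images and add the packages you need (the base image tag and packages below are illustrative, not required):
# Example Dockerfile for a custom Pod template
FROM runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
# Add extra Python dependencies for your workflow (examples only)
RUN pip install --no-cache-dir transformers datasets
# Work out of /workspace so files can live on an attached volume
WORKDIR /workspace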

Stop a Pod

If your Pod has a network volume attached, it cannot be stopped, only terminated. When you terminate the Pod, data in the /workspace directory will be preserved in the network volume, and you can regain access by deploying a new Pod with the same network volume attached.
When a Pod is stopped, data in the container volume is cleared, but data in the /workspace directory is preserved. To learn more about how Pod storage works, see Storage overview. By stopping a Pod you are effectively releasing the GPU on the machine, and your original GPU may become unavailable when you restart the Pod. Runpod provides automatic migration options to help you get back to work quickly. For more info, see Pod migration.
After a Pod is stopped, you will still be charged for its disk volume storage. If you don’t need to retain your Pod environment, you should terminate it completely.
  • Web
  • Command line
To stop a Pod:
  1. Open the Pods page.
  2. Find the Pod you want to stop and expand it.
  3. Click the Stop button (square icon).
  4. Confirm by clicking the Stop Pod button.
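From the command line, pass the Pod's ID to the stop command:
runpodctl stop pod [POD_ID]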

Stop a Pod after a period of time

You can also stop a Pod after a specified period of time. The examples below show how to use the CLI or web terminal to schedule a Pod to stop after 2 hours of runtime.
  • Command line
  • Web terminal
Use the following command to stop a Pod after 2 hours:
sleep 2h; runpodctl stop pod $RUNPOD_POD_ID &
This command uses sleep to wait for 2 hours before executing the runpodctl stop pod command to stop the Pod. The & at the end runs the command in the background, allowing you to continue using the SSH session.

Start a Pod

Pods start as soon as they are created, but you can resume a Pod that has been stopped.
  • Web
  • Command line
To start a Pod:
  1. Open the Pods page.
  2. Find the Pod you want to start and expand it.
  3. Click the Start button (play icon).
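From the command line, you can start a stopped Pod by ID:
runpodctl start pod [POD_ID]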

Terminate a Pod

Terminating a Pod permanently deletes all associated data that isn’t stored in a network volume. Be sure to export or download any data that you’ll need to access again.
  • Web
  • Command line
To terminate a Pod:
  1. Open the Pods page.
  2. Find the Pod you want to terminate and expand it.
  3. Stop the Pod if it’s running.
  4. Click the Terminate button (trash icon).
  5. Confirm by clicking the Yes button.
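From the command line, terminate a Pod with the remove command. This deletes the Pod permanently, so double-check the ID first:
runpodctl remove pod [POD_ID]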

View Pod details

You can find a list of all your Pods on the Pods page of the web interface. If you’re using the CLI, use the following command to list your Pods:
runpodctl get pod
Or use this command to get the details of a single Pod:
runpodctl get pod [POD_ID]

Access logs

Pods provide two types of logs to help you monitor and troubleshoot your workloads:
  • Container logs capture everything your Pod writes to standard output, including application logs and print statements.
  • System logs provide detailed information about your Pod’s lifecycle, such as container creation, image download, extraction, startup, and shutdown events.
To view your logs, open the Pods page, expand your Pod, and click the Logs button. This gives you real-time access to both container and system logs, making it easy to diagnose issues or monitor your Pod’s activity.

Pod migration

When you deploy a Pod, your Pod is locked to a single physical machine in a datacenter. As long as you keep your Pod running you’ll maintain access to it, and your instance charges will stay the same. However, if you stop your Pod, it immediately becomes available for other users to rent. If you try to start your Pod again, but your machine is now full (i.e. someone rented all 4-8 GPUs), you’ll be offered the option to migrate your Pod data to a new machine. When this happens, you have three options:
  1. Automatically migrate Pod data: This one-click option finds a new machine with the requested GPU type, spins up a new Pod with the same specs as your current one, and migrates your data from the old Pod automatically so you can get back to work quickly.
  2. Start Pod with CPUs: If you don’t require GPUs immediately, you can instead choose to start your Pod with CPUs only, so you can still access your data or even manually migrate your data yourself.
  3. Do nothing: If you don’t want to migrate your data, you can simply do nothing and wait for your Pod’s machine to become available again. There is no guarantee of how long this might take; try waiting a few minutes before trying again.
If you migrate your Pod data, your new Pod will have a new IP address. This may affect your application if:
  • You have a Pod ID hardcoded in an API call.
  • You have a proxy URL hardcoded: e.g. b63b243b47bd340becc72fbe9b3e642c.proxy.runpod.net
  • You have a firewall or VPN setup with a specific Pod ID in it.
  • You have a firewall or VPN setup with a specific Pod IP address in it.
  • You are using a specific URL for your server (when you start a new Pod, you will get a new URL for any UI or server you’ve set up).
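One way to reduce the impact of a migration is to avoid hardcoding the Pod ID in scripts that run inside the Pod, and instead read it from the RUNPOD_POD_ID environment variable (the same variable used in the stop example above). A minimal sketch, following the proxy URL format shown above:
# Build the proxy hostname from the current Pod's ID instead of a hardcoded value
echo "https://${RUNPOD_POD_ID}.proxy.runpod.net"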