
radeontop & amdgpu_top: GPU Stats for AMD Users
Why Monitoring AMD GPU Stats Matters (Especially on Servers & Cloud)
If you’re running workloads on AMD GPUs—whether it’s a beefy dedicated box, a cloud instance with GPU passthrough, or even a Docker container on your home lab—knowing what your GPU is actually doing is critical. Are you bottlenecked on memory? Is the GPU core maxed out? Is your mining rig, ML training, or transcoding job actually using the hardware you’re paying for? Or is it just sitting there, sipping power and doing nothing?
For AMD users, the tools for real-time GPU monitoring have long lagged behind NVIDIA’s ecosystem (think nvidia-smi). But that has changed thanks to radeontop and amdgpu_top. These tools give you the stats you need, right in your terminal, so you can make informed decisions about scaling, debugging, or just bragging about your setup.
This guide is for anyone who:
- Runs AMD GPUs on VPS, dedicated servers, or in the cloud (see VPS or dedicated options)
- Wants to monitor GPU usage in Docker containers or VMs
- Needs practical, no-nonsense advice to get these tools working now
- Is tired of “just guessing” if their GPU is actually being used
The Big Questions: What, How, and Why?
1. How do radeontop & amdgpu_top actually work?
Both tools are terminal-based utilities that read stats from the Linux kernel’s AMD GPU drivers. They show you:
- GPU core usage (percent busy)
- VRAM (video RAM) usage
- Power consumption
- Temperature
- Memory controller activity
- Engine stats (graphics, compute, video decode/encode)
They don’t require X11 or a desktop environment—perfect for headless servers and cloud setups.
2. How do you set them up quickly and easily?
It’s usually a one-liner install, but there are some gotchas (especially with permissions and kernel modules). We’ll cover those in detail.
3. What are the real-world pros and cons? How do they compare to other solutions?
We’ll look at where radeontop and amdgpu_top shine, where they fall short, and how they stack up against alternatives like nvidia-smi, intel_gpu_top, and generic monitoring tools.
How radeontop & amdgpu_top Work: Under the Hood
Both tools tap into the Linux kernel’s amdgpu driver (the open-source driver for modern AMD GPUs). Here’s the basic flow:
- The tool reads stats from /sys/class/drm/card*/device/ and /proc files.
- It parses counters and sensor data exposed by the kernel module.
- It displays this info in a curses (text UI) interface, updating every second or so.
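You can read the same sysfs counters yourself. A minimal sketch, assuming the amdgpu driver (which exposes gpu_busy_percent and the mem_info_vram_* files); the card index may differ on your system:

```shell
# Read basic amdgpu stats straight from sysfs, the same files the tools poll.
gpu_stats() {  # gpu_stats [device_dir]
  base="${1:-/sys/class/drm/card0/device}"
  busy=$(cat "$base/gpu_busy_percent")
  used=$(cat "$base/mem_info_vram_used")     # bytes
  total=$(cat "$base/mem_info_vram_total")   # bytes
  echo "GPU busy: ${busy}%  VRAM: $((used / 1048576))/$((total / 1048576)) MiB"
}

# Only print when an amdgpu card is actually present
[ -r /sys/class/drm/card0/device/gpu_busy_percent ] && gpu_stats
```

This is what makes the curses UIs optional: everything they display ultimately comes from plain files like these.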
radeontop is older and works with both the legacy radeon and the newer amdgpu drivers. amdgpu_top is newer, more detailed, and only works with the amdgpu driver (which covers most modern AMD cards, GCN 1.2 and up).
Neither tool will work if:
- You’re using proprietary AMDGPU-PRO drivers (rare on servers, but possible)
- Your kernel is too old (pre-4.15 is dicey for amdgpu_top)
- Your card is too old (pre-GCN 1.2 for amdgpu_top)
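A quick way to check which of those situations you are in is to see which kernel driver is bound to your card. This sketch relies only on the standard DRM sysfs layout:

```shell
# Print the kernel driver bound to a DRM device. amdgpu_top needs "amdgpu";
# radeontop also understands the legacy "radeon" driver.
gpu_driver() {  # gpu_driver [device_dir]
  basename "$(readlink "${1:-/sys/class/drm/card0/device}/driver")"
}

# Only print when a card is present (headless boxes may have none)
[ -e /sys/class/drm/card0/device/driver ] && gpu_driver
```

If this prints `radeon` rather than `amdgpu`, radeontop is your only option of the two.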
Algorithms & Structure
- radeontop uses polling: it reads counters, waits a second, reads again, and calculates the difference.
- amdgpu_top is more sophisticated: it can show per-engine stats (graphics, compute, video, etc.), and even per-process usage if you run it as root.
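The polling structure can be sketched in a few lines of shell. This simplified loop reads the aggregate gpu_busy_percent file rather than the raw counters radeontop actually samples, but the poll-and-display rhythm is the same:

```shell
# Simplified poll loop: print a busy sample once per second, N times.
poll_busy() {  # poll_busy <busy_file> <iterations>
  i=0
  while [ "$i" -lt "$2" ]; do
    printf '%s%%\n' "$(cat "$1")"
    i=$((i + 1))
    sleep 1
  done
}

[ -r /sys/class/drm/card0/device/gpu_busy_percent ] && \
  poll_busy /sys/class/drm/card0/device/gpu_busy_percent 3
```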
Quick Setup: Get Monitoring in 60 Seconds
Step 1: Install the Tool
On most modern Linux distros, it’s in the default repos.
# For radeontop
sudo apt install radeontop # Debian/Ubuntu
sudo dnf install radeontop # Fedora
sudo pacman -S radeontop # Arch
# For amdgpu_top (a standalone Rust tool; often not in default repos)
sudo pacman -S amdgpu_top # Arch
cargo install amdgpu_top # any distro, via Rust's cargo
# Or grab a prebuilt package from the project's GitHub releases
If you’re on a minimal cloud image, you might need to enable “universe” or “multiverse” repos, or build from source (see radeontop GitHub and amdgpu_top source).
Step 2: Run the Tool
# For radeontop
sudo radeontop
# For amdgpu_top
sudo amdgpu_top
Why sudo? Because only root can read some of the sensor files. If you want to avoid sudo, add your user to the video group (and log out and back in):
sudo usermod -aG video $USER
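To confirm the group change took effect, a quick check of the device nodes (the paths below are the standard DRM layout):

```shell
# Can the current user open the DRM nodes without sudo?
check_dri() {  # check_dri <device_node>
  if [ -r "$1" ] && [ -w "$1" ]; then
    echo "OK: $1"
  else
    echo "NO ACCESS: $1 (join the video/render group, then log out and in)"
  fi
}

for node in /dev/dri/card* /dev/dri/renderD*; do
  [ -e "$node" ] && check_dri "$node"
done
```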
Step 3: Read the Output
Both tools show a curses UI with live stats. Here’s a sample:
radeontop output:
GPU 23.4% VRAM 1.2GB/8GB GFX 15% MEM 10% VCE 0% UVD 0%
amdgpu_top output:
GFX: 12% Compute: 0% Video Encode: 0% VRAM: 1.2GB/8GB Power: 45W Temp: 62C
Step 4: (Optional) Use in Scripts or Monitoring
Both tools can produce machine-readable output: radeontop writes plain-text dump lines to a file, and amdgpu_top can emit JSON. Example:
radeontop -l 1 -d stats.log
amdgpu_top --json
This is perfect for Prometheus exporters, custom dashboards, or alerting scripts.
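As a starting point for such scripts, here is a tiny filter that pulls the overall busy percentage out of a radeontop dump line. The `gpu NN.NN%` field it matches is an assumption based on recent radeontop versions; verify against your own dump output first:

```shell
# Extract the "gpu NN.NN%" field from radeontop dump lines on stdin.
gpu_from_dump() {
  sed -n 's/.*gpu \([0-9.]*\)%.*/\1/p'
}

# Example pipeline (dump one frame to stdout, keep just the number):
#   radeontop -l 1 -d - | gpu_from_dump
```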
Examples, Cases, and Comparison Table
Common Use Cases
- Cloud GPU VPS: Check if your ML job is actually using the GPU (or if you’re wasting money on idle hardware).
- Docker Containers: Monitor GPU usage from inside a container (requires passing through /dev/dri and permissions).
- Dedicated Servers: Debug performance bottlenecks—are you CPU, RAM, or GPU bound?
- Streaming/Transcoding: See if hardware video encode/decode blocks are being used.
Comparison Table: radeontop vs amdgpu_top vs nvidia-smi
Feature | radeontop | amdgpu_top | nvidia-smi |
---|---|---|---|
Supported GPUs | AMD (radeon & amdgpu) | AMD (amdgpu only, GCN 1.2+) | NVIDIA only |
Per-engine stats | Basic | Detailed (GFX, Compute, Video, etc.) | Yes |
Per-process usage | No | Yes (root only) | Yes |
Scriptable output | Yes (text dump) | Yes (JSON) | Yes (CSV/XML) |
Works in Docker/VM | Yes (with /dev/dri access) | Yes (with /dev/dri access) | Yes (with /dev/nvidia access) |
GUI required? | No | No | No |
Positive Example
Case: ML engineer rents a GPU VPS for PyTorch training. Job seems slow. Runs amdgpu_top and sees GPU at 5% usage, CPU at 100%. Realizes data loading is bottlenecked on CPU, not GPU. Moves data to SSD, job runs 10x faster.
Negative Example
Case: Video transcoding on a dedicated server. User assumes ffmpeg is using hardware encode. Runs radeontop and sees VCE (video encode) at 0%. Realizes ffmpeg wasn’t built with AMD hardware encode support. Rebuilds ffmpeg, now VCE shows 80% usage, and transcodes are 5x faster.
Beginner Mistakes & Myths
- “I installed the tool but it shows 0% usage!” — Check if your workload is actually using the GPU. Many ML/FFmpeg jobs default to CPU unless explicitly told to use GPU.
- “I don’t see VRAM stats!” — Some older cards or kernel versions don’t expose all stats. Try updating your kernel and drivers.
- “I need a GUI to monitor GPUs.” — Nope! Both tools are terminal-based and perfect for SSH/cloud use.
- “I can monitor AMD GPUs with nvidia-smi.” — Sorry, only works for NVIDIA.
- “I can’t run it in Docker.” — You can, but you must pass through /dev/dri and set permissions.
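The Docker point deserves a concrete example. A sketch of a container launch with the DRM devices passed through; the image and package names are illustrative, so adjust for your base image:

```shell
# Pass the DRM device nodes into the container and match the video group
# so radeontop can read the GPU.
docker run -it --rm \
  --device /dev/dri \
  --group-add video \
  ubuntu:24.04 \
  bash -c "apt-get update && apt-get install -y radeontop && radeontop"
```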
Similar Solutions & Alternatives
- nvidia-smi — NVIDIA-only, but gold standard for their GPUs.
- intel_gpu_top — For Intel integrated graphics (part of intel-gpu-tools).
- glxinfo/DRI tools — Good for OpenGL info, but not real-time stats.
- lm_sensors — Can show GPU temps, but not usage.
- Prometheus node exporters — Some have plugins for AMD GPUs, but often just wrap radeontop/amdgpu_top.
Interesting Facts & Non-standard Usage
- You can run radeontop or amdgpu_top in a tmux or screen session for persistent monitoring on headless servers.
- Combine with watch or cron to log GPU stats over time for capacity planning.
- Use amdgpu_top --json to build your own Grafana dashboards or trigger alerts if GPU temp exceeds a threshold.
- Some users run these tools in a Docker container, mounting /dev/dri and exporting stats to a central monitoring system.
- On multi-GPU systems, you can pick a specific card: radeontop -b <PCI bus> (note that radeontop’s -c toggles color, not card selection), or amdgpu_top -i <instance>.
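The watch/cron idea above can be sketched as a one-line logger. The sysfs path is standard for amdgpu; the log location and cron schedule are illustrative:

```shell
# Append a timestamped busy sample to a CSV; call from cron, e.g.:
#   * * * * * /usr/local/bin/gpu_log.sh
log_sample() {  # log_sample <sysfs_device_dir> <csv_file>
  printf '%s,%s\n' "$(date -u +%FT%TZ)" "$(cat "$1/gpu_busy_percent")" >> "$2"
}

[ -r /sys/class/drm/card0/device/gpu_busy_percent ] && \
  log_sample /sys/class/drm/card0/device /var/log/gpu_busy.csv
```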
Automation & Scripting: New Opportunities
- Auto-scale cloud GPU instances based on real usage (e.g., spin down idle servers if GPU usage < 10% for 1 hour).
- Alert on overheating (e.g., send a Slack message if GPU temp > 85°C).
- Integrate with CI/CD pipelines to benchmark GPU workloads and catch regressions.
- Chargeback/billing — Track actual GPU usage per user/process for internal cost allocation.
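The overheating alert maps directly onto the kernel's hwmon interface: amdgpu reports temp1_input in millidegrees Celsius under the card's hwmon directory. A hedged sketch; swap the echo for your Slack webhook call:

```shell
# Alert when the GPU temperature exceeds a limit.
check_temp() {  # check_temp <temp1_input_file> <limit_celsius>
  c=$(( $(cat "$1") / 1000 ))                # hwmon reports millidegrees
  if [ "$c" -gt "$2" ]; then
    echo "ALERT: GPU at ${c}C (limit $2C)"   # replace with a webhook call
  else
    echo "OK: ${c}C"
  fi
}

for t in /sys/class/drm/card0/device/hwmon/hwmon*/temp1_input; do
  [ -r "$t" ] && check_temp "$t" 85
done
```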
Statistics: How Do These Tools Stack Up?
- radeontop is lightweight (<1MB RAM), runs on almost any AMD GPU, and is rock-solid for basic stats.
- amdgpu_top gives much more detailed info, but needs a newer kernel and GPU.
- Compared to nvidia-smi, both tools are less polished but improving quickly. For scripting and automation, amdgpu_top is approaching parity with NVIDIA’s tooling.
Conclusion & Recommendations
If you’re running AMD GPUs on a server, VPS, or in the cloud, radeontop and amdgpu_top are must-have tools. They’re easy to install, work over SSH, and give you the real-time stats you need to:
- Debug performance issues
- Optimize workloads (and save money!)
- Automate scaling, alerting, and billing
For most users, amdgpu_top is the better choice if your hardware supports it (newer AMD cards, kernel 5.x+). For older cards or maximum compatibility, radeontop still rocks.
Don’t fly blind—use these tools to get the most out of your AMD GPU investment, whether you’re running on a VPS, dedicated server, or your own home lab.
Official resources:
- radeontop: https://github.com/clbr/radeontop
- amdgpu_top: https://github.com/Umio-Yasuno/amdgpu_top
Happy monitoring, and may your GPUs always be busy (but not too hot)!
