Run Ollama on AMD GPU ROCm with TuxedoOS

┏━┓╻  ╻  ┏━┓┏┳┓┏━┓    ╻    ┏━┓┏┳┓╺┳┓
┃ ┃┃  ┃  ┣━┫┃┃┃┣━┫   ╺╋╸   ┣━┫┃┃┃ ┃┃
┗━┛┗━╸┗━╸╹ ╹╹ ╹╹ ╹    ╹    ╹ ╹╹ ╹╺┻┛

If you’re like me, you might end up with an AMD machine wondering how to squeeze as many agents onto it with as little hassle as possible. Fortunately, AMD ships amdgpu-install as the official way to put ROCm on Linux. Unfortunately, their handy script reads /etc/os-release, checks the ID field against a supported list, and if the distro is not there it…exits. I say it is unfortunate because it is a lazy cop-out, far too strict for reality.

Take TuxedoOS 24.04 for example. It’s Ubuntu 24.04 (Noble) with a modified kernel and a few Tuxedo packages on top. Every AMD apt repository works. Every library installs cleanly. Nothing gets in the way until this amdgpu-install OS check shows up and falls over.

Challenge accepted. Here’s the happy path to a GPU-accelerated Ollama on a new TuxedoOS laptop that has the Kraken Point APU (Radeon 860M, gfx1152). You may find the same method works for other AMD APUs and dGPUs, and for other Ubuntu-derived distros.

It turned out not to be a problem at all, so I hope AMD reconsiders their lazy cop-out.

Step 1: Present a “clean” credential

I know this is stupid, but it’s really the trick. You just bind-mount a temporary os-release that says you are running Ubuntu Noble. The mount is visible system-wide until you unmount it, and it never touches the real file on disk.

sudo tee /tmp/os-release-ubuntu >/dev/null <<'EOF'
PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
UBUNTU_CODENAME=noble
EOF
 
sudo mount --bind /tmp/os-release-ubuntu /etc/os-release

Nothing on disk changes. When you unmount, the original os-release is back. A reboot also clears it.
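The check being defeated here is tiny. A minimal Python sketch of what the installer effectively does (amdgpu-install is actually a shell script, and the allow-list below is illustrative, not AMD’s exact list):

```python
# Sketch of the distro gate: read /etc/os-release, parse KEY=value pairs,
# and compare the ID field against a hardcoded allow-list.
SUPPORTED_IDS = {"ubuntu", "debian", "rhel", "sles"}  # illustrative only

def parse_os_release(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip('"')
    return fields

def installer_would_run(os_release_text: str) -> bool:
    # This is the entire "supported distro" decision.
    return parse_os_release(os_release_text).get("ID") in SUPPORTED_IDS

tuxedo = 'NAME="TUXEDO OS"\nID=tuxedo\nID_LIKE=ubuntu\n'
ubuntu = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="24.04"\n'
print(installer_would_run(tuxedo))  # False: the real os-release fails the gate
print(installer_would_run(ubuntu))  # True: the bind-mounted one passes
```

Note that ID_LIKE=ubuntu is sitting right there in TuxedoOS’s real file; a check against ID_LIKE instead of ID would have made the whole bind-mount unnecessary.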

AMD doesn’t care and doesn’t check anything else, which rather proves my point about how poorly their support process is run right now. I would have expected them to check their own hardware first and the distribution last. That framing does more to make people want to use the hardware.

Step 2: Install AMD repository and ROCm

Run the AMD installer as documented.

ROCm 7.2.1 supports the latest Radeon 9000 Series (RDNA 4) and select 7000 Series (RDNA 3) GPUs, and introduces support for Ryzen APUs.

sudo apt update
wget https://repo.radeon.com/amdgpu-install/7.2.1/ubuntu/noble/amdgpu-install_7.2.1.70201-1_all.deb
sudo apt install ./amdgpu-install_7.2.1.70201-1_all.deb
amdgpu-install -y --usecase=rocm --no-dkms

The --no-dkms flag has to be there. AMD ships ROCm for Ryzen APUs on top of the inbox amdgpu kernel driver. Installing their DKMS module on a non-Ubuntu kernel leads to mismatches. The inbox driver in any recent kernel (6.14 or later) works.

When the install completes, unmount the bind, since we don’t need to fool them anymore:

sudo umount /etc/os-release

Step 3: Join GPU group and reboot

ROCm requires the current user to be in the render and video groups. Without these, rocminfo will not see the GPU.

sudo usermod -aG render,video $USER
sudo reboot

Step 4: Verify GPU is recognized

After the reboot, confirm three things: group membership, GPU enumeration, and OpenCL platform.

groups
rocminfo | grep -A2 "Agent 2"
/opt/rocm/bin/clinfo | grep -E "Device Name|Platform Name"

Expected output for the Kraken Point system I am testing with:

Name: gfx1152
Marketing Name: AMD Radeon 860M Graphics
Platform Name: AMD Accelerated Parallel Processing

Step 5: Prove HIP compiles and runs

The ROCm 6+ API dropped gcnArch in favor of gcnArchName, so I used this test:

cat > /tmp/hip_test.cpp <<'EOF'
#include <hip/hip_runtime.h>
#include <cstdio>
int main() {
  int n = 0;
  (void)hipGetDeviceCount(&n);
  printf("HIP devices: %d\n", n);
  for (int i = 0; i < n; i++) {
    hipDeviceProp_t p;
    (void)hipGetDeviceProperties(&p, i);
    printf("  %d: %s (%s)\n", i, p.name, p.gcnArchName);
  }
}
EOF
/opt/rocm/bin/hipcc /tmp/hip_test.cpp -o /tmp/hip_test
/tmp/hip_test

Successful output will look like this:

HIP devices: 1
  0: AMD Radeon Graphics (gfx1152)

At this point ROCm itself is complete. Every application that links against the system ROCm libraries will find the GPU.

WE’RE DONE! But wait, there’s more

Ollama now supports AMD graphics cards

Step 6: Strap Ollama to the GPU

Ollama bundles its own ROCm runtime in /usr/local/lib/ollama/rocm. The system ROCm install does not affect it. Ollama’s precompiled kernels target a specific list of GPU architectures, and gfx1152 is not currently on that list. Maybe it will be. But in the meantime the easy solution is to use HSA_OVERRIDE_GFX_VERSION, which tells the HSA runtime to treat the installed GPU as a different architecture. For RDNA 3.5 APUs (gfx1150, gfx1151, gfx1152), setting it to 11.0.0 loads gfx1100 kernels. RDNA 3 and RDNA 3.5 are close enough that gfx1100 code runs on RDNA 3.5 silicon for every op Ollama uses.
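The override string encodes a gfx target name in a simple way: the major version stays decimal, while minor and stepping become hex digits. A small helper of my own (not a ROCm API) to show the mapping:

```python
def gfx_from_override(version: str) -> str:
    """Map an HSA_OVERRIDE_GFX_VERSION string like "11.0.0" to the
    gfx target name the runtime will load kernels for.
    Major stays decimal; minor and stepping are single hex digits."""
    major, minor, stepping = (int(part) for part in version.split("."))
    return f"gfx{major}{minor:x}{stepping:x}"

print(gfx_from_override("11.0.0"))  # gfx1100 -- what Ollama will report
print(gfx_from_override("11.5.2"))  # gfx1152 -- the real Kraken Point target
print(gfx_from_override("9.0.10"))  # gfx90a  -- hex stepping on older parts
```

So 11.0.0 is just gfx1100 spelled as a version triple, which is why the override makes the runtime pick the gfx1100 kernel binaries.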

Create a systemd drop-in so the override persists across restarts:

sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
Environment="HIP_VISIBLE_DEVICES=0"
Environment="ROCR_VISIBLE_DEVICES=0"
EOF
 
sudo systemctl daemon-reload
sudo systemctl restart ollama

Confirm the environment actually reached the process:

sudo cat /proc/$(pgrep -f 'ollama serve')/environ | tr '\0' '\n' | grep -iE "hsa|hip|rocr"

Check the Ollama logs:

sudo journalctl -u ollama -n 80 --no-pager | grep -iE "rocm|gpu|inference compute"

Success will look something like this:

library=ROCm compute=gfx1100 name=ROCm0 description="AMD Radeon 860M Graphics" total="15.7 GiB" type=iGPU

Ollama reports the 860M as gfx1100 and is ready to offload model layers to it instead of soaking up your CPU cores. Before I wired up the GPU, my 16 cores were pegged at 100% for five minutes or more; after, the CPU sat around 5% while the GPU was pegged.

Step 7: GPU spotting during inference

Open up the system monitor (preferred if you like cool visuals) or just run rocm-smi in a loop in one terminal:

watch -n 0.5 rocm-smi

Then in another terminal run inference:

ollama run llama3.2:3b "explain the Bauhaus movement in detail"

GPU utilization shoots above 90% during generation. VRAM used jumps to roughly the model size.

Once you see jumps, it’s tuning time

Figuring out what is fast and stable under real workloads is a bigger post. To get started quickly, there are four AMD APU knobs to turn:

  1. Shared memory ceiling
  2. Ollama runtime flags
  3. CPU governor
  4. Power profile

First, the shared memory ceiling. AMD APUs have no dedicated VRAM; it’s kind of their cost-saving thing. The kernel caps how much system RAM the GPU can address via a Translation Table Manager (TTM) pages limit, and the default is half of system RAM. Raising it costs nothing while the GPU is idle. On a 32GB system, I figure just below 24GB is a reasonable ceiling.

sudo apt install -y pipx
pipx ensurepath
pipx install amd-debug-tools
 
amd-ttm           # show current
amd-ttm --set 22  # raise to 22GB

The 22 GiB leaves enough headroom for the OS, a browser, and KDE, absurd as that sounds. I remember back in the day… never mind. On 64GB, 48GB would be my starting point. On 128GB you can use AMD’s own recommendation of 96GB, which is kind of like saying the people with the most money and the least need for tuning get the AMD team’s attention.

The setting persists in /etc/modprobe.d/ttm.conf and takes effect after reboot.
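The limit is expressed in 4 KiB pages (the ttm pages_limit module parameter), so it’s worth sanity-checking the arithmetic behind the GiB values above. A quick sketch:

```python
# Convert a GiB ceiling into the 4 KiB page count the ttm module expects.
PAGE_SIZE = 4096  # bytes per page on x86-64

def gib_to_ttm_pages(gib: int) -> int:
    return gib * 1024**3 // PAGE_SIZE

# Values discussed in the text: half of 32GB default, then 22/48/96 GiB.
for gib in (16, 22, 48, 96):
    print(f"{gib} GiB -> pages_limit={gib_to_ttm_pages(gib)}")
```

If you’d rather not install amd-debug-tools, writing the equivalent `options ttm pages_limit=...` line into /etc/modprobe.d/ttm.conf by hand amounts to the same thing.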

Second, Ollama has four flags that affect iGPU inference:

sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
Environment="HIP_VISIBLE_DEVICES=0"
Environment="ROCR_VISIBLE_DEVICES=0"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_KEEP_ALIVE=30m"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama

OLLAMA_FLASH_ATTENTION=1 cuts KV cache memory by roughly half on most modern models. OLLAMA_KV_CACHE_TYPE=q8_0 quantizes the KV cache to 8-bit, which saves significant memory for long contexts with negligible quality cost. OLLAMA_NUM_PARALLEL=1 and OLLAMA_MAX_LOADED_MODELS=1 prevent Ollama from thrashing the shared memory pool with concurrent requests, which can be truly painful to the user experience on an iGPU. OLLAMA_KEEP_ALIVE=30m holds the model in GPU memory for half an hour instead of the default five minutes, because cold-starts are the slowest part of inference when using memory that isn’t dedicated.

Third, the CPU governor. Are you on a laptop? I sure am. A laptop default is usually powersave or schedutil, both of which clock the CPU down during the token-decode phase that runs between GPU kernels.

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
sudo cpupower frequency-set -g performance

Fourth, the power profile. TuxedoOS is very proud of its widget and app for power management. It’s a bit annoying, really, but it is what it is, and it can override governor decisions. The Tuxedo Control Center (TCC) also handles fan curves and hardware-specific quirks. TCC masks power-profiles-daemon on purpose, so we use TCC.

tuxedo-control-center &

I chose a performance-oriented profile in the GUI, which seems weird because it’s literally just a toggle. Why have a UI for a toggle? Maybe I’ll create a custom profile with the CPU governor set to performance and the fan curve ramped up for sustained load. On non-Tuxedo distros that use power-profiles-daemon, the equivalent is powerprofilesctl set performance. I will say this: when I was hammering the CPU before the GPU was recognized, the fans were so loud I couldn’t hear myself think, and my USB hub literally started screaming and shut down from the power conflicts. Anker, we need to have a word.

Reboot after the TTM change, and everything should be in place. Verify like this:

amd-ttm
sudo journalctl -u ollama --since "2 minutes ago" --no-pager | grep "inference compute"

The Ollama log line should show the new total VRAM ceiling matching your TTM setting.

Benchmark

When it’s good to go, you can send a generation through the API and check timing fields:

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Write a 300-word analysis of the Bauhaus Dessau period",
  "stream": false
}' > /tmp/ollama_result.json
 
python3 <<'EOF'
import json
d = json.load(open("/tmp/ollama_result.json"))
eval_s = d["eval_duration"] / 1e9
prompt_s = d["prompt_eval_duration"] / 1e9
print(f"prompt eval:  {d['prompt_eval_count']} tokens in {prompt_s:.2f}s = {d['prompt_eval_count']/prompt_s:.1f} tok/s")
print(f"generation:   {d['eval_count']} tokens in {eval_s:.2f}s = {d['eval_count']/eval_s:.1f} tok/s")
EOF

On my Radeon 860M (gfx1152, 8 CU RDNA 3.5) with 22GB TTM, performance governor, and flash attention enabled I posted these numbers:

llama3.2:3b Q4 → 31 tok/s generation, 360 tok/s prompt eval
qwen2.5:7b Q4 → 15 tok/s generation, 187 tok/s prompt eval

They are bandwidth-bound. Kraken Point has a 128-bit LPDDR5X memory bus at roughly 120 GB/s. Generation speed scales inversely with model size. Each token streams the full weights through memory. The 2.1x speed ratio between 3B and 7B tracks the 2.4x size ratio, consistent with a memory-bandwidth ceiling.
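The bandwidth claim is easy to sanity-check: each generated token streams every weight through memory once, so the theoretical ceiling is bandwidth divided by model size. A rough sketch, where the Q4 weight sizes are my own estimates rather than measured file sizes:

```python
# Roofline-style estimate: tok/s ceiling = memory bandwidth / bytes per token.
BANDWIDTH_GBPS = 120  # Kraken Point 128-bit LPDDR5X, approximate

models = {
    "llama3.2:3b Q4": 2.0,  # approx weight size in GB (assumption)
    "qwen2.5:7b Q4": 4.7,   # approx weight size in GB (assumption)
}
for name, size_gb in models.items():
    ceiling = BANDWIDTH_GBPS / size_gb
    print(f"{name}: <= {ceiling:.0f} tok/s theoretical ceiling")
```

The measured 31 and 15 tok/s land at roughly half of these ceilings, which is about what you’d expect once compute time and overhead are counted, and the ratio between the two models holds.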

Then we can confirm the model was being fully offloaded to GPU:

curl -s http://127.0.0.1:11434/api/ps | python3 -m json.tool | grep -E "size|vram"

size_vram equals size. The entire model is in GPU memory.

Fiddle context length

Since Ollama defaults to a 4096-token context for every model, I figure it’s worth a change. I tend to live in a world of longer files, and that means more memory is needed. With the q8_0 KV cache, qwen2.5:7b at 8K adds roughly 500MB over the 4K default, and 16K adds about 1GB. Under our 22GB ceiling this is still reasonable. Generation speed drops by about a quarter at 16K versus 4K because more KV cache streams through memory per token. There is no CLI flag, so set it per request, per model, or globally.
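You can ballpark the cache floor from the model shape. A sketch with hypothetical 7B-class GQA dimensions (the layer and head counts below are my assumptions, not read from the model file; real allocations run higher because compute buffers also scale with context):

```python
# Lower bound on KV cache size: 2 tensors (K and V) per layer, per token.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem):
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

# Hypothetical 7B-class shape: 28 layers, 4 KV heads (GQA), head dim 128.
for ctx in (4096, 8192, 16384):
    q8 = kv_cache_bytes(28, 4, 128, ctx, 1)   # q8_0: ~1 byte/element
    f16 = kv_cache_bytes(28, 4, 128, ctx, 2)  # f16: 2 bytes/element
    print(f"ctx={ctx}: q8_0 {q8 / 2**20:.0f} MiB, f16 {f16 / 2**20:.0f} MiB")
```

This is only the raw cache; it shows the q8_0 halving and the linear growth with context, which is why the knobs above matter more the longer your contexts get.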

Per request via the API:

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "summarize this long document ...",
  "stream": false,
  "options": { "num_ctx": 16384 }
}'

Per model via a Modelfile, creating a named variant:

cat > /tmp/qwen-16k.modelfile <<'EOF'
FROM qwen2.5:7b
PARAMETER num_ctx 16384
EOF
ollama create qwen2.5:7b-16k -f /tmp/qwen-16k.modelfile
ollama run qwen2.5:7b-16k

Globally for every model, add to the systemd drop-in:

Environment="OLLAMA_CONTEXT_LENGTH=8192"

What else can I tell you?

A new Tuxedo Computer running TuxedoOS on an AMD APU can feed Ollama the GPU. System ROCm 7.2.1 is available for any application that wants it. The HIP toolchain works on the actual architecture. With a tuned Ollama service, models fit GPU memory, flash attention gets used, and the KV cache gets “quantized” for comfortable context lengths.

Really this absurdly long post is a nothing-burger. There were two workarounds: a bind-mount for the installer’s OS check, and an HSA version override for Ollama’s bundled runtime. Neither touches the hardware, neither modifies any vendor code, and both revert cleanly.

Come on AMD, this post really doesn’t even need to exist, but you forced me to write it with your lazy “are you on the list” cop-out.

:~$ ollama run qwen2.5:7b
>>> write a haiku
秋叶落无声,
风过知时节,
静待冬来临。
+-----------------------------+
|           prompt            |
+--------------+--------------+
               |
+--------------v--------------+
|           Ollama            |
+--------------+--------------+
               |
+--------------v--------------+
|  HSA_OVERRIDE_GFX_VERSION   |
|         = 11.0.0            |
|   (gfx1100 kernels load)    |
+--------------+--------------+
               |
+--------------v--------------+
|      ROCm 7.2.1 / HIP       |
+--------------+--------------+
               |
+--------------v--------------+
|   amdgpu (inbox driver)     |
|   Linux 6.17 (TuxedoOS)     |
+--------------+--------------+
               |
+--------------v--------------+
|        Radeon 860M          |
|          gfx1152            |
+-----------------------------+
