Can I run Gemma 2 27B (Q4_K_M, GGUF 4-bit) on AMD RX 7900 XTX?

Perfect
Yes, you can run this model!
GPU VRAM: 24.0 GB
Required: 13.5 GB
Headroom: +10.5 GB

VRAM Usage

13.5 GB of 24.0 GB used (56%)
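For reference, the figures above are consistent with a simple weights-only estimate: parameter count times bits per weight. A minimal sketch, assuming a flat 4.0 bits per weight for Q4_K_M (actual GGUF files run slightly larger, since some tensors are kept at higher precision):

# Weights-only VRAM estimate; runtime overhead (KV cache, activations)
# is ignored here. The 4.0 bits/weight figure is an assumption.
PARAMS_B = 27.0        # Gemma 2 27B, in billions of parameters
BITS_PER_WEIGHT = 4.0  # assumed effective rate for a 4-bit quant
GPU_VRAM_GB = 24.0     # AMD RX 7900 XTX

required_gb = PARAMS_B * BITS_PER_WEIGHT / 8  # bits -> bytes
headroom_gb = GPU_VRAM_GB - required_gb
used_pct = 100 * required_gb / GPU_VRAM_GB

print(f"Required: {required_gb:.1f} GB")   # Required: 13.5 GB
print(f"Headroom: {headroom_gb:+.1f} GB")  # Headroom: +10.5 GB
print(f"Used: {used_pct:.0f}%")            # Used: 56%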

Performance Estimate

Tokens/sec: ~42.0
Batch size: 1
Context: 8192 tokens
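The throughput figure matches a common back-of-envelope model: single-stream decoding is memory-bandwidth bound, so each generated token streams the full weight set from VRAM once. A hedged sketch; the efficiency factor is an assumption, not a measurement:

BANDWIDTH_GBPS = 960.0  # RX 7900 XTX peak memory bandwidth (GB/s)
MODEL_SIZE_GB = 13.5    # quantized weights resident in VRAM
EFFICIENCY = 0.6        # assumed fraction of peak bandwidth achieved

tokens_per_sec = BANDWIDTH_GBPS / MODEL_SIZE_GB * EFFICIENCY
print(f"~{tokens_per_sec:.0f} tok/s")  # ~43 tok/s, close to the ~42 above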

Technical Analysis

AMD RX 7900 XTX handles Gemma 2 27B (27B parameters) comfortably. With 24.0 GB of VRAM and only 13.5 GB required, you have 10.5 GB of headroom, which is enough for extended context lengths, batch processing, and smooth operation.

Recommendation

You can run Gemma 2 27B on AMD RX 7900 XTX without compromises. Consider using the full context length, and larger batch sizes if you serve concurrent requests, for optimal throughput.

Recommended Settings

Batch size: 1
Context length: 8192
Inference framework: llama.cpp or vLLM
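To make these settings concrete, here is a minimal sketch using the llama-cpp-python bindings. The model filename is a hypothetical placeholder, and on this AMD card the library needs to be built with its ROCm/HIP backend for the GPU offload to take effect:

from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,       # recommended full context length
    n_gpu_layers=-1,  # offload all layers to the GPU
)

out = llm("Explain the KV cache in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])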

Frequently Asked Questions

Can I run Gemma 2 27B on AMD RX 7900 XTX?
AMD RX 7900 XTX has 24.0 GB of VRAM, which leaves 10.5 GB of headroom beyond the 13.5 GB that Gemma 2 27B requires. That is ample space for the KV cache, batching, and extended context lengths; see the sizing sketch after this FAQ.
How much VRAM does Gemma 2 27B need?
At Q4_K_M, Gemma 2 27B requires approximately 13.5 GB of VRAM.
What performance can I expect?
Roughly 42 tokens per second at batch size 1.
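The headroom claim in the first answer can be sanity-checked. A hedged sketch of the fp16 KV-cache footprint, assuming Gemma 2 27B's published shape of 46 layers, 16 KV heads, and a head dimension of 128 (verify these against the model card before relying on them):

LAYERS, KV_HEADS, HEAD_DIM = 46, 16, 128
BYTES_PER_VALUE = 2  # fp16
CONTEXT = 8192

# Keys and values are each stored per layer, per KV head, per head dim.
per_token = LAYERS * KV_HEADS * HEAD_DIM * 2 * BYTES_PER_VALUE
total_gb = per_token * CONTEXT / 1e9
print(f"KV cache at {CONTEXT} tokens: ~{total_gb:.1f} GB")  # ~3.1 GB

At roughly 3 GB, the full-context KV cache fits easily inside the 10.5 GB of headroom.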