I suspect you’re running a very large model like DeepSeek in coherent memory?
Keep in mind that this little DGX only has 128GB of unified memory, which means it can only run fairly small models such as Qwen3 Coder, where prompt processing isn’t an issue.
I’m not doubting your experience with the GH200, but it doesn’t seem relevant here, because memory bandwidth on the Spark becomes the bottleneck well before prompt processing does.
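For a rough sense of the numbers, here’s a back-of-envelope sketch. The parameter counts, the ~4.5 bits/weight quantization, and the ~273 GB/s bandwidth figure for the Spark are my own assumptions for illustration, not measurements from either machine:

```python
# Back-of-envelope: weight footprint and a bandwidth-bound decode estimate.
# All figures below are assumptions for illustration (parameter counts,
# ~4.5 bits/weight quantization, ~273 GB/s unified-memory bandwidth).

def weights_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate in-memory size of quantized weights, in GB."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def decode_tok_s(active_params_b: float, bandwidth_gb_s: float,
                 bits_per_weight: float = 4.5) -> float:
    """Rough upper bound on decode speed: each generated token has to
    stream the active weights through memory once."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

SPARK_BW = 273  # GB/s, assumed unified-memory bandwidth of the Spark

# DeepSeek-class MoE: ~671B total / ~37B active per token (assumed)
print(f"DeepSeek-ish weights: {weights_gb(671):6.0f} GB (vs 128 GB on the Spark)")
print(f"DeepSeek-ish decode:  {decode_tok_s(37, SPARK_BW):6.1f} tok/s upper bound")

# Qwen3-Coder-30B-A3B-class MoE: ~30B total / ~3B active (assumed)
print(f"Qwen3 Coder weights:  {weights_gb(30):6.0f} GB")
print(f"Qwen3 Coder decode:   {decode_tok_s(3, SPARK_BW):6.1f} tok/s upper bound")
```

Under those assumptions the DeepSeek-class model doesn’t come close to fitting in 128GB, while the small Qwen3 Coder fits with room to spare, and its decode speed is capped by bandwidth long before prompt processing matters.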