Hard to say how much it will affect their bottom line, but AMD's new "Strix Halo" processor is now the best platform for anyone wanting to run a local LLM.

You can get one of these with 128GB of RAM for around $1.5k all together, and it will run mid/large models like gpt-oss 120B smoothly.

For reference, a single Nvidia 5090 (without the rest of the PC) is $2-3k and can't run that full model.

A Mac mini with 64GB of RAM is $2k and also can't run that full model.

A Mac Studio with 128GB of RAM can run it, but costs $3.7k.
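The comparison above comes down to a simple memory-footprint calculation. Here's a rough back-of-the-envelope sketch: the parameter count (~117B for gpt-oss 120B) and its ~4.25-bit MXFP4 quantization are from the model's published specs, but the runtime overhead figure is a loose assumption, not a measurement.

```python
# Rough check of which systems can hold gpt-oss 120B in memory.
# Assumptions: ~117B params at ~4.25 bits/param (MXFP4), plus a
# guessed ~10 GB for KV cache and runtime overhead.

def model_gb(params_b: float, bits_per_param: float) -> float:
    """Approximate weight size in GB for a quantized model."""
    return params_b * 1e9 * bits_per_param / 8 / 1e9

weights = model_gb(117, 4.25)  # roughly 62 GB of weights alone
overhead = 10                  # KV cache + runtime, rough guess
needed = weights + overhead

systems = [("Strix Halo 128GB", 128), ("RTX 5090", 32), ("Mac mini 64GB", 64)]
for name, mem in systems:
    verdict = "fits" if mem >= needed else "does not fit"
    print(f"{name}: {verdict} (~{needed:.0f} GB needed)")
```

By this estimate only the 128GB machines clear the bar, which is why the 5090's 32GB of VRAM and the 64GB Mac mini fall short despite costing more.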