• brokenlcd@feddit.it · 2 months ago

    The problem is… how do we run it if ROCm is still a mess for most of their GPUs? CPU time?

    • swelter_spark@reddthat.com · 6 days ago

      There are ROCm versions of llama.cpp, ollama, and kobold.cpp that work well, although they'll have to add support for this model before they can run it.
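
      For what it's worth, once one of those backends adds support, actually running the model looks the same as for any other. A minimal sketch against ollama's local HTTP API (default port 11434); the model tag here is a placeholder, not a real one:

      ```python
      # Minimal sketch: query a model through ollama's local HTTP API.
      # Assumes a ROCm-enabled ollama is running on the default port and
      # has already pulled the model; "some-new-model" is hypothetical.
      import json
      import urllib.request

      payload = json.dumps({
          "model": "some-new-model",  # placeholder tag; swap in the real one
          "prompt": "Hello from an AMD GPU!",
          "stream": False,            # one JSON object instead of a stream
      }).encode("utf-8")

      req = urllib.request.Request(
          "http://localhost:11434/api/generate",
          data=payload,
          headers={"Content-Type": "application/json"},
      )

      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```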