I took a practice test (math) and would like to have it be graded by a LLM since I can’t find the key online. I have 20GB VRAM, but I’m on intel Arc so I can’t do gemma3. I would prefer models from ollama.com 'cause I’m not deep enough down the rabbit hole to try huggingface stuff yet and don’t have time to right now.

    • brucethemoose@lemmy.world · edited · 1 day ago

      Oh yeah, presumably through SYCL or Vulkan splitting.

      I'd try Qwen3 30B, maybe with a custom quantization if it doesn't quite fit in your VRAM pool (it should be very close). It should be very fast and quite smart.
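      Whether a given quant fits is just arithmetic: parameters times bits-per-weight, plus some overhead for context and buffers. A rough sketch (the bits-per-weight figures are approximations for common llama.cpp quant types, and the 10% overhead is an assumption):

```python
# Rough GGUF size estimate: params * bits_per_weight / 8 bytes, plus ~10%
# overhead (assumed) for KV cache and compute buffers.
# Bits-per-weight values are approximate for common llama.cpp quant types.
def approx_gguf_gb(params_b, bits_per_weight, overhead=1.10):
    """Estimated on-disk/in-memory size in GB for a params_b-billion model."""
    return params_b * bits_per_weight / 8 * overhead

for quant, bpw in [("Q4_K_M", 4.85), ("Q4_0", 4.55), ("Q3_K_M", 3.9)]:
    print(f"30B at {quant}: ~{approx_gguf_gb(30, bpw):.1f} GB")
```

      By this estimate a 30B model at Q4_K_M lands right around 20 GB, which is why it's borderline on a 20 GB card and why dropping to a 3-bit-ish quant buys comfortable headroom.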

      Qwen3 32B (a fully dense model) would fit too, but you'd definitely need to tweak the settings to keep it from being really slow.

      • HumanPerson@sh.itjust.works (OP) · 1 day ago

        Qwen3 also doesn't work, because I'm using the ipex-llm docker container, which ships ollama 5.8 or something. It doesn't matter now, since I've taken the test I was practicing for since posting this. Playing with Qwen3 on CPU, it seems good, but the reasoning feels like most open reasoning models: it gets the right answer, then goes "wait, that's not right…"

        • brucethemoose@lemmy.world · edited · 1 day ago

          Yeah it does that, heh.

          The Qwen team recommends a fairly high temperature, but I find it's better with modified sampling (lower temperature, 0.1 MinP, a bit of rep penalty or DRY). Then it tends not to "second-guess" itself and take the lower-probability path of continuing to reason.
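          For reference, here's what settings along those lines could look like as an Ollama /api/generate request body. The option names (temperature, min_p, repeat_penalty) are Ollama's documented "options" keys; the exact values and the model tag are just illustrative, and DRY isn't an Ollama option (it's a koboldcpp/text-generation-webui sampler), so it's left out:

```python
# Sketch: sampler settings in the spirit of the comment above, as an
# Ollama /api/generate payload. Values are illustrative, not tuned.
import json

options = {
    "temperature": 0.5,      # lower than the model card's recommended default
    "min_p": 0.1,            # drop tokens below 10% of the top token's probability
    "repeat_penalty": 1.05,  # a bit of repetition penalty
}

payload = {
    "model": "qwen3:30b",    # assumed tag; use whatever tag you pulled
    "prompt": "Grade this solution step by step: ...",
    "options": options,
}

print(json.dumps(payload, indent=2))
```

          You'd POST that to a running Ollama server's /api/generate endpoint; in koboldcpp the equivalent knobs live in its own sampler settings, where DRY is available too.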

          If you're looking for alternatives, Koboldcpp does support Vulkan. It may not be as fast as the (SYCL?) docker container, but it supports newer models and more features. It's also precompiled as a one-click exe: https://github.com/LostRuins/koboldcpp