I took a practice math test and would like to have it graded by an LLM, since I can't find the answer key online. I have 20GB of VRAM, but I'm on Intel Arc so I can't run Gemma 3. I'd prefer models from ollama.com 'cause I'm not deep enough down the rabbit hole to try the Hugging Face stuff yet and don't have time to right now.

      • brucethemoose@lemmy.world · 21 hours ago

        Oh yeah, presumably through SYCL or Vulkan splitting.

        I'd try Qwen3 30B, maybe a custom quantization if it doesn't quite fit in your VRAM pool (it should be very close). It should be very fast and quite smart.

        Qwen3 32B (a fully dense model) would fit too, but you'd definitely need to tweak the settings to keep it from being really slow.

        • HumanPerson@sh.itjust.works (OP) · 20 hours ago

          Qwen3 also doesn't work, because I'm using the IPEX-LLM Docker container, which ships ollama 0.5.8 or something. It doesn't matter now, because I've taken the test I was practicing for since posting this. Playing with Qwen3 on CPU, it seems good, but the reasoning feels like most open reasoning models: it gets the right answer, then goes "wait, that's not right…"

          • brucethemoose@lemmy.world · 20 hours ago

            Yeah it does that, heh.

            The Qwen team recommends a fairly high temperature, but I find it's better with modified sampling (lower temperature, 0.1 MinP, a bit of repetition penalty or DRY). Then it tends not to "second-guess" itself by taking the lower-probability choice of continuing to reason.
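            As a rough sketch of what that looks like in ollama terms (the numbers are illustrative rather than a tuned recipe; ollama exposes temperature, min_p and repeat_penalty, though not DRY, and the qwen3-tweaked name below is just a placeholder):

              # Modelfile
              FROM qwen3:30b
              PARAMETER temperature 0.6
              PARAMETER min_p 0.1
              PARAMETER repeat_penalty 1.05

            Then ollama create qwen3-tweaked -f Modelfile and run the new tag as usual.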

            If you're looking for alternatives, Koboldcpp does support Vulkan. It may not be as fast as the (SYCL?) Docker container, but it supports newer models and more features. It's also precompiled as a one-click exe: https://github.com/LostRuins/koboldcpp

  • SmokeyDope@lemmy.world (mod) · 1 day ago

    Models distributed as GGUF should all work with your GPU, assuming it's set up correctly and the model is actually loaded into VRAM. It shouldn't matter whether it's Qwen or Mistral or Gemma or Llama or LLaVA or Stable Diffusion. Maybe the engine you're using isn't properly configured to use your Arc card, so it's all just running on your regular RAM, which limits things? Idk.
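    One quick way to sanity-check that, assuming the engine is ollama: run a model you already have pulled and look at what ollama ps reports.

      ollama run gemma3:12b "What is 2+2?"   # any already-pulled model works here
      ollama ps                              # PROCESSOR column: "100% GPU" vs. a CPU/GPU split

    If it reports mostly CPU, the Arc card isn't actually being used.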

    An Intel Arc GPU might work with kobold and Vulkan without any extra technical setup. It's not as deep in the rabbit hole as you may think; a lot of work has been put into making one-click executables with nice GUIs that the average person can work with…

    Models

    Find a bartowski-made quantized GGUF of the model you want to use. Q4_K_M is the recommended average quant to try first. Try to make sure it all fits within your card, size-wise, for speed; that shouldn't be a big problem for you with 20GB of VRAM to play with. Hugging Face lists the size in GB next to each quant.
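    As a sketch, fetching one of those quants from the command line could look like this (the exact repo name is an assumption, so check bartowski's Hugging Face page for the real listing; downloading the single .gguf file from the repo's Files tab in a browser works just as well):

      pip install -U "huggingface_hub[cli]"
      # grab only the Q4_K_M file into a local models folder
      huggingface-cli download bartowski/Qwen_Qwen3-8B-GGUF --include "*Q4_K_M.gguf" --local-dir ./models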

    Start small with a high quant of Qwen3 8B, then a Gemma 12B, then work your way up to a medium quant of DeepHermes 24B.

    Thinking models are better at math and logical problem solving, but you need to know how to communicate and work with LLMs to get good results no matter what. Ask one to break down a problem you already solved and test it for comprehension.

    kobold engine

    Download kobold.cpp, execute it like a regular program, and adjust settings in the graphical interface that pops up, or make a startup script with flags.

    For the processing backend, see if Vulkan works with Intel Arc. Make sure flash attention is enabled too. Offload all layers of the model; I make a note of exactly how many layers each model has during startup and specify it, but it should figure it out smartly even if you don't.
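    A minimal launch along those lines might look like this (the model filename is a placeholder; skip the flags entirely if you prefer the GUI):

      # Vulkan backend, all layers offloaded, flash attention on
      ./koboldcpp --model ./models/Qwen3-8B-Q4_K_M.gguf --usevulkan --gpulayers 99 --flashattention --contextsize 8192

    Asking for more layers than the model has just means "offload everything"; koboldcpp caps it at the real count.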

  • hendrik@palaver.p3x.de · 2 days ago

    I don't have any good recommendations. I just upload such one-off requests to AI Studio, ChatGPT and the like. But keep in mind AI isn't perfect at math; they sure make a lot of mistakes with my assignments. I don't know what level your math test was. AI does an acceptable job at elementary-school math; with higher-level math it'll give both correct and wrong results by chance. Might be good enough, I don't really know.

    I'd recommend Wolfram Alpha. That's not local, nor is it AI, but it solves equations, calculates, transforms and draws graphs with precision, and there isn't any guessing involved.

    • SmokeyDope@lemmy.world (mod) · 1 day ago

      Wolfram Alpha actually has an LLM API, so your local models can call its factual database for information when doing calculations through tool calling. I thought you might find that cool. It's a shame there is no open alternative to WA; they know their dataset is one of a kind and worth its weight in gold. Maybe one day a hero will leak it 🤪
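      For anyone curious, calling it is just an HTTP GET against the LLM API endpoint (you need a free AppID from the Wolfram developer portal; YOUR_APPID and the query are placeholders):

        curl -G "https://www.wolframalpha.com/api/v1/llm-api" \
          --data-urlencode "input=integrate x^2 sin(x) dx" \
          --data-urlencode "appid=YOUR_APPID"

      It returns plain text sized to be dropped into an LLM's context or handed back as a tool-call result.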

  • pebbles@sh.itjust.works · 2 days ago

    If you were down to use Hugging Face, DeepHermes is a reasoning model built on top of Mistral Small 24B. It'd fit decently well in 20GB.

    Maybe the ollama run hf.co/{username}/{repository} command would make it easy enough for you.
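    Something along these lines, though the exact repo name and quant tag are assumptions worth verifying on Hugging Face first:

      ollama run hf.co/NousResearch/DeepHermes-3-Mistral-24B-Preview-GGUF:Q4_K_M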

    Reasoning models are usually better for math.

  • hedgehog@ttrpg.network · 2 days ago

    Assuming you're using ollama (is there another reason to use ollama.com?), you can use compatible files from Hugging Face directly in ollama. The model page will give you the instructions for the command to run; I always change ollama run to ollama pull, though. Instructions: https://huggingface.co/docs/hub/ollama

    You should be able to fit Qwen3 32B at Q4_K_M with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including /no_think at the end of your prompt to speed up responses, but I’m not sure how well it handles math under those circumstances. I wouldn’t even consider disabling thinking unless you were grading one question per prompt.
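    For example (the fraction question is just a stand-in for whatever you're grading):

      # thinking on (default) vs. off via the /no_think soft switch
      ollama run qwen3:32b "Is 3/4 + 1/6 equal to 11/12? Show your work."
      ollama run qwen3:32b "Is 3/4 + 1/6 equal to 11/12? Answer yes or no. /no_think"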

    The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is Q4_K_M. I personally am using the Q6_K quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations.
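    If you want to go the same route, the pulls look roughly like this (the unsloth repo and tag spelling are assumptions worth double-checking on Hugging Face):

      ollama pull qwen3:32b                           # library default, Q4_K_M
      ollama pull hf.co/unsloth/Qwen3-32B-GGUF:Q6_K   # unsloth's larger Q6_K quant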

    I’m not sure if Q4_K_M is the optimal quant style for Intel Arc, but the others that might be better are not supported by ollama, anyway, as far as I know.

    Qwen3’s real world knowledge is bad, so if there are questions that rely on that you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web searches.

    Other options: This does seem like something Gemma3 27B would be good at, so it's too bad you can't use it. Older Gemmas may be good, but I'm not sure. Llama 3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half of it to GPU. I could see it outperforming my recommendation above, but I would be very surprised if the 8B version outperformed it. Older Qwen2.5 is decent at math, but unless you grab QwQ it doesn't include thinking.

    • HumanPerson@sh.itjust.works (OP) · 2 days ago

      Unfortunately I can't run Qwen3 with Intel either. I'm just doing gemma3:12b on CPU for now. I might try QwQ, as I think it runs on older ollama versions.