

There are a few. There’s Private AI. It is free (as in beer), but it’s not libre (i.e., not open source). The app is a bit sketchy too, so I would still recommend following the tutorial instead.
Out of curiosity, why do you not want to use a terminal for that?
Though apparently I didn’t need step 6, as it started running after I downloaded it.
Hahaha. It really is a little redundant, now that you mention it. I’ll remove it from the post. Thank you!
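For anyone else following along, here is a minimal sketch of why that step can be redundant, assuming the tutorial uses the Ollama CLI (an assumption on my part; the model tag is just an example):

```sh
# Assumption: the tutorial uses the Ollama CLI; the model tag is illustrative.
# "ollama run" pulls the model automatically if it isn't on disk yet, then
# drops straight into an interactive chat, so a separate "start the model"
# step after downloading is often unnecessary.
ollama run llama3.2:3b

# An explicit pull is only needed if you want to download ahead of time:
ollama pull llama3.2:3b
```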
Good fun. Got me interested in running local LLM for the first time.
I’m very happy to hear my post motivated you to run an LLM locally for the first time! Did you manage to run any other models? How was your experience? Let us know!
What type of performance increase should I expect when I spin this up on my 3070 Ti?
That really depends on the model, to be completely honest. Make sure to check the model’s requirements. For a small model like llama3.2:3b you can at least expect a significant speedup over CPU-only inference, since it fits comfortably in the 3070 Ti’s 8 GB of VRAM.
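If you want to confirm the model is actually running on the GPU rather than falling back to the CPU, here is a quick sketch, again assuming the Ollama CLI from the tutorial (the model tag and prompt are illustrative):

```sh
# Assumption: Ollama CLI; the model tag and prompt are illustrative.
# Ollama uses a CUDA-capable GPU automatically when one is detected.
ollama run llama3.2:3b "Say hello."

# In another terminal, check where the loaded model is resident;
# the PROCESSOR column reads "100% GPU" when the model fits in VRAM:
ollama ps

# Or watch utilization and VRAM usage directly with NVIDIA's tool:
nvidia-smi
```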
I didn’t use an LLM to make the post. I did, however, use Claude to make it clearer since English is not my first language. I hope that answers your question.
I see. I don’t think there are many solutions on that front for Android. For PC there are a few, such as LM Studio.