

I just wanted to update this to mention that there are a lot of custom low-level performance improvements for CPU-based inference in Llamafile: https://justine.lol/matmul/
It’s just a different use case: a single-file large language model engine that automatically chooses the “best” parameters to run under. It uses llama.cpp under the hood.
The intent is to make getting up and running as easy as double-clicking a binary.
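
As a quick illustration: once a llamafile is running in server mode, it exposes an OpenAI-compatible chat endpoint locally, so you can script against it. This is a minimal sketch, assuming the default port 8080; the model name is a placeholder since the binary serves whatever model is embedded in it:

```python
import requests

# A minimal sketch of querying a running llamafile's OpenAI-compatible
# server. Assumes the default local port (8080); "local-model" is a
# placeholder name, as the llamafile serves its embedded model.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [
            {"role": "user", "content": "Say hello in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```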
The client is open source and can be administered using the open source Headscale server. I use it with Keycloak as an auth gateway.
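
For reference, here is a sketch of what the OIDC section of a Headscale config can look like when pointed at Keycloak. The hostname, realm, and secret are placeholders, not a verified working setup:

```yaml
# Excerpt of a headscale config.yaml, OIDC section only.
# All values below are illustrative placeholders.
oidc:
  issuer: https://keycloak.example.com/realms/example
  client_id: headscale
  client_secret: "<client-secret-from-keycloak>"
```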