Hacker News

I realize it does not address the OP's security concerns, but I'm having success running ROCm containers[0] on Alpine Linux specifically for llama.cpp. I also got vLLM to run in a ROCm container, but I didn't have time to diagnose perf problems, and llama.cpp is working well for my needs.

[0] https://github.com/kyuz0/amd-strix-halo-toolboxes




FWIW, Alpine now has native packages for llama.cpp (using Vulkan).
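A minimal sketch of trying the native package route, assuming the package is named `llama.cpp` (with a Vulkan-enabled build) in Alpine's community repository and that the `-ngl` flag is used to offload layers to the GPU, as in upstream llama.cpp:

```shell
# Install llama.cpp from Alpine's repos (package name assumed; check `apk search llama`)
apk add llama.cpp

# Run a quick prompt, offloading all layers to the GPU via the Vulkan backend
# (model path is a placeholder)
llama-cli -m ./model.gguf -ngl 99 -p "Hello"
```

The Vulkan backend sidesteps the ROCm stack entirely, which is part of the appeal on a distro like Alpine where the full ROCm userland is heavyweight to package.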


