llamafile as LLM server for Mantella mod and Skyrim, is working nice but there is a little problem. #415
Rewritten for clarity, please confirm if correct.

**Bug Report**

**Issue:** llamafile only generates a response for Mantella after the player briefly alt+tabs out of Skyrim.

**Steps to Reproduce:** Run llamafile as the LLM server for the Mantella mod, then start a conversation with an NPC in Skyrim. The response is only generated after alt+tabbing out of and back into the game.

**Expected Behavior:** llamafile generates the response as soon as the conversation starts, without needing to alt+tab.

**Background:** The Mantella mod introduces the ability to talk to NPCs in Skyrim, revolutionizing the RPG experience. The mod author uses koboldcpp as the LLM server, but it crashes on my PC due to lack of AVX2 support, and other accelerators like "blas" are too slow. As an alternative, I am using llamafile, which is efficient and performs well on my NVIDIA 3060 GPU with 12 GB of VRAM. However, the need to alt+tab to trigger responses is the primary issue I need to resolve.
@mofosyne thank you very much. I'm not very good at writing bug reports; next time I'll try to force myself to do my best.
This is Windows, correct? llamafile is a CLI application. How would the state of the window manager impact its operation?
Is it possible that it's a bug in the mod? Maybe give the mod writer a poke and link this issue to them and see if they reply. |
Yes, it's Windows, and llamafile runs as a CLI application. The Mantella LLM is from here, but that is irrelevant: it happens with any LLM model.
To be honest, I don't even know if it's a llamafile issue. I also reported it to the author of the mod when I wrote the issue here.
In case someone wants to try, this is the quick tutorial (it is not exhaustive; refer to the official tutorials for the correct step-by-step instructions). You need:
play the game (go to the mod configuration in the game for Mantella spell customizations and shortcuts)
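For reference, here is a minimal sketch of how llamafile can be started in server mode so the mod can reach it over HTTP. The model filename, port, and GPU layer count are placeholder assumptions; adjust them to your own setup:

```shell
# Launch llamafile as a local HTTP server (it inherits the llama.cpp
# server-style flags). The model file name below is a placeholder.
./llamafile --server \
  -m mistral-7b-instruct.Q4_K_M.gguf \
  --host 127.0.0.1 --port 8080 \
  -ngl 35   # offload layers to the GPU (e.g. a 3060 with 12 GB of VRAM)
```

Mantella would then be pointed at `http://127.0.0.1:8080` in its configuration; the exact setting name depends on the mod's config files.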
The Mantella mod introduces the possibility of talking to Skyrim NPCs, revolutionizing the way of playing this RPG and making it a unique experience.

Officially, the author of the mod relies on the koboldcpp LLM server. Unfortunately, koboldcpp with CUDA crashes on my PC because my processor doesn't support AVX2, while the other "blas" accelerators are too slow. So as an alternative I use llamafile; it works nicely, is very light, and performs very well on my 3060 with 12 GB.

The only problem is that every time I start a conversation, in order for the LLM to generate the response, I have to briefly alt+tab to "exit and re-enter the game". Only then does llamafile generate the response and trigger the loop with the voice speech. This also works for multiple comments, but after I ask the NPC a new question, I have to alt+tab again to trigger the LLM server. I was wondering what could cause this and whether there is a way to overcome the problem.
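One way to narrow down whether the stall is in llamafile or in the mod is to query the server directly while the game keeps focus. The sketch below builds a request for the OpenAI-compatible chat endpoint that llamafile's server mode exposes; the host, port, and model name are assumptions to adapt:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       url: str = "http://127.0.0.1:8080/v1/chat/completions"):
    """Build an HTTP request for a llamafile/llama.cpp-style chat endpoint.

    The URL and payload shape assume llamafile's OpenAI-compatible server;
    adjust the host and port to match your own launch flags.
    """
    payload = {
        "model": "local",  # placeholder; local servers typically accept any name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually probe the running server while Skyrim has focus, one could run:
#   with urllib.request.urlopen(build_chat_request("Hello"), timeout=30) as r:
#       print(r.status, r.read()[:200])
# If this returns promptly without alt+tabbing, the stall is likely on the
# mod's side rather than in llamafile itself.
```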