Empty Error When Running Llama with llama-cpp

When running multiple open source Llama Large Language Models (LLMs) from the command line with llama-cpp and the llm tool, we may encounter an empty error such as:

$ llm -m modelName "test"
Error:

The empty error provides no clues, but it can happen when the installed version of llama-cpp does not match the model we are using.
Different models use different, mutually incompatible file formats internally (for example, older GGML model files need an older llama-cpp-python release, while newer GGUF files need a 0.2.x release), so we must install the version of llama-cpp that matches the given model.
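Before switching versions, it can help to try loading the model file directly with llama-cpp-python from Python, since this usually surfaces a more descriptive error than the empty one above. The snippet below is a minimal sketch; the model path is a placeholder for wherever the model file lives on disk:

# load_check.py - minimal sketch; adjust MODEL_PATH to your local model file
from llama_cpp import Llama

MODEL_PATH = "/path/to/llama-2-model-file"  # placeholder path

try:
    # Loading the model directly; an incompatible file format typically
    # raises an exception with a more descriptive message here.
    model = Llama(model_path=MODEL_PATH)
    print("Model loaded successfully")
except Exception as exc:
    print(f"Failed to load model: {exc}")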

For example, for Llama-2 Uncensored, we can use llama-cpp-python version 0.1.78.

For Llama-2, we can use version 0.2.11.

The following installed versions work at the time of writing:

For Llama-2 Uncensored, install using:

$ pip install llama-cpp-python==0.1.78

For Llama-2, use:

$ pip install llama-cpp-python==0.2.11
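Since both versions cannot coexist in a single environment, one practical workaround is to keep a separate Python virtual environment per model family, each with its own pinned llama-cpp-python. This is only a sketch; the directory names are arbitrary, and depending on how the models were registered you may also need llm and its llama-cpp plugin installed in each environment:

$ python -m venv ~/venvs/llama2-uncensored
$ ~/venvs/llama2-uncensored/bin/pip install llama-cpp-python==0.1.78

$ python -m venv ~/venvs/llama2
$ ~/venvs/llama2/bin/pip install llama-cpp-python==0.2.11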

Note that the following command reports the version of the llm tool itself, rather than llama-cpp:

$ llm --version
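
To check which version of llama-cpp-python is actually installed, we can ask pip:

$ pip show llama-cpp-python

Alternatively, the package exposes its version from Python (assuming the installed llama_cpp release defines __version__, which recent ones do):

$ python -c "import llama_cpp; print(llama_cpp.__version__)"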

To see all the models installed, use:

$ llm models

To run a test again after switching versions:

$ llm -m modelName "test prompt"
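
The same test can also be run from Python with llm's programmatic API, which can make any traceback easier to inspect. This is a minimal sketch; the model name is a placeholder for one of the names listed by llm models:

# quick_test.py - minimal sketch using llm's Python API
import llm

model = llm.get_model("modelName")  # placeholder: use a name from `llm models`
response = model.prompt("test prompt")
print(response.text())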
