Run the Mistral 7B LLM Locally

We can easily run the Mistral 7B (seven billion parameter) Large Language Model locally using Ollama. This example assumes we are running on macOS.

First, install Ollama.

Download the installer from:

https://github.com/jmorganca/ollama

Double-click the downloaded app; it will install the ollama command-line binary.
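Alternatively, Ollama may also be installable via Homebrew (an assumption; the direct download above is the documented path). Note that the Homebrew formula provides only the command-line tool, so the background server typically needs to be started separately:

$ brew install ollama
$ ollama serve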

Now, in a terminal, run:

$ ollama --version

The output should be similar to:

ollama version 0.1.13

If the version prints, the command is installed and we can download the Mistral 7B model with:

$ ollama run mistral

This will download the model (several gigabytes, so the first run takes a while) and then start it.
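If we only want to fetch the model without opening a chat session, we can pull it explicitly and then confirm it is present (both are standard Ollama subcommands):

$ ollama pull mistral
$ ollama list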

Once ollama run has loaded the model, we should see:

>>> Send a message (/? for help)

Now, try a test prompt:

>>> What is the capital of Estonia?

The capital of Estonia is Tallinn.
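To leave the interactive session, type /bye.

Ollama also runs a local HTTP server (on port 11434 by default), so we can send the same prompt programmatically. Here is a minimal sketch against the /api/generate endpoint; with "stream": false the server returns a single JSON object whose response field holds the answer (the exact response format may vary between versions):

$ curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What is the capital of Estonia?",
  "stream": false
}'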