Running DeepSeek LLM Locally Inside Zeus

jussij
Site Admin
Posts: 2652
Joined: Fri Aug 13, 2004 5:10 pm

Running DeepSeek LLM Locally Inside Zeus

Post by jussij »

The Zeus IDE has scripts for AI 'code explain' and 'ask a question' prompts as described here: https://www.zeusedit.com/zBB3/viewtopic.php?t=8402

IMPORTANT: The standard Zeus 'autocomplete' script can't be used with most DeepSeek models, as they don't support Fill-in-the-Middle (FIM) operation.

To use DeepSeek for autocomplete, refer to the Zeus deepseek_complete.py script, which uses a specific DeepSeek model to implement autocomplete.
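As a rough illustration of how completion can work without FIM support, a plain chat-style prompt can simply ask the model to continue the code at the cursor. This is only a sketch, not the actual deepseek_complete.py code; the function name and prompt wording are assumptions:

Code: Select all

```python
# Illustrative sketch only: builds a chat-style completion prompt for
# models without Fill-in-the-Middle support. The prompt wording is an
# assumption, not the actual deepseek_complete.py implementation.
def build_completion_prompt(code_before_cursor, language="python"):
    """Ask the model to continue the given code fragment."""
    return (
        f"Continue the following {language} code. "
        "Reply with only the code that should come next:\n\n"
        + code_before_cursor
    )

# The resulting prompt would then be sent to the local model, e.g.:
#   import ollama
#   reply = ollama.chat(model='deepseek-r1:7b',
#                       messages=[{'role': 'user', 'content': prompt}])
prompt = build_completion_prompt("def add(a, b):\n    return ")
print(prompt)
```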

NOTE: These Zeus scripts can also be used as templates, making it easy to create other types of AI prompts.
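For example, a new prompt type can follow the same general shape as the existing scripts. The sketch below is loosely modelled on that shape but is not actual Zeus script code; the message wording and the 'code review' prompt are illustrative assumptions:

Code: Select all

```python
# Minimal sketch of a custom AI prompt script, loosely modelled on the
# shape of the Zeus Ollama scripts. The message wording and prompt type
# are illustrative assumptions, not the actual Zeus script code.
def build_messages(selected_code):
    """Build a chat request asking the model to review the selected code."""
    return [
        {'role': 'system',
         'content': 'You are a code reviewer. Be brief and concrete.'},
        {'role': 'user',
         'content': 'Review this code:\n\n' + selected_code},
    ]

messages = build_messages("def f(x): return x * 2")

try:
    import ollama
    # Requires a running 'ollama serve' and a pulled model.
    reply = ollama.chat(model='deepseek-r1:7b', messages=messages)
    print(reply['message']['content'])
except Exception as exc:  # ollama package missing or server not running
    print("Could not query the model:", exc)
```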

One set of those Zeus scripts runs the LLM locally using Ollama, and with those scripts it is possible to run a DeepSeek model.

Here are the instructions on how to do this:

1. Download and install Ollama found here: https://ollama.com/download/windows

NOTE: The Ollama installer configures the Ollama application to autostart. If this is not desired, first kill the Ollama server using Task Manager, then use Windows search to find 'Startup Apps' and turn off the Ollama application.

2. Pip will need to be installed. This can be done from inside Zeus by using the Tools, DOS Shell menu and running the following commands:

a. Run this command to install pip:

Code: Select all

python.exe -m ensurepip
b. Upgrade pip to the latest version using this command line:

Code: Select all

python -m pip install --upgrade --force-reinstall --no-cache-dir pip
3. With pip installed, use the Zeus Tools, DOS Shell menu and run the following command to install the Ollama Python package:

Code: Select all

pip install ollama
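To confirm the install worked, a short check (an illustrative helper, not part of Zeus) can verify the package is visible to the Python environment:

Code: Select all

```python
# Illustrative helper (not part of Zeus): check that a Python package is
# importable in this environment, without actually importing it.
import importlib.util

def package_installed(name):
    """Return True if 'name' can be imported in this Python environment."""
    return importlib.util.find_spec(name) is not None

# After 'pip install ollama' this should report True:
print("ollama installed:", package_installed("ollama"))
```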
4. Copy the details of the DeepSeek model to be installed from this page: https://ollama.com/library/deepseek-r1

5. The model is installed using the Ollama pull command, specifying the model's name and tag.

So for example, this would be the command used to install the DeepSeek model:

Code: Select all

ollama pull deepseek-r1:7b
NOTE: The OLLAMA_MODELS environment variable can be used to specify the folder used to store these models:

Code: Select all

set OLLAMA_MODELS=c:\ollama\models
6. Check that the model has been installed using the list command:

Code: Select all

ollama list
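The same check can be scripted. This is an illustrative helper working on the model names shown in the first column of the 'ollama list' output; the sample names below are assumptions for demonstration:

Code: Select all

```python
# Illustrative helper: check whether a model appears in the output of
# 'ollama list'. The sample names below are assumptions for demonstration.
def model_installed(installed_names, wanted):
    """True if 'wanted' matches an installed model, ignoring a missing tag."""
    for name in installed_names:
        if name == wanted or name.split(':')[0] == wanted:
            return True
    return False

# Model names as shown in the first column of 'ollama list' output:
names = ["deepseek-r1:7b", "codellama:7b-instruct"]
print(model_installed(names, "deepseek-r1:7b"))  # exact match
print(model_installed(names, "deepseek-r1"))     # match ignoring the tag
```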
7. Open the Zeus ollama_zeus.py file found in the Zeus zScript folder and update the model details. The model is referenced in a few locations, which can be found by searching for this text:

Code: Select all

model='codellama:7b-instruct'):
Assuming the deepseek-r1:7b model was downloaded, change the code in each of those locations to use the new model details:

Code: Select all

model='deepseek-r1:7b'):
This line of code defines the model used by the script, and it can be modified to suit.
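One way to avoid editing several locations is to read the model name once from an environment variable with a fallback default. This is a hypothetical approach, not something the Zeus scripts do; the 'ZEUS_OLLAMA_MODEL' variable name is made up for the sketch:

Code: Select all

```python
import os

# Hypothetical approach (not used by the Zeus scripts): read the model
# name once from an environment variable, falling back to a default, so
# only one place needs editing. 'ZEUS_OLLAMA_MODEL' is a made-up name.
DEFAULT_MODEL = 'deepseek-r1:7b'
MODEL = os.environ.get('ZEUS_OLLAMA_MODEL', DEFAULT_MODEL)

def ask_model(prompt, model=MODEL):
    """Placeholder showing the single point where the model name is used."""
    return (model, prompt)

print(ask_model("explain this code")[0])
```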

8. Start the local Ollama service:

Code: Select all

ollama serve
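To confirm the service is up, a small check can probe the Ollama server, which listens on port 11434 by default. This helper is illustrative, not part of the Zeus scripts:

Code: Select all

```python
# Illustrative check (not part of Zeus) that the local Ollama server is
# reachable. Ollama listens on port 11434 by default.
import urllib.request
import urllib.error

def server_up(url="http://127.0.0.1:11434", timeout=2):
    """Return True if an HTTP server answers at 'url'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

print("Ollama server running:", server_up())
```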
9. Use the following page for instructions on how to use these Zeus AI scripts: https://www.zeusedit.com/zBB3/viewtopic.php?t=8402