IMPORTANT: The standard Zeus 'autocomplete' script can't be used with most DeepSeek models, as they don't support Fill-in-the-Middle (FIM) completion.
To use DeepSeek for autocomplete, refer to the Zeus deepseek_complete.py script, which uses a specific DeepSeek model to implement autocomplete.
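To illustrate the difference (a minimal sketch, not the actual Zeus script code): a FIM-capable model such as codellama is given both the text before and after the cursor using its documented infill markers, while a non-FIM model like deepseek-r1 can only be handed the text before the cursor. The function names below are invented for illustration.

```python
# Sketch: building an autocomplete prompt for a FIM vs. a non-FIM model.
# The <PRE>/<SUF>/<MID> markers are codellama's infill tokens;
# deepseek-r1 has no such tokens, so it only ever sees the prefix.

def fim_prompt(prefix, suffix):
    # FIM-capable model: the model generates the text "in the middle"
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

def plain_prompt(prefix):
    # Non-FIM model: plain continuation of the text before the cursor
    return prefix

print(fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))"))
print(plain_prompt("def add(a, b):\n    return "))
```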
NOTE: These Zeus scripts can also be used as templates, making it easy to create other types of AI prompts.
One set of these Zeus scripts runs the Ollama LLM locally, and using those scripts it is possible to run the DeepSeek model.
Here are the instructions on how to do this:
1. Download and install Ollama found here: https://ollama.com/download/windows
NOTE: The Ollama installer configures the Ollama application to autostart. If this is not desired, turn off autostart by first killing the Ollama server
using Task Manager, then using Windows search to find 'Startup Apps' and turning off the Ollama application.
2. Pip will need to be installed, and this can be done from inside Zeus by using the Tools, DOS Shell menu and running the following commands:
[b]a.[/b] Run this command to install pip:
Code: Select all
python.exe -m ensurepip
[b]b.[/b] Run this command to upgrade pip:
Code: Select all
python -m pip install --upgrade --force-reinstall --no-cache-dir pip
[b]c.[/b] Run this command to install the ollama Python package:
Code: Select all
pip install ollama
5. The model is installed using the Ollama pull command by specifying the model's name and version number.
So for example, this would be the command used to install the DeepSeek model:
Code: Select all
ollama pull deepseek-r1:7b
To change where Ollama stores its models, set the OLLAMA_MODELS environment variable before starting the server (note there must be no spaces around the '='):
Code: Select all
set OLLAMA_MODELS=c:\ollama\models
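As a quick, purely illustrative sanity check (this snippet is not part of Zeus), the variable can be read back from Python, falling back to Ollama's default Windows location when it is not set:

```python
import os

# Illustrative check: read the OLLAMA_MODELS override, falling back to
# Ollama's default location under the user profile when it is not set.
models_dir = os.environ.get("OLLAMA_MODELS",
                            os.path.expanduser(r"~\.ollama\models"))
print(models_dir)
```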
To verify the model was installed, list the locally available models:
Code: Select all
ollama list
In the Zeus script, change the line that sets the model name from the default:
Code: Select all
model='codellama:7b-instruct'):
to the DeepSeek model:
Code: Select all
model='deepseek-r1:7b'):
8. Start the local Ollama service:
Code: Select all
ollama serve
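Once running, the Ollama server answers HTTP requests on port 11434 by default. As an optional, illustrative check (the helper function below is an invented example, not part of Zeus), the server can be probed from Python before running the autocomplete scripts:

```python
import urllib.request

# Illustrative check, assuming Ollama's default port 11434:
# returns True when the local Ollama server answers, False otherwise.
def server_is_up(url="http://localhost:11434"):
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

print(server_is_up())
```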