Some users ask us whether it's possible to run Ollama on an external server or, in this case, on WSL (Windows Subsystem for Linux) and connect it to CodeGPT. First, I must say that the key to everything is ensuring the service is accessible from localhost; however, a few system settings must be configured to guarantee it.
This guide will help you set up an Ollama service in WSL and connect it with the CodeGPT extension in VSCode. Make sure to follow each step carefully to ensure the service is accessible from `localhost`.
First, set up Ollama in WSL:

1. Install WSL: open Command Prompt as Administrator and run `wsl --install`, then start WSL by running `wsl` in the Command Prompt.
2. Install Ollama: inside the WSL terminal, run `curl -fsSL https://ollama.com/install.sh | sh`.
3. Install GPU requirements (if necessary); the smoke test after this list is a quick way to confirm both the install and GPU visibility.
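Once the installer finishes, it's worth confirming that Ollama is actually serving inside WSL before touching any Windows networking. Here is a minimal smoke test; the `llama3` model tag and the NVIDIA GPU check are assumptions, so substitute your own model and hardware check as needed:

```bash
# Confirm the Ollama CLI is installed and the service answers on its default port.
ollama --version
curl http://127.0.0.1:11434   # should print "Ollama is running"

# Optionally pull a model and run a one-off prompt to confirm inference works.
ollama pull llama3
ollama run llama3 "Say hello in one sentence."

# If you installed GPU requirements, check the GPU is visible from WSL
# (NVIDIA example; assumes the Windows NVIDIA driver is installed).
nvidia-smi
```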
Next, make the service reachable from Windows:

1. Ensure the Ollama service is running on `localhost` (`127.0.0.1`) and port `11434`.
2. Configure port forwarding in Windows. Open Command Prompt as Administrator and run: `netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=11434 connectaddress=127.0.0.1`
3. Enable the rule in Windows Defender Firewall. Open Windows Defender Firewall and allow connections on port `11434` (a command-line equivalent is sketched below).
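If you prefer to stay in the terminal, a rough equivalent of the firewall step, plus a check that the port proxy took effect, might look like the following (run in an elevated Command Prompt; the rule name "Ollama 11434" is arbitrary):

```cmd
REM Allow inbound TCP connections on Ollama's port (equivalent to the GUI firewall step).
netsh advfirewall firewall add rule name="Ollama 11434" dir=in action=allow protocol=TCP localport=11434

REM List the active v4-to-v4 port proxies to confirm the forwarding rule exists.
netsh interface portproxy show v4tov4
```

Note that the proxy itself listens on port 80 of every interface (`listenaddress=0.0.0.0`) and forwards to Ollama on `127.0.0.1:11434`.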
Then verify from Windows: open a web browser and navigate to `http://127.0.0.1:11434` to confirm the service is accessible. You should see the message "Ollama is running".
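The browser check only proves the port answers; to exercise the API itself, you can send a generation request. Here is a sketch using Ollama's `/api/generate` endpoint, assuming the `llama3` model from the earlier smoke test is pulled:

```bash
# One-off, non-streaming completion against the local Ollama API.
curl http://127.0.0.1:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
```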
You should now also be able to test CodeGPT from other remote servers, as long as the machine where Ollama is running is reachable from them; with the port proxy above, that means connecting to the Windows host's IP address on port 80, which forwards to Ollama on `localhost:11434`.
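As a quick reachability check from another machine, hit the forwarded port directly. `WINDOWS_HOST_IP` below is a placeholder for your Windows machine's LAN IP (find it with `ipconfig`). Depending on your setup, you may also need a firewall rule allowing inbound connections on port 80, not just 11434, since that is the port the proxy listens on:

```bash
# From a remote machine: port 80 on the Windows host is forwarded by the
# netsh rule above to Ollama on 127.0.0.1:11434.
curl http://WINDOWS_HOST_IP:80   # should print "Ollama is running"
```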
By following these steps, you can connect an Ollama service running in WSL with the CodeGPT extension in VSCode, ensuring a smooth and efficient integration.