How to Set Up Ollama with Clawdbot (The Easy Way)

If you’re looking to run powerful AI models like Llama 3.2 or DeepSeek R1 locally on your machine while maintaining the security of a sandboxed environment, combining Ollama with Clawdbot is the ultimate power move.

Step 1: Get the Foundations Ready

First, ensure you have both Ollama and Clawdbot installed on your host machine. Once Ollama is running, you’ll want to pull your models (e.g., ollama pull llama3.2).
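Before moving on, it's worth confirming the Ollama server is actually up and serving. A quick sanity check (llama3.2 is just the example model from above; 11434 is Ollama's default port):

# Pull the model if you haven't already
ollama pull llama3.2

# Confirm it's installed and that the server responds
ollama list
curl http://localhost:11434/api/tags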

Step 2: Bridging the Sandbox

When running Clawdbot in a "docked" (sandboxed) state, it lives in its own world. To let it talk to your local Ollama instance, you need to point it at the host's network. Run this in your terminal:

clawdbot config set models.providers.ollama.baseUrl "http://host.docker.internal:11434/v1"
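Note that host.docker.internal resolves to your host machine from inside Docker Desktop containers; on plain Linux Docker it typically has to be mapped in with --add-host=host.docker.internal:host-gateway. If you run Clawdbot directly on the host rather than docked, http://localhost:11434/v1 is the address you want instead. Either way, you can check that the endpoint is reachable before going further; this sketch uses Ollama's OpenAI-compatible model listing:

curl http://host.docker.internal:11434/v1/models   # run from inside the sandbox
curl http://localhost:11434/v1/models              # run from the host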

Step 3: Opening the Gates

For security, Clawdbot doesn’t use models unless they are explicitly allowed. You’ll need to approve your local models using the CLI:

clawdbot models set ollama/llama3.2:latest
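The same pattern should work for any other model you've pulled. For example, to allow the DeepSeek R1 model mentioned at the start (assuming you pulled it under Ollama's deepseek-r1 tag; swap in whatever tag ollama list shows):

ollama pull deepseek-r1
clawdbot models set ollama/deepseek-r1:latest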

Step 4: The Final Restart

To apply all your changes, restart the gateway:

clawdbot gateway restart
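For a final end-to-end sanity check from outside Clawdbot, you can hit the same OpenAI-compatible endpoint the gateway was pointed at. This is a minimal sketch run from the host (so localhost replaces host.docker.internal), assuming llama3.2:latest is the model you approved:

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:latest", "messages": [{"role": "user", "content": "Say hello"}]}'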

Now you have a local, private, and secure AI assistant running right on your hardware. Happy coding!
