What is Ollama?
Ollama lets you run, manage, and interact with LLMs on your own machine. It supports a variety of modern models (e.g., Qwen2.5vl, Gemma3, DeepSeek, and Llama 4) and provides a simple HTTP API for integration.
Why Use Ollama with Droidrun?
- Privacy: Run LLMs locally without sending data to the cloud.
- Performance: Low-latency inference on your own hardware.
- Flexibility: Choose and switch between different models easily.
- Cost: No API usage fees.
Prerequisites
- Ollama installed and running on your machine (installation guide).
- Python 3.10+
- droidrun framework installed (see Droidrun Quickstart).
Make sure you’ve set up and enabled the Droidrun Portal.
1. Install and Start Ollama
Download and install Ollama from the official website. Once installed, start the Ollama server (it listens on `http://localhost:11434` by default):
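```bash
ollama serve
```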
2. Install Required Python Packages
Make sure you have the required Python packages.
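A minimal sketch, assuming Droidrun's `Ollama` class comes from LlamaIndex's Ollama integration (the package name `llama-index-llms-ollama` is an assumption; see the Droidrun Quickstart for the authoritative requirements):

```bash
pip install droidrun llama-index-llms-ollama
```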
3. Example: Using Droidrun with Ollama LLM
Here is a minimal example of using Droidrun with Ollama as the LLM backend (using a modern model, e.g., Qwen2.5vl):
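This is a sketch, not a definitive implementation: beyond `DroidAgent`, `Ollama`, the `model` parameter, and `context_window`, which this guide mentions, the import path, the `goal` argument, and the async `run()` call are assumptions; consult the DroidAgent documentation for the exact API.

```python
import asyncio

from llama_index.llms.ollama import Ollama
# Import path and constructor signature for DroidAgent are assumptions
# for this sketch; check the DroidAgent documentation for the exact API.
from droidrun import DroidAgent


async def main():
    # Point the LLM at the local Ollama server (its default endpoint).
    llm = Ollama(
        model="qwen2.5vl",                  # any model pulled via `ollama pull`
        base_url="http://localhost:11434",
        request_timeout=120.0,              # raise if responses are slow or proxied
        context_window=8192,                # bigger helps the agent, costs more RAM
    )

    # "goal" as the task-description argument is an illustrative name.
    agent = DroidAgent(goal="Open Settings and enable dark mode", llm=llm)
    result = await agent.run()
    print(result)


asyncio.run(main())
```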
Limiting the LLM's `context_window` reduces memory usage, but it also degrades the agent's performance. For the best results, extend it as far as your hardware allows.
4. Troubleshooting
- Ollama not running: Make sure `ollama serve` is running and accessible at `http://localhost:11434`.
- Model not found: Ensure you have pulled the desired model with `ollama pull <model>`.
- Connection errors: Check firewall settings and that the endpoint URL is correct.
- Timeout: If Ollama is running behind a proxy like Cloudflare, make sure the request timeout is configured high enough (see the sketch after this list).
- Performance: Some models require significant RAM/CPU. Try smaller models if you encounter issues.
- Compatibility: Vision models do not run correctly on Apple Silicon chips. See issue #55 (droidrun) and the related issue in the Ollama repo.
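For the timeout case above, the fix is usually client-side. A minimal sketch, assuming the LlamaIndex `Ollama` class from the example in step 3 (the value is illustrative):

```python
from llama_index.llms.ollama import Ollama

# Give slow models or proxied connections more time before the client gives up.
llm = Ollama(
    model="qwen2.5vl",
    base_url="http://localhost:11434",
    request_timeout=300.0,  # seconds; tune for your setup
)
```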
5. Tips
- You can switch models by changing the `model` parameter in the `Ollama` constructor (see the sketch after this list).
- Explore the models available locally via `ollama list`.
- For advanced configuration, see the DroidAgent documentation and the Ollama API docs.
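For example, swapping Qwen2.5vl for Gemma3 is a one-line change, assuming the same LlamaIndex `Ollama` class as above (pull the model first; tags must match what `ollama list` shows):

```python
from llama_index.llms.ollama import Ollama

# Only the model tag changes; the rest of the agent setup stays the same.
llm = Ollama(model="gemma3", base_url="http://localhost:11434")
```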
With this setup, you can harness the power of local, state-of-the-art LLMs for Android automation and agent-based workflows using Droidrun and Ollama!