# Using Droidrun with Ollama
This guide explains how to use the Droidrun framework with Ollama, an open-source platform for running large language models (LLMs) locally. By integrating Ollama with Droidrun, you can leverage powerful local LLMs to automate Android devices, build intelligent agents, and experiment with advanced workflows.
## What is Ollama?
Ollama lets you run, manage, and interact with LLMs on your own machine. It supports a variety of modern models (e.g., Qwen2.5vl, Gemma3, DeepSeek, Llama 4) and provides a simple HTTP API for integration.
## Why Use Ollama with Droidrun?
- Privacy: Run LLMs locally without sending data to the cloud.
- Performance: Low-latency inference on your own hardware.
- Flexibility: Choose and switch between different models easily.
- Cost: No API usage fees.
## Prerequisites
- Ollama installed and running on your machine (installation guide).
- Python 3.10+
- droidrun framework installed (see Droidrun Quickstart).
Make sure you’ve set up and enabled the Droidrun Portal.
## 1. Install and Start Ollama
Download and install Ollama from the official website. Once installed, start the Ollama server:
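```bash
ollama serve
```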
Pull a modern model you want to use (e.g., Qwen2.5vl, Gemma3, DeepSeek, Llama 4):
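```bash
# replace qwen2.5vl with any model tag you want to use
ollama pull qwen2.5vl
```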
## 2. Install Required Python Packages
Make sure you have the required Python packages. Assuming Droidrun uses the llama_index LLM interface (the `Ollama` constructor in the example below comes from the `llama-index-llms-ollama` package), install:
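```bash
pip install droidrun llama-index-llms-ollama
```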
## 3. Example: Using Droidrun with Ollama LLM
Here is a minimal example of using Droidrun with Ollama as the LLM backend (using a modern model, e.g., Qwen2.5vl). Treat it as a sketch: it assumes the `DroidAgent`/`AdbTools` API from the Droidrun quickstart and the `llama_index` Ollama integration, so adjust names to match your installed versions:
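```python
import asyncio

from droidrun import DroidAgent, AdbTools  # AdbTools per the Droidrun quickstart
from llama_index.llms.ollama import Ollama

async def main():
    # Load ADB tools for the first connected device
    # (requires the Droidrun Portal to be set up and enabled)
    tools = AdbTools()

    # Point the LLM at the local Ollama server; a generous request_timeout
    # helps with slow local inference (see Troubleshooting below)
    llm = Ollama(
        model="qwen2.5vl",
        base_url="http://localhost:11434",
        request_timeout=120.0,
    )

    # Create and run the agent with a natural-language goal
    agent = DroidAgent(
        goal="Open the Settings app and check the Android version",
        llm=llm,
        tools=tools,
    )
    result = await agent.run()
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```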
## 4. Troubleshooting
- Ollama not running: Make sure `ollama serve` is running and accessible at `http://localhost:11434`.
- Model not found: Ensure you have pulled the desired model with `ollama pull <model>`.
- Connection errors: Check firewall settings and confirm that the endpoint URL is correct.
- Timeouts: If Ollama is running behind a proxy such as Cloudflare, make sure the request timeout is configured high enough (see the snippet after this list).
- Performance: Some models require significant RAM/CPU. Try smaller models if you encounter issues.
- Compatibility: Vision models do not run correctly on Apple Silicon chips. See issue #55 (droidrun) and the related issue in the Ollama repository.
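The request timeout can be raised directly on the `Ollama` constructor; the value below is just an example:

```python
from llama_index.llms.ollama import Ollama

# Allow up to five minutes per request; tune to your hardware and proxy setup
llm = Ollama(
    model="qwen2.5vl",
    base_url="http://localhost:11434",
    request_timeout=300.0,
)
```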
## 5. Tips
- You can switch models by changing the `model` parameter in the `Ollama` constructor, as shown below.
- Explore the models available on your machine via `ollama list`.
- For advanced configuration, see the DroidAgent documentation and the Ollama API docs.
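For example, switching to another pulled model is a one-line change (the tag below is purely illustrative; use any tag reported by `ollama list`):

```python
from llama_index.llms.ollama import Ollama

# Any model tag reported by `ollama list` works here
llm = Ollama(model="gemma3", base_url="http://localhost:11434")
```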
With this setup, you can harness the power of local, state-of-the-art LLMs for Android automation and agent-based workflows using Droidrun and Ollama!