This guide explains how to use the Droidrun framework with Ollama, an open-source platform for running large language models (LLMs) locally. By integrating Ollama with Droidrun, you can leverage powerful local LLMs to automate Android devices, build intelligent agents, and experiment with advanced workflows.
Ollama lets you run, manage, and interact with LLMs on your own machine. It supports a variety of modern models (such as Qwen2.5vl, Gemma3, DeepSeek, and Llama 4) and provides a simple HTTP API for integration.
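As an illustration of that API, a single completion request looks like the following (assuming the server is running and the model has already been pulled, as described in the steps below; swap in whichever model you use):

```bash
# Ask a locally pulled model for a completion via Ollama's HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5vl",
  "prompt": "Summarize what an Android accessibility service does.",
  "stream": false
}'
```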
Before you begin, make sure you’ve set up and enabled the Droidrun Portal.
Download and install Ollama from the official website. Once installed, start the Ollama server:
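```bash
# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve
```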
Pull the model you want to use (e.g., Qwen2.5vl, Gemma3, DeepSeek, or Llama 4):
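For example (exact model tags may vary; check the Ollama model library for the current names):

```bash
# Pull a model from the Ollama registry (pick one)
ollama pull qwen2.5vl
# ollama pull gemma3
# ollama pull llama4
```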
Make sure you have the required Python packages:
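A minimal install might look like the following; depending on your Droidrun version, the Ollama LLM wrapper (assumed here to be llama-index's `llama-index-llms-ollama` package, as used in the example below) may already be installed as a dependency:

```bash
# Install Droidrun and, if needed, the Ollama LLM wrapper used in the example below
pip install droidrun llama-index-llms-ollama
```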
Here is a minimal example of using Droidrun with Ollama as the LLM backend (using a modern model, e.g., Qwen2.5vl):
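The sketch below assumes the agent API shown in Droidrun's quickstart (a `DroidAgent` driven by ADB tools) and llama-index's `Ollama` class as the LLM wrapper; adjust class and parameter names to match the version you have installed:

```python
import asyncio

from droidrun import DroidAgent, AdbTools          # assumed Droidrun quickstart API
from llama_index.llms.ollama import Ollama         # llama-index Ollama wrapper


async def main():
    # Point the LLM wrapper at the local Ollama server and the model you pulled
    llm = Ollama(
        model="qwen2.5vl",
        base_url="http://localhost:11434",
        request_timeout=120.0,
    )

    # Tools that let the agent control a connected Android device over ADB
    tools = AdbTools()

    # The agent plans and executes steps on the device to achieve the goal
    agent = DroidAgent(
        goal="Open the Settings app and report the battery level",
        llm=llm,
        tools=tools,
    )

    result = await agent.run()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```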
If something doesn’t work, a few things to check:

- Make sure the Ollama server (`ollama serve`) is running and accessible at `http://localhost:11434`.
- Make sure you have pulled the model you want with `ollama pull <model>`.
- The model name must match the `model` parameter passed to the `Ollama` constructor.
- You can list the models available locally with `ollama list`.

With this setup, you can harness the power of local, state-of-the-art LLMs for Android automation and agent-based workflows using Droidrun and Ollama!