This guide explains how to use the Droidrun framework with Ollama, an open-source platform for running large language models (LLMs) locally. By integrating Ollama with Droidrun, you can leverage powerful local LLMs to automate Android devices, build intelligent agents, and experiment with advanced workflows.

What is Ollama?

Ollama lets you run, manage, and interact with LLMs on your own machine. It supports a variety of modern models (e.g., Qwen2.5vl, Gemma3, DeepSeek, Llama 4) and provides a simple HTTP API for integration.
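
As a quick illustration of that HTTP API, here is a minimal sketch that sends a single non-streaming prompt to a locally running Ollama server. It assumes the default endpoint at http://localhost:11434 and that a model such as qwen2.5vl has already been pulled (see step 1 below):

import json
import urllib.request

# Send a single, non-streaming prompt to the local Ollama server.
payload = json.dumps({
    "model": "qwen2.5vl",       # any model you have pulled locally
    "prompt": "Say hello in one short sentence.",
    "stream": False,
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

print(result["response"])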

Why Use Ollama with Droidrun?

  • Privacy: Run LLMs locally without sending data to the cloud.
  • Performance: Low-latency inference on your own hardware.
  • Flexibility: Choose and switch between different models easily.
  • Cost: No API usage fees.

Prerequisites

Make sure you’ve set up and enabled the Droidrun Portal.

1. Install and Start Ollama

Download and install Ollama from the official website. Once installed, start the Ollama server:

ollama serve

Pull a modern model you want to use (e.g., Qwen2.5vl, Gemma3, DeepSeek, Llama 4):

ollama pull qwen2.5vl
ollama pull gemma3
ollama pull deepseek-r1 # no vision capabilities; can only be used with vision disabled (vision=False)
ollama pull llama4
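
To confirm the server is reachable and the models were pulled, run ollama list, or query the tags endpoint programmatically. A small sketch, assuming the default endpoint:

import json
import urllib.request

# List the models the local Ollama server currently has available.
with urllib.request.urlopen("http://localhost:11434/api/tags") as response:
    models = [m["name"] for m in json.loads(response.read())["models"]]

print(models)  # e.g. ['qwen2.5vl:latest', 'gemma3:latest']

if not any(name.startswith("qwen2.5vl") for name in models):
    print("qwen2.5vl is missing - run: ollama pull qwen2.5vl")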

2. Install Required Python Packages

Make sure you have the required Python packages:

pip install droidrun llama-index-llms-ollama
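
A quick way to confirm the installation is to import both packages, for example:

# Quick smoke test: both imports should succeed after installation.
from llama_index.llms.ollama import Ollama
from droidrun import DroidAgent, AdbTools

print("droidrun and llama-index-llms-ollama are installed")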

3. Example: Using Droidrun with Ollama LLM

Here is a minimal example of using Droidrun with Ollama as the LLM backend (using a modern model, e.g., Qwen2.5vl):

import asyncio
from llama_index.llms.ollama import Ollama
from droidrun import DroidAgent, AdbTools

async def main():
    # Load ADB tools for the first connected device
    tools = await AdbTools.create()

    # Set up the Ollama LLM with a modern model
    llm = Ollama(
        model="qwen2.5vl",  # or "gemma3", "deepseek", "llama4", etc.
        base_url="http://localhost:11434"  # default Ollama endpoint
    )

    # Create the DroidAgent
    agent = DroidAgent(
        goal="Open Settings and check battery level",
        llm=llm,
        tools=tools,
        vision=True,          # Optional: enable vision; use vision=False for DeepSeek models
        reasoning=True,       # Optional: enable planning/reasoning; see Core-Concepts/Agent for details on agent configuration
    )

    # Run the agent
    result = await agent.run()
    print(f"Success: {result['success']}")
    if result.get('output'):
        print(f"Output: {result['output']}")

if __name__ == "__main__":
    asyncio.run(main())
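
If you prefer a text-only model such as deepseek-r1, the same script works once vision is disabled. A minimal variant of the relevant lines inside main(), assuming deepseek-r1 has been pulled:

    # deepseek-r1 has no vision capabilities, so the agent must run without screenshots
    llm = Ollama(
        model="deepseek-r1",
        base_url="http://localhost:11434"
    )

    agent = DroidAgent(
        goal="Open Settings and check battery level",
        llm=llm,
        tools=tools,
        vision=False,       # required for text-only models
        reasoning=True,
    )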

4. Troubleshooting

  • Ollama not running: Make sure ollama serve is running and accessible at http://localhost:11434.
  • Model not found: Ensure you have pulled the desired model with ollama pull <model>.
  • Connection errors: Check firewall settings and that the endpoint URL is correct.
  • Timeout: If Ollama is running behind a proxy like Cloudflare, make sure the request timeout is configured high enough (see the sketch after this list).
  • Performance: Some models require significant RAM/CPU. Try smaller models if you encounter issues.
  • Compatibility: Vision models do not run correctly on Apple Silicon chips. See issue #55 in the droidrun repository and the related issue in the Ollama repository.
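
For the timeout case, one option is to raise the client-side timeout when constructing the LLM; the llama_index Ollama client accepts a request_timeout value in seconds. A sketch, with the value chosen arbitrarily:

llm = Ollama(
    model="qwen2.5vl",
    base_url="http://localhost:11434",
    request_timeout=300.0,  # seconds; raise this for slow models or proxied endpoints
)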

5. Tips

  • You can switch models by changing the model parameter in the Ollama constructor (see the sketch after this list).
  • Explore different models available via ollama list.
  • For advanced configuration, see the DroidAgent documentation and Ollama API docs.
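
For example, a small sketch that reads the model name from an environment variable (OLLAMA_MODEL is just an illustrative name, not something Droidrun reads itself):

import os
from llama_index.llms.ollama import Ollama

# Choose the model at runtime; defaults to qwen2.5vl if the variable is unset.
model_name = os.environ.get("OLLAMA_MODEL", "qwen2.5vl")
llm = Ollama(model=model_name, base_url="http://localhost:11434")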

With this setup, you can harness the power of local, state-of-the-art LLMs for Android automation and agent-based workflows using Droidrun and Ollama!
