This guide explains how to use the Droidrun framework with OpenAI-compatible APIs (OpenAI, Azure OpenAI, OpenRouter, LM Studio, and other OpenAI-like endpoints). By integrating these LLMs with Droidrun, you can automate Android devices, build intelligent agents, and experiment with advanced workflows against any OpenAI-like endpoint.
Here is a minimal example of using Droidrun with an OpenAI-compatible LLM backend:
```python
import asyncio

from llama_index.llms.openai_like import OpenAILike
from droidrun import DroidAgent, AdbTools


async def main():
    # Load adb tools for the first connected device
    tools = AdbTools()

    # Set up the OpenAI-like LLM (uses env vars for API key and base by default)
    llm = OpenAILike(
        model="gpt-3.5-turbo",  # or "gpt-4o", "gpt-4", etc.
        api_base="http://localhost:1234/v1",  # for local endpoints
        is_chat_model=True,  # droidrun requires chat model support
        api_key="YOUR API KEY",
    )

    # Create the DroidAgent
    agent = DroidAgent(
        goal="Open Settings and check battery level",
        llm=llm,
        tools=tools,
        vision=False,      # set to True if your model supports vision
        reasoning=False,   # optional: enable planning/reasoning
    )

    # Run the agent
    result = await agent.run()
    print(f"Success: {result['success']}")
    if result.get("output"):
        print(f"Output: {result['output']}")


if __name__ == "__main__":
    asyncio.run(main())
```
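The example above passes `api_key` and `api_base` to the constructor directly. As the comment in the code notes, `OpenAILike` can also pick these up from environment variables, which keeps secrets out of your source. A minimal sketch, assuming the conventional `OPENAI_API_KEY` / `OPENAI_API_BASE` variable names read by llama_index's OpenAI-style classes (the exact fallback behavior depends on your llama_index version):

```python
import os

# Hypothetical configuration via environment variables instead of
# constructor arguments. Set these before constructing
# OpenAILike(model=..., is_chat_model=True):
os.environ["OPENAI_API_KEY"] = "sk-placeholder"               # your real key here
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"    # e.g. an LM Studio endpoint

# Sanity-check the endpoint looks like an OpenAI-compatible /v1 base URL
assert os.environ["OPENAI_API_BASE"].endswith("/v1")
```

In practice you would export these variables in your shell or CI environment rather than setting them in code.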