An LLM agent that lives on the mesh
meshclaw is an autonomous LLM agent that runs as a node on a MeshCore LoRa mesh network. It answers direct messages, tracks contacts, and uses mesh-native tools over radio.
What is meshclaw?
At its core, meshclaw is a full agent runtime: an iterative tool-calling loop, plan management, background sub-agents, and persistent conversation context with auto-summarization. Right now the agent handles direct messages only — channel and room support is planned for later.
LoRa constraints shape the design: non-streaming LLM calls, careful chunking onto 160-byte packets, and inter-packet delays so the network can breathe.
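The chunking-plus-delay idea can be sketched as below. This is a minimal illustration, not meshclaw's actual implementation: the 160-byte limit comes from the text above, while the delay value, function names, and whitespace-aware splitting are assumptions.

```python
import asyncio

PACKET_SIZE = 160          # per-packet payload limit described above
INTER_PACKET_DELAY = 0.1   # seconds; illustrative value, not meshclaw's real setting

def chunk_message(text: str, size: int = PACKET_SIZE) -> list[str]:
    """Split a reply into packet-sized chunks, preferring whitespace boundaries."""
    chunks = []
    while text:
        if len(text) <= size:
            chunks.append(text)
            break
        cut = text.rfind(" ", 0, size)
        if cut <= 0:          # no space found: hard-cut at the limit
            cut = size
        chunks.append(text[:cut].rstrip())
        text = text[cut:].lstrip()
    return chunks

async def send_chunked(send, text: str) -> None:
    """Send each chunk, pausing between packets so the network can breathe."""
    chunks = chunk_message(text)
    for i, chunk in enumerate(chunks):
        await send(chunk)
        if i < len(chunks) - 1:
            await asyncio.sleep(INTER_PACKET_DELAY)
```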
Features
Direct messages
Every incoming DM is processed — no activation phrase needed. The agent always responds.
Mesh-native tools
Built-in tools let the agent query mesh status, list contacts, ping nodes, and more.
Agent runtime
Iterative tool calling (up to 30 steps), plans, background sub-agents, and scheduled tasks.
Persistent context
Conversation history stored in PostgreSQL, auto-summarized when context grows. Messages expire after 6 hours.
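The six-hour expiry can be expressed as a simple TTL filter over stored rows. A sketch under assumed names (meshclaw's real query presumably runs in SQL against PostgreSQL rather than in Python):

```python
from datetime import datetime, timedelta, timezone

MESSAGE_TTL = timedelta(hours=6)  # matches the expiry window described above

def live_messages(rows, now=None):
    """Keep only rows younger than the TTL; rows are (timestamp, text) pairs."""
    now = now or datetime.now(timezone.utc)
    return [(ts, text) for ts, text in rows if now - ts < MESSAGE_TTL]
```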
Any OpenAI-compatible LLM
Works with OpenAI, OpenRouter, local models, or any API that speaks the OpenAI protocol.
MCP integration
Extensible via Model Context Protocol servers. Tool access is filtered per user role.
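Per-role filtering can be sketched as a whitelist lookup. The role-to-tool mapping below is invented for illustration; meshclaw's real policy lives in its MCP integration:

```python
# Hypothetical role → allowed-tool mapping (not meshclaw's actual policy).
ROLE_TOOLS = {
    "admin": {"GetMeshStatus", "ListContacts", "PingContact", "SendMeshMessage"},
    "user": {"GetMeshStatus", "ListContacts"},
}

def filter_tools(tools: list, role: str) -> list:
    """Return only the tool definitions the given role may call."""
    allowed = ROLE_TOOLS.get(role, set())
    return [t for t in tools if t["name"] in allowed]
```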
Architecture
The system is event-driven and layered. Each layer has a single clear responsibility:
- LoRa radio → MeshTransport: Wraps the MeshCore Python library. Subscribes to contact-message events; supports serial, TCP, and BLE with auto-reconnect.
- MeshTransport → MeshMessageProcessor: Central coordinator. Manages one active asyncio task per conversation and registers mesh and MCP tools.
- MeshMessageProcessor → AgentRuntime: Transport-agnostic iterative agent loop (up to 30 iterations). Drains background notifications, calls the LLM, executes tool calls, and loops.
- AgentRuntime → MeshRuntimeAdapter: Turns runtime events into LoRa side effects: chunking, inter-packet delays, and the 160-byte-per-packet limit.
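The AgentRuntime loop described above can be sketched as follows. This is a minimal outline under assumed message shapes (the `call_llm`/`execute_tool` callables and dict fields are placeholders, not meshclaw's actual interfaces); the 30-iteration cap comes from the text:

```python
MAX_ITERATIONS = 30  # cap noted in the architecture description above

def run_agent(call_llm, execute_tool, messages: list) -> str:
    """Iterative tool-calling loop: call the LLM, execute any requested
    tools, append results, repeat until a plain reply or the cap is hit."""
    for _ in range(MAX_ITERATIONS):
        reply = call_llm(messages)
        if not reply.get("tool_calls"):
            return reply["content"]
        for call in reply["tool_calls"]:
            result = execute_tool(call["name"], call.get("arguments", {}))
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return "Stopped after reaching the iteration limit."
```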
Mesh tools
Tools the LLM can call to interact with the mesh. The registry lives in tools/registry.py.
| Tool | Description | Status |
|---|---|---|
| GetMeshStatus | Node count, own identity, channel utilization. | active |
| GetContactInfo | Full details about a specific contact. | planned |
| ListContacts | Formatted list of all known contacts. | planned |
| PingContact | Check whether a contact is reachable (with RTT). | planned |
| SendMeshMessage | Send a message to a specific contact. | disabled |
| GetPosition | Last known GPS position of a contact. | disabled |
| TraceRoute | Trace the route to a destination contact. | disabled |
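A registry like this is commonly a name-to-definition map that exposes only active tools to the LLM. The sketch below is an assumption about the shape of tools/registry.py, not its actual contents; tool names and statuses come from the table above, and the handlers are dummies:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]
    status: str = "active"  # active | planned | disabled, as in the table above

REGISTRY: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

def callable_tools() -> list[Tool]:
    """Only active tools are exposed to the LLM."""
    return [t for t in REGISTRY.values() if t.status == "active"]

# Dummy handlers for illustration only.
register(Tool("GetMeshStatus", "Node count, own identity, channel utilization.",
              handler=lambda: "nodes=12 util=3%"))
register(Tool("PingContact", "Check whether a contact is reachable (with RTT).",
              handler=lambda contact: "rtt=850ms", status="planned"))
```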
Quick Start
1. Install dependencies
```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
2. Start PostgreSQL
```shell
docker compose up -d postgres
```

This starts a PostgreSQL 15 instance and runs the schema migration automatically.
3. Configure settings_local.py
```python
POSTGRES_HOST = 'localhost'
MODEL_NAME = 'gpt-4o'
MODEL_API_KEY = 'sk-...'
MODEL_BASE_URL = 'https://api.openai.com/v1'
```
4. Run meshclaw
Local development against a MeshCore device (serial, TCP, or BLE):
```shell
./run_local.sh
./run_local.sh --tcp 192.168.1.100:5000
./run_local.sh --serial /dev/ttyUSB0
./run_local.sh --ble
```
Or the full stack via Docker:
```shell
docker compose build
docker compose up -d
docker compose logs -f meshclaw
```
Requirements
- Python 3.11+
- PostgreSQL 15+
- MeshCore device connected via USB serial, TCP/IP, or BLE
- OpenAI-compatible API key