Mesh Intelligence: When Your Sensors Form a Distributed Nervous System
Each sensor has its own brain. They share neural states over a 24-byte mesh protocol. Together, they classify what no single node can — with zero cloud dependency.
The Problem with Centralized Sensor Networks
Traditional IoT architectures follow a simple pattern: sensors collect data, send it to a central server (or cloud), and the server runs the AI model. This works, but it introduces three fundamental weaknesses:
- Single point of failure — if the server goes down, every sensor becomes blind
- Latency — round-trip to the cloud takes 50-200ms, too slow for real-time control
- Bandwidth costs — streaming raw sensor data is expensive at scale
We asked ourselves: what if every sensor had its own brain, and the brains could talk to each other?
The Idea: A Distributed Nervous System
In biology, intelligence does not live in a single neuron. It emerges from networks of neurons that share signals. A worm with just 302 neurons (the roundworm C. elegans) can navigate, find food, and avoid danger — not because any single neuron is smart, but because they collaborate.
We applied the same principle to sensor networks. Each sensor node runs its own neural network locally, on a microcontroller costing as little as 2 EUR. The nodes don’t share raw data — they share a compact subset of their neural states. This is the key insight: you don’t need to transmit sensor readings. You transmit what the brain thinks.
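Sharing "what the brain thinks" in integer form implies a quantization step on each node. The sketch below assumes an int16 Q8.8 fixed-point encoding (scale factor 256), which also matches the 16-byte payload (8 states × 2 bytes); the format and function names are illustrative, not taken from the actual implementation.

```c
/* Sketch: quantizing 8 neural activations for the mesh payload.
   The Q8.8 fixed-point format (int16, scale 256, range ~[-128, 128))
   is an assumption for illustration. */
#include <stdint.h>

#define STATE_COUNT 8
#define FIXED_SCALE 256.0f   /* hypothetical Q8.8 scale */

/* Convert float activations to saturating int16 fixed point. */
void quantize_states(const float *act, int16_t *out) {
    for (int i = 0; i < STATE_COUNT; i++) {
        float v = act[i] * FIXED_SCALE;
        if (v > 32767.0f)  v = 32767.0f;   /* saturate high */
        if (v < -32768.0f) v = -32768.0f;  /* saturate low  */
        out[i] = (int16_t)v;
    }
}

/* Recover an approximate float activation on the receiving node. */
float dequantize(int16_t q) { return (float)q / FIXED_SCALE; }
```

The receiving node dequantizes the 8 values and feeds them into its own network alongside its local features.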
The Protocol: 24 Bytes Per Message
Each node shares 8 neural state values with its neighbors. The message format is minimal:
- Header: 8 bytes (magic, source ID, destination ID, sequence number, state count)
- Payload: 16 bytes (8 states in integer arithmetic)
- Total: 24 bytes per message
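A packed C struct makes the 24-byte budget concrete. Only the section sizes (8-byte header, 16-byte payload) come from the protocol description above; the individual field widths (16-bit magic, 8-bit node IDs, 16-bit sequence number, reserved padding byte) are assumptions chosen to fill the stated 8-byte header.

```c
/* Sketch of the 24-byte message layout. Field widths are assumptions;
   only the 8-byte header / 16-byte payload split is given. */
#include <stdint.h>
#include <stddef.h>

#pragma pack(push, 1)
typedef struct {
    uint16_t magic;       /* protocol identifier                  */
    uint8_t  src_id;      /* source node ID                       */
    uint8_t  dst_id;      /* destination node ID                  */
    uint16_t seq;         /* sequence number for loss detection   */
    uint8_t  state_count; /* number of states in payload (8)      */
    uint8_t  reserved;    /* pads the header to 8 bytes           */
    int16_t  states[8];   /* 8 quantized neural states (16 bytes) */
} mesh_msg_t;
#pragma pack(pop)

_Static_assert(sizeof(mesh_msg_t) == 24, "message must be 24 bytes");
_Static_assert(offsetof(mesh_msg_t, states) == 8, "header must be 8 bytes");
```

With this layout the struct can be memcpy'd straight into a radio frame, since every field is naturally aligned even without the pack pragma.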
For comparison, sending a single MQTT message with raw sensor data typically requires 100-500 bytes. Our messages are roughly 4-20x smaller on the wire, and they carry processed intelligence rather than raw numbers.
The protocol is transport-agnostic: it works over ESP-NOW (WiFi direct, no router needed), BLE mesh, or even simple UART/SPI between co-located chips.
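One common way to achieve this transport independence is to hide the link behind a function pointer, so the node logic never calls ESP-NOW, BLE, or UART APIs directly. The sketch below is hypothetical (none of these names come from the actual implementation) and uses an in-memory loopback backend so it can run on a host machine.

```c
/* Sketch: transport abstraction via a function pointer. Illustrative
   only -- the real backends would wrap ESP-NOW, BLE mesh, or UART. */
#include <stdint.h>
#include <string.h>

typedef int (*transport_send_fn)(const uint8_t *buf, uint16_t len);

/* Example backend: a loopback "transport" for host-side testing. */
static uint8_t  last_frame[64];
static uint16_t last_len;
static int loopback_send(const uint8_t *buf, uint16_t len) {
    memcpy(last_frame, buf, len);
    last_len = len;
    return 0;  /* 0 = success, mirroring typical driver APIs */
}

/* Node-side code depends only on the abstract send function. */
int mesh_broadcast(transport_send_fn send, const uint8_t *msg, uint16_t len) {
    return send(msg, len);
}
```

Swapping transports then means swapping the function pointer, not rewriting node logic.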
The Benchmark: Environmental Monitoring
We tested with a realistic scenario: two sensor nodes monitoring environmental conditions.
- Node A: temperature sensor (3 derived features)
- Node B: humidity sensor (3 derived features)
- Task: classify into 4 conditions — Normal, Frost Risk, Drought Risk, Mold Risk
The key challenge: no single sensor has enough information to classify correctly. Frost risk requires both low temperature AND high humidity. Mold risk requires moderate temperature AND very high humidity. You need both sensors working together.
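To make that cross-sensor dependency concrete, here is a toy rule-based version of the four-class decision. The thresholds are invented for illustration (the post does not publish decision boundaries); the point is that every non-Normal class tests both temperature and humidity.

```c
/* Toy classifier showing why one sensor is not enough. All threshold
   values below are made up for illustration. */
enum { NORMAL, FROST_RISK, DROUGHT_RISK, MOLD_RISK };

int classify(float temp_c, float humidity_pct) {
    if (temp_c < 2.0f && humidity_pct > 80.0f)
        return FROST_RISK;    /* low temperature AND high humidity      */
    if (temp_c > 10.0f && humidity_pct < 25.0f)
        return DROUGHT_RISK;  /* warm AND very dry                      */
    if (temp_c > 15.0f && temp_c < 25.0f && humidity_pct > 90.0f)
        return MOLD_RISK;     /* moderate temperature AND very humid    */
    return NORMAL;
}
```

Note that a node seeing only humidity = 92% cannot tell frost risk from mold risk; the two classes differ only in the temperature reading held by the other node.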
Results
| Configuration | Accuracy |
| --- | --- |
| Node A alone (temperature only) | 78.3% |
| Node B alone (humidity only) | 81.7% |
| Average solo performance | 80.0% |
| Mesh (2 nodes collaborating) | 100.0% |
| Centralized oracle (all features) | 100.0% |
The mesh achieves the same accuracy as a centralized model, while being fully distributed, fault-tolerant, and cloud-free.
Scaling: More Nodes, More Advantage
We tested with 2, 3, and 4 nodes, distributing 8 total features across them:
- 2 nodes: 99.2% mesh vs 88.8% solo (+10.4 points)
- 3 nodes: 100.0% mesh vs 81.1% solo (+18.9 points)
- 4 nodes: 97.5% mesh vs 72.3% solo (+25.2 points)
The pattern is clear: the less information each node has individually, the greater the advantage of mesh collaboration. This scales naturally to real-world deployments where dozens of sensors need to work together.
Why This Matters
No existing platform offers distributed neural intelligence on commodity microcontrollers. Edge Impulse, TFLite Micro, and STM32Cube.AI all deploy models to individual devices — they do not enable devices to collaborate neurally.
Mesh intelligence opens entirely new use cases:
- Smart agriculture: distributed soil/weather sensors that collectively decide irrigation
- Industrial monitoring: vibration, temperature, and current sensors on a machine that collectively detect faults
- Smart buildings: HVAC, occupancy, and air quality sensors that self-regulate without a central controller
What’s Next
We are currently porting the mesh protocol to ESP32 hardware using ESP-NOW for peer-to-peer communication. The first hardware demo is expected in Q2 2026.
If you are building a distributed sensor system and want your sensors to be smarter together, get in touch.