Every number on this page comes from reproducible benchmarks on public, industry-standard datasets, including MNIST, MIT-BIH ECG, and Speech Commands.
Tested on 4 standard scientific datasets (Iris, Wine, Breast Cancer, Digits) with a 70/30 train/test split. All models trained from scratch, no pre-training.
| Dataset | Luviner V3 | MLP float64 | Decision Tree | Random Forest |
|---|---|---|---|---|
| Iris (150 / 4 feat / 3 cls) | 100.0% | 73.3% | 100.0% | 100.0% |
| Wine (178 / 13 feat / 3 cls) | 98.2% | 92.6% | 96.3% | 100.0% |
| Breast Cancer (569 / 30 feat / 2 cls) | 97.7% | 91.8% | 94.2% | 97.1% |
| Digits (1797 / 64 feat / 10 cls) | 97.0% | 95.6% | 85.7% | 97.4% |
| Average | 98.2% | 88.3% | 94.0% | 98.6% |
| Metric | Luviner V3 | MLP float64 | Decision Tree | Random Forest |
|---|---|---|---|---|
| Flash Size | ~14 KB | ~82 KB | ~2.4 KB | ~175 KB |
| MCU-Ready | Yes | No | Yes | No |
Industrial machine state detection: 6 vibration sensors, 4 states (Normal, Warming, Stressed, Failing). The challenge: Warming and Normal look identical in a single reading — only temporal context reveals the difference.
| Method | Accuracy | RAM (floats) | Feature Eng. |
|---|---|---|---|
| Luviner Streaming | 88.5% | 102 | None |
| Windowed Features | 83.8% | 60 | Manual (4 stats) |
| Stateless | 83.7% | 6 | None |
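The temporal context that separates Warming from Normal can be carried in a tiny recurrent state. Below is a minimal sketch, assuming a leaky-integrator (exponential moving average) state in Q8 fixed point; the names and the decay constant are illustrative, not Luviner's actual update rule:

```c
#include <stdint.h>

#define N_SENSORS 6

/* Hypothetical streaming context: one running state per vibration
 * sensor, updated in integer math (no floats, matching the firmware
 * constraints elsewhere on this page). */
typedef struct {
    int16_t state[N_SENSORS];  /* temporal context, one per sensor */
} stream_ctx_t;

/* Blend a new reading into the state: state += alpha * (x - state),
 * with alpha = 64/256 = 0.25 in Q8 fixed point. */
static void stream_update(stream_ctx_t *ctx, const int16_t x[N_SENSORS])
{
    const int16_t alpha_q = 64;
    for (int i = 0; i < N_SENSORS; i++) {
        int32_t diff = (int32_t)x[i] - ctx->state[i];
        ctx->state[i] += (int16_t)((diff * alpha_q) >> 8);
    }
}
```

A stateless classifier sees only the instantaneous reading; feeding it the slowly-moving `state` alongside the raw values is what lets it tell a warming machine from a normal one.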
Industry-standard benchmark: 7,352 training + 2,947 test samples from 30 volunteers, 561 pre-computed features, 6 activities. The official train/test split is used, allowing direct comparison with published results.
| Method | Accuracy | Flash (MCU) | MCU-Ready |
|---|---|---|---|
| Luviner EdgeV3 | 95.0% | 137 KB | Yes |
| SVM (Anguita 2013) | 96.0% | ~500 KB+ | No |
| CNN 1D + TFLite | ~94-95% | ~300 KB+ | Partial |
| LSTM | ~93-95% | ~1 MB+ | No |
A harder scenario: multi-activity sequences with transitions, classifying every single timestep in a continuous stream. Raw 9-channel accelerometer/gyroscope data, no pre-computed features. This is the real-world smartwatch scenario.
| Method | Accuracy | RAM (floats) | Feature Eng. |
|---|---|---|---|
| Full-Sequence | 79.6% | 105 | None |
| Luviner Streaming | 75.5% | 105 | None |
| Stateless | 68.6% | 9 | None |
| Windowed Features | 30.6% | 324 | Manual (4 stats) |
Models can improve on-device without cloud connectivity. Few-shot adaptation and model compression enable real-world deployment where conditions change.
Deployed model adapts to new conditions with ~50 samples, directly on the MCU. No retraining from scratch, no cloud needed.
Transfer knowledge from a large model to a tiny one. The small model reaches accuracy that direct training at the same size does not.
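The blended-target form of knowledge distillation can be sketched in a few lines. The function name, the blend weight `alpha`, and the target shape are illustrative assumptions, not Luviner's actual recipe; this runs at training time, so floating point is fine here:

```c
/* Hypothetical distillation targets: the small model is trained toward
 * a mix of the hard label and the large model's softened output
 * distribution, so it inherits the teacher's inter-class structure. */
static void distill_targets(const float *teacher_soft, int hard_label,
                            int n_classes, float alpha, float *out)
{
    for (int c = 0; c < n_classes; c++) {
        float hard = (c == hard_label) ? 1.0f : 0.0f;
        out[c] = alpha * hard + (1.0f - alpha) * teacher_soft[c];
    }
}
```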
Benchmark on tasks that matter for embedded deployment: voice commands, industrial monitoring, gesture recognition, and cardiac monitoring.
Always-on voice command detection from MFCC features
Predictive maintenance on industrial vibration sensors
IMU-based gesture classification for wearables
Cardiac rhythm classification for health devices
Integer-only quantization with zero runtime dependencies. The entire model fits in a few KB of flash — no framework, no runtime, no allocations.
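To make "integer-only, zero dependencies" concrete, here is a minimal sketch of one quantized dense unit: int8 inputs and weights, int32 accumulation, then requantization via a fixed-point multiplier and shift. The function name and the multiplier/shift scheme are assumptions for illustration, not Luviner's exported code:

```c
#include <stdint.h>

/* Hypothetical integer-only dense unit. No floats, no malloc, no
 * framework calls — the whole thing is plain C arithmetic. */
static int8_t dense_int8(const int8_t *x, const int8_t *w, int n,
                         int32_t bias, int32_t mult, int shift)
{
    int32_t acc = bias;
    for (int i = 0; i < n; i++)
        acc += (int32_t)x[i] * w[i];
    /* Requantize: (acc * mult) >> shift, then clamp to int8 range. */
    int64_t r = ((int64_t)acc * mult) >> shift;
    if (r > 127) r = 127;
    if (r < -128) r = -128;
    return (int8_t)r;
}
```

Because weights are compile-time constants, the entire model becomes a handful of arrays and loops in flash, with no heap and no runtime.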
Beyond toy datasets. Luviner is benchmarked on industry-standard datasets with tens of thousands of samples — the same benchmarks used to evaluate production ML systems.
The universal ML benchmark. 70,000 handwritten digits, 784 features. The standard test for any classification system.
Real cardiac arrhythmia detection. 109,000 heartbeats from PhysioNet, 5 clinical classes. Directly relevant to our medical use case.
Keyword spotting with MFCC features. 10 keywords, always-on detection. Directly relevant to our voice-command use case.
AutoML for MCU: specify your hardware constraints (Flash, RAM, chip model) and Luviner automatically searches for the neural network architecture that maximizes accuracy while fitting within your memory budget. No ML expertise needed.
Train your model on normal operation only — no labeled fault data needed. The model learns what "normal" looks like and flags anything that deviates. Perfect for predictive maintenance where collecting failure examples is expensive or impossible.
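One common way to realize normal-only training on an MCU is a distance-to-normal score with a threshold calibrated on the training data. A hypothetical integer-only sketch (Luviner's actual detector may differ):

```c
#include <stdint.h>

/* Hypothetical anomaly check: flag a reading whose squared distance
 * from the learned "normal" centroid exceeds a calibrated threshold.
 * Only normal data was needed to fit centroid and threshold. */
static int is_anomaly(const int16_t *x, const int16_t *centroid,
                      int n, int32_t threshold)
{
    int32_t d2 = 0;
    for (int i = 0; i < n; i++) {
        int32_t d = (int32_t)x[i] - centroid[i];
        d2 += d * d;
    }
    return d2 > threshold;  /* 1 = deviates from normal */
}
```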
The model monitors incoming data distribution and signals when it no longer matches the training data. No cloud connection, no forward pass — runs directly on raw sensor inputs with just 24 bytes of extra RAM.
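A drift monitor of this kind can be sketched as a fixed-point moving average of the inputs compared against training-time statistics. Everything below (struct layout, Q8 scaling, the 1/16 smoothing factor) is illustrative; only the idea of a few bytes of state and no forward pass comes from the description above:

```c
#include <stdint.h>

/* Hypothetical drift monitor. This state is 12 bytes; the 24-byte
 * figure quoted above refers to Luviner's monitor, not this sketch. */
typedef struct {
    int32_t ema_q8;    /* running mean of inputs, Q8 fixed point */
    int32_t train_q8;  /* mean of the training distribution, Q8 */
    int32_t limit_q8;  /* allowed deviation before flagging drift */
} drift_mon_t;

static int drift_step(drift_mon_t *m, int16_t x)
{
    int32_t x_q8 = (int32_t)x << 8;
    m->ema_q8 += (x_q8 - m->ema_q8) >> 4;  /* smoothing = 1/16 */
    int32_t dev = m->ema_q8 - m->train_q8;
    if (dev < 0) dev = -dev;
    return dev > m->limit_q8;              /* 1 = drift detected */
}
```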
Multiple sensor nodes share neural states via a lightweight mesh protocol. Each node has its own brain — together they classify what no single node can. No cloud, no central server, no single point of failure.
| Configuration | Accuracy | vs Solo | Message Size |
|---|---|---|---|
| 2 nodes (3+3 features) | 100.0% | +20.0% | 24 bytes |
| 3 nodes (3+3+2 features) | 100.0% | +18.9% | 24 bytes |
| 4 nodes (2+2+2+2 features) | 97.5% | +25.2% | 24 bytes |
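For a sense of scale, a 24-byte message could carry a small header plus eight fixed-point state values. The layout below is purely illustrative; the actual wire format is not documented here:

```c
#include <stdint.h>

/* Hypothetical 24-byte mesh message: field names and sizes are
 * illustrative, chosen only to show that a header, eight shared
 * neural states, and an integrity check fit the quoted budget. */
typedef struct {
    uint8_t  node_id;     /* sender */
    uint8_t  seq;         /* sequence number for dedup/ordering */
    uint16_t state_mask;  /* which internal states are included */
    int16_t  states[8];   /* selected neural states, fixed point */
    uint32_t crc;         /* integrity check */
} mesh_msg_t;             /* 1 + 1 + 2 + 16 + 4 = 24 bytes */
```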
Six capabilities that make the mesh production-ready for industrial deployments. Every feature compiles to pure C firmware for the same 2 EUR microcontrollers — no extra hardware, no cloud dependency, no floating point.
Nodes vote on every decision. A compromised or faulty node is automatically detected and excluded — the network doesn't trust any single node blindly. Requires majority consensus for every classification.
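The consensus step can be sketched as a strict-majority vote: a label wins only when more than half the nodes agree, so no single faulty node can force a decision. The function name and the "no consensus" convention are illustrative assumptions:

```c
/* Hypothetical mesh consensus: count each node's class vote and
 * accept a label only with a strict majority. Voters that disagree
 * with the winning label can then be flagged for exclusion. */
#define NO_CONSENSUS (-1)

static int majority_vote(const int *votes, int n_nodes, int n_classes)
{
    int counts[16] = {0};  /* assumes n_classes <= 16 */
    for (int i = 0; i < n_nodes; i++)
        counts[votes[i]]++;
    for (int c = 0; c < n_classes; c++)
        if (2 * counts[c] > n_nodes)  /* strict majority */
            return c;
    return NO_CONSENSUS;
}
```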
When a node fails or a new one joins, the network reconfigures automatically. No manual intervention, no downtime. Heartbeat monitoring detects failures in seconds.
Information propagates beyond direct neighbors — through relay nodes, the network's nervous system extends across an entire facility. Automatic routing, distance-based prioritization.
Each node automatically learns which of its internal states carry the most useful information for neighbors. Only the most relevant data is shared — reducing bandwidth while improving accuracy.
Nodes adapt to local conditions and share their improvements with neighbors. The entire swarm gets smarter over time without centralized retraining — each node contributes what it learns.
All benchmarks use standard train/test splits on public datasets. No cherry-picking, no hidden configurations. Results are independently verifiable.
Go from CSV to compiled C in minutes. Start free.