LUVINER
Reproducible Results

Tested on real data, not marketing slides.

Every number on this page comes from reproducible benchmarks on public datasets, including industry-standard suites such as MNIST, MIT-BIH ECG, and Speech Commands.

Competitive Analysis

Luviner V3 vs Traditional ML

Tested on 4 standard scientific datasets (Iris, Wine, Breast Cancer, Digits) with 70/30 train/test split. All models trained from scratch, no pre-training.

Dataset Luviner V3 MLP float64 Decision Tree Random Forest
Iris (150 / 4 feat / 3 cls) 100.0% 73.3% 100.0% 100.0%
Wine (178 / 13 feat / 3 cls) 98.2% 92.6% 96.3% 100.0%
Breast Cancer (569 / 30 feat / 2 cls) 97.7% 91.8% 94.2% 97.1%
Digits (1797 / 64 feat / 10 cls) 97.0% 95.6% 85.7% 97.4%
Average 98.2% 88.3% 94.0% 98.6%
Metric Luviner V3 MLP float64 Decision Tree Random Forest
Flash Size ~14 KB ~82 KB ~2.4 KB ~175 KB
MCU-Ready Yes No Yes No
V3 matches Random Forest accuracy (98.2% vs 98.6%) while using ~12x less flash (14 KB vs 175 KB) and running directly on MCU with no runtime dependencies.
Temporal Advantage

Why Streaming Inference Wins on Temporal Data

Industrial machine state detection: 6 vibration sensors, 4 states (Normal, Warming, Stressed, Failing). The challenge: Warming and Normal look identical in a single reading — only temporal context reveals the difference.

Method Accuracy RAM Feature Eng.
Luviner Streaming 88.5% 102 float None
Windowed Features 83.8% 60 float Manual (4 stats)
Stateless 83.7% 6 float None
Streaming inference beats both alternatives without any buffer or manual feature engineering. The persistent model state naturally acts as temporal memory.
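The idea of persistent state as temporal memory can be sketched in a few lines of C. This is an illustrative toy, not Luviner's actual model: the `stream_model_t` type and function names are assumptions, but the mechanism shown (a state vector that decays across calls, so slowly rising inputs like "Warming" separate from "Normal" even when single readings overlap) is the one the paragraph describes.

```c
#include <stddef.h>

#define STATE_DIM 4

typedef struct {
    float state[STATE_DIM];   /* survives across readings = temporal memory */
} stream_model_t;

void stream_reset(stream_model_t *m) {
    for (size_t i = 0; i < STATE_DIM; i++) m->state[i] = 0.0f;
}

/* One sensor reading in, running evidence out. With decay < 1, the state
 * holds an exponential trace of past inputs, so temporal context builds
 * up without any sample buffer or windowed features. */
float stream_step(stream_model_t *m, const float *x, size_t n) {
    const float decay = 0.9f;
    float evidence = 0.0f;
    for (size_t i = 0; i < STATE_DIM; i++) {
        float in = (i < n) ? x[i] : 0.0f;
        m->state[i] = decay * m->state[i] + (1.0f - decay) * in;
        evidence += m->state[i];
    }
    return evidence;
}
```

Note the RAM cost: only the state vector persists between readings, which is why the table above lists a fixed float budget rather than a sample window.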
Real-World Benchmark

UCI HAR — Standard Benchmark (official split)

Industry-standard benchmark: 7,352 training + 2,947 test samples from 30 volunteers, 561 pre-computed features, 6 activities. The official train/test split is used, enabling direct comparison with published results.

Luviner V3: 95.0% · SVM: 96.0% · CNN 1D: ~94-95% · LSTM: ~93-95% · Random Forest: ~91-93%
Method Accuracy Flash (MCU) MCU-Ready
Luviner EdgeV3 95.0% 137 KB Yes
SVM (Anguita 2013) 96.0% ~500 KB+ No
CNN 1D + TFLite ~94-95% ~300 KB+ Partial
LSTM ~93-95% ~1 MB+ No
Source: Anguita et al., 2013 — UCI Machine Learning Repository
Luviner achieves 95.0% on the standard UCI HAR benchmark — competitive with CNN and LSTM models, but in only 137 KB of pure C code with zero dependencies. Deployable on microcontrollers costing as little as 2 EUR.
Continuous Monitoring

UCI HAR — Continuous Activity Monitoring (streaming)

A harder scenario: multi-activity sequences with transitions, classifying every single timestep in a continuous stream. Raw 9-channel accelerometer/gyroscope data, no pre-computed features. This is the real-world smartwatch scenario.

Method Accuracy RAM Feature Eng.
Full-Sequence 79.6% 105 float None
Luviner Streaming 75.5% 105 float None
Stateless 68.6% 9 float None
Windowed Features 30.6% 324 float Manual (4 stats)
Source: Anguita et al., 2013 — UCI Machine Learning Repository
On continuous monitoring with activity transitions, full-sequence training reaches 79.6% — outperforming stateless by +11 points and windowed by +49 points, while using ~3x less RAM than the windowed approach and zero feature engineering.
On-Device Learning

Adapt After Deployment

Models can improve on-device without cloud connectivity. Few-shot adaptation and model compression enable real-world deployment where conditions change.

Few-Shot Adaptation

Deployed model adapts to new conditions with ~50 samples, directly on the MCU. No retraining from scratch, no cloud needed.
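One way a deployed model can adapt from a handful of labeled samples is by maintaining running class centroids. This sketch is an assumption for illustration, not Luviner's actual adaptation algorithm: the `adapt_t` type and function names are invented, but it shows why few-shot on-device updates can cost only a few bytes of extra RAM.

```c
#include <stddef.h>

#define N_CLASSES 2
#define N_FEAT 4

typedef struct {
    float centroid[N_CLASSES][N_FEAT];  /* per-class running means */
    unsigned count[N_CLASSES];
} adapt_t;

/* Fold one new labeled sample into its class centroid (incremental mean).
 * No gradient descent, no stored dataset: O(1) memory per class. */
void adapt_update(adapt_t *a, const float *x, unsigned label) {
    a->count[label]++;
    for (size_t i = 0; i < N_FEAT; i++)
        a->centroid[label][i] +=
            (x[i] - a->centroid[label][i]) / (float)a->count[label];
}

/* Classify by nearest centroid (squared Euclidean distance). */
unsigned adapt_predict(const adapt_t *a, const float *x) {
    unsigned best = 0;
    float best_d = 1e30f;
    for (unsigned c = 0; c < N_CLASSES; c++) {
        float d = 0.0f;
        for (size_t i = 0; i < N_FEAT; i++) {
            float diff = x[i] - a->centroid[c][i];
            d += diff * diff;
        }
        if (d < best_d) { best_d = d; best = c; }
    }
    return best;
}
```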

Before: 38% → After: 64% · ~50 samples · +0.13 KB RAM · on-device

Model Compression

Transfer knowledge from a large model to a tiny one. The small model achieves accuracy impossible with direct training alone.

Direct training: 28% → Compressed: 62% · +34 points improvement · large → tiny model
Few-shot adaptation adds only 0.13 KB of RAM. Model compression gains +34 points over direct training. Both capabilities run on MCU with zero cloud dependency.
Edge IoT Tasks

4 Real-World Edge AI Scenarios

Benchmark on tasks that matter for embedded deployment: voice commands, industrial monitoring, gesture recognition, and cardiac monitoring.

Keyword Spotting

Always-on voice command detection from MFCC features

99.3%
28 feat · 4 cls · < 1 mW

Anomaly Detection

Predictive maintenance on industrial vibration sensors

100.0%
32 feat · 4 cls · < 0.5 mW

Gesture Recognition

IMU-based gesture classification for wearables

100.0%
36 feat · 5 cls · < 2 mW

ECG Monitoring

Cardiac rhythm classification for health devices

93.3%
8 feat · 4 cls · < 0.05 mW
Model Footprint

Fits Where Others Can't

Integer-only quantization with zero runtime dependencies. The entire model fits in a few KB of flash — no framework, no runtime, no allocations.
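The core of integer-only inference is an int8 dot product with an int32 accumulator and a fixed-point rescale, shown below. The details here (symmetric quantization, a power-of-two shift, saturation to int8) are common practice and illustrative assumptions, not Luviner internals.

```c
#include <stdint.h>
#include <stddef.h>

/* Quantized dot product: int8 weights and activations, int32 accumulator,
 * power-of-two rescale, saturating cast back to int8. No floats anywhere,
 * so it runs on FPU-less MCUs with zero runtime dependencies. */
int8_t qdot(const int8_t *w, const int8_t *x, size_t n, int shift) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)w[i] * (int32_t)x[i];
    acc >>= shift;                 /* fixed-point rescale */
    if (acc > 127) acc = 127;      /* saturate to int8 range */
    if (acc < -128) acc = -128;
    return (int8_t)acc;
}
```

Because weights are stored as int8 instead of float64, each parameter costs 1 byte of flash rather than 8, which is where most of the footprint advantage over the float64 MLP comes from.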

Luviner V3: 14 KB · Decision Tree: 2.4 KB · MLP float64: 82 KB · Random Forest: 175 KB
Industry-Grade Benchmarks

Tested on Real-World Scale Datasets

Beyond toy datasets. Luviner is benchmarked on industry-standard datasets with tens of thousands of samples — the same benchmarks used to evaluate production ML systems.

MNIST

The universal ML benchmark. 70,000 handwritten digits, 784 features. The standard test for any classification system.

70,000 samples 784 features 10 classes

MIT-BIH ECG

Real cardiac arrhythmia detection. 109,000 heartbeats from PhysioNet, 5 clinical classes. Directly relevant to our medical use case.

109,000 samples 187 features 5 classes

Speech Commands

Keyword spotting with MFCC features. 10 keywords, always-on detection. Directly relevant to our voice-command use case.

10,000 samples 637 features 10 classes
Automatic Optimization

Tell Us Your Chip. We Find the Best Model.

AutoML for MCU: specify your hardware constraints (Flash, RAM, chip model) and Luviner automatically searches for the neural network architecture that maximizes accuracy while fitting within your memory budget. No ML expertise needed.

How It Works

1 Select your target MCU or specify Flash/RAM limits
2 Luviner evaluates multiple architectures that fit your hardware
3 The best-performing model is fully trained and exported to C

Supported MCU Families

ARM ARM Cortex-M0, M0+, M3, M4, M7
ESP ESP32, ESP32-S3, ESP32-C3
RISC-V RISC-V (CH32V003, GD32VF103)
BLE Nordic nRF52832, nRF52840
12+ pre-configured hardware profiles · Custom Flash/RAM targets supported
No competitor offers automatic architecture search for neural networks. You specify the chip — Luviner delivers the optimal model, ready to flash.
One-Class Anomaly Detection

Detect Faults Without Fault Data

Train your model on normal operation only — no labeled fault data needed. The model learns what "normal" looks like and flags anything that deviates. Perfect for predictive maintenance where collecting failure examples is expensive or impossible.

Detection rate: 100% · Anomaly score: 1.10 · Normal score: 0.04 · 27x score separation

One-Class Anomaly Detection

Training data Normal samples only
Streaming inference Real-time on MCU
C export Yes
MCU-Ready Yes
27x separation between normal and anomaly scores. 100% detection rate. Works in streaming mode on MCU — your sensor continuously monitors for anomalies in real time, with zero cloud dependency.
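The decision rule behind one-class detection can be sketched simply: score each reading by its distance from a "normal" profile learned at training time, then threshold. The centroid-distance score below is an illustrative assumption rather than Luviner's published scoring function, and the constants are placeholders.

```c
#include <stddef.h>

#define N_FEAT 3

/* Squared distance from the "normal" centroid learned during training.
 * Readings near normal operation score low; deviations score high. */
float anomaly_score(const float *x, const float *normal_centroid) {
    float s = 0.0f;
    for (size_t i = 0; i < N_FEAT; i++) {
        float d = x[i] - normal_centroid[i];
        s += d * d;
    }
    return s;
}

/* Flag anything whose score exceeds a threshold calibrated on normal
 * data only; no labeled fault examples are ever needed. */
int is_anomaly(float score, float threshold) {
    return score > threshold;
}
```

The wide score separation reported above (1.10 vs 0.04) is what makes the threshold easy to set: any value between the two bands detects all faults with no false alarms on the test data.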
On-Device Drift Detection

Your Device Knows When to Retrain

The model monitors incoming data distribution and signals when it no longer matches the training data. No cloud connection, no forward pass — runs directly on raw sensor inputs with just 24 bytes of extra RAM.

How It Works

1 After training, calibrate the reference distribution from your data
2 On-device, an EWMA tracker monitors incoming sensor readings
3 When distribution diverges beyond threshold, the device signals drift
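The three steps above can be sketched as an EWMA tracker in C. The 6-float state below is exactly 24 bytes, matching the stated RAM cost; the specific statistic (EWMA of the mean, alarm at 3 reference standard deviations) is an illustrative assumption, not necessarily the one Luviner ships.

```c
#include <math.h>

typedef struct {
    float ewma;        /* running mean of incoming readings */
    float ewvar;       /* running variance (for a richer test if needed) */
    float ref_mean;    /* calibrated from training data */
    float ref_std;
    float alpha;       /* EWMA smoothing factor */
    float threshold;   /* alarm level, in reference std units */
} drift_t;             /* 6 floats = 24 bytes of state */

void drift_init(drift_t *d, float ref_mean, float ref_std) {
    d->ewma = ref_mean;
    d->ewvar = ref_std * ref_std;
    d->ref_mean = ref_mean;
    d->ref_std = ref_std;
    d->alpha = 0.05f;
    d->threshold = 3.0f;
}

/* Feed one raw sensor reading; returns 1 once the input distribution has
 * drifted beyond threshold std-devs from the calibrated reference.
 * No forward pass: this runs on raw inputs, independent of the model. */
int drift_step(drift_t *d, float x) {
    float delta = x - d->ewma;
    d->ewma += d->alpha * delta;
    d->ewvar = (1.0f - d->alpha) * (d->ewvar + d->alpha * delta * delta);
    return fabsf(d->ewma - d->ref_mean) > d->threshold * d->ref_std;
}
```

The EWMA forgets old readings exponentially, so the detector tracks gradual sensor drift as well as abrupt shifts, at a cost of a handful of multiply-adds per reading.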

Cost on MCU

RAM 24 bytes extra RAM
Compute No forward pass needed
Flash Minimal flash overhead
C export Yes
No competitor offers on-device drift detection. Your sensor autonomously monitors data quality and signals when retraining is needed — zero cloud dependency, zero inference cost.
Mesh Intelligence

Distributed Nervous System

Multiple sensor nodes share neural states via a lightweight mesh protocol. Each node has its own brain — together they classify what no single node can. No cloud, no central server, no single point of failure.

Mesh (2 nodes): 100.0% · Oracle (all data): 100.0% · Node B solo: 81.7% · Node A solo: 78.3%
Configuration Accuracy vs Solo Message Size
2 nodes (3+3 features) 100.0% +20.0% 24 bytes
3 nodes (3+3+2 features) 100.0% +18.9% 24 bytes
4 nodes (2+2+2+2 features) 97.5% +25.2% 24 bytes

How It Works

1 Each sensor node runs its own neural network locally
2 Nodes exchange a compact subset of their neural states (8 values, 24 bytes)
3 The shared states influence each node's processing — intelligence emerges from the network
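One plausible layout for the 24-byte message in step 2 is shown below: the 8 shared neural state values as quantized int16 plus a small header. This packing, and every field name in it, is an assumption for illustration; the actual wire format is not published.

```c
#include <stdint.h>

/* Hypothetical 24-byte mesh message carrying 8 quantized neural states.
 * Small enough for a single ESP-NOW, BLE, or UART frame per tick. */
typedef struct __attribute__((packed)) {
    uint8_t  node_id;     /* sender identity */
    uint8_t  seq;         /* sequence number, for detecting lost frames */
    uint16_t class_hint;  /* sender's current local classification */
    int16_t  states[8];   /* 8 quantized neural state values (16 bytes) */
    uint32_t crc;         /* integrity check */
} mesh_msg_t;             /* 1 + 1 + 2 + 16 + 4 = 24 bytes */
```

Quantizing the shared states to int16 keeps the payload transport-agnostic: the same 24 bytes fit comfortably in an ESP-NOW frame, a BLE advertisement, or a UART packet.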

Protocol Specs

Message size 24 bytes
Transport ESP-NOW / BLE / UART
Cloud dependency None
Topologies Ring / Star / Full / Dynamic
Cloud gateway Optional (MQTT/HTTP)
Mesh intelligence achieves the same accuracy as a centralized model while being fully distributed. Each node costs as little as 2 EUR and communicates with just 24 bytes per tick. No competitor offers anything remotely comparable on commodity microcontrollers.
Advanced Capabilities

Enterprise-Grade Mesh Intelligence

Five capabilities that make the mesh production-ready for industrial deployments. Every feature compiles to pure C firmware for the same 2 EUR microcontrollers — no extra hardware, no cloud dependency, no floating point.

🛡 Tamper-Resistant Decisions

Nodes vote on every decision. A compromised or faulty node is automatically detected and excluded — the network doesn't trust any single node blindly. Requires majority consensus for every classification.
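The voting step described above can be sketched as a strict-majority tally. This is a minimal illustration of the principle, not Luviner's consensus implementation; the function name and the 16-class cap are assumptions.

```c
#include <stddef.h>

#define NO_CONSENSUS (-1)
#define MAX_CLASSES 16

/* Accept a classification only when a strict majority of reporting nodes
 * agree; otherwise defer. A single compromised or faulty node can never
 * force a decision, and its out-of-range votes are simply ignored. */
int consensus_vote(const int *votes, size_t n_nodes, int n_classes) {
    int counts[MAX_CLASSES] = {0};
    if (n_classes > MAX_CLASSES) return NO_CONSENSUS;
    for (size_t i = 0; i < n_nodes; i++)
        if (votes[i] >= 0 && votes[i] < n_classes)
            counts[votes[i]]++;
    for (int c = 0; c < n_classes; c++)
        if ((size_t)counts[c] * 2 > n_nodes)   /* strict majority */
            return c;
    return NO_CONSENSUS;   /* split vote: flag for attention, don't act */
}
```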

♻ Self-Healing Network

When a node fails or a new one joins, the network reconfigures automatically. No manual intervention, no downtime. Heartbeat monitoring detects failures in seconds.

📡 Multi-Hop Reach

Information propagates beyond direct neighbors — through relay nodes, the network's nervous system extends across an entire facility. Automatic routing, distance-based prioritization.

🧠 Intelligent State Selection

Each node automatically learns which of its internal states carry the most useful information for neighbors. Only the most relevant data is shared — reducing bandwidth while improving accuracy.

🎓 On-Field Distributed Learning

Nodes adapt to local conditions and share their improvements with neighbors. The entire swarm gets smarter over time without centralized retraining — each node contributes what it learns.

Protocol Specs

Decision method Consensus / Majority / Average
Topology Dynamic (auto-heal)
Network reach Multi-hop (N hops)
State selection Learned (per-node)
On-field learning Federated (peer-to-peer)
Firmware generation Pure C (all features)
Feature selection Include only what you need
Test suite 199 tests passing
No existing product or published research combines tamper resistance, self-healing topology, multi-hop propagation, intelligent state selection, and distributed learning on commodity microcontrollers — all deployable as pure C firmware. This is a world-first.
Methodology

Transparent and reproducible.

All benchmarks use standard train/test splits on public datasets. No cherry-picking, no hidden configurations. Results are independently verifiable.

Ready to deploy edge AI?

Go from CSV to compiled C in minutes. Start free.


© 2026 Luviner. Edge AI for every device.

P.IVA / VAT ID: IT02880910340