From CSV to Chip in 4 Steps: How Luviner Works
A complete walkthrough of the Luviner workflow — from uploading raw sensor data to flashing a compiled AI binary on your microcontroller.
The Traditional Edge AI Workflow
Deploying AI on a microcontroller typically involves a long chain of specialized tools and expertise:
- Collect and label sensor data
- Choose a model architecture (CNN? LSTM? Transformer?)
- Train in Python using TensorFlow or PyTorch
- Optimize: pruning, knowledge distillation, architecture search
- Quantize from float32 to int8
- Convert to TFLite Micro or write custom C inference code
- Integrate with your firmware, manage memory, debug
- Profile on-device, iterate, re-train
This process requires ML expertise, embedded C skills, and weeks of iteration. For a hardware team without a data scientist, it is a non-starter.
Luviner's 4-Step Workflow
Luviner compresses this entire pipeline into four steps, with no ML knowledge required:
Step 1: Upload CSV
Prepare your sensor data as a CSV file. Each row is one sample. Each column is a sensor reading (e.g., accelerometer X, Y, Z, temperature). The last column is the label — the class you want the model to predict (e.g., "normal", "fault_bearing", "fault_misalignment").
That is the only input Luviner needs. No feature engineering. No data pipeline. No preprocessing code. Just raw sensor readings with labels.
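As a concrete illustration, a vibration-monitoring dataset in this format might look like the snippet below (column names and values are illustrative; the labels match the examples above):

```csv
accel_x,accel_y,accel_z,temperature,label
0.02,-0.98,0.11,41.5,normal
0.03,-0.97,0.12,41.6,normal
1.84,-0.41,0.95,55.2,fault_bearing
0.91,-1.32,0.44,48.7,fault_misalignment
```

Each row is one labeled sample; the header row names the sensor channels, and the final column carries the class label.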
Step 2: Train
Click "Start Training" and Luviner's Edge V3 engine takes over:
- Automatic feature extraction — statistical features, frequency domain analysis, and temporal patterns are extracted from your raw data
- Neural network training — an ultra-compact model is trained using Luviner's proprietary engine
- Quantization — the model is automatically quantized to integer fixed-point arithmetic
- Validation — accuracy is measured on a held-out test set and reported
The entire training process takes seconds to minutes, depending on dataset size. You see the accuracy result immediately.
Step 3: Register Device UIDs
Enter the hardware Unique IDs of the microcontrollers you want to deploy on. You can type them manually or upload a CSV of UIDs. Each UID is a unique identifier burned into the chip's silicon at the factory — it cannot be changed or spoofed.
The compiled binary will only execute on chips with registered UIDs. This is your IP protection layer — read more about how UID binding works.
Step 4: Download Binary
Click "Compile" and Luviner generates a static library (.a) and header file (.h) for your target architecture. The output is pure C with zero dependencies:
- No malloc, no dynamic memory allocation
- No floating point — everything is integer fixed-point
- No external libraries — not even libc math functions
- Works on bare metal or any RTOS
Link the .a file into your firmware project, include the .h header, and call the inference function. That is it.
Supported Architectures
Luviner compiles for:
- ARM Cortex-M0/M0+ — the cheapest ARM cores (STM32F0, nRF51)
- ARM Cortex-M3 — mid-range (STM32F1, LPC1768)
- ARM Cortex-M4 — with DSP (STM32F4, nRF52)
- ARM Cortex-M7 — high performance (STM32H7, i.MX RT)
- ARM Cortex-M33 — TrustZone (STM32L5, nRF9160)
- ESP32 — Xtensa LX6 (original ESP32) and LX7 (ESP32-S2, ESP32-S3)
- RISC-V — open ISA (ESP32-C3, GD32VF103)
What You Don't Need
- No Python environment
- No TensorFlow or PyTorch installation
- No GPU
- No ML expertise
- No cloud runtime or API dependency
The compiled binary is completely self-contained. Once flashed, it runs forever, offline, with zero external dependencies.
Try It Now
The Evaluation plan is free forever — 1 project, 10 device UIDs, full Edge V3 training. No credit card required.