AI Inference Benchmark

Real-Time JS Demo · 3-Stage Latency · QoE Metrics · TensorFlow.js

Model
Backend
REC
Starting…

Prediction (ms)

FPS

Detections

Responsiveness (runs/min)

Smoothness (fps)

JS Heap (MB)
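A minimal sketch of how the QoE cards above could be derived from raw timestamps. The function names and the 10-second sliding window are assumptions for illustration, not the demo's actual implementation; `runTimes` holds `performance.now()` values for completed inference runs and `frameTimes` holds `requestAnimationFrame` timestamps.

```javascript
// Responsiveness: completed inference runs per minute over a sliding window.
function responsiveness(runTimes, windowMs = 10_000) {
  const now = runTimes[runTimes.length - 1];
  const recent = runTimes.filter(t => now - t <= windowMs);
  return (recent.length / windowMs) * 60_000; // runs/min
}

// Smoothness: average FPS from consecutive frame timestamps (ms).
function smoothness(frameTimes) {
  if (frameTimes.length < 2) return 0;
  const span = frameTimes[frameTimes.length - 1] - frameTimes[0];
  return ((frameTimes.length - 1) / span) * 1000; // fps
}

// JS heap (MB): performance.memory is non-standard and Chromium-only,
// so guard before reading it.
function heapMB() {
  const mem = performance.memory; // undefined outside Chromium
  return mem ? mem.usedJSHeapSize / (1024 * 1024) : null;
}
```

In a real loop these would be fed by pushing a timestamp on each completed prediction and each animation frame, then refreshing the cards once per second.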

3-Stage Latency Breakdown

Per model × backend: Setup (load) · Warmup (first inference) · Prediction (steady-state avg/min/max/σ).
Switch backends to populate multiple rows.
Setup — model load & framework init
Warmup — first inference, shader compilation
Prediction — steady-state inference
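The three stages above could be timed with a wrapper like the sketch below. `load` and `predict` are placeholders for the actual model calls (e.g. a TensorFlow.js model load and `model.predict`), and `summarize` produces the avg/min/max/σ columns shown in the table; none of these names come from the demo itself.

```javascript
// Time Setup, Warmup, and steady-state Prediction for one model+backend.
async function measure(load, predict, runs = 50) {
  let t = performance.now();
  const model = await load();                 // Setup: load & framework init
  const setup = performance.now() - t;

  t = performance.now();
  await predict(model);                       // Warmup: first inference
  const warmup = performance.now() - t;

  const samples = [];
  for (let i = 0; i < runs; i++) {            // Prediction: steady state
    t = performance.now();
    await predict(model);
    samples.push(performance.now() - t);
  }
  return { setup, warmup, prediction: summarize(samples) };
}

// avg/min/max/σ (population standard deviation) over the samples.
function summarize(xs) {
  const avg = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, b) => a + (b - avg) ** 2, 0) / xs.length;
  return { avg, min: Math.min(...xs), max: Math.max(...xs), sd: Math.sqrt(variance) };
}
```

One table row per model × backend then falls out of calling `measure` once after each backend switch.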

Run a model to populate this table.

Comparison Summary

By Model (all backends combined)

Run each model to populate.

By Backend

Switch backends to populate.

Session Log 0 samples

# · Time · Model · Task · Backend · Pred (ms) · FPS · Det · Resp · Smth · IAcc · Heap (MB)
Press ⏺ Record, then switch models and backends.