Examples Guide (Production Usage)
What you will find here
A practical guide for Python scripts under examples/ (for example examples/classification/, examples/multilabel/) with:
- what each script demonstrates,
- exact run commands,
- which dashboard flow to use,
- smoke commands for fast verification.
These paths assume a cloned repository installed from source (pip install -e ".[…]"). The examples are not shipped inside the PyPI wheel; see the BNNR repository on GitHub.
1) Classification showcase
Script:
examples/classification/showcase_stl10.py
What it demonstrates:
- iterative augmentation selection,
- XAI-driven candidates (ICD/AICD),
- full live dashboard flow.
Full showcase:

PYTHONPATH=src python3 examples/classification/showcase_stl10.py --with-dashboard

Fast smoke (CI/dev machine):
PYTHONPATH=src python3 examples/classification/showcase_stl10.py \
--without-dashboard --no-dashboard-auto-open \
--max-train-samples 32 --max-val-samples 16 --batch-size 16 \
--m-epochs 1 --decisions 1

First run: STL-10 is downloaded automatically (network access needed). For a shorter interactive run, the script supports --quick; see the module docstring for timings and GPU/CPU notes.
2) Multi-label showcase
Script:
examples/multilabel/multilabel_demo.py
What it demonstrates:
- multi-label pipeline (task="multilabel"),
- F1-samples-oriented selection,
- dashboard-compatible events and artifacts.
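For orientation, F1-samples averages the F1 score computed per individual sample over its true and predicted label sets (as in scikit-learn's average="samples"). A minimal pure-Python sketch of the metric, independent of the BNNR implementation (the empty-set convention here is a choice, not the library's):

```python
def f1_samples(y_true, y_pred):
    """Per-sample F1 averaged over samples; each element is a set of
    label indices. Convention: empty true and predicted sets score 1.0."""
    scores = []
    for true, pred in zip(y_true, y_pred):
        if not true and not pred:
            scores.append(1.0)
            continue
        tp = len(true & pred)          # true positives for this sample
        denom = len(true) + len(pred)  # F1 = 2*TP / (|true| + |pred|)
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)
```

For example, f1_samples([{0, 1}], [{0}]) gives 2/3: one true positive against three total labels across both sets.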
Full demo:

PYTHONPATH=src python3 examples/multilabel/multilabel_demo.py --with-dashboard

Fast smoke:
PYTHONPATH=src python3 examples/multilabel/multilabel_demo.py \
--without-dashboard --no-dashboard-auto-open \
--n-train 64 --n-val 32 --batch-size 16 --m-epochs 1 --decisions 1

3) Classification: full pipeline demo (synthetic)
Script:
examples/classification/demo_full_pipeline.py
What it demonstrates:
- all built-in BNNR augmentations and optional torchvision / Kornia / Albumentations wrappers (when those extras are installed),
- ICD/AICD, XAI cache, OptiCAM, and a short branch-selection loop on a tiny synthetic dataset (no dataset download).
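To illustrate why no download is needed: a synthetic dataset can be generated entirely in-process. The sketch below is purely illustrative (plain Python lists, a hypothetical generator; the real demo builds tensors and its generation logic may differ):

```python
import random

def make_synthetic_dataset(n=16, num_classes=3, size=8, seed=0):
    """Tiny synthetic classification set: each 'image' is a size x size
    grid of floats whose values shift with the class label, so the
    classes are trivially separable."""
    rng = random.Random(seed)
    data = []
    for i in range(n):
        label = i % num_classes  # round-robin class assignment
        img = [[rng.random() + label for _ in range(size)]
               for _ in range(size)]
        data.append((img, label))
    return data
```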
Run from repository root (no live dashboard; typically well under a minute on CPU):
PYTHONPATH=src python3 examples/classification/demo_full_pipeline.py

Optional: install the albumentations and/or kornia extras (pip install -e ".[albumentations]", pip install -e ".[gpu]") so that the corresponding wrapper branches execute.
4) Lightning / Accelerate adapters (reference module)
File:
examples/classification/lightning_adapter.py
This file is reference code, not a main entrypoint: it defines LightningAdapter and AccelerateAdapter for use with BNNRTrainer. PyTorch Lightning and Hugging Face Accelerate are optional dependencies:
pip install pytorch-lightning accelerate

See the module docstring and inline comments for wiring details, as well as the API Reference.
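As a rough illustration of the adapter idea (every name below is hypothetical — this is not the real BNNR, Lightning, or Accelerate API): the trainer is written against a small interface, and each adapter maps those hooks onto one framework.

```python
class TrainingAdapter:
    """Hypothetical minimal interface a trainer calls instead of
    touching any specific framework directly."""
    def prepare(self, model, optimizer):
        return model, optimizer

    def backward(self, loss):
        loss.backward()

class RecordingAdapter(TrainingAdapter):
    """Stand-in for a Lightning- or Accelerate-backed adapter;
    records which hooks the trainer invoked."""
    def __init__(self):
        self.calls = []

    def prepare(self, model, optimizer):
        self.calls.append("prepare")
        return model, optimizer

    def backward(self, loss):
        self.calls.append("backward")
        loss.backward()

def run_one_step(adapter, model, optimizer, loss):
    """A trainer step written only against the adapter interface."""
    model, optimizer = adapter.prepare(model, optimizer)
    adapter.backward(loss)
```

Swapping frameworks then means swapping the adapter object, while the trainer loop stays untouched.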
5) Detection showcases
YOLO + COCO128
Script:
examples/detection/showcase_yolo_coco128.py
What it demonstrates:
- YOLOv8 via UltralyticsDetectionAdapter,
- COCO128 auto-download,
- detection-specific bbox augmentations + DetectionICD/AICD,
- mAP tracking and dashboard.
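mAP computation rests on IoU matching between predicted and ground-truth boxes. A minimal sketch of IoU for axis-aligned boxes in (x1, y1, x2, y2) format, independent of the adapters above:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```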
PYTHONPATH=src python3 examples/detection/showcase_yolo_coco128.py --with-dashboard

Fast smoke:

PYTHONPATH=src python3 examples/detection/showcase_yolo_coco128.py --quick --without-dashboard

Pascal VOC 2007
Script:
examples/detection/showcase_voc.py
What it demonstrates:
- torchvision-style detector with DetectionAdapter,
- full augmentation suite (all 9 augmentation families),
- XAI saliency generation,
- long training runs with dashboard.
PYTHONPATH=src python3 examples/detection/showcase_voc.py --with-dashboard

Detection notebook
examples/detection/bnnr_detection_demo.ipynb — an interactive Jupyter notebook covering adapter setup, augmentations, training, and XAI visualization.
6) Dashboard workflow for examples
For any example with --with-dashboard:
- Start the script.
- Open the Local URL on a desktop browser.
- Scan the QR code for the mobile view.
- Validate the branch tree, KPI cards, and samples/XAI sections.
- Stop the server with Ctrl+C after the checks.
For offline sharing:
python3 -m bnnr dashboard export --run-dir <run_dir> --out exported_dashboard

7) Example artifacts you should always verify
After each example run, verify:
- report.json exists,
- events.jsonl exists,
- metrics are present for the task type,
- dashboard replay works (bnnr dashboard serve --run-dir ...).
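The first two checks can be scripted. A minimal sketch using only the standard library (file names are taken from this guide; the exact report schema is not assumed, so only existence and JSON validity are checked):

```python
import json
from pathlib import Path

def verify_run_dir(run_dir):
    """Return a list of problems found in a BNNR-style run directory:
    report.json must exist and parse as JSON; events.jsonl must exist
    and every non-empty line must be valid JSON."""
    run_dir = Path(run_dir)
    problems = []
    report = run_dir / "report.json"
    if not report.is_file():
        problems.append("report.json missing")
    else:
        try:
            json.loads(report.read_text())
        except ValueError:
            problems.append("report.json is not valid JSON")
    events = run_dir / "events.jsonl"
    if not events.is_file():
        problems.append("events.jsonl missing")
    else:
        for n, line in enumerate(events.read_text().splitlines(), 1):
            if line.strip():
                try:
                    json.loads(line)
                except ValueError:
                    problems.append(f"events.jsonl line {n} is not valid JSON")
    return problems
```

An empty returned list means the basic artifacts look sane; dashboard replay still needs the manual bnnr dashboard serve check.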