# Python API Reference

**What you will find here:** the user-facing Python API for integrating BNNR with your own models and dataloaders.

**When to use this page:** when CLI presets are not enough and you need full control.

**Source of truth:** this page documents only the symbols exported publicly from `src/bnnr/__init__.py`.
## Core training API

`BNNRConfig`, `BNNRTrainer`, `quick_run`, `BNNRRunResult`, `CheckpointInfo`
## Model adapter API

`ModelAdapter`, `XAICapableModel`, `SimpleTorchAdapter`
## Reporting and events API

`Reporter`, `load_report`, `compare_runs`, `JsonlEventSink`, `EVENT_SCHEMA_VERSION`, `replay_events`
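The JSONL event pieces can be pictured with a minimal sink: each event becomes one JSON object per line, written in order, so replay tooling can read events back in the same order. The class below is a generic sketch of that pattern, not BNNR's actual `JsonlEventSink` implementation.

```python
import json
from pathlib import Path


class TinyJsonlSink:
    """Minimal JSONL event sink sketch: one JSON object per line.
    Illustrates the pattern only; not BNNR's JsonlEventSink."""

    def __init__(self, path):
        self.path = Path(path)

    def emit(self, event: dict) -> None:
        # Append the event as a single JSON line.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")

    def replay(self):
        # Yield events back in the order they were written.
        with self.path.open(encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)


import os, tempfile

tmp = os.path.join(tempfile.mkdtemp(), "events.jsonl")
sink = TinyJsonlSink(tmp)
sink.emit({"type": "epoch_end", "epoch": 0, "val_acc": 0.91})
events = list(sink.replay())
```

Append-only JSONL makes the event log crash-tolerant: a partial final line can be skipped while every fully written event survives.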
## Config helpers

`load_config`, `save_config`, `validate_config`, `merge_configs`, `apply_xai_preset`, `get_xai_preset`, `list_xai_presets`
## Augmentation API

`BaseAugmentation`, `AugmentationRegistry`, `AugmentationRunner`, `TorchvisionAugmentation`, `KorniaAugmentation`, `AlbumentationsAugmentation`, `create_kornia_pipeline`, `kornia_available`, `albumentations_available`
Built-in classification augmentations:

`ChurchNoise`, `BasicAugmentation`, `DifPresets`, `Drust`, `LuxferGlassPro`, `CAMSmugs`, `TeaStains`
Preset helpers:

`auto_select_augmentations`, `get_preset`, `list_presets`
## XAI API (classification)

Explainers and generation:

`BaseExplainer`, `OptiCAMExplainer`, `NMFConceptExplainer`, `CRAFTExplainer`, `RealCRAFTExplainer`, `RecursiveCRAFTExplainer`, `generate_saliency_maps`, `generate_craft_concepts`, `generate_nmf_concepts`, `save_xai_visualization`

Analysis and scoring:

`analyze_xai_batch`, `analyze_xai_batch_rich`, `compute_xai_quality_score`, `generate_class_diagnosis`, `generate_class_insight`, `generate_epoch_summary`, `generate_rich_epoch_summary`

Cache: `XAICache`

ICD variants: `ICD`, `AICD`
## Dashboard helper

`start_dashboard`
## Minimal classification integration

```python
import torch
import torch.nn as nn

from bnnr import BNNRConfig, BNNRTrainer, SimpleTorchAdapter, auto_select_augmentations

model = ...
train_loader = ...  # (image, label, index)
val_loader = ...

adapter = SimpleTorchAdapter(
    model=model,
    criterion=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    target_layers=[...],
    device="auto",
)

config = BNNRConfig(m_epochs=3, max_iterations=2, device="auto")
trainer = BNNRTrainer(adapter, train_loader, val_loader, auto_select_augmentations(), config)
result = trainer.run()
print(result.best_metrics)
```

## `quick_run()` helper
`quick_run()` builds a `SimpleTorchAdapter` internally.

```python
from bnnr import quick_run

result = quick_run(
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,
)
```

Useful arguments include `augmentations`, `config`/`overrides`, `criterion`, `optimizer`, `target_layers`, and `eval_metrics`.
## Detection

### Model Adapters

- `DetectionAdapter(model, optimizer, target_layers=None, device="cuda", scheduler=None, use_amp=False, score_threshold=0.05)` — wraps torchvision-style detectors (Faster R-CNN, RetinaNet, SSD, FCOS). In train mode it calls `model(images, targets)` to obtain losses; in eval mode it calls `model(images)` to obtain prediction dicts.
- `UltralyticsDetectionAdapter(model_name="yolov8n.pt", device="cuda", score_threshold=0.05, num_classes=None, lr=1e-3, optimizer=None, use_amp=False)` — wraps Ultralytics YOLO. Exposes `predict_detection_dicts(batch_bchw)` for XAI and probe snapshots.

Both adapters implement `train_step`, `eval_step`, `epoch_end_eval`, `epoch_end`, `state_dict`, `load_state_dict`, `get_target_layers`, and `get_model`.
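The two-mode calling convention that `DetectionAdapter` relies on can be illustrated with a toy stand-in detector (a hypothetical class, not part of BNNR or torchvision): a loss dict comes back in train mode, and one prediction dict per image comes back in eval mode.

```python
class ToyDetector:
    """Toy stand-in that mimics the torchvision-style detection
    calling convention: model(images, targets) -> loss dict in
    train mode, model(images) -> prediction dicts in eval mode."""

    def __init__(self):
        self.training = True

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

    def __call__(self, images, targets=None):
        if self.training:
            # Train mode: targets are required and losses are returned.
            if targets is None:
                raise ValueError("train mode requires targets")
            return {"loss_classifier": 0.7, "loss_box_reg": 0.3}
        # Eval mode: one {"boxes", "labels", "scores"} dict per image.
        return [
            {"boxes": [[10.0, 10.0, 50.0, 50.0]], "labels": [1], "scores": [0.9]}
            for _ in images
        ]


det = ToyDetector()
losses = det(["img0"], targets=[{"boxes": [], "labels": []}])  # train mode
det.eval()
preds = det(["img0"])  # eval mode
```

Any model exposing this shape of interface fits the adapter's train/eval split.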
### Collate Functions

- `detection_collate_fn(batch)` → `(Tensor[B, C, H, W], list[dict])`
- `detection_collate_fn_with_index(batch)` → `(Tensor[B, C, H, W], list[dict], Tensor[B])`
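A rough sketch of the documented contract: stack images, keep targets as a list of dicts, and collect indices into a tensor. This is toy code, not BNNR's implementation; it assumes each sample is an `(image, target_dict, index)` tuple and that all images share one size.

```python
import torch


def toy_detection_collate_with_index(batch):
    """Sketch of the (images, targets, indices) contract above.
    Assumes samples are (Tensor[C, H, W], target_dict, int) and
    a fixed image size; real pipelines may resize first."""
    images = torch.stack([img for img, _, _ in batch])    # [B, C, H, W]
    targets = [tgt for _, tgt, _ in batch]                # list of dicts
    indices = torch.tensor([idx for _, _, idx in batch])  # [B]
    return images, targets, indices


batch = [
    (torch.zeros(3, 4, 4),
     {"boxes": torch.zeros(0, 4), "labels": torch.zeros(0, dtype=torch.long)}, 0),
    (torch.ones(3, 4, 4),
     {"boxes": torch.tensor([[0.0, 0.0, 2.0, 2.0]]), "labels": torch.tensor([1])}, 1),
]
images, targets, indices = toy_detection_collate_with_index(batch)
```

Targets stay a Python list because each image can carry a different number of boxes.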
### Detection Augmentations

Bbox-aware transforms (subclasses of `BboxAwareAugmentation`):

- `DetectionHorizontalFlip`, `DetectionVerticalFlip`, `DetectionRandomRotate90`
- `DetectionRandomScale(scale_range=(0.8, 1.2))`
- `MosaicAugmentation(output_size=(640, 640))`, `DetectionMixUp(alpha_range=(0.3, 0.7))`
- `AlbumentationsBboxAugmentation(transform)`
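The point of a bbox-aware transform is that whatever geometry is applied to the image must also be applied to the boxes. A minimal sketch of a horizontal flip in xyxy pixel coordinates (illustrative only, not BNNR's `DetectionHorizontalFlip`):

```python
import torch


def toy_hflip_with_boxes(image, boxes):
    """Flip image [C, H, W] left-right and mirror xyxy boxes [N, 4]
    so they still cover the same objects (toy sketch)."""
    _, _, w = image.shape
    flipped = torch.flip(image, dims=[2])  # mirror along the width axis
    new_boxes = boxes.clone()
    # x mirrors as x' = W - x; x1 and x2 swap so x1 <= x2 still holds.
    new_boxes[:, 0] = w - boxes[:, 2]
    new_boxes[:, 2] = w - boxes[:, 0]
    return flipped, new_boxes


img = torch.arange(16.0).reshape(1, 4, 4)
boxes = torch.tensor([[0.0, 1.0, 2.0, 3.0]])
out_img, out_boxes = toy_hflip_with_boxes(img, boxes)
```

The same pattern generalizes: rotations permute and remap coordinates, scaling multiplies them, and mosaic/mixup offset them into the composite canvas.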
XAI-driven: `DetectionICD`, `DetectionAICD` — saliency-based tile masking for detection.

Presets: `get_detection_preset(name)` with `name` ∈ {`"light"`, `"standard"`, `"aggressive"`}.
### Detection Metrics

- `calculate_detection_metrics(predictions, targets, iou_thresholds=None, score_threshold=0.0)` → `{"map_50", "map_50_95"}`
- `calculate_per_class_ap(predictions, targets, iou_threshold=0.5, class_names=None)` → per-class AP dict
- `calculate_detection_confusion_matrix(predictions, targets, num_classes=None, iou_threshold=0.5)` → `{"labels", "matrix"}`
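All of these metrics rest on matching predictions to ground-truth boxes by IoU. A plain-Python sketch of that matching step (illustrative only; the library functions above take full prediction and target structures, not bare box lists):

```python
def iou_xyxy(a, b):
    """Intersection-over-union of two xyxy boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def match_predictions(pred_boxes, gt_boxes, iou_threshold=0.5):
    """Greedily match predictions (assumed sorted by score) to unused
    ground-truth boxes; returns one True (TP) / False (FP) per prediction."""
    used, hits = set(), []
    for p in pred_boxes:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gt_boxes):
            if j in used:
                continue
            i = iou_xyxy(p, g)
            if i > best_iou:
                best_iou, best_j = i, j
        if best_j is not None and best_iou >= iou_threshold:
            used.add(best_j)  # each ground truth matches at most once
            hits.append(True)
        else:
            hits.append(False)
    return hits


hits = match_predictions(
    [[0, 0, 10, 10], [20, 20, 30, 30]],  # predictions, best score first
    [[1, 1, 10, 10]],                    # one ground-truth box
)
```

From these TP/FP flags, AP follows by sweeping the score threshold and integrating the precision-recall curve; mAP@50:95 repeats this over IoU thresholds 0.5 to 0.95.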
### Detection XAI

- `generate_detection_saliency(...)` — backbone activation–based, class-agnostic saliency
- `compute_detection_box_saliency_occlusion(...)` — per-box occlusion-grid saliency
- `draw_boxes_on_image(...)` — draws xyxy boxes with labels and scores
- `overlay_saliency_heatmap(...)` — blends saliency with a colormap
- `save_detection_xai_panels(...)` — writes a ground-truth, saliency, and prediction triptych
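The occlusion idea behind per-box occlusion-grid saliency can be sketched generically: mask one tile of the image at a time and record how much the detector's score for the box drops. The scorer below is a toy stand-in, and the function is an illustration of the technique, not BNNR's `compute_detection_box_saliency_occlusion`.

```python
def occlusion_saliency(score_fn, grid=(2, 2)):
    """For each tile in a grid over the image, compute
    baseline_score - score_with_tile_masked. A larger drop means
    the tile matters more for the detection (toy sketch)."""
    rows, cols = grid
    baseline = score_fn(masked_tile=None)  # score with nothing masked
    heat = []
    for r in range(rows):
        row = []
        for c in range(cols):
            row.append(baseline - score_fn(masked_tile=(r, c)))
        heat.append(row)
    return heat


# Toy scorer: the detection only depends on tile (0, 1).
def toy_score(masked_tile):
    return 0.2 if masked_tile == (0, 1) else 0.9


heat = occlusion_saliency(toy_score)
```

In a real pipeline, `score_fn` would re-run the detector on the masked image and return the confidence of the box under study, so cost grows linearly with the number of tiles.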
See Detection for the full guide with examples.