Establishing Quantified Uncertainty in Neural Networks¤

Usage¤

Deep neural networks (DNNs) for supervised labeling problems are known to produce accurate results on a wide variety of learning tasks. However, when accuracy is the only objective, DNNs frequently make over-confident predictions, and they always predict one of their known labels even when the test data belongs to none of them.

EQUINE was created to simplify two kinds of uncertainty quantification for supervised labeling problems:

  1. Calibrated probabilities for each predicted label
  2. An in-distribution score, indicating whether any of the model's known labels should be trusted

Additionally, we provide a companion web application.
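For intuition, the two outputs can be sketched in plain Python. This is an illustrative sketch only: the function names, the temperature value, and the prototype-distance score below are assumptions for exposition, not EQUINE's actual API.

```python
import math

def calibrated_probabilities(logits, temperature=2.0):
    """Temperature-scaled softmax: dividing logits by a fitted
    temperature > 1 softens over-confident predictions."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def in_distribution_score(embedding, prototypes):
    """Map the distance to the nearest class prototype into (0, 1]:
    values near 1 mean the input sits on a known class, values near 0
    mean it is far from all of them (a hypothetical scoring rule)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(dist(embedding, p) for p in prototypes)
    return 1.0 / (1.0 + nearest)

probs = calibrated_probabilities([4.0, 1.0, 0.5])
score = in_distribution_score([0.9, 0.1], [[1.0, 0.0], [0.0, 1.0]])
```

The key point is that the two quantities are separate: calibration reshapes the probabilities over known labels, while the in-distribution score says whether those labels apply at all.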

Installation¤

We recommend installing EQUINE inside a virtual environment such as Anaconda, as the pytorch installation instructions also suggest. EQUINE has relatively few dependencies beyond torch.

pip install equine

Users interested in contributing should refer to CONTRIBUTING.md for details.

Design¤

EQUINE extends pytorch's nn.Module interface with a predict method that returns both the class predictions and the extra out-of-distribution (OOD) scores.
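The shape of that interface can be sketched with a minimal torch-free mock. The class and field names here (ToyEquineModel, EquineOutput-style tuple, the low-confidence OOD proxy) are hypothetical stand-ins chosen for illustration, not EQUINE's real types or method signatures.

```python
import math
from typing import NamedTuple, Sequence

class PredictOutput(NamedTuple):
    """Bundle returned by predict (illustrative names): per-class
    probabilities plus one out-of-distribution score per example."""
    classes: Sequence[Sequence[float]]   # class probabilities
    ood_scores: Sequence[float]          # higher = more likely OOD

class ToyEquineModel:
    """Stand-in for an nn.Module subclass: forward() produces raw
    logits, predict() adds an OOD score alongside the class output."""

    def forward(self, batch):
        # A real model would run the network; this mock echoes inputs.
        return batch

    def predict(self, batch):
        probs = [self._softmax(x) for x in self.forward(batch)]
        # A flat distribution is used here as a crude OOD proxy.
        ood = [1.0 - max(p) for p in probs]
        return PredictOutput(classes=probs, ood_scores=ood)

    @staticmethod
    def _softmax(logits):
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        s = sum(exps)
        return [e / s for e in exps]

out = ToyEquineModel().predict([[3.0, 0.0], [0.1, 0.0]])
```

Calling predict instead of forward is the design hook: downstream code gets the OOD score in the same call that produces the label probabilities, rather than bolting uncertainty on afterwards.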

Disclaimer¤

DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

© 2024 MASSACHUSETTS INSTITUTE OF TECHNOLOGY

  • Subject to FAR 52.227-11 – Patent Rights – Ownership by the Contractor (May 2014)
  • SPDX-License-Identifier: MIT

This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.

The software/firmware is provided to you on an As-Is basis.