KEY FEATURES & CAPABILITIES


Easy Installation

Easy to install, and easy to integrate AUROC and AUPRC optimization into training pipelines built on popular deep learning frameworks such as PyTorch and TensorFlow.

Large-scale Learning

Robust strategies for large-scale optimization that handle various types of data and keep the optimization smooth and stable.

Distributed Training

Support for various distributed learning methods that improve training efficiency and help preserve data privacy.

ML Benchmarks

LibAUC provides a collection of imbalanced-classification benchmarks for various applications, together with an easy-to-use data pipeline.


WHY AUC OPTIMIZATION?



Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. There are several benefits of maximizing the AUC score over minimizing standard losses such as cross-entropy.

In many domains (e.g., medical diagnosis), the AUC score is the default metric for evaluating and comparing different methods, so directly maximizing the AUC score can potentially lead to the largest improvement in the model's performance. Moreover, many real-world datasets are imbalanced (e.g., the number of malignant cases is usually much smaller than the number of benign cases). AUC is better suited to imbalanced data distributions because maximizing AUC aims to rank the prediction score of any positive example higher than that of any negative example.
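
As a toy illustration of this point (a minimal sketch, not LibAUC code; it assumes numpy and scikit-learn are installed), accuracy can look excellent on imbalanced data while AUROC exposes that the positive example is ranked below every negative:

# On imbalanced data, accuracy can look great while AUROC reveals poor ranking.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# 1 positive (e.g., malignant) among 9 negatives (benign)
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])

# Model A: gives the positive the lowest score of all
scores_a = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1])
# Model B: ranks the positive above every negative
scores_b = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.9])

print(accuracy_score(y_true, scores_a > 0.5))  # 0.9 -- looks good, but the positive is missed
print(roc_auc_score(y_true, scores_a))         # 0.0 -- positive ranked below all negatives
print(roc_auc_score(y_true, scores_b))         # 1.0 -- positive ranked above all negatives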






REAL-WORLD APPLICATIONS

Explore DAM through real-world applications in medical imaging and drug discovery.



CheXpert

CheXpert is a large dataset of chest X-rays and a competition for automated chest X-ray interpretation. Our Deep AUC (ROC) Maximization method achieved 1st place in the Stanford CheXpert Competition, organized by Andrew Ng's ML group, in August 2020. The competition task is to automatically detect diseases from chest X-ray images.

Melanoma

Melanoma is a deadly skin cancer. For detecting melanoma from skin images, our Deep AUC (ROC) Maximization method performs much better than standard deep learning methods that optimize a class-weighted imbalanced loss. We achieved state-of-the-art performance on the 2020 Kaggle Melanoma competition, improving on the winner's result by 0.2%.

Drug Discovery

COVID-19 presents many health challenges beyond the virus itself. Our LibAUC (AUROC and AUPRC optimization) helped the team achieve 1st place in the MIT AI Cures Open Challenge, whose goal is to predict antibacterial properties for fighting secondary effects of COVID-19. Our AUC maximization algorithms improve AUROC by more than 3% and AUPRC by more than 5% over the baseline models.

Stroke

Stroke is the second leading cause of death globally, responsible for approximately 11% of total deaths. We collaborate with the University of Iowa Hospitals & Clinics (UIHC) to build AI models for predicting stroke from CT perfusion data. Our deep AUC maximization method improves on the baseline models by 4% for detecting stroke on an internal dataset.

Tissue

Identifying metastatic tissue in microscopic images is a challenging diagnostic task, even for pathologists. Building an automated AI detection system is essential for places that lack pathology services. Our deep AUC maximization methods achieve an improvement of 3% over baseline methods on the PatchCamelyon dataset.


HANDS-ON TUTORIALS

Get started with our hands-on examples.



# install by pip
$ pip install libauc

# check latest version 
$ python -c "import libauc; print(libauc.__version__)"
				
>>> # import the AUCM loss and the PESG optimizer
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
>>> ...
>>> # define the loss and optimizer (constructor arguments omitted here)
>>> model = model.cuda()
>>> Loss = AUCMLoss()
>>> optimizer = PESG()
>>> ...
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward(retain_graph=True)
...     optimizer.step()
>>> ...
>>> # restart stage: periodically update the regularizer reference
>>> optimizer.update_regularizer()
>>> ...
>>> # evaluation
>>> model.eval()
>>> for data, targets in testloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
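
To report an AUROC number from the evaluation loop above, one common pattern (a sketch, not a LibAUC API; it assumes scikit-learn is installed and that the model outputs one logit per example) is to collect scores and labels over the test set and score them once:

# Hypothetical evaluation sketch: collect predictions and compute AUROC with scikit-learn.
import torch
import numpy as np
from sklearn.metrics import roc_auc_score

model.eval()
all_preds, all_targets = [], []
with torch.no_grad():
    for data, targets in testloader:
        data = data.cuda()
        preds = torch.sigmoid(model(data))          # convert logits to scores in [0, 1]
        all_preds.append(preds.cpu().numpy())
        all_targets.append(targets.numpy())

test_auroc = roc_auc_score(np.concatenate(all_targets).ravel(),
                           np.concatenate(all_preds).ravel())
print("Test AUROC:", test_auroc)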
>>> # import the AP loss and the SOAP optimizer
>>> from libauc.losses import APLoss_SH
>>> from libauc.optimizers import SOAP_SGD
>>> ...
>>> # define the loss and optimizer (constructor arguments omitted here)
>>> model = model.cuda()
>>> Loss = APLoss_SH()
>>> optimizer = SOAP_SGD()
>>> ...
>>> # training (the dataloader also yields each sample's index)
>>> model.train()
>>> for index, data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets, index)
...     optimizer.zero_grad()
...     loss.backward(retain_graph=True)
...     optimizer.step()
>>> ...
>>> # evaluation
>>> model.eval()
>>> for data, targets in testloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
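
Note that the AUPRC example iterates over (index, data, targets): the index of each sample is passed to the loss so that per-example running estimates can be maintained across iterations. If your dataset only returns (data, target), a small wrapper like the hypothetical one below (plain PyTorch, not a LibAUC class) is one way to expose indices:

# Hypothetical wrapper (not part of LibAUC): makes a map-style dataset
# return (index, data, target) so index-dependent losses can track per-sample state.
from torch.utils.data import Dataset, DataLoader

class IndexedDataset(Dataset):
    def __init__(self, base_dataset):
        self.base = base_dataset

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        data, target = self.base[i]
        return i, data, target

# usage: wrap an existing dataset before building the trainloader, e.g.
# trainloader = DataLoader(IndexedDataset(train_dataset), batch_size=64, shuffle=True)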






RESEARCH

Check out our research papers on AUC optimization.





If you use LibAUC in your work, please cite the following papers for our library. If you have any questions, please reach out to Zhuoning Yuan and Prof. Tianbao Yang.

@inproceedings{yuan2021robust,
	title={Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
	author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
	booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
	year={2021}
	}

@article{qi2021stochastic,
	title={Stochastic Optimization of Area Under Precision-Recall Curve for Deep Learning with Provable Convergence},
	author={Qi, Qi and Luo, Youzhi and Xu, Zhao and Ji, Shuiwang and Yang, Tianbao},
	journal={Thirty-fifth Conference on Neural Information Processing Systems},
	year={2021}
	}