News & Announcements

Latest Updates

  • LibAUC @ v1.4.0

    We have released LibAUC 1.4.0 at long last! In this version, we bring new features to the library and provide more tutorials on our documentation website at docs.libauc.org. Please check the latest release note for more details! Thank you!
  • LibAUC @ v1.3.0

    We are thrilled to release LibAUC 1.3.0! In this version, we have made improvements and added new features to the library. We have released a new documentation website at docs.libauc.org, where you can browse our code documentation and tutorials. We are also happy to announce that our LibAUC paper has been accepted by KDD2023! Please check the latest release note for more details! Thank you!
  • LibAUC @ v1.2.0

    LibAUC 1.2.0 is released! In this version, we have added more losses and optimizers and made some performance improvements. Please check the release note and tutorials for more details! Thank you!

  • Tutorial @ CVPR2022!

    Prof. Yang gave a tutorial on Deep AUC Maximization at CVPR 2022! Please check the website and slides for more details!

  • 7 Papers @ ICML2022!

    Our group has 7 papers on optimization for ML/AI accepted at ICML2022, including partial AUC maximization, top-K NDCG maximization, contrastive loss optimization, and more.

  • Achievement @ 2022!

    LibAUC has been widely used around the world by researchers, engineers, and students from the US, China, Japan, South Korea, Germany, India, Canada, Turkey, Kenya, and the United Arab Emirates to solve challenging real-world problems! We will continue to improve our library by fixing bugs and adding new features. Please stay tuned!

  • Spotlight Paper @ ICLR2022!

    Our paper "Compositional Training for End-to-End Deep AUC Maximization" is accepted by ICLR2022 as Spotlight! Code will be released in our package and tutorials are available here.

  • 1st place @ OGB Graph Property Prediction Challenge!

    Our DeepAUC algorithms achieve 1st place on the OGB challenge (MolHIV)! The methods are based on the AUC-Margin loss. The leaderboard can be found here!

  • Paper @ NeurIPS2021!

    Our paper "Stochastic Optimization of Area Under Precision-Recall Curve for Deep Learning with Provable Convergence" is accepted by NeurIPS2021! Code has been released in our package and tutorials are available here.

  • LibAUC v1.1.6 is released!

    What's New:
    • Added support for multi-label training. A tutorial for training on CheXpert is available here!
    • Fixed some bugs and improved the training stability
  • LibAUC v1.1.5 is released!

    What's New:
    • Added a PyTorch dataloader for the CheXpert dataset. A tutorial for training on CheXpert is available here!
    • Added support for training AUC losses on CPU machines. Note: remove any .cuda() calls from the code when running on CPU (a device-agnostic sketch follows this list).
    • Fixed some bugs and improved the training stability
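
    For CPU training, the key change is to pick the device at runtime instead of hard-coding .cuda() calls. The snippet below is a minimal sketch of this pattern; the libauc import paths and the AUCMLoss/PESG constructor arguments are assumptions that may differ across LibAUC versions, so please follow the tutorials for the exact API.

    ```python
    # Minimal device-agnostic sketch (illustrative only; AUCMLoss/PESG arguments
    # are assumptions and may differ between LibAUC versions -- see the tutorials).
    import torch
    from libauc.losses import AUCMLoss      # assumed import path
    from libauc.optimizers import PESG      # assumed import path

    # Choose the device at runtime instead of calling .cuda() explicitly.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(16, 1).to(device)                      # toy model
    loss_fn = AUCMLoss()                                           # AUC-Margin loss
    optimizer = PESG(model.parameters(), loss_fn=loss_fn, lr=0.1)  # illustrative args

    x = torch.randn(32, 16, device=device)                    # dummy features
    y = torch.randint(0, 2, (32, 1), device=device).float()   # dummy binary labels

    pred = torch.sigmoid(model(x))
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```
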
  • Paper @ ICCV2021!

    Our paper "Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification" is accepted by ICCV2021! Code has been released in our package and tutorials are available here.

  • Talk @ Google!

    Dr. Yang is invited to give a talk "Deep AUC Maximization" at Google! Check the slides here.

  • LibAUC v1.1.3 is released!

    What's New:
    • Added a new SOAP optimizer (contributed by Qi Qi and Zhuoning Yuan) for optimizing AUPRC. Please check the tutorial here.
    • Updated ResNet18 and ResNet34 with models pretrained on ImageNet1K
    • Added a new strategy for the AUCM Loss: imratio is calculated over each mini-batch if an initial value is not given (see the sketch after this list)
    • Fixed some bugs and improved the training stability
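
    To illustrate the new imratio behavior, the two constructions below are a hedged sketch (the exact AUCMLoss signature may vary across versions): passing imratio fixes the imbalance ratio up front, while omitting it lets the loss estimate the ratio from each mini-batch.

    ```python
    # Illustrative sketch of the imratio behavior (assumed API; see the tutorials).
    from libauc.losses import AUCMLoss      # assumed import path

    loss_fixed     = AUCMLoss(imratio=0.1)  # imbalance ratio supplied by the user
    loss_minibatch = AUCMLoss()             # ratio estimated per mini-batch (v1.1.3+)
    ```
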
  • Talk @ VALSE

    Dr. Yang, Dr. Liu, and Zhuoning are invited to give a talk "Deep AUC Maximization: Algorithms and Applications" at VALSE (Vision And Learning SEminar)! Please check here for details!

  • 1st Place @ MIT AI Cures open challenge for COVID-19

    In collaboration with the DIVE lab at TAMU led by Prof. Shuiwang Ji, our AUC maximization algorithms (including AUROC and AUPRC) helped the team achieve 1st place in the MIT AI Cures Challenge. Our AUC maximization algorithms improve AUROC by 3%+ and AUPRC by 5%+ over the baseline models. The MIT AI Cures Challenge aims to improve machine learning models for predicting antibacterial properties, which can help fight secondary effects of COVID. Great work by the team members, especially Youzhi Luo@TAMU, Zhao Xu@TAMU, Qi Qi@UIowa, and Zhuoning Yuan@UIowa. For the MIT AI Cures Challenge, please check here. Code for AUPRC maximization will be released soon.

  • LibAUC v1.1.0 is released!

    What's New:
    • Fixed some bugs and improved the training stability
    • Changed the default setting in the loss function so that binary labels are 0 and 1
    • Added PyTorch dataloaders for CIFAR10, CIFAR100, CAT_vs_Dog, STL10
    • Enabled training DAM with PyTorch learning rate schedulers, e.g., ReduceLROnPlateau and CosineAnnealingLR (see the sketch below)
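
    Since the LibAUC optimizers follow the standard torch.optim.Optimizer interface, they can be stepped by the usual PyTorch learning rate schedulers. The sketch below is illustrative only; the AUCMLoss/PESG constructor arguments are assumptions and may differ across LibAUC versions.

    ```python
    # Illustrative sketch: pairing a LibAUC optimizer with a PyTorch LR scheduler.
    # AUCMLoss/PESG arguments are assumptions; see the official tutorials.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from libauc.losses import AUCMLoss      # assumed import path
    from libauc.optimizers import PESG      # assumed import path

    # Dummy data so the sketch is self-contained.
    x_all = torch.randn(256, 16)
    y_all = torch.randint(0, 2, (256, 1)).float()
    train_loader = DataLoader(TensorDataset(x_all, y_all), batch_size=32, shuffle=True)

    model = torch.nn.Linear(16, 1)
    loss_fn = AUCMLoss()
    optimizer = PESG(model.parameters(), loss_fn=loss_fn, lr=0.1)  # illustrative args
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

    for epoch in range(100):
        for x, y in train_loader:
            pred = torch.sigmoid(model(x))
            loss = loss_fn(pred, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()                    # decay the learning rate once per epoch
    ```
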
  • Paper @ ICML2021!

    Our paper "Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity" is accepted by ICML2021!

  • Paper released

    Our paper "Stochastic Optimization of Area Under Precision-Recall Curve for Deep Learning with Provable Convergence" for AUPRC optimization is available on arXiv now.

  • Package released

    LibAUC v1.0.0 is now available! Check out our latest example in PyTorch!

  • Paper released

    Our "Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification" for AUC optimization is available on arXiv now.

  • Talk @ ISU

    Dr. Yang is invited to give a talk "Deep AUC Maximization and Applications in Medical Image Classification" at ISU in November!

  • Talk @ RPI

    Dr. Yang is invited to give a talk "Deep AUC Maximization and Applications in Medical Image Classification" at RPI!

  • Talk @ ICONIP

    Dr. Yang is invited to give a talk "Deep AUC Maximization and Applications in Medical Image Classification" at ICONIP 2020!

  • Competition @ CheXpert

    Our deep AUC maximization method achieves 1st place in the CheXpert competition (our team name is DeepAUC-v1 ensemble), organized by the ML group at Stanford University!

  • Competition @ Kaggle

    Our deep AUC maximization method achieves a top 1% rank (33 of 3,314) in the SIIM Melanoma Classification Competition hosted on Kaggle!