Changelog¶
v0.2.8 (25/05/2023)¶
Bump version to V0.2.8 with new features ported from MMPreTrain.
New Features¶
Support more backbone architectures, including MobileNetV3, EfficientNetV2, HRNet, CSPNet, LeViT, MobileViT, DaViT, and MobileOne.
Support CIFAR-100 benchmarks of Metaformer architectures and Mixup variants with Transformers, detailed in cifar100/advanced and cifar100/mixups. Models and logs of various CIFAR-100 mixup benchmarks are being updated.
Bug Fixes¶
Fix the by_epoch setting in CustomSchedulerHook and update DecoupleMix in soft_mix_cross_entropy to support label smoothing settings (a minimal config sketch follows).
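As a minimal sketch of the fixed setting, the hook would be registered to step per iteration rather than per epoch; only the hook name and the by_epoch field come from the entry above, the rest follows generic mmcv hook conventions:

```python
# Minimal sketch: register the custom scheduler hook to step every
# iteration instead of every epoch. Only `type` and `by_epoch` are taken
# from the changelog entry above; other fields would be method-specific.
custom_hooks = [
    dict(type='CustomSchedulerHook', by_epoch=False),
]
```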
v0.2.7 (16/12/2022)¶
Bump version to V0.2.7 with new features as in #35. New features of OpenMixup v0.2.7 are summarized in issue #36.
Code Refactoring¶
Refactor openmixup.core (replacing openmixup.hooks) and openmixup.models.augments (containing the mixup augmentation methods originally implemented in openmixup.models.utils). After this refactoring, the macro design of OpenMixup is similar to most MMLab projects.
Support deployment of ONNX and TorchScript in openmixup.core.export and tools/deployment. We refactored the abstract class BaseModel (implemented in openmixup/models/classifiers/base_model.py) to support forward_inference (for custom inference and visualization), and also refactored openmixup.models.heads and openmixup.models.losses to support forward_inference. You can deploy the classification models in OpenMixup according to the deployment tutorials.
Support testing API methods in openmixup/apis/test.py for evaluation and deployment of classification models.
Refactor openmixup.core.optimizers to separate optimizers from builders and to support the latest Adan optimizer.
Refactor mixup_classification.py to support label mixup methods, add return_mask for mixup methods in augments, and add return_attn in the ViT backbone.
Refactor ValidateHook to support new features of EvalHook in mmcv, e.g., save_best="auto" during training (see the config sketch after this list).
Refactor ClsHead with BaseClsHead to support the MLP classification head variants used in modern network architectures.
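As a minimal sketch of the mmcv-style behavior mentioned above (assuming the refactored ValidateHook mirrors EvalHook's config fields; the exact keys in OpenMixup may differ):

```python
# Assumed mmcv-style evaluation config: with save_best='auto', the hook
# keeps the checkpoint with the best value of the first returned metric.
evaluation = dict(interval=1, metric='accuracy', save_best='auto')
```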
New Features¶
Support detailed usage instructions in the READMEs of config files for image classification methods in configs/classification, e.g., mixups on ImageNet. READMEs of other methods in configs/selfsup and configs/semisup will also be updated.
Refine the organization of README files according to README-Template.
Support the new mixup augmentation method (AlignMix) and provide the relevant config files for various datasets.
Refine the setup for local installation and the PyPI release in setup.py and setup.cfg. See the PyPI project of OpenMixup.
Support a new mixup method (TransMix) and provide config files in mixups/deit.
Update config files. Provide full config files of mixup methods based on ViT-T/S/B on ImageNet and update RSB A3 config files for popular backbones.
Update target_generators to support the latest MIM pre-training methods (with fixed requirements).
Update config files and scripts for SSL downstream task benchmarks (classification, detection, and segmentation).
Update and fix bugs in the visualization tools (vis_loss_landscape), and fix the model converter tools.
Support Semantic-Softmax loss and ImageNet-21K-P (Winter) pre-training.
Support more backbone architectures, including BEiT, MetaFormer, ConvNeXtV2, VanillaNet, and CoC.
Update Documents¶
Update documents of mixup benchmarks on ImageNet in Model_Zoo_sup.md. Update config files for supported mixup methods.
Update the formats (figures, introductions, and tables of contents) of the awesome lists in Awesome Mixups and Awesome MIM and provide the latest methods (updated to 18/03/2023).
Update api, which describes the overall code structure, in docs/en/api for the readthedocs page.
Reorganize and update tutorials for SSL downstream task benchmarks (classification, detection, and segmentation).
v0.2.6 (14/09/2022)¶
Bump version to V0.2.6 with new features as in #20. New features and documents of OpenMixup v0.2.6 are summarized in issue #24; fixes address issue #25, issue #26, issue #27, issue #31, and issue #33.
New Features¶
Support new backbone architectures (EdgeNeXt, EfficientFormer, HorNet, MogaNet, MViT.V2, ShuffleNet.V1, DeiT-3) and provide the relevant network modules in models/utils/layers. Config files and README.md are updated.
Support the new self-supervised method BEiT with ViT-Base on ImageNet-1K, and fix bugs of CAE, MaskFeat, and SimMIM in Dataset, Model, and Head. Note that we added the HOG feature implementation borrowed from the original repo for MaskFeat. Update pre-training and fine-tuning config files and documents for the relevant masked image modeling (MIM) methods (BEiT, MaskFeat, CAE, and A2MIM). Support more fine-tuning settings on ImageNet for MIM pre-training based on various backbones (e.g., ViTs, ResNets, ConvNeXts).
Fix the updated arXiv.V2 version of VAN by adding architecture configurations.
Support ArcFace loss for metric learning and the relevant NormLinearClsHead, and support SeeSaw loss for long-tailed classification tasks.
Update the issue template with more relevant links and emojis.
Support the Grad-CAM visualization tool vis_cam.py for supported architectures.
Update Documents¶
Update our OpenMixup tech report on arXiv, which provides more technical details and benchmark results.
Update the self-supervised learning Model_Zoo_selfsup.md, and update documents of the new backbone and self-supervised methods.
Update the supervised learning Model_Zoo_sup.md as provided in AutoMix and add more mixup benchmark results.
Update the template and add the latest paper lists of mixup and MIM methods in Awesome Mixups and Awesome MIM. We provide teaser figures of most papers as illustrations.
Update documents of tools.
Bug Fixes¶
Fix the error notification raised by torch.fft for PyTorch 1.6 or lower versions in backbones and heads (see the version-guard sketch after this list).
Fix README.md (new icons, fixing typos) and support pytest in tests.
Fix the classification heads and update the implementations and config files of AlexNet and InceptionV3.
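A minimal sketch of the kind of version guard behind this fix, assuming the goal is a clear error on PyTorch 1.6 or lower, where the torch.fft module (introduced in PyTorch 1.7) is unavailable:

```python
import torch

# On PyTorch 1.6 or lower, torch.fft is a function rather than a module,
# so module-style calls such as torch.fft.fft2 raise AttributeError.
# Checking up front turns that into an actionable error message.
if not hasattr(torch.fft, 'fft2'):
    raise RuntimeError(
        'This module requires the torch.fft module (PyTorch >= 1.7); '
        'please upgrade PyTorch to use FFT-based backbones and heads.')
```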
v0.2.5 (21/07/2022)¶
Bump version to V0.2.5 with new features and updated documents as in #10. Update features and fix bugs in V0.2.5 as in #17. Update features and documents in V0.2.5 as in #18 and #19.
New Features¶
Support new attention mechanisms in backbone architectures (Anti-Oversmoothing, FlowAttention in FlowFormer, and PoolAttention in MViTv2).
Update code integration testing in tests.
Update Documents¶
Reorganize READMEs for various methods.
Update Awesome Mixups and Awesome MIM.
Update get_started.md and tutorials for better usage of OpenMixup.
Update mixup benchmarks in model_zoos, providing configs, weights, and more details.
Add the latest methods to Awesome Mixups and Awesome MIM.
Update README.md and fix auto_train_mixups.py for various datasets.
Bug Fixes¶
Fix bugs that caused degenerate performance of pure Transformer backbones (DeiT and Swin) in OpenMixup. The main reason might be the old versions of the auto_fp16 and DistOptimizerHook implementations, since PyTorch>=1.6.0 has better support for fp16 training than mmcv.
Fix the bug of ViT fine-tuning for MIM methods (e.g., MAE, SimMIM): the original MIMVisionTransformer in openmixup.models.mim_vit froze all the backbone parameters during fine-tuning.
Fix the weight initialization of Transformer-based architectures (e.g., ViT, Swin) to reproduce the train-from-scratch performance, and update weight initialization, parameter-wise weight decay, and fp16 settings in the relevant config files.
v0.2.3 (17/06/2022)¶
Support new features as in #6.
New Features¶
Support the online document of OpenMixup (built on Read the Docs).
Provide READMEs and update configs for self-supervised and supervised methods.
Support new Masked Image Modeling (MIM) methods (A2MIM, CAE).
Support new backbone networks (DenseNet, ResNeSt, PoolFormer, UniFormer).
Support a new fine-tuning method (HCR).
Support new mixup augmentation methods (SmoothMix, GridMix).
Support more regression losses (Focal L1/L2 loss, Balanced L1 loss, Balanced MSE loss).
Support more regression metrics (regression errors and correlations) and the regression dataset.
Support more reweighting classification losses (Gradient Harmonized loss, Varifocal Focal Loss) from MMDetection.
Bug Fixes¶
Refactor the code structure of openmixup.models.utils and support more network layers.
Fix the bug of DropPath (using the stochastic depth rule) in ResNet for RSB A1/A2 training settings (a sketch of the rule follows this list).
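The stochastic depth rule mentioned above makes the DropPath probability grow linearly with block depth. A minimal sketch (the helper name is illustrative, not the repo's exact API):

```python
# Linearly scale the drop-path rate across blocks: the shallowest block is
# never dropped and the deepest block is dropped at the full rate.
def linear_drop_path_rates(drop_path_rate, num_blocks):
    return [drop_path_rate * i / max(num_blocks - 1, 1)
            for i in range(num_blocks)]

# e.g., linear_drop_path_rates(0.1, 5) -> [0.0, 0.025, 0.05, 0.075, 0.1]
```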
v0.2.2 (24/05/2022)¶
Support new features and finish code refactoring as in #5.
Highlight¶
Support more self-supervised methods (Barlow Twins and Masked Image Modeling methods).
Support popular backbones (ConvMixer, MLPMixer, VAN) based on MMClassification.
Support more regression losses (Charbonnier loss and Focal Frequency loss).
Bug Fixes¶
Fix bugs in self-supervised classification benchmarks (configs and implementations of VisionTransformer).
Update INSTALL.md. We suggest installing PyTorch 1.8 or higher and mmcv-full for better usage of this repo. Note that PyTorch 1.8 has bugs in the AdamW optimizer (do not use PyTorch 1.8 to fine-tune ViT-based methods).
Fix bugs in PreciseBNHook (update all BN stats) and RepeatSampler (set sync_random_seed).
Fix bugs in regression metrics, the MIM dataset, and benchmark configs. Note that only l1_loss is supported in FP16 training; other regression losses (e.g., MSE and Smooth_L1 losses) will cause NaN when the target and prediction are not normalized in FP16 training (see the config sketch after this list).
We suggest installing PyTorch 1.8 or higher (required by some self-supervised methods) and mmcv-full for better usage of this repo. Do not use PyTorch 1.8 to fine-tune ViT-based methods; you can still use PyTorch 1.6 for supervised classification methods.
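As a hedged illustration of the FP16 caveat above (the loss type and mode fields are hypothetical placeholders in mmcv-config style, not the repo's exact schema):

```python
# Under fp16 training, prefer l1_loss for regression: MSE or Smooth-L1 on
# unnormalized targets can overflow half precision and produce NaN.
loss = dict(type='RegressionLoss', mode='l1_loss')  # safe under fp16
fp16 = dict(loss_scale='dynamic')  # mmcv-style mixed-precision switch
```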
v0.2.0 (31/03/2022)¶
Support new features and finish code refactoring as in #3.
New Features¶
Support various popular backbones (ConvNets and ViTs), various image datasets, popular mixup methods, and benchmarks for supervised learning. Config files are available.
Support popular self-supervised methods (e.g., BYOL, MoCo.V3, MAE) on both large-scale and small-scale datasets, and self-supervised benchmarks (merged from MMSelfSup). Config files are available.
Support analyzing tools for self-supervised learning (kNN/SVM/linear metrics and t-SNE/UMAP visualization).
Convenient usage of configs: fast config generation by auto_train.py and config inheritance (MMCV); see the sketch after this list.
Support mixed-precision training (NVIDIA Apex or MMCV Apex) for all methods.
Model Zoos and lists of Awesome Mixups have been released.
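A minimal sketch of the MMCV-style config inheritance mentioned above (the base file paths are illustrative):

```python
# Child config: inherit shared settings and override only what differs.
_base_ = [
    '../_base_/models/resnet50.py',
    '../_base_/datasets/imagenet.py',
]
# Overrides merge recursively into the inherited base dicts.
model = dict(backbone=dict(depth=101))
```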
Bug Fixes¶
Complete code refactoring following MMSelfSup and MMClassification.
v0.1.3 (25/03/2022)¶
Refactor code structures for vision transformers and self-supervised methods (e.g., MoCo.V3 and MAE).
Provide online analysis of self-supervised methods (kNN metrics and t-SNE/UMAP visualization).
More results are provided in Model Zoos.
Bug Fixes¶
Fix bugs in the reuse of configs, ViTs, visualization tools, etc. This requires rebuilding OpenMixup (install mmcv-full).
v0.1.2 (20/03/2022)¶
New Features¶
Refactor code structures according to MMSelfSup to fit higher versions of mmcv and PyTorch.
Support self-supervised methods and optimize config structures.
v0.1.1 (15/03/2022)¶
New Features¶
Support various popular backbones (ConvNets and ViTs) and update config files.
Support various hand-crafted and optimization-based methods (e.g., PuzzleMix, AutoMix, SAMix, DecoupleMix). Config file generation for mixup methods is supported.
Provide supervised image classification benchmarks in model_zoo and results (being updated).
Bug Fixes¶
Fix bugs in new mixup methods (e.g., gco for PuzzleMix).
v0.1.0 (22/01/2022)¶
New Features¶
Support various popular backbones (popular ConvNets and ViTs).
Support mixed precision training (NVIDIA Apex or MMCV Apex).
Support supervised, self- & semi-supervised learning methods and benchmarks.
Support fast config generation from a basic config file by auto_train.py.
Bug Fixes¶
Fix bugs of code refactoring (backbones, fp16 training, etc.).
OpenSelfSup (v0.3.0, 14/10/2020) Supported Features¶
This repo is originally built on OpenSelfSup (the old version of MMSelfSup) and borrows some implementations from MMClassification.
Mixed Precision Training (based on NVIDIA Apex for PyTorch 1.6).
The improved GaussianBlur doubles the training speed of MoCo V2, SimCLR, and BYOL.
More benchmarking results, including benchmarks on Places, VOC, COCO, and linear/semi-supervised benchmarks.
Fix bugs in MoCo V2 and BYOL so that the reported results are reproducible.
Provide benchmarking results and model download links.
Support updating the network every several iterations (accumulation).
Support the LARS and LAMB optimizers with Nesterov momentum (LAMB from MMClassification).
Support excluding specific parameter-wise settings from the optimizer updating.