Awesome Mixup Methods for Self- and Semi-supervised Learning

We summarize mixup methods proposed for self- and semi-supervised visual representation learning. We are also working on a survey of mixup methods, and this list is actively being updated.
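Background for the lists below: nearly all of these methods build on the classic input mixup of Zhang et al. (ICLR 2018), which replaces a training sample with a convex combination of two samples (and of their labels, in the supervised case). A minimal PyTorch sketch of that base operation:

```python
import torch

def mixup(x1, x2, alpha=1.0):
    """Classic input mixup: x_mix = lam * x1 + (1 - lam) * x2,
    with lam drawn from Beta(alpha, alpha). In supervised training
    the labels are mixed with the same lam."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x1 + (1.0 - lam) * x2, lam

# Toy usage: mix two random image batches of shape (N, C, H, W).
a, b = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
mixed, lam = mixup(a, b)
```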

Mixup Methods for Self-supervised Learning

1. MixCo, [NeurIPSW 2020] [code] MixCo: Mix-up Contrastive Learning for Visual Representation (see the contrastive-mixup sketch after this list).

2. MoCHi, [NeurIPS 2020] [code] Hard Negative Mixing for Contrastive Learning.

3. i-Mix, [ICLR 2021] [code] i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning.

4. Un-Mix, [AAAI 2022] [code] Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning.

5. BSIM, [arXiv 2020] Beyond Single Instance Multi-view Unsupervised Representation Learning.

  6. FT, [ICCV 2021] [code] Improving Contrastive Learning by Visualizing Feature Transformation.

7. m-Mix, [arXiv 2021] m-Mix: Generating Hard Negatives via Multiple Samples Mixing for Contrastive Learning.

8. PCEA, [OpenReview 2021] Piecing and Chipping: An Effective Solution for the Information-Erasing View Generation in Self-supervised Learning.

9. SAMix, [arXiv 2021] [code] Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup.

  10. MixSiam, [OpenReview 2021] MixSiam: A Mixture-based Approach to Self-supervised Representation Learning.

  11. MixSSL, [ICME 2021] Mix-up Self-Supervised Learning for Contrast-agnostic Applications.

12. CLIM, [BMVC 2021] Center-wise Local Image Mixture for Contrastive Representation Learning.
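A recurring idea in the self-supervised methods above (e.g., MixCo, i-Mix) is to treat a mixed image as a partial positive for both of its sources, turning the contrastive loss into a soft cross-entropy weighted by the mixing ratio. The sketch below illustrates that shared idea only; it is not the reference code of any listed paper, and `encoder` is a placeholder for an arbitrary backbone:

```python
import torch
import torch.nn.functional as F

def mix_contrastive_loss(encoder, x, alpha=1.0, tau=0.1):
    """Sketch of mixup-based contrastive learning (MixCo / i-Mix style):
    a mixed image is a partial positive for both of its sources, so the
    contrastive target becomes soft, weighted by the mixing ratio lam."""
    n = x.size(0)
    perm = torch.randperm(n)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1.0 - lam) * x[perm]

    z = F.normalize(encoder(x), dim=1)          # embeddings of originals
    z_mix = F.normalize(encoder(x_mix), dim=1)  # embeddings of mixtures

    log_p = F.log_softmax(z_mix @ z.t() / tau, dim=1)  # similarity to all originals
    idx = torch.arange(n)
    # Soft cross-entropy: weight lam on source i, (1 - lam) on its mixing partner.
    return -(lam * log_p[idx, idx] + (1.0 - lam) * log_p[idx, perm]).mean()

# Toy usage with a flattening linear encoder standing in for a real backbone.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
loss = mix_contrastive_loss(enc, torch.randn(8, 3, 32, 32))
```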

Mixup Methods for Semi-supervised Learning

1. MixMatch, [NeurIPS 2019] [code] MixMatch: A Holistic Approach to Semi-Supervised Learning (see the mixing-step sketch after this list).

  2. ReMixMatch, [ICLR 2020] [code] ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring.

3. Core-Tuning, [NeurIPS 2021] [code] Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning.

4. DFixMatch, [arXiv 2022] [code] Decoupled Mixup for Data-efficient Learning.
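For the semi-supervised branch, the shared ingredient is mixing labeled examples with unlabeled examples that carry guessed (sharpened) pseudo-labels. The sketch below shows just that mixing step as described in the MixMatch paper, omitting augmentation, label guessing, and loss weighting; all tensor shapes are illustrative:

```python
import torch

def mixmatch_mix(x_l, y_l, x_u, y_u_guess, alpha=0.75):
    """Mixing step shared by MixMatch-style methods: concatenate labeled
    data with unlabeled data carrying guessed labels, then apply mixup.
    MixMatch takes lam = max(lam, 1 - lam) so each mixture stays closer
    to its first component."""
    x = torch.cat([x_l, x_u], dim=0)
    y = torch.cat([y_l, y_u_guess], dim=0)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)  # keep mixtures close to the first input
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm], lam * y + (1.0 - lam) * y[perm]

# Toy usage: 4 labeled and 4 unlabeled samples, 10 classes.
x_l, x_u = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
y_l = torch.eye(10)[torch.randint(0, 10, (4,))]  # one-hot labels
y_u = torch.softmax(torch.randn(4, 10), dim=1)   # stands in for guessed labels
x_mix, y_mix = mixmatch_mix(x_l, y_l, x_u, y_u)
```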

Contribution

Feel free to send pull requests to add more links! Current contributors include: Siyuan Li (@Lupin1998) and Zicheng Liu (@pone7).
