# Awesome Mixup Methods for Self- and Semi-supervised Learning
We summarize mixup methods proposed for self- and semi-supervised visual representation learning. We are working on a survey of mixup methods, and this list is updated continually.
To find related papers and their relationships, check out Connected Papers, which visualizes the academic field as a graph.
To export BibTeX citations, check out a paper's page on arXiv or Semantic Scholar for professional reference formats.
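For readers new to the topic: the core mixup operation (Zhang et al., ICLR'2018) that every method below builds on blends two samples and their labels with a coefficient drawn from a Beta distribution. A minimal NumPy sketch (the function name is ours, not from any listed repository):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Blend two samples and their one-hot labels with lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    # Convex combination of both the inputs and the targets.
    x_mixed = lam * x1 + (1.0 - lam) * x2
    y_mixed = lam * y1 + (1.0 - lam) * y2
    return x_mixed, y_mixed, lam
```

Setting `alpha=1.0` draws `lam` uniformly from [0, 1]; the methods listed below differ mainly in where this interpolation is applied (input, feature, or patch level) and in how the mixed targets are used.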
## Table of Contents
## Mixup for Self-supervised Learning
MixCo: Mix-up Contrastive Learning for Visual Representation
Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun
NeurIPSW’2020 [Paper] [Code]MixCo Framework
Hard Negative Mixing for Contrastive Learning
Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus
NeurIPS’2020 [Paper] [Code]MoCHi Framework
i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning
Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
ICLR’2021 [Paper] [Code]i-Mix Framework
Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning
Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing
AAAI’2022 [Paper] [Code]Un-Mix Framework
Beyond Single Instance Multi-view Unsupervised Representation Learning
Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei
BMVC’2022 [Paper]BSIM Framework
Improving Contrastive Learning by Visualizing Feature Transformation
Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen
ICCV’2021 [Paper] [Code]FT Framework
Piecing and Chipping: An Effective Solution for the Information-Erasing View Generation in Self-supervised Learning
Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng
OpenReview’2021 [Paper]PCEA Framework
Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing
Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
NeurIPS’2021 [Paper] [Code]CoMix Framework
Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li
ArXiv’2021 [Paper] [Code]SAMix Framework
MixSiam: A Mixture-based Approach to Self-supervised Representation Learning
Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du
OpenReview’2021 [Paper]MixSiam Framework
Mix-up Self-Supervised Learning for Contrast-agnostic Applications
Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann
ICME’2021 [Paper]MixSSL Framework
Towards Domain-Agnostic Contrastive Learning
Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le
ICML’2021 [Paper]DACL Framework
Center-wise Local Image Mixture For Contrastive Representation Learning
Hao Li, Xiaopeng Zhang, Hongkai Xiong
BMVC’2021 [Paper]CLIM Framework
Contrastive-mixup Learning for Improved Speaker Verification
Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke
ICASSP’2022 [Paper]Mixup Framework
ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning
Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li
ICML’2022 [Paper] [Code]ProGCL Framework
M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning
Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang
KDD’2022 [Paper] [Code]M-Mix Framework
A Simple Data Mixing Prior for Improving Self-Supervised Learning
Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie
CVPR’2022 [Paper] [Code]SDMP Framework
On the Importance of Asymmetry for Siamese Representation Learning
Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen
CVPR’2022 [Paper] [Code]ScaleMix Framework
VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix
Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo
ICML’2022 [Paper]VLMixer Framework
CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping
Junlin Han, Lars Petersson, Hongdong Li, Ian Reid
ArXiv’2022 [Paper] [Code]CropMix Framework
i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?
Kevin Zhang, Zhiqiang Shen
ArXiv’2022 [Paper] [Code]i-MAE Framework
MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers
Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li
CVPR’2023 [Paper] [Code]MixMAE Framework
Mixed Autoencoder for Self-supervised Visual Representation Learning
Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung
CVPR’2023 [Paper]MixedAE Framework
Inter-Instance Similarity Modeling for Contrastive Learning
Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang
ArXiv’2023 [Paper] [Code]PatchMix Framework
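A recurring idea across the contrastive entries above (e.g. MixCo and i-Mix) is that a query built from a mixture of two images should match a mixed, soft InfoNCE target over the two corresponding keys. A hedged sketch of such a soft-target loss, assuming L2-normalized embeddings; the function name and signature are ours, not from any single paper's code:

```python
import numpy as np

def soft_infonce(z_mix, z_keys, i, j, lam, tau=0.1):
    """Cross-entropy between softmax similarities of a mixed query and the
    soft target putting mass lam on key i and (1 - lam) on key j."""
    logits = z_mix @ z_keys.T / tau        # (N,) similarities to all keys
    logits = logits - logits.max()         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -(lam * log_probs[i] + (1.0 - lam) * log_probs[j])
```

With `lam = 1.0` this reduces to the standard InfoNCE loss against key `i`; the listed methods differ in how the mixed queries and negatives are generated (input mixing, feature mixing, hard-negative mixing).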
## Mixup for Semi-supervised Learning
MixMatch: A Holistic Approach to Semi-Supervised Learning
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel
NeurIPS’2019 [Paper] [Code]MixMatch Framework
Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy
Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu
ArXiv’2019 [Paper]Pani VAT Framework
ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel
ICLR’2020 [Paper] [Code]ReMixMatch Framework
DivideMix: Learning with Noisy Labels as Semi-supervised Learning
Junnan Li, Richard Socher, Steven C.H. Hoi
ICLR’2020 [Paper] [Code]DivideMix Framework
Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
NeurIPS’2021 [Paper] [Code]Core-Tuning Framework
MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection
JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak
CVPR’2022 [Paper] [Code]MUM Framework
Decoupled Mixup for Data-efficient Learning
Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li
NeurIPS’2023 [Paper] [Code]DFixMatch Framework
Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise
Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi
ArXiv’2023 [Paper] [Code]MixEMatch Framework
LaserMix for Semi-Supervised LiDAR Semantic Segmentation
Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu
CVPR’2023 [Paper] [Code] [Project]LaserMix Framework
Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation
Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong
ArXiv’2023 [Paper]DCPA Framework
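On the semi-supervised side, MixMatch-style methods first temperature-sharpen the averaged predictions on unlabeled data, then apply the usual mixup interpolation to the combined labeled and pseudo-labeled batch. A minimal sketch of the sharpening step (the function name is ours):

```python
import numpy as np

def sharpen(p, T=0.5):
    """Temperature-sharpen a predicted class distribution, as in MixMatch:
    raise to 1/T, then renormalize."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)
```

Lower `T` pushes the distribution toward one-hot, reducing the entropy of pseudo-labels before they are mixed with labeled examples.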
## Contribution
Feel free to send pull requests to add more links in the following Markdown format. Note that the abbreviation, the code link, and the figure link are optional attributes. Current contributors include: Siyuan Li (@Lupin1998) and Zicheng Liu (@pone7).
    * **TITLE**<br>
    *AUTHOR*<br>
    PUBLISH'YEAR [[Paper](link)] [[Code](link)]
    <details close>
    <summary>ABBREVIATION Framework</summary>
    <p align="center"><img width="90%" src="link_to_image" /></p>
    </details>