
Academic Events

Colloquium

Contrastive Representation Learning with Rényi Divergence

Registered: 2022-10-31

https://icim.nims.re.kr/post/event/945

  • Speaker: Prof. Jinwoo Shin (KAIST)
  • Period: 2022-10-20 ~ 2022-10-20
  • Venue: Seminar Room, Innovation Center for Industrial Mathematics, Pangyo Techno Valley
  • Host: Innovation Center for Industrial Mathematics

1. Date and time: Thursday, October 20, 2022, 14:00-16:00

2. Venue: Seminar Room, Innovation Center for Industrial Mathematics, Pangyo Techno Valley

  • National Institute for Mathematical Sciences, Room 231, Enterprise Support Hub, 815 Daewangpangyo-ro, Sujeong-gu, Seongnam-si, Gyeonggi-do

3. Speaker: Prof. Jinwoo Shin (KAIST)

4. Abstract: Contrastive Representation Learning with Rényi Divergence

Contrastive representation learning seeks to acquire useful representations by estimating the shared information between multiple views of the data. Here, the quality of the learned representations is sensitive to the choice of data augmentation: as harder data augmentations are applied, the views share more task-relevant information, but also more task-irrelevant information that can hinder the generalization capability of the representations. Motivated by this, we present a new robust contrastive learning scheme, coined RényiCL, which can effectively manage harder augmentations by utilizing Rényi divergence. Our method is built upon the variational lower bound of Rényi divergence, but naïve use of the variational method is impractical due to its large variance. To tackle this challenge, we propose a novel contrastive objective that conducts variational estimation of a skew Rényi divergence, and provide a theoretical guarantee on how variational estimation of the skew divergence leads to stable training. We show that Rényi contrastive learning objectives perform innate hard negative sampling and easy positive sampling simultaneously, so that they can selectively learn useful features and ignore nuisance features. Through experiments on ImageNet, we show that Rényi contrastive learning with stronger augmentations outperforms other self-supervised methods without extra regularization or computational overhead. This is joint work with Kyungmin Lee (KAIST).
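
For readers unfamiliar with the divergence in the title: the Rényi divergence of order α is defined as

    D_\alpha(P \,\|\, Q) = \frac{1}{\alpha-1}\,\log \mathbb{E}_{Q}\!\Big[\Big(\tfrac{dP}{dQ}\Big)^{\alpha}\Big] = \frac{1}{\alpha-1}\,\log \mathbb{E}_{P}\!\Big[\Big(\tfrac{dP}{dQ}\Big)^{\alpha-1}\Big],

and one standard way to "skew" it is to replace Q with the mixture (1-γ)Q + γP, which bounds the density ratio dP/dQ_mix by 1/γ and thus tames the variance of plug-in estimators. The sketch below is a minimal Python/PyTorch rendering of this general idea for in-batch contrastive learning, not the exact RényiCL objective presented in the talk; the function name renyi_skew_contrastive_loss and the specific self-normalized ratio estimate are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def renyi_skew_contrastive_loss(z1, z2, alpha=1.5, gamma=0.1, tau=0.1):
        """Illustrative skew-Renyi contrastive loss (hypothetical sketch).

        z1, z2: (B, d) embeddings of two augmented views; row i of z1 and z2
        form a positive pair, and all other in-batch pairs act as negatives.
        """
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau                    # (B, B); diagonal = positives
        # Self-normalized plug-in estimate of the density ratio r = dP/dQ:
        # softmax over in-batch pairs, rescaled so each row averages to 1.
        ratio = logits.softmax(dim=1) * logits.size(1)
        r_pos = ratio.diagonal()                      # ratio at the positive pairs
        # Skewing Q -> (1 - gamma) Q + gamma P bounds the ratio by 1/gamma,
        # which is what keeps the alpha-power moments (and gradients) stable.
        r_skew = r_pos / ((1.0 - gamma) + gamma * r_pos)
        # E_P form of the Renyi divergence: (1/(alpha-1)) log E_P[r^(alpha-1)].
        d_alpha = torch.log(r_skew.pow(alpha - 1.0).mean()) / (alpha - 1.0)
        return -d_alpha                               # minimizing maximizes the estimate

    # Toy usage; in practice z1, z2 come from an encoder over two augmentations.
    z1 = torch.randn(256, 128, requires_grad=True)
    z2 = torch.randn(256, 128)
    loss = renyi_skew_contrastive_loss(z1, z2)
    loss.backward()

With α > 1, the α-power weighting up-weights pairs with large estimated ratios, which is one way to see the "innate hard negative sampling" behavior the abstract mentions.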
