Paper
Neural Bootstrapper
Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim
NeurIPS
2021
Bootstrapping is a standard tool for uncertainty estimation in machine learning, but applying it directly to deep learning is usually too costly, e.g., due to the training time required for repeated model fitting. This paper proposes a technique that approximates the effect of bootstrapping in deep learning without increasing the number of parameters or retraining repeatedly, enabling efficient and effective uncertainty estimation.
Bootstrapping has been a primary tool for ensemble and uncertainty quantification in machine learning and statistics. However, due to its nature of multiple training and resampling, bootstrapping deep neural networks is computationally burdensome; hence it has difficulties in practical application to the uncertainty estimation and related tasks. To overcome this computational bottleneck, we propose a novel approach called Neural Bootstrapper (NeuBoots), which learns to generate bootstrapped neural networks through single model training. NeuBoots injects the bootstrap weights into the high-level feature layers of the backbone network and outputs the bootstrapped predictions of the target, without additional parameters and the repetitive computations from scratch. We apply NeuBoots to various machine learning tasks related to uncertainty quantification, including prediction calibrations in image classification and semantic segmentation, active learning, and detection of out-of-distribution samples. Our empirical results show that NeuBoots outperforms other bagging based methods under a much lower computational cost without losing the validity of bootstrapping.
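The core idea above can be illustrated with a minimal sketch: random bootstrap weights (multinomial resampling counts) are injected alongside the high-level features of a single network, so that different weight draws yield different "bootstrapped" predictions from one forward model. The function names, the concatenation-based injection, and the untrained random head below are hypothetical simplifications for illustration, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bootstrap_weights(n_blocks, rng):
    # Multinomial counts over blocks: the weighted-bootstrap
    # representation of resampling with replacement (mean weight 1).
    return rng.multinomial(n_blocks, np.ones(n_blocks) / n_blocks).astype(float)

def neuboots_forward(features, w, W_out, b_out):
    # Hypothetical injection: concatenate the bootstrap weight vector
    # to each sample's high-level features before a final linear head.
    z = np.concatenate([features, np.tile(w, (features.shape[0], 1))], axis=1)
    return z @ W_out + b_out

# Toy setup: 5 samples, 3-dim features, 4 bootstrap blocks, scalar output.
X = rng.normal(size=(5, 3))
n_blocks = 4
W_out = rng.normal(size=(3 + n_blocks, 1))  # stands in for a trained head
b_out = 0.0

# One network, many bootstrap replicates at inference time:
preds = np.stack([
    neuboots_forward(X, sample_bootstrap_weights(n_blocks, rng), W_out, b_out)
    for _ in range(100)
])
mean, std = preds.mean(axis=0), preds.std(axis=0)  # per-sample uncertainty
```

The point of the sketch is the cost structure: the loop re-samples only the small weight vector, not the model, so 100 bootstrap replicates cost 100 cheap forward passes instead of 100 trainings from scratch.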