A deep learning approach with limited computational overhead is proposed to improve the generalization and robustness of deep supervised learning models.


Deep neural networks (DNNs) can achieve high accuracy when there is abundant training data drawn from the same distribution as the test data. In practical applications, data deficiency is often a concern. For classification tasks, the lack of sufficient labeled images in the training set often leads to overfitting. Another concern is the mismatch between the training and test domains, which results in poor model performance. This motivates the need for robust and data-efficient deep learning models. In this work, we propose a deep learning approach called Multi-Expert Adversarial Regularization learning (MEAR) with limited computational overhead to improve the generalization and robustness of deep supervised learning models. The MEAR framework appends multiple classifier heads (experts) to the feature extractor of the legacy model. MEAR aims to learn the feature extractor in an adversarial fashion, leveraging complementary information from the individual experts as well as the ensemble of the experts, so that it is more robust to an unseen test domain. We train state-of-the-art networks with MEAR for two important computer vision tasks, image classification and semantic segmentation. We compare MEAR to a variety of baselines on several benchmarks and show that MEAR is competitive with other methods and more successful at learning robust features.
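To make the architecture concrete, here is a minimal NumPy sketch of the multi-expert idea described above: several classifier heads share one feature extractor's output, and an ensemble prediction is formed by averaging the experts. This is only an illustration under stated assumptions; the class name `MultiExpertHead`, the number of experts, and the averaging rule are illustrative choices, not details from the paper (which additionally trains the feature extractor adversarially against the experts).

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


class MultiExpertHead:
    """Several classifier heads ("experts") appended to one shared
    feature extractor, as in the MEAR framework (hypothetical sketch)."""

    def __init__(self, feat_dim, n_classes, n_experts):
        # One linear classifier per expert on the shared features.
        self.weights = [
            rng.standard_normal((feat_dim, n_classes)) * 0.01
            for _ in range(n_experts)
        ]

    def predict(self, features):
        # Per-expert class probabilities from the shared features.
        per_expert = [softmax(features @ w) for w in self.weights]
        # Ensemble prediction: average of the expert distributions.
        ensemble = np.mean(per_expert, axis=0)
        return per_expert, ensemble


# Batch of 4 feature vectors from a (stand-in) feature extractor.
feats = rng.standard_normal((4, 16))
heads = MultiExpertHead(feat_dim=16, n_classes=10, n_experts=3)
per_expert, ensemble = heads.predict(feats)
print(ensemble.shape)  # (4, 10)
```

In MEAR itself, the feature extractor is then updated adversarially so that no single expert can dominate, encouraging features that generalize to an unseen test domain.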

By Behnam Gholami, Qingfeng Liu, Mostafa El-Khamy, and Jungwon Lee of Samsung Semiconductor.

Click here to view this IEEE open access article.
