@wuxin1994 2017-11-26T14:40:22.000000Z · 1519 words · 249 reads

吴帆1126

Study Notes 17


Notes on adversarial training: methods and papers
Origin of the method: Szegedy et al., in "Intriguing properties of neural networks", first proposed injecting correctly labeled adversarial examples into the training set; a model trained on this mixture of legitimate and adversarial examples is more robust. I. J. Goodfellow, in "Explaining and Harnessing Adversarial Examples", likewise trained models with adversarial examples added; on MNIST, this reduced the model's error rate on adversarial examples from 89.4% to 17.9%.
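As a rough illustration (not code from the papers above), here is a minimal numpy sketch of FGSM-style adversarial example generation for a toy logistic-regression loss. The weights, input, and epsilon below are arbitrary assumptions chosen for the example:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps):
    """FGSM step: x_adv = x + eps * sign(dL/dx)."""
    return x + eps * np.sign(grad_wrt_x)

def input_grad(w, x, y):
    """Input gradient of the toy loss L = -log sigmoid(y * w.x).

    Analytically, dL/dx = -y * sigmoid(-y * w.x) * w.
    """
    s = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))  # sigmoid(-y * w.x)
    return -y * s * w

# Toy model and input (illustrative values only).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, input_grad(w, x, y), eps=0.1)
# Adversarial training then mixes (x, y) and (x_adv, y)
# in the training batches, as in Goodfellow's scheme.
print(x_adv)  # → [0.4 0.6]
```

Each input coordinate is pushed by eps in the direction that most increases the loss, which is what makes FGSM examples cheap to generate compared with iterative attacks.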
Development: "Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization" proposed a framework for robust optimization of neural nets, refining the theoretical basis of adversarial training, and showed that the adversarial training objective Goodfellow proposed in "Explaining and Harnessing Adversarial Examples" is a special case of its robust objective. "Learning with a Strong Adversary" solved the robustness optimization problem with stochastic gradient descent (SGD) and reported better results than "Explaining and Harnessing Adversarial Examples", though the improvements are often not statistically significant. The original adversarial training methods work well mainly against white-box attacks and do little against black-box attacks. To address this gap, and to avoid the degenerate-minima problem (the generated adversarial examples are too easy, so the adversarial loss contributes little to the training objective), "Ensemble Adversarial Training: Attacks and Defenses" proposed ensemble adversarial training: the adversarial examples added to the training set come not only from the model being trained but also from other pre-trained models. Experiments show that while this does not yield especially strong robustness against white-box attacks, it does improve robustness against black-box attacks. The original adversarial training work also studied only small-scale settings; "Adversarial Machine Learning at Scale" used batch normalization to make adversarial training practical for large models and datasets.
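The robust-optimization view of adversarial training described above is commonly written as a min-max problem (notation here is the standard form, not copied from any one of the papers):

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\left[ \max_{\|\delta\| \le \epsilon} L(\theta, x+\delta, y) \right]
```

Goodfellow's mixed objective is recovered as a special case by approximating the inner maximum with a single FGSM step and blending it with the clean loss:

```latex
\tilde{J}(\theta, x, y) = \alpha \, J(\theta, x, y)
+ (1-\alpha) \, J\!\left(\theta,\; x + \epsilon \,\mathrm{sign}\!\left(\nabla_x J(\theta, x, y)\right),\; y\right)
```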
Conclusion: This technique, however, suffers from the high cost of generating adversarial examples, and its iterative re-training procedure (at least) doubles the training cost of DNN models. Since this defensive training is non-adaptive, it is essential to include adversarial examples produced by all known attacks. But finding adversarial inputs with most known techniques is computationally expensive, and there is no way to be confident that the adversary is limited to techniques known to the trainer.
