Disposed Watermarking to Attack Deep Neural Network
Pages : 221-226
Abstract
In this paper, we propose a visible adversarial attack approach based on watermarks, with two types of attack that simulate real-world uses of watermarks, and show that it successfully interferes with the judgments of several state-of-the-art deep learning models. The generated adversarial samples not only fool the Inception V3 model with high success rates, but also transfer to other models, such as Amazon Rekognition. We conclude that the robustness of current object recognition models has yet to be further improved, and that more defense approaches should be employed. We also provide a comprehensive list of requirements that enables quantitative and qualitative assessment of current and future DL watermarking approaches.
Keywords: Adversarial attack, Watermark, OCR, DNN.
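To illustrate the kind of visible-watermark perturbation the abstract describes, the following is a minimal sketch of alpha-blending a watermark patch onto an image. The function name, fixed opacity, and toy arrays are illustrative assumptions; the paper's actual attack would additionally search over watermark placement and opacity to maximize misclassification.

```python
import numpy as np

def apply_watermark(image, watermark, x, y, alpha=0.5):
    """Alpha-blend a visible watermark patch onto an image at (x, y).

    image: H x W x 3 float array in [0, 1]
    watermark: h x w x 3 float array in [0, 1]
    alpha: watermark opacity (fixed here for illustration; an attack
           would optimize this along with the position)
    """
    out = image.copy()
    h, w = watermark.shape[:2]
    region = out[y:y + h, x:x + w]
    # Standard alpha compositing of the watermark over the image region.
    out[y:y + h, x:x + w] = (1 - alpha) * region + alpha * watermark
    return out

# Hypothetical example: a uniform gray image with a white watermark patch.
img = np.full((8, 8, 3), 0.2)
wm = np.ones((4, 4, 3))
adv = apply_watermark(img, wm, 2, 2, alpha=0.5)
```

The resulting `adv` image would then be fed to the target classifier (e.g. Inception V3) to test whether the watermark changes its prediction.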