It has been nearly a decade since BOSS, the first steganalysis competition, was held in 2010, and many researchers have conducted profound studies on steganalysis over this period. Traditional steganalysis approaches can mostly be divided into two parts.
The first part is feature engineering. It is difficult to obtain sufficient information for learning without a proper feature extraction process. In general there are two criteria for feature extraction: completeness ("On Completeness of Feature Spaces in Blind Steganalysis"), which demands that the features differ between a cover image and a stego one, and diversity ("Breaking HUGO - The Process Discovery"), which demands that the features capture as much of the stego information hidden in the image as possible. Rich Models (RM), proposed in 2012 ("Rich Models for Steganalysis of Digital Images"), is a powerful feature set, and many other methods based on it arose in the following years.
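To make the idea of such feature extraction concrete, here is a minimal numpy sketch, in the spirit of Rich Models, of one residual/co-occurrence feature; the real RM combines many residuals and orders, and the function name and parameters here are purely illustrative.

```python
# A minimal sketch of RM-style feature extraction: compute a noise
# residual, quantize/truncate it, and collect a co-occurrence histogram
# as the feature vector. Real Rich Models pool many such submodels;
# this single-residual version is illustrative only.
import numpy as np

def cooccurrence_features(image, q=1.0, T=2):
    img = image.astype(np.float64)
    # first-order horizontal residual: prediction error against the
    # right neighbour, which suppresses image content
    residual = img[:, :-1] - img[:, 1:]
    # quantize and truncate the residual to the range [-T, T]
    r = np.clip(np.round(residual / q), -T, T).astype(int)
    # co-occurrence of horizontally adjacent residual pairs
    bins = 2 * T + 1
    hist = np.zeros((bins, bins))
    left, right = r[:, :-1] + T, r[:, 1:] + T
    np.add.at(hist, (left, right), 1)
    return hist.ravel() / hist.sum()  # normalized feature vector
```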
The second part is the learning step, which discriminates the extracted features of a stego image from those of a cover. Until 2015 the best and most widely used classifier was the Ensemble Classifier (EC), an ensemble classification scheme for steganalysis, and RM+EC became the most powerful method for steganalysis problems.
Meanwhile, in related areas such as computer vision, artificial neural networks and, further, deep learning networks became the mainstream methods in those years. Instead of the separated structure of feature extraction followed by a learning step, this end-to-end architecture has shown its power in computer vision and many other fields.
With the rapid development of neural networks and deep learning in recent years, a batch of methods based on them has arisen in the steganalysis field. In 2015, Qian et al. ("Deep learning for steganalysis via convolutional neural networks") proposed a CNN for steganalysis; in their paper, the GNCNN reached a detection error about 4% higher than SRM+EC (the ensemble classifier with the Spatial Rich Model), which showed the prospects of applying neural networks and deep learning to steganalysis. After this, it became clear that by adding domain knowledge such as the KV linear filter kernel, adjusting the network structure carefully, using special components that differ from empirical CV practice (e.g. the TLU), or using paired training sets to accommodate the BN layers of deep steganalysis networks, we can reach or even surpass the performance of RM+EC. Neural networks and deep learning are thus a promising direction for steganalysis.
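The two domain-knowledge pieces mentioned above can be sketched in a few lines. Below is a minimal sketch, assuming PyTorch, of a fixed KV high-pass layer followed by a truncated linear unit (TLU, as in Ye-Net); the class names are illustrative and not taken from any of the cited papers' code.

```python
import torch
import torch.nn as nn

# 5x5 KV high-pass kernel (scaled by 1/12); it suppresses image content
# so the network mostly sees the embedding noise residual.
KV = torch.tensor([[-1.,  2., -2.,  2., -1.],
                   [ 2., -6.,  8., -6.,  2.],
                   [-2.,  8., -12., 8., -2.],
                   [ 2., -6.,  8., -6.,  2.],
                   [-1.,  2., -2.,  2., -1.]]) / 12.0

class TLU(nn.Module):
    """Truncated linear unit: clamp(x, -t, t)."""
    def __init__(self, t=3.0):
        super().__init__()
        self.t = t
    def forward(self, x):
        return torch.clamp(x, -self.t, self.t)

class ResidualPreprocess(nn.Module):
    """Fixed (non-trainable) KV residual layer followed by a TLU."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=5, padding=2, bias=False)
        self.conv.weight.data = KV.view(1, 1, 5, 5)
        self.conv.weight.requires_grad = False  # kernel stays fixed
        self.act = TLU(t=3.0)
    def forward(self, x):
        return self.act(self.conv(x))
```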
Deep neural networks are powerful and have achieved surprising results not only in steganalysis but in many other fields as well. However, as a result of their lack of interpretability and some counter-intuitive properties, adversarial examples aimed at a particular network can be produced easily by adding small perturbations to a test image ("Intriguing properties of neural networks"). The perturbation can be obtained by backpropagation, and it is not merely an artifact of overfitted models; it is a more general phenomenon in neural networks. So steganalysis based on deep neural networks could suffer from the same problem.
In this paper, we first discuss two different methods to obtain adversarial examples: a BP-based (Back Propagation) attack and an EA-based (Evolutionary Algorithm, here Differential Evolution) attack. Second,
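Since the EA-based attack is not detailed here, the following is only a rough sketch of what such an attack can look like, in the spirit of the one-pixel attack: SciPy's differential evolution searches for a handful of pixel changes that flip a black-box detector. `model_predict` is a hypothetical scoring function (grayscale image in, probability of class "stego" out).

```python
import numpy as np
from scipy.optimize import differential_evolution

def de_attack(image, model_predict, n_pixels=5, max_change=8):
    h, w = image.shape
    # Each candidate encodes (row, col, delta) for every modified pixel.
    bounds = [(0, h - 1), (0, w - 1), (-max_change, max_change)] * n_pixels

    def apply(candidate):
        x = image.astype(np.float64)
        for r, c, d in candidate.reshape(-1, 3):
            x[int(r), int(c)] = np.clip(x[int(r), int(c)] + d, 0, 255)
        return x

    def fitness(candidate):
        # Probability of "stego"; DE minimizes it, i.e. it pushes the
        # detector's decision toward "cover".
        return model_predict(apply(candidate))

    result = differential_evolution(fitness, bounds,
                                    maxiter=50, popsize=20, seed=0)
    return apply(result.x)
```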
Since 2013, deep neural networks have been widely used in many different fields, and on some particular tasks they can reach or even surpass human performance. But Szegedy et al. ("Intriguing properties of neural networks", 2014) pointed out that the input-output mapping learned by a neural network is largely discontinuous, which reveals the possibility that by adding a very slight perturbation to an image we can make it misclassified, and even make it be classified as a specific label of our choosing. An input crafted this way is called an 'adversarial example'.
One basic algorithm for generating adversarial examples is the 'Fast Gradient Sign Method' (FGSM), proposed in 2015 ("Explaining and Harnessing Adversarial Examples").
For a linear model, consider an input $x$ and an adversarial input $\tilde{x} = x + \eta$, where $\eta$ is a very small perturbation with $\|\eta\|_\infty \le \epsilon$, and $\epsilon$ is small enough that it will not change the classification of $x$ if we do not select $\eta$ adversarially. For the adversarial example, the activation of a weight vector $w$ becomes

$$w^\top \tilde{x} = w^\top x + w^\top \eta,$$

and the extra activation $w^\top \eta$ is maximized under the max-norm constraint by choosing $\eta = \epsilon\,\mathrm{sign}(w)$.
Back to neural networks: many of them behave too 'linearly' and can be attacked easily.
Let $\theta$ be the weights of the neural network, $x$ the raw image, $y$ the correct class of $x$, and $J(\theta, x, y)$ the loss function used to train the network. As in the linear model, we have

$$\eta = \epsilon\,\mathrm{sign}\big(\nabla_x J(\theta, x, y)\big).$$
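This one-step formula translates directly into code. Here is a minimal FGSM sketch, assuming PyTorch and pixel values in [0, 1]; `model` and `loss_fn` are placeholders for the trained steganalyzer and its training loss.

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # eta = eps * sign(grad_x J(theta, x, y))
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```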
(Figure to be inserted here.)
FGM: the fast gradient attack (Goodfellow, Shlens, and Szegedy 2015).
I-FGM: the iterative fast gradient attack (Kurakin, Goodfellow, and Bengio 2016b).
Both are built on the 'fast gradient sign' direction above; a sketch of the iterative variant follows.
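Below is a minimal I-FGM sketch under the same assumptions as the FGSM snippet (PyTorch, placeholder `model` and `loss_fn`, pixels in [0, 1]): the FGSM step is applied repeatedly with a small step size, and each iterate is projected back into an $\epsilon$-ball around the original image.

```python
import torch

def ifgm(model, loss_fn, x, y, eps=0.03, alpha=0.005, steps=10):
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            # one small fast-gradient-sign step
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # project back into the L_inf eps-ball around the original
            x_adv = torch.max(torch.min(x_adv, x_orig + eps), x_orig - eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```

Compared with the single-step FGSM, the iterative version typically finds smaller perturbations for the same misclassification rate, at the cost of `steps` forward/backward passes.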