
Digital Watermarking for Deep Neural Networks

Authors
Yuki Nagai, Yusuke Uchida, Shigeyuki Sakazawa, Shin'ichi Satoh

Although deep neural networks have made tremendous progress in the area of multimedia representation, training neural models requires a large amount of data and time. It is well known that utilizing trained models as initial weights often achieves lower training error than neural networks that are not pre-trained. A fine-tuning step helps both to reduce the computational cost and to improve performance. Therefore, sharing trained models has been very important for the rapid progress of research and development. In addition, trained models could be important assets for the owner(s) who trained them, hence we regard trained models as intellectual property. In this paper, we propose a digital watermarking technology for ownership authorization of deep neural networks. First, we formulate a new problem: embedding watermarks into deep neural networks. We also define requirements, embedding situations, and attack types on watermarking in deep neural networks. Second, we propose a general framework for embedding a watermark in model parameters using a parameter regularizer. Our approach does not impair the performance of the network into which the watermark is placed, because the watermark is embedded while training the host network. Finally, we perform comprehensive experiments to reveal the potential of watermarking deep neural networks as the basis of this new research effort. We show that our framework can embed a watermark during the training of a deep neural network from scratch, during fine-tuning, and during distilling, without impairing its performance. The embedded watermark does not disappear even after fine-tuning or parameter pruning; the watermark remains complete even after 65% of the parameters are pruned.
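The core mechanism described in the abstract is a regularization term added to the training loss, so that a secret linear projection of the host-layer weights decodes to the owner's bit string. Below is a minimal sketch of that idea, assuming a PyTorch-style setup; the key matrix X, watermark bits b, stand-in task loss, and hyperparameters (M, T, lam) are all illustrative placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy setup: names, shapes, and hyperparameters here are
# illustrative assumptions, not values taken from the paper.
torch.manual_seed(0)
M, T = 256, 64                          # host-weight length, watermark length
X = torch.randn(T, M)                   # secret embedding key held by the owner
b = torch.randint(0, 2, (T,)).float()   # T-bit watermark to embed

w = torch.randn(M, requires_grad=True)  # stand-in for one layer's flattened weights
opt = torch.optim.SGD([w], lr=0.1)
lam = 0.01                              # regularizer strength (assumed)

for step in range(200):
    task_loss = (w ** 2).mean()                 # placeholder for the host task loss
    y = torch.sigmoid(X @ w)                    # soft extraction of the watermark
    embed_loss = F.binary_cross_entropy(y, b)   # push projections toward the bits
    loss = task_loss + lam * embed_loss         # joint objective during training
    opt.zero_grad()
    loss.backward()
    opt.step()

# Detection: project the trained weights with the secret key and threshold at 0.
decoded = (X @ w > 0).float()
print("bit accuracy:", (decoded == b).float().mean().item())
```

Because the embedding term is just another differentiable loss, the same pattern applies whether the host network is trained from scratch, fine-tuned, or distilled; detection requires only the secret key X and a zero threshold on the projection.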
