Coupled Convolution Layer for
Convolutional Neural Network


Kazutaka Uchida, Masayuki Tanaka, Masatoshi Okutomi



We introduce a coupled convolution layer comprising two parallel convolutions with mutually constrained weights. Inspired by the mechanism of the human retina, we constrain the weights so that one set is the negative of the other, mimicking the responses of on-center and off-center retinal ganglion cells. Our analysis shows that this retina-like convolution layer, a special case of the coupled convolution layer, can be realized by a normal convolution layer followed by a pair of activation functions we designate the Biased ON/OFF ReLU. Experimental comparisons demonstrate that the proposed coupled convolution layer performs better without increasing the number of parameters, which reveals two important facts. First, separating the positive and negative parts into different channels plays an important role. Second, constraining weights across convolutions can yield better performance than training the weights freely. We evaluate the effect by comparison with ReLU, LReLU, and PReLU on the CIFAR-10, CIFAR-100, and PlanktonSet 1.0 datasets.
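The equivalence stated in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it uses a 1-D convolution for brevity, sets the bias of the ON/OFF ReLU to zero, and the function names (`on_off_relu`, `coupled_conv1d`) are hypothetical. It shows that two parallel convolutions with weights w and -w, each followed by ReLU, produce the same two channels as a single convolution followed by an ON/OFF activation pair.

```python
import numpy as np


def on_off_relu(x, bias=0.0):
    """Sketch of a Biased ON/OFF ReLU: split a response into rectified
    positive (ON) and negative (OFF) channels. The bias term and exact
    form are assumptions; see the paper for the actual definition."""
    on = np.maximum(0.0, x + bias)
    off = np.maximum(0.0, -x + bias)
    return on, off


def coupled_conv1d(x, w):
    """Coupled convolution: two parallel convolutions whose weights are
    mutually negated (w and -w), each followed by ReLU."""
    y_pos = np.convolve(x, w, mode="valid")
    y_neg = np.convolve(x, -w, mode="valid")
    return np.maximum(0.0, y_pos), np.maximum(0.0, y_neg)


# With zero bias, a single convolution plus ON/OFF ReLU reproduces
# the coupled convolution's two output channels.
x = np.array([1.0, -2.0, 3.0, 0.5])
w = np.array([0.5, -1.0])
on, off = on_off_relu(np.convolve(x, w, mode="valid"))
pos, neg = coupled_conv1d(x, w)
assert np.allclose(on, pos) and np.allclose(off, neg)
```

Because the second convolution's weights are fully determined by the first, the coupled layer doubles the output channels without adding parameters, consistent with the abstract's claim.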







Publication

Coupled Convolution Layer for Convolutional Neural Network
Kazutaka Uchida, Masayuki Tanaka, Masatoshi Okutomi
Proceedings of the 23rd International Conference on Pattern Recognition (ICPR2016), December, 2016 [PDF] [GitHub]