MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch Normalization. (arXiv:2010.09278v2 [cs.LG] UPDATED)

[Submitted on 19 Oct 2020 (v1), last revised 24 Oct 2020 (this version, v2)]

Abstract: Substantial experiments have validated the success of the Batch Normalization (BN) layer in improving convergence and generalization. However, BN requires extra memory and floating-point computation, and it is inaccurate on micro-batches because it depends on batch statistics. In this paper, we address these problems by simplifying BN regularization while keeping two fundamental impacts of BN layers, namely data decorrelation and adaptive learning rates. We propose a novel normalization method, named MimicNorm, to improve convergence and efficiency in network training. MimicNorm consists of only two lightweight operations: a modified weight mean operation (subtracting the mean from the weight parameter tensor) and a single BN layer before the loss function (the last BN layer). We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transitions the network into the chaotic regime, as a BN layer does, and consequently leads to enhanced convergence. The last BN layer provides autotuned learning rates and also improves accuracy. Experimental results show that MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks such as ShuffleNet, while reducing memory consumption by about 20%. The code is publicly available at this https URL.
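
The two operations described above map naturally onto a short implementation. The following is a minimal PyTorch sketch based only on the abstract, not the authors' released code: the class names (WeightCenteredConv2d, TinyMimicNormNet), the per-filter mean centering, and the placement of the last BN layer on the logits are illustrative assumptions.

```python
# Minimal PyTorch sketch of the two operations described in the abstract.
# Class names, per-filter centering, and the exact placement of the last
# BN layer are assumptions; consult the authors' released code for details.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightCenteredConv2d(nn.Conv2d):
    """Convolution whose filter weights are mean-centered on every forward
    pass (the 'weight mean' operation: subtract the mean from the weight
    parameter tensor)."""

    def forward(self, x):
        w = self.weight
        w = w - w.mean(dim=(1, 2, 3), keepdim=True)  # zero-mean each filter
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


class TinyMimicNormNet(nn.Module):
    """Toy classifier: weight-centered convolutions with no per-layer BN,
    plus a single BN layer applied right before the loss (the 'last BN')."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            WeightCenteredConv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            WeightCenteredConv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, num_classes)
        self.last_bn = nn.BatchNorm1d(num_classes)  # single BN layer before the loss

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.last_bn(self.fc(h))  # logits fed to e.g. cross-entropy


if __name__ == "__main__":
    net = TinyMimicNormNet()
    logits = net(torch.randn(8, 3, 32, 32))  # batch of 8 CIFAR-sized images
    print(logits.shape)                      # torch.Size([8, 10])
```

Centering the weights at forward time, rather than once at initialization, keeps each filter zero-mean throughout training, mirroring the "subtract mean values from weight parameter tensor" operation the abstract describes.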

Submission history

From: Wen Fei

[v1] Mon, 19 Oct 2020 07:42:41 UTC (879 KB)

[v2] Sat, 24 Oct 2020 01:50:11 UTC (879 KB)
