Learning Quantized Neural Nets by Coarse Gradient Method for Non-linear Classification

Quantized or low-bit neural networks are attractive due to their inference efficiency. However, training deep neural networks with quantized activations involves minimizing a discontinuous and piecewise constant loss function. Such a loss function has zero gradient almost everywhere (a.e.), which makes conventional gradient-based algorithms inapplicable. To this end, we study a novel class of \emph{biased} first-order oracles, termed coarse gradients, for overcoming the vanishing gradient issue. A coarse gradient is generated by replacing the a.e. zero derivative of the quantized (i.e., stair-case) ReLU activation composited in the chain rule with some heuristic proxy derivative called the straight-through estimator (STE). Although widely used in training quantized networks empirically, fundamental questions such as when and why the ad hoc STE trick works still lack theoretical understanding. In this paper, we propose a class of STEs with certain monotonicity and consider their applications to the training of a two-linear-layer network with quantized activation functions for non-linear multi-category classification. We establish performance guarantees for the proposed STEs by showing that the corresponding coarse gradient methods converge to the global minimum, which leads to a perfect classification. Lastly, we present experimental results on synthetic data as well as the MNIST dataset to verify our theoretical findings and demonstrate the effectiveness of the proposed STEs.

1. Introduction. Deep neural networks (DNNs) have been the main driving force for the recent wave in artificial intelligence (AI). They have achieved remarkable success in a number of domains including computer vision [14,19], reinforcement learning [18,23] and natural language processing [4], to name a few. However, due to the huge number of model parameters, the deployment of DNNs can be computationally and memory intensive. As such, it remains a great challenge to deploy DNNs on mobile electronics with low computational budget and limited memory storage.
Recent efforts have been devoted to the quantization of weights and activations of DNNs in the hope of maintaining accuracy. More specifically, quantization techniques constrain the weights and/or activation values to low-precision arithmetic (e.g., 4-bit) instead of the conventional floating-point (32-bit) representation [12,32,2,31,17,33]. In this way, the inference of quantized DNNs translates to hardware-friendly low-bit computations rather than floating-point operations. As a result, quantization brings three critical benefits to AI systems: energy efficiency, memory savings, and inference acceleration.
The approximation power of weight quantized DNNs was investigated in [8,6], while the recent paper [22] studies the approximation power of DNNs with discretized activations. On the computational side, training quantized DNNs typically calls for solving a large-scale optimization problem, yet with extra computational and mathematical challenges. Although the weights and activations of DNNs are often quantized simultaneously, the two can be viewed as relatively independent subproblems. Weight quantization basically introduces an additional set constraint characterizing the quantized model parameters, which can be efficiently handled by projected gradient type methods [5,15,16,30,10,28]. Activation quantization (i.e., quantizing ReLU), on the other hand, involves a stair-case activation function with zero derivative almost everywhere (a.e.) in place of the sub-differentiable ReLU. The resulting composite loss function is therefore piecewise constant and cannot be minimized via (stochastic) gradient methods due to the vanishing gradient.
To overcome this issue, a simple and hardware-friendly approach is to use a straight-through estimator (STE) [9,1,26]. More precisely, one replaces the a.e. zero derivative of the quantized ReLU with an ad-hoc surrogate in the backward pass, while keeping the original quantized function in the forward pass. Mathematically, STE gives rise to a biased first-order oracle computed by an unusual chain rule. This first-order oracle is not the gradient of the original loss function because there is a mismatch between the forward and backward passes. Throughout this paper, this STE-induced type of "gradient" is called the coarse gradient. While the coarse gradient is not the true gradient, in practice it works, as it miraculously points towards a descent direction (see [26] for a thorough study in the regression setting). Moreover, the coarse gradient has the same computational complexity as the standard gradient. Just like standard gradient descent, the minimization procedure for training activation quantized networks simply proceeds by repeatedly moving one step from the current point in the opposite direction of the coarse gradient with some step size. The performance of the resulting coarse gradient method, e.g., its convergence property, naturally relies on the choice of STE. How to choose a proper STE so that the resulting training algorithm is provably convergent remains poorly understood, especially in the nonlinear classification setting.
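To make the forward/backward mismatch concrete, the following minimal numpy sketch contrasts the a.e. zero derivative of a stair-case activation with an STE-based coarse partial derivative on a one-parameter toy loss. The 2-bit quantizer form and the two proxy choices are illustrative assumptions here, not the exact functions analyzed later in the paper.

```python
import numpy as np

def quantized_relu(x, bits=2):
    """Forward pass: a stair-case ReLU with levels 0, 1, ..., 2**bits - 1 (assumed form)."""
    return np.clip(np.ceil(np.asarray(x, dtype=float)), 0, 2 ** bits - 1)

def ste_derivative(x, proxy="relu"):
    """Backward pass: derivative of a surrogate g, NOT of quantized_relu itself."""
    x = np.asarray(x, dtype=float)
    if proxy == "relu":          # g(x) = max(x, 0)        ->  g'(x) = 1_{x > 0}
        return (x > 0).astype(float)
    if proxy == "clipped_relu":  # g(x) = min(max(x, 0), 1) ->  g'(x) = 1_{0 < x < 1}
        return ((x > 0) & (x < 1)).astype(float)
    raise ValueError(proxy)

# Toy sample loss L(w) = quantized_relu(w * x): its true derivative in w is 0 a.e.,
# while the coarse derivative applies the chain rule with the proxy in place of sigma'.
x, w = 1.3, 0.7
true_derivative = 0.0
coarse_derivative = float(ste_derivative(w * x, "relu") * x)  # = 1_{wx > 0} * x
```

The forward quantizer and the backward proxy deliberately disagree; that disagreement is exactly what makes the resulting oracle biased.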
1.1. Related Works. The idea of STE dates back to the classical perceptron algorithm [20,21] for binary classification. Specifically, the perceptron algorithm attempts to solve the empirical risk minimization problem

min_w (1/N) Σ_{i=1}^N ℓ_i(w), with ℓ_i(w) := (1/2) (sign(⟨x_i, w⟩) − y_i)^2, (1.1)

where (x_i, y_i) is the i-th training sample with y_i ∈ {±1} being a binary label; for a given input x_i, the single-layer perceptron model with weights w outputs the class prediction sign(⟨x_i, w⟩). To train perceptrons, Rosenblatt [20] proposed the following iteration for solving (1.1) with step size η > 0:

w^{t+1} = w^t + η (y_i − sign(⟨x_i, w^t⟩)) x_i. (1.2)

We note that the above perceptron algorithm is not a gradient descent algorithm. Assuming differentiability, the standard chain rule computes the gradient of the i-th sample loss as

∇ℓ_i(w) = (sign(⟨x_i, w⟩) − y_i) (sign)′(⟨x_i, w⟩) x_i, (1.3)

which vanishes a.e. since (sign)′ = 0 almost everywhere. Comparing (1.3) with (1.2), we observe that the perceptron algorithm essentially uses a coarse (and fake) gradient, as if (sign)′ composited in the chain rule were the derivative of the identity function, namely the constant 1.
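As a sanity check, the perceptron iteration (1.2) is easy to run. The sketch below trains it on a synthetic linearly separable set (the data generator, step size, and epoch count are our own illustrative choices) and reaches high training accuracy, in line with the classical convergence guarantees for separable data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data: the true label is the sign of the first
# coordinate; points too close to the separating hyperplane are dropped to
# enforce a positive margin.
X = rng.normal(size=(200, 3))
X = X[np.abs(X[:, 0]) > 0.1]
y = np.sign(X[:, 0])

def perceptron(X, y, eta=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Rosenblatt's update (1.2): identity STE standing in for (sign)'.
            w += eta * (yi - np.sign(xi @ w)) * xi
    return w

w = perceptron(X, y)
train_acc = np.mean(np.sign(X @ w) == y)
```

Note that the update is only nonzero on misclassified samples, so the iteration stops changing once the data are perfectly classified.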
The idea of STE was extended to train deep networks with binary activations [9], and successful experimental results have demonstrated the effectiveness of this empirical approach. For example, [1] proposed an STE variant which uses the derivative of the sigmoid function instead of the identity function. [11] used the derivative of the hard tanh function, i.e., 1_{|x|≤1}, as an STE in training binarized neural networks. To achieve less accuracy degradation, STE was later employed to train DNNs with quantized activations at higher bit-widths [12,32,2,3,29], where further STEs were proposed, including the derivatives of the standard ReLU (max{x, 0}) and the clipped ReLU (min{max{x, 0}, 1}).
Regarding theoretical justification, it has been established that the perceptron algorithm in (1.2) with the identity STE converges and perfectly classifies linearly separable data; see for example [25,7] and the references therein. Apart from that, to our knowledge, there had been almost no theoretical justification of STE until recently: [26] considered a two-linear-layer network with binary activation for regression problems, where the training data are instead linearly non-separable, being generated by some underlying model with true parameters. In this setting, [26] proved that the working STE is non-unique and that the coarse gradient algorithm is a descent algorithm converging to a valid critical point if the STE is chosen to be the proxy derivative of either the ReLU (i.e., max{x, 0}) or the clipped ReLU (i.e., min{max{x, 0}, 1}). Moreover, they proved that the identity STE fails to give a convergent algorithm for learning two-layer networks, although it works for the single-layer perceptron.

1.2. Main Contributions. Fig. 1 shows examples of 1-bit (binary) and 2-bit (ternary) activations. We see that a quantized activation function zeros out any negative input, while being increasing on the positive half. Intuitively, a working surrogate of the quantized function used in the backward pass should enjoy the same monotonicity, as conjectured by [26], which proved the effectiveness of the coarse gradient for two specific STEs (the derivatives of ReLU and clipped ReLU) and for binarized activation. In this work, we take a further step towards understanding the convergence of coarse gradient methods for training networks with general quantized activations and for classification of linearly non-separable data. A major analytical challenge we face here is that the network loss function has no closed analytical form, in sharp contrast to [26]. We present more general results that provide meaningful guidance on how to choose the STE in activation quantization. Specifically, we study multi-category classification of linearly non-separable data by a two-linear-layer network with multi-bit activations and hinge loss function. We establish the convergence of coarse gradient methods for a broad class of surrogate functions. More precisely, if a function g : R → R satisfies the following properties:
• g(x) = 0 for all x ≤ 0,
• g′(x) ≥ δ > 0 for all x > 0 with some constant δ,
then with a proper learning rate, the corresponding coarse gradient method converges and perfectly classifies the non-linear data when g serves as the STE during the backward pass. This affirms a conjecture in [26] regarding good choices of STE, for a classification (rather than regression) task and under weaker data assumptions, e.g., allowing non-Gaussian distributions.

Notations. We summarize frequently used notations in Table 1.

Table 1: Frequently used notations.

Symbols Definitions
1_S(x): indicator function, taking value 1 for x ∈ S and 0 for x ∉ S
|x|: the ℓ2-norm of a vector x
|W|: the column-wise ℓ2-norm sum of a matrix W
2.1. Data Assumptions. In this section, we consider the n-ary classification problem in the d-dimensional space X = R^d. Let Y = [n] be the set of labels, and for i ∈ [n] let D_i be a probability distribution over X × Y. Throughout this paper, we make the following assumptions on the data:
1. (Orthogonality of classes) A sample {x, y} ∼ D_i has label y = i, and x lies in a linear subspace V_i ⊂ X, where the subspaces V_1, . . . , V_n are mutually orthogonal.
2. (Boundedness of data) There exist positive constants m and M such that m ≤ |x| ≤ M almost surely for {x, y} ∼ D_i, i ∈ [n].
3. (Regularity of distributions) Each D_i admits a marginal probability density p_i on V_i which is bounded away from zero on {x ∈ V_i : m < |x| < M}.
Later on, we denote by D the evenly mixed distribution of the D_i for i ∈ [n].
Remark 1. The orthogonality of subspaces V i 's in the data assumption (1) above is technically needed for our proof here. However, the convergence in Theorem 3.1 to a perfect classification with random initialization is observed in more general settings when V i 's form acute angles and contain a certain level of noise. We refer to section 8.1 for supporting experimental results.
Remark 2. Assumption (3) can be relaxed to the following, while the proof remains essentially the same: there exists a decomposition V_i = ⊕_{j=1}^{n_i} V_{i,j} such that each D_{i,j} has a marginal probability density p_{i,j} on V_{i,j}, and for any x ∈ V_{i,j} with m < |x| < M, the density p_{i,j}(x) is bounded away from zero.

We consider a two-layer neural architecture with k hidden neurons. Denote by W = [w_1, · · · , w_k] ∈ R^{d×k} the weight matrix in the hidden layer, and let h_j = ⟨w_j, x⟩ be the input to the activation function, the so-called pre-activation. Throughout this paper, we assume that the second-layer weight matrix V = (v_{i,j}) ∈ R^{n×k} is fixed and known in the training process and satisfies a non-degeneracy condition: every hidden neuron j serves exactly one class, i.e., v_{i,j} > 0 for a unique i ∈ [n] and v_{r,j} = 0 for r ≠ i, and every class is served by at least one neuron. One can easily show that, as long as k ≥ n, such matrices V are abundant.
For any input data x ∈ X = R^d, the neural net output is

o_i(x) = Σ_{j=1}^k v_{i,j} σ(⟨w_j, x⟩), i ∈ [n], (2.1)

where σ(·) is the quantized ReLU function acting element-wise; see Fig. 1 for examples of binary and ternary activation functions. The general quantized ReLU function of bit-width b is defined by

σ(x) = 0 if x ≤ 0; σ(x) = ⌈x⌉ if 0 < x < q_b; σ(x) = q_b if x ≥ q_b,

where q_b := 2^b − 1 is the maximum quantization level. The predicted label is ŷ(x) = argmax_{i∈[n]} o_i(x); ideally, ŷ(x) = i for all x ∈ V_i. The classification accuracy in percentage is the frequency with which this event occurs (i.e., the network output label ŷ matches the true label) on a validation data set.
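A small numpy sketch of this forward pass: the quantizer below follows the stair-case form stated above, while the toy weights `W`, `V` (with d = k = n = 2) are illustrative choices, not the setting of our theorems.

```python
import numpy as np

def sigma(x, b=2):
    """b-bit quantized ReLU: 0 for x <= 0, ceil(x) in between, capped at q_b = 2**b - 1."""
    return np.clip(np.ceil(np.asarray(x, dtype=float)), 0, 2 ** b - 1)

def predict(W, V, x, b=2):
    """Two-linear-layer net: o = V sigma(W^T x); the predicted label is argmax_i o_i."""
    o = V @ sigma(W.T @ x, b)
    return int(np.argmax(o)) + 1  # labels live in [n] = {1, ..., n}

# Toy instance: two hidden neurons, each wired to one of two classes.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # hidden weights, column j is w_j
V = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # fixed second layer v_{i,j}
label = predict(W, V, np.array([2.0, 0.2]))  # first neuron fires harder -> class 1
```

Because σ only takes the integer levels 0, 1, ..., q_b, the map x ↦ o(x) is piecewise constant, which is precisely why its gradient in W vanishes a.e.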
Given a data sample {x, y}, the associated hinge loss reads

ℓ(W; {x, y}) = max{1 − (o_y(x) − max_{i≠y} o_i(x)), 0}. (2.2)

To train the network with quantized activation σ, we consider the population loss minimization problem

min_W l(W) := E_{{x,y}∼D}[ℓ(W; {x, y})], (2.3)

where the sample loss ℓ(W; {x, y}) is defined in (2.2). Let l_i be the population loss function of class i with the label y = i; more precisely, l_i(W) := E_{{x,y}∼D_i}[ℓ(W; {x, y})]. Thus, we can rewrite the loss function as l(W) = (1/n) Σ_{i=1}^n l_i(W). Note that the population loss fails to have a simple closed form even if the densities p_i are constant functions on their supports; having no closed-form formula at hand to analyze the learning process is what makes our analysis challenging. For notational convenience, we denote by ξ = ξ(x) ∈ argmax_{i≠y} o_i(x) the strongest competing class, and by Ω_W := {x : o_y(x) − o_ξ(x) < 1} the region where the margin is violated.

The derivative of the quantized ReLU function σ is a.e. zero, which gives a trivial gradient of the sample loss function with respect to (w.r.t.) w_j. Indeed, formally differentiating the sample loss function with respect to w_j, we have

∇_{w_j} ℓ(W; {x, y}) = (v_{ξ,j} − v_{y,j}) σ′(⟨w_j, x⟩) 1_{Ω_W}(x) x = 0 a.e.

The partial coarse gradient w.r.t. w_j associated with the sample {x, y} is given by replacing σ′ with a straight-through estimator (STE), namely the derivative of a surrogate function g:

∇̃_{w_j} ℓ(W; {x, y}) = (v_{ξ,j} − v_{y,j}) g′(⟨w_j, x⟩) 1_{Ω_W}(x) x. (2.4)

The sample coarse gradient ∇̃ℓ(W; {x, y}) is just the concatenation of the ∇̃_{w_j} ℓ(W; {x, y})'s. It is worth noting that the coarse gradient is not an actual gradient, but a biased first-order oracle which depends on the choice of g.
Throughout this paper, we consider surrogate functions g for the backward pass with the following properties (Assumption 2): g(x) = 0 for all x ≤ 0, and δ ≤ g′(x) ≤ δ̄ for all x > 0, with some constants 0 < δ < δ̄ < ∞.
Such a g is ubiquitous in quantized deep network training; see Fig. 2 for examples of g(x) satisfying Assumption 2. Typical examples include the classical ReLU g(x) = max(x, 0) and the log-tailed ReLU [2]:

g(x) = 0 if x ≤ 0; g(x) = x if 0 < x ≤ q_b; g(x) = q_b + log(x − q_b + 1) if x > q_b,

where q_b := 2^b − 1 is the maximum quantization level. In addition, if the input of the activation function is bounded by a constant, one can also use g(x) = max{0, q_b(1 − e^{−x/q_b})}, which we call the reverse exponential STE. To train the network with quantized activation σ, we use the expectation of the coarse gradient over training samples,

∇̃l(W) := E_{{x,y}∼D}[∇̃ℓ(W; {x, y})],

where ∇̃ℓ(W; {x, y}) is given by (2.4). In this paper, we study the convergence of the coarse gradient algorithm for solving the minimization problem (2.3), which takes the following iteration with some learning rate η > 0:

W^{t+1} = W^t − η ∇̃l(W^t). (2.5)
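The three surrogates above are straightforward to implement. The numpy sketch below (with b = 2, so q_b = 3, and the log-tailed form as written above) checks numerically that each one vanishes on the negative half-line and is strictly increasing on the positive half-line, as required by Assumption 2.

```python
import numpy as np

QB = 2 ** 2 - 1  # q_b for bit-width b = 2

def g_relu(x):
    """Classical ReLU surrogate: g(x) = max(x, 0)."""
    return np.maximum(np.asarray(x, dtype=float), 0.0)

def g_log_tailed(x):
    """Log-tailed ReLU: identity on (0, q_b], logarithmic tail beyond q_b."""
    x = np.asarray(x, dtype=float)
    tail = QB + np.log(np.maximum(x - QB + 1.0, 1.0))
    return np.where(x <= 0, 0.0, np.where(x <= QB, x, tail))

def g_rev_exp(x):
    """Reverse exponential STE: g(x) = max(0, q_b (1 - exp(-x / q_b)))."""
    x = np.asarray(x, dtype=float)
    return np.maximum(0.0, QB * (1.0 - np.exp(-x / QB)))

# Numerical check of Assumption 2: zero on x <= 0, strictly increasing on x > 0.
xs = np.linspace(-2.0, 10.0, 500)
for g in (g_relu, g_log_tailed, g_rev_exp):
    assert np.all(g(xs[xs <= 0]) == 0.0)
    assert np.all(np.diff(g(xs[xs > 0])) > 0.0)
```

Note that the slope of the reverse exponential decays like e^{−x/q_b}, which is why that STE is reserved for the case of bounded activation inputs.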

3. Main Result and Outline of Proof. We show that if the iterates {W^t} are uniformly bounded in t, then coarse gradient descent with a proxy function g satisfying Assumption 2 converges to a global minimizer of the population loss, resulting in a perfect classification (Theorem 3.1). We outline the major steps of the proof below.
Step 1: Decompose the population loss into n components. Recall the definition of l_i, the population loss function for {x, y} ∼ D_i. In Section 4, we show that, under a certain decomposition of W, the coarse gradient descent for each l_i only affects a corresponding component of W.
Step 2: Bound the total increment of the weight norm from above. We show that, for all v_{i,j} > 0, the magnitudes |w_{j,i}| are monotonically increasing under coarse gradient descent. Based on the boundedness of W, we further give an upper bound on the total increment of all the |w_{j,i}|'s, from which the convergence of coarse gradient descent follows.
Step 3: Show that when the coarse gradient vanishes, so does the population loss. In Section 6, we show that when the coarse gradient vanishes towards the end of training, the population loss is zero, which implies a perfect classification.

4. Space Decomposition. With V_i, i ∈ [n], the orthogonal subspaces from the data assumptions, let V_{n+1} be the orthogonal complement of V_1 ⊕ · · · ⊕ V_n. We can then decompose X = R^d into n + 1 linearly independent parts, X = ⊕_{i=1}^{n+1} V_i, and any vector w_j ∈ R^d has a unique decomposition w_j = Σ_{i=1}^{n+1} w_{j,i}, where w_{j,i} ∈ V_i for i ∈ [n + 1].
Proof. Note that for any x ∈ V_i and j ∈ [k], we have x ∈ V, so that for all W ∈ R^{d×k}, x ∈ V_i. The desired result follows.
Proof of Lemma 4.2. Assume i, r ∈ [n] and i ≠ r. Since the V_i's are linearly independent, we have w_{j,r} = w′_{j,r}.
By the above result, the iteration (2.5) decouples into independent updates of the components w_{j,i}.

5. Learning Dynamics. In this section, we show that certain components of the weight iterates have strictly increasing magnitudes whenever the coarse gradient does not vanish, and we quantify the increment at each iteration.
We have the following estimate.
Proof of Lemma 5.1.
Proof of Lemma 5.2. First, we prove an inequality which will be used later. Recall that |x| ≤ M, and that ∇̃_{w_j} ℓ(W, {x, y}) ≠ 0 only when x ∈ Ω_W^j. Hence, we have ⟨w_{j,i}, x⟩ > 0. Next, we use Fubini's theorem to simplify the inner product. Using the inequality proved above and combining the two resulting bounds, we obtain the claim, where C_p is defined as in Lemma 5.2 and v̄_j as in Lemma 5.1.
Hence, the desired result follows from Lemma 5.1 and Lemma 5.2.
Note that one component of each w_j is increasing while the weights are bounded by assumption; hence, the summation of the increments over all steps must also be bounded. This gives the following proposition, where C_p is as defined in Lemma 5.2 and v̄_j as in Lemma 5.1. This implies that the coarse gradient vanishes along the iterations, as long as ṽ_{i,j} > 0.
6. Landscape Properties. We have shown that, under the boundedness assumption, the algorithm converges to some point where the coarse gradient vanishes. However, this does not immediately imply convergence to a valid point, because the coarse gradient is a fake gradient. We will need the following lemma to prove Proposition 2, which confirms that points with vanishing coarse gradient are indeed global minima.
The first case is trivial; we show that the second contradicts our assumption. There exists some j ∈ [k] such that H^{ℓ−1}(∂Ω_j ∩ Ω) > 0. It follows from our assumption that Ω = ∪_{j=1}^k Ω_j = ∪_{j′≠j} Ω_{j′}. Since the ∂Ω_j's are hyperplanes, we deduce that Ω_j = Ω_{j′} for some j′ ≠ j, contradicting our assumption that all the Ω_j's are distinct.
The following result shows that the coarse gradient vanishes only at a global minimizer with zero loss, except for some degenerate cases.
By assumption, ∇̃_{w_j} l_i(W) = 0 for all j with ṽ_{i,j} > 0, which implies that 1_{Ω_W}(x) 1_{Ω^a_{w_j}}(x) = 0 for all ṽ_{i,j} > 0 and a ∈ [n] almost surely. Now, for any x ∈ Ω^a_{w_j}, we have x ∉ Ω_W. Note that x ∉ Ω_W if and only if o_i − o_ξ ≥ 1; then for any x ∈ Ω^a_{w_j}, since v_{i,j} − v_{ξ,j} < 1, there exist j′ ≠ j and a′ ∈ [n] such that v_{i,j′} > 0 and x ∈ Ω^{a′}_{w_{j′}}. By Lemma 6.1, P_{{x,y}∼D_i}[Ω_W] = 0, and thus l_i(W) = 0. The following lemma shows that the expected coarse gradient is continuous except where w_{j,i} = 0 for some j ∈ [k].
Lemma 6.2. Consider the network in (2.1). Then ∇̃_{w_j} l_i(W) is continuous at every W with w_{j,i} ≠ 0 for all j ∈ [k].
Proof of Lemma 6.2. It suffices to prove the result for each j ∈ [k]. For any W_0 satisfying our assumption, the desired result follows from the Dominated Convergence Theorem.

7. Proof of Main Results. Equipped with the technical lemmas above, we now present the proofs of our main results.
Proof of Theorem 3.1. It is easily seen from Assumption 1 that v_{i,j} > 0 if and only if ṽ_{i,j} > 0. By Lemma 5.3, if v_{i,j} > 0 and |w^0_{j,i}| > 0, then |w^t_{j,i}| > 0 for all t. Since W is randomly initialized, we can ignore the possibility that w^0_{j,i} = 0 for some j ∈ [k] and i ∈ [n]. Moreover, Proposition 1 and Equation (2.5) imply that the coarse gradient vanishes along the iterations. Suppose W^∞ is an accumulation point with w^∞_{j,r} ≠ 0 for all j ∈ [k] and r ∈ [n]; then ∇̃_{w_j} l_i(W^∞) = 0 for all v_{i,j} > 0. Next, we consider the case where w^∞_{j,r} = 0 for some j ∈ [k] and r ∈ [n]; Lemma 5.2 then implies v_{r,j} = 0. We construct a new sequence Ŵ^t in which these degenerate components are removed, so that ô_r = o_r for all r ∈ [n]. This implies that Ω_{Ŵ^t} = Ω_{W^t}, so the coarse partial gradients agree for all j ∈ [k]. Letting t go to infinity on both sides, and invoking Lemma 5.1 and Lemma 5.2, we obtain ∇̃_W l_i(W^∞) = 0. By Proposition 2, l_i(W^∞) = 0, which completes the proof.

8. Experiments.
In this section, we conduct experiments on both synthetic and MNIST data to verify and complement our theoretical findings. Experiments on larger networks and data sets are left for future work.

8.1. Synthetic Data. Let {e_1, e_2, e_3, e_4} be an orthonormal basis of R^4, let θ be an acute angle, and set v_1 = e_1, v_2 = sin θ e_2 + cos θ e_3, v_3 = e_3, v_4 = e_4. This yields two linearly independent subspaces of R^4, namely V_1 = Span({v_1, v_2}) and V_2 = Span({v_3, v_4}); one easily calculates that the angle between V_1 and V_2 is θ. With X̃_1 ⊂ V_1 and X̃_2 ⊂ V_2 the data supports, let D̃_i be the uniform distribution on X̃_i × {i}, let D̃ be the even mixture of D̃_1 and D̃_2, and let X̃ = X̃_1 ∪ X̃_2. The activation function σ is the 4-bit quantized ReLU, i.e., b = 4 and q_b = 15. For simplicity, we take k = 24 and set v_{i,j} = 1/2 if j − 12(i − 1) ∈ [12], for i ∈ [2] and j ∈ [24], and v_{i,j} = 0 otherwise; the network output is then given by (2.1) with h_j = ⟨w_j, x⟩ and x ∈ R^4, and the population loss by (2.3). We choose the ReLU STE (i.e., g(x) = max{0, x}) and run the coarse gradient iteration (2.5) with learning rate η = 1. We find that the coarse gradient method converges to a global minimum with zero loss. As shown in the box plots of Fig. 3, the convergence still holds when the subspaces V_1 and V_2 form an acute angle, and even when the data come from two levels of Gaussian noise perturbations of V_1 and V_2. The convergence is faster, and the weight norm smaller, as θ increases towards π/2, i.e., as V_1 and V_2 become orthogonal to each other. This observation clearly supports the robustness of Theorem 3.1 beyond the regime of orthogonal classes.
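The synthetic experiment is easy to reproduce in a few dozen lines of numpy. The sketch below uses the orthogonal case (θ = π/2, data on circles of radius 2 inside V_1 and V_2), a multiclass hinge loss of the form max{0, 1 − (o_y − max_{i≠y} o_i)}, full-batch coarse gradient steps with the ReLU STE, and our own illustrative hyperparameters (η = 0.1, 300 iterations, 100 samples per class); it is a sketch of the setup above, not the exact script behind Fig. 3.

```python
import numpy as np

rng = np.random.default_rng(0)
QB = 15  # 4-bit quantization: q_b = 2**4 - 1

def sigma(h):          # forward: 4-bit quantized ReLU
    return np.clip(np.ceil(h), 0, QB)

def g_prime(h):        # backward: ReLU STE, g'(h) = 1_{h > 0}
    return (h > 0).astype(float)

d, k, n, per = 4, 24, 2, 100
V = np.zeros((n, k))
V[0, :12] = 0.5        # neurons 1..12 serve class 1
V[1, 12:] = 0.5        # neurons 13..24 serve class 2

def sample(cls, num):  # points of norm 2 on a circle inside V_1 or V_2
    t = rng.uniform(0.0, 2 * np.pi, num)
    X = np.zeros((num, d))
    X[:, 2 * cls], X[:, 2 * cls + 1] = 2 * np.cos(t), 2 * np.sin(t)
    return X

X = np.vstack([sample(0, per), sample(1, per)])
y = np.repeat([0, 1], per)

def loss_grad_out(W):
    H = X @ W                                   # pre-activations, shape (200, k)
    O = sigma(H) @ V.T                          # outputs o_i, shape (200, n)
    xi = 1 - y                                  # the competing class (n = 2)
    rows = np.arange(len(y))
    margins = 1.0 - (O[rows, y] - O[rows, xi])
    viol = (margins > 0).astype(float)[:, None]
    coeff = (V[xi] - V[y]) * g_prime(H) * viol  # per-sample coarse grad coefficients
    return np.mean(np.maximum(margins, 0.0)), X.T @ coeff / len(y), O

W = 0.01 * rng.normal(size=(d, k))
loss0, _, _ = loss_grad_out(W)
for _ in range(300):
    _, G, _ = loss_grad_out(W)
    W -= 0.1 * G                                # coarse gradient step (2.5)
loss1, _, O = loss_grad_out(W)
acc = np.mean(np.argmax(O, axis=1) == y)
```

In runs of this sketch, the empirical hinge loss drops substantially and the training accuracy becomes high, consistent with the convergence behavior described above.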

8.2. MNIST Experiments. Our theory covers a broad range of STEs, but their empirical performances on deeper networks may differ. In this subsection, we compare the performances of the three types of STEs shown in Fig. 2.
As in [2], we resort to a modified batch normalization layer [13] and add it before each activation layer, so that the inputs to the quantized activation layers always follow a unit Gaussian distribution. The scaling factor τ applied to the output of the quantized activation layers can then be pre-computed via a k-means approach and kept fixed during the whole training process. The optimizer used to train the quantized LeNet-5 is the (stochastic) coarse gradient method with momentum 0.9. The batch size is 64, and the learning rate is initialized to 0.1 and decays by a factor of 10 after every 20 epochs. The three backward-pass substitutions g for the straight-through estimator are (1) the ReLU g(x) = max{x, 0}, (2) the reverse exponential g(x) = max{0, q_b(1 − e^{−x/q_b})}, and (3) the log-tailed ReLU. The validation accuracy for each epoch is shown in Fig. 4, and the validation accuracies at bit-widths 2 and 4 are listed in Table 2. Our results show that these STEs all perform very well and give satisfactory accuracy. Specifically, the reverse exponential and log-tailed STEs are comparable, and both are slightly better than the ReLU STE. In Fig. 5, we show 2D projections of MNIST features at the end of 100-epoch training of a 7-layer convolutional neural network [24] with quantized activation. The features are extracted from the input to the last fully connected layer; the 10 classes are color coded, and the feature points cluster near linearly independent subspaces. Together with subsection 8.1, this gives numerical evidence that the linearly independent subspace data structure (an extension of subspace orthogonality) occurs for high-level features in a deep network achieving nearly perfect classification, rendering support to the realism of our theoretical study.
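For concreteness, the training recipe above (momentum coarse gradient steps with a step-decay learning rate) can be sketched in a framework-agnostic way; the function names below are our own, and in practice the coarse gradient would come from the STE backward pass of the network.

```python
import numpy as np

def lr_at(epoch, base=0.1, drop=10.0, every=20):
    """Step decay as used above: divide the base rate by `drop` every `every` epochs."""
    return base / drop ** (epoch // every)

def momentum_step(W, vel, coarse_grad, lr, beta=0.9):
    """One (stochastic) coarse gradient step with heavy-ball momentum."""
    vel = beta * vel + coarse_grad
    return W - lr * vel, vel

# One illustrative step on dummy values.
W, vel = np.zeros(3), np.zeros(3)
W, vel = momentum_step(W, vel, np.ones(3), lr_at(0))
```

The momentum buffer accumulates past coarse gradients, so even though each coarse gradient is a biased oracle, the update direction is smoothed across mini-batches.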
Enlarging the angles between linear subspaces can improve classification accuracy; see [27] for such an effort on the MNIST and CIFAR-10 data sets via a linear feature transform.

8.3. CIFAR-10 Experiments. In this experiment, we train VGG-11/ResNet-20 with a 4-bit activation function on the CIFAR-10 data set to numerically validate the boundedness assumption on the ℓ2-norm of the weights. The optimizer is momentum SGD with no weight decay. We use an initial learning rate of 0.1, with a decay factor of 0.1 at the 80th and 140th epochs. We see from Fig. 6 that the ℓ2-norm of the weights stays bounded during the training process. The figure also shows that the norm of the weights is generally increasing in epochs, which coincides with our theoretical finding in Lemma 5.3.

9. Summary. We studied a novel and important biased first-order oracle, called the coarse gradient, for training quantized neural networks. The effectiveness of the coarse gradient relies on the choice of STE used in the backward pass only. We proved the convergence of coarse gradient methods for a class of STEs bearing certain monotonicity, in non-linear classification with one-hidden-layer networks. In experiments on the MNIST data set, we considered three different proxy functions satisfying the monotonicity condition for the backward pass, namely the ReLU, the reverse exponential function, and the log-tailed ReLU, for training LeNet-5 with quantized activations. All of them exhibited good performance, which verifies our theoretical findings. In future work, we plan to expand the theoretical understanding of coarse gradient descent for deep activation quantized networks.
10. Acknowledgement. This work was partially supported by NSF grants IIS-1632935, DMS-1854434, DMS-1924548, and DMS-1924935. On behalf of all authors, the corresponding author states that there is no conflict of interest.