Vanchurin, Vitaly (2021). Toward a theory of machine learning. *Machine Learning: Science and Technology*, 2(3), 035012. ISSN 2632-2153.
Abstract
We define a neural network as a septuple consisting of (1) a state vector, (2) an input projection, (3) an output projection, (4) a weight matrix, (5) a bias vector, (6) an activation map and (7) a loss function. We argue that the loss function can be imposed either on the boundary (i.e. input and/or output neurons) or in the bulk (i.e. hidden neurons) for both supervised and unsupervised systems. We apply the principle of maximum entropy to derive a canonical ensemble of the state vectors subject to a constraint imposed on the bulk loss function by a Lagrange multiplier (or an inverse temperature parameter). We show that in equilibrium the canonical partition function must be a product of two factors: a function of the temperature, and a function of the bias vector and weight matrix. Consequently, the total Shannon entropy consists of two terms which represent, respectively, the thermodynamic entropy and the complexity of the neural network. We derive the first and second laws of learning: during learning the total entropy must decrease until the system reaches equilibrium (the second law), and the increment in the loss function must be proportional to the increment in the thermodynamic entropy plus the increment in the complexity (the first law). We calculate the entropy destruction to show that the efficiency of learning is given by the Laplacian of the total free energy, which is to be maximized in an optimal neural architecture, and we explain why this optimization condition is better satisfied in a deep network with a large number of hidden layers. The key properties of the model are verified numerically by training a supervised feedforward neural network with stochastic gradient descent. We also discuss the possibility that the entire Universe at its most fundamental level is a neural network.
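To make the abstract's statistical-mechanics construction concrete, here is a minimal sketch of the ensemble it describes. The notation is chosen for this sketch rather than quoted from the paper (x for the state vector, b for the bias vector, ŵ for the weight matrix, H for the bulk loss, β for the inverse temperature, U = ⟨H⟩ for the average loss), and taking T = 1/β as the proportionality constant in the first law is an assumption of the sketch, not the paper's exact statement.

```latex
% Sketch only; notation is assumed, not quoted from the paper.
% Canonical (maximum-entropy) ensemble over state vectors x,
% constrained on the bulk loss H via the Lagrange multiplier beta:
p(\mathbf{x}) = \frac{e^{-\beta H(\mathbf{x},\mathbf{b},\hat{w})}}{Z(\beta,\mathbf{b},\hat{w})},
\qquad
Z(\beta,\mathbf{b},\hat{w}) = \int \! d^N x \, e^{-\beta H(\mathbf{x},\mathbf{b},\hat{w})}.

% Equilibrium factorization asserted in the abstract:
Z(\beta,\mathbf{b},\hat{w}) = Z_\beta(\beta)\, Z_c(\mathbf{b},\hat{w}),

% so the Shannon entropy S = -\int p \log p = \beta U + \log Z splits into
% a thermodynamic term and a complexity term:
S = \underbrace{\beta U + \log Z_\beta(\beta)}_{S_{\mathrm{th}}}
  + \underbrace{\log Z_c(\mathbf{b},\hat{w})}_{C},
\qquad U \equiv \langle H \rangle.

% First and second laws of learning as stated in the abstract
% (proportionality constant T = 1/beta assumed here):
dU = T\,(dS_{\mathrm{th}} + dC),
\qquad
\frac{d}{dt}\big(S_{\mathrm{th}} + C\big) \le 0 \quad \text{during learning}.
```

Read this way, the second law says that training drives the total entropy S_th + C downward until equilibrium is reached, which is the sense in which the abstract speaks of entropy destruction during learning.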
| Item Type: | Article |
| --- | --- |
| Subjects: | South Archive > Multidisciplinary |
| Depositing User: | Unnamed user with email support@southarchive.com |
| Date Deposited: | 04 Jul 2023 04:33 |
| Last Modified: | 03 Jun 2024 12:43 |
| URI: | http://ebooks.eprintrepositoryarticle.com/id/eprint/1226 |