Optimization of Neural Network Weights with a Nature-Inspired Algorithm
Abstract
Neural networks are machine learning models inspired by the structure and function of the human brain. Artificial neural networks (ANNs) perform well on tasks where conventional approaches fail, and they play a crucial role in knowledge representation and learning. The strength of the connection between neurons is determined by weights. Weight optimization is critical to neural networks because it enables more accurate predictions and reduces loss. A variety of procedures are used for weight optimization. Gradient-based algorithms are the most widely used methods for optimizing neural network weights, but they cannot handle non-differentiable functions, and for non-convex functions they may become trapped in local minima. Many significant real-world problems are non-convex, so gradient descent can leave algorithms stuck in local minima. In this paper, we propose a novel gradient-free approach to optimizing neural network weights using a genetic algorithm. The genetic algorithm is a meta-heuristic based on the process of natural evolution, and it can solve both constrained and unconstrained optimization problems. Hence, it avoids the convergence problems posed by non-differentiable functions and can guide solutions towards global optima. We present an algorithm for optimizing neural network weights with a genetic algorithm, prove its correctness using the loop-invariant technique, and analyze its computational cost. Lastly, we demonstrate the proposed approach on the MNIST dataset. The global-search capability of the genetic algorithm overcomes the local-minimum trapping that afflicts gradient methods on non-convex functions.
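To make the general technique concrete, the following is a minimal sketch of genetic-algorithm weight optimization for a small feed-forward network. It does not reproduce the paper's algorithm: the network size, tournament selection, uniform crossover, Gaussian mutation, elitism, and the toy dataset (a stand-in for MNIST, to keep the example self-contained) are all illustrative assumptions.

```python
# Minimal sketch: evolving neural network weights with a genetic algorithm
# instead of gradient descent. All hyperparameters and operators below are
# illustrative assumptions, not the paper's exact specification.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (stand-in for MNIST, to stay self-contained).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

N_IN, N_HID, N_OUT = 4, 8, 1
GENOME = N_IN * N_HID + N_HID * N_OUT  # length of the flattened weight vector

def forward(w, X):
    """Decode a flat genome into weight matrices and run the network."""
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output

def fitness(w):
    """Negative mean squared error, so higher fitness means lower loss."""
    pred = forward(w, X).ravel()
    return -np.mean((pred - y) ** 2)

def tournament(pop, fits, k=3):
    """Return the fittest of k randomly chosen individuals."""
    idx = rng.integers(len(pop), size=k)
    return pop[idx[np.argmax(fits[idx])]]

POP, GENS, MUT_RATE, MUT_STD = 50, 100, 0.1, 0.3
pop = rng.normal(size=(POP, GENOME))

for gen in range(GENS):
    fits = np.array([fitness(w) for w in pop])
    children = [pop[np.argmax(fits)].copy()]     # elitism: keep the best as-is
    while len(children) < POP:
        p1, p2 = tournament(pop, fits), tournament(pop, fits)
        mask = rng.random(GENOME) < 0.5          # uniform crossover
        child = np.where(mask, p1, p2)
        mut = rng.random(GENOME) < MUT_RATE      # Gaussian mutation
        child = child + mut * rng.normal(0.0, MUT_STD, GENOME)
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(w) for w in pop])]
acc = np.mean((forward(best, X).ravel() > 0.5) == y)
print(f"training accuracy after {GENS} generations: {acc:.2f}")
```

Note that the loop evaluates fitness without ever computing a gradient, which is why the approach applies to non-differentiable loss functions; the elitism step guarantees the best fitness never decreases from one generation to the next.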