Neural Identifier using Super-Twisting Differentiator Training Algorithm

Abstract
Artificial neurons are a mathematical abstraction of biological neurons. The model of an artificial neuron captures the main characteristics of a biological neuron and represents them as weights, an adder, and an activation function, where the learned knowledge is stored in the adjusted weights. Artificial neural networks (ANNs) are structures of interconnected artificial neurons, and the learning algorithms that provide the rules for adjusting the weights are called training algorithms. ANNs are commonly used to solve problems of classification, pattern recognition, function approximation, and control, among others. Often, ANNs are trained with learning rules based on the derivatives of the error with respect to the weights. Backpropagation (BP), which uses first-order derivative information, is probably the best-known training algorithm, but it exhibits slow error convergence. Over the last two decades, training algorithms based on the extended Kalman filter (EKF), which converges faster than BP, have been exploited; however, the derivative calculations required by the EKF algorithm demand high computational resources. To avoid computing these derivatives, this work proposes an online training algorithm based on a discrete-time super-twisting (ST) differentiator from sliding-mode theory. Super-twisting is a sliding-mode control technique that attenuates chattering, i.e., high-frequency oscillation effects. The technique consists in designing a sliding variable of relative degree r = 1 such that a discontinuous control action forces the sliding variable to zero in finite time, where it remains even in the presence of perturbations and uncertainties. Recently, a discrete-time super-twisting differentiator training algorithm was proposed in the literature; however, it uses the classic ST structure and is highly complex to implement. In contrast, the work presented here uses a different ST structure, also taken from the literature, which improves the convergence time and is easier to implement. Since sliding-mode differentiators can estimate derivatives in finite time, the proposed training algorithm does not require the derivatives to be computed explicitly, unlike conventional training algorithms. The proposed training algorithm is implemented to train a recurrent high-order neural network (RHONN) identifier in a series-parallel configuration, and its performance is compared with that of the EKF training algorithm. Simulation results of the RHONN identifier for the Lorenz system are presented.
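To make the neuron model concrete, the following minimal sketch implements an artificial neuron as described above: a weighted sum (the adder) passed through an activation function. The function name and the choice of tanh as activation are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def neuron(x, w, b, phi=np.tanh):
    """Artificial neuron: the adder forms the weighted sum of the inputs,
    and the activation function phi produces the output. The learned
    knowledge is stored in the weights w and bias b."""
    return phi(np.dot(w, x) + b)

# Example: a neuron with three inputs.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(neuron(x, w, b=0.2))
```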
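For reference, below is a minimal sketch of a standard EKF weight update for a scalar network output; the covariances Q, R and the learning rate eta are illustrative assumptions. The derivative vector H, which must be recomputed at every step, is the source of the computational cost mentioned above.

```python
import numpy as np

def ekf_update(w, P, e, H, eta=1.0, Q=1e-4, R=1e-2):
    """One EKF update of the weight vector w for a scalar output error e:

        K = P H / (R + H^T P H),   w <- w + eta * K * e,
        P <- P - K H^T P + Q * I.

    H is the vector of derivatives of the network output with respect to
    the weights; recomputing it at every step is what makes EKF training
    computationally expensive for general networks."""
    PH = P @ H                     # P H
    K = PH / (R + H @ PH)          # Kalman gain
    w = w + eta * K * e            # weight update
    P = P - np.outer(K, PH) + Q * np.eye(len(w))
    return w, P

# Example call with an arbitrary regressor-like H.
w, P = ekf_update(np.zeros(3), np.eye(3), e=0.5, H=np.array([1.0, 0.2, -0.1]))
print(w)
```

Note that for a linear-in-weights model such as a RHONN state x_hat = w^T z, H reduces to the regressor z; the cost argument applies to general network structures.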
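The following is a minimal sketch of a discrete-time super-twisting differentiator, using an explicit-Euler discretization of the classic ST structure; the gains and step size are illustrative assumptions, and the paper itself employs a different, improved ST structure.

```python
import numpy as np

def st_differentiator(f, T, lam0=3.0, lam1=2.0):
    """Estimate the derivative of the sampled signal f (sample time T) with
    an explicit-Euler discretization of the super-twisting differentiator:

        e_k      = z0_k - f_k
        z0_{k+1} = z0_k + T * (z1_k - lam0 * |e_k|**0.5 * sign(e_k))
        z1_{k+1} = z1_k - T * lam1 * sign(e_k)

    z1 converges to the derivative of f in finite time, provided the gains
    dominate the Lipschitz constant of the derivative (values here are
    illustrative)."""
    z0, z1 = f[0], 0.0
    d = np.zeros_like(f)
    for k, fk in enumerate(f):
        d[k] = z1
        e = z0 - fk
        z0 += T * (z1 - lam0 * np.sqrt(abs(e)) * np.sign(e))
        z1 += -T * lam1 * np.sign(e)
    return d

# Sanity check: differentiate sin(t); the estimate approaches cos(t)
# after a finite-time transient.
T = 1e-3
t = np.arange(0.0, 5.0, T)
d = st_differentiator(np.sin(t), T)
print(np.max(np.abs(d[2000:] - np.cos(t[2000:]))))  # small residual error
```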
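Finally, a minimal sketch of the identification setting: the Lorenz system as the plant and a RHONN one-step-ahead predictor in series-parallel configuration, i.e., the prediction is built from the measured plant state rather than from the network's own output. The high-order regressor terms here are illustrative assumptions, and the weight update itself (EKF- or ST-based) is omitted.

```python
import numpy as np

def lorenz_step(x, T, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz system (classic parameters)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + T * dx

def z_terms(x):
    """Illustrative high-order regressor: sigmoids of the states plus one
    second-order product term and a bias (the paper's terms may differ)."""
    s = np.tanh(x)
    return np.array([s[0], s[1], s[2], s[0] * s[2], 1.0])

# Series-parallel configuration: the prediction x_hat_{k+1} = W z(x_k) is
# driven by the measured plant state x_k, not by the network's own output.
T, N = 0.01, 1000
x = np.array([1.0, 1.0, 1.0])
W = 0.1 * np.ones((3, 5))          # weights the training algorithm adjusts
for k in range(N):
    x_hat_next = W @ z_terms(x)    # RHONN one-step-ahead prediction
    x_next = lorenz_step(x, T)     # plant measurement
    e = x_next - x_hat_next        # identification error; an ST- or
    x = x_next                     # EKF-based update of W would use e
print(np.linalg.norm(e))           # large here, since W is never trained
```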
Recommended Citation
Rios-Huerta, Daniel; Alanis, Alma Y.; Rios, Jorge; and Arana-Daniel, Nancy, "Neural Identifier using Super-Twisting Differentiator Training Algorithm" (2019). AMCIS 2019 Proceedings. 53.
https://aisel.aisnet.org/amcis2019/treo/treos/53