At present, most neural network learning algorithms train all of a network's parameters simultaneously, which becomes time-consuming when the network is large. Many networks, such as perceptrons, radial basis function networks, probabilistic generalized regression networks, and fuzzy neural networks, are multilayer feedforward networks whose input-output mappings can be expressed as a linear combination of a set of adjustable basis functions. Their parameters therefore fall into two classes: the parameters inside the basis functions, which enter nonlinearly, and the combination coefficients, which enter linearly. This paper proposes an algorithm that learns these two classes of parameters separately. Simulation results show that the algorithm speeds up the learning process and improves the approximation performance of the network.
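To illustrate the idea of separating the two classes of parameters, the following is a minimal sketch, assuming a Gaussian radial basis function network: the combination coefficients are linear in the output and can be solved in closed form by least squares, while the basis parameters (centers and widths) are updated by gradient descent. The function names (`rbf_design`, `fit_separated`) and the simple alternating scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Design matrix of Gaussian bases: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * s_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_separated(X, y, n_basis=10, n_outer=50, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_basis, replace=False)]  # nonlinear parameters
    widths = np.full(n_basis, X.std())                        # nonlinear parameters
    for _ in range(n_outer):
        # Linear step: with the bases fixed, the combination coefficients
        # solve a linear least-squares problem in closed form.
        Phi = rbf_design(X, centers, widths)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        # Nonlinear step: one gradient-descent update of centers and widths
        # on the squared error, holding the linear coefficients fixed.
        r = Phi @ w - y                                        # residuals, shape (N,)
        diff = X[:, None, :] - centers[None, :, :]             # shape (N, M, D)
        g_phi = (r[:, None] * w[None, :]) * Phi                # shape (N, M)
        grad_c = (g_phi[:, :, None] * diff / widths[None, :, None] ** 2).sum(axis=0)
        grad_s = (g_phi * (diff ** 2).sum(axis=2) / widths[None, :] ** 3).sum(axis=0)
        centers -= lr * grad_c
        widths = np.clip(widths - lr * grad_s, 1e-3, None)     # keep widths positive
    return centers, widths, w

# Toy usage: approximate a 1-D sine curve.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X).ravel()
centers, widths, w = fit_separated(X, y)
pred = rbf_design(X, centers, widths) @ w
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

The intended benefit of this split is that the linear coefficients are recovered exactly at every outer iteration, so gradient descent is applied only to the smaller set of nonlinear basis parameters, which is what the abstract credits for the faster learning and better approximation.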