The training effectiveness of a neural network still depends heavily on how the training samples are selected. This paper presents three different sample-selection schemes and compares them in a Monte Carlo study. The simulation results show that, although the three methods produce very different sample distributions, the trained networks all generalize well, with performance approaching the Bayes limit. In addition, the generalization ability of the network depends on the size of the training set. A properly chosen training subset therefore not only yields good network performance but also reduces training time.
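The comparison described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's actual method: the three selection schemes (uniform random, class-stratified, and boundary-focused) are stand-ins for the paper's unnamed schemes, and a nearest-centroid threshold on 1-D Gaussian data stands in for the neural network. For this setup the Bayes limit is known analytically (accuracy Φ(1) ≈ 0.841), so the Monte Carlo estimates can be read against it.

```python
import random
import statistics

random.seed(0)

def make_data(n, mu0=-1.0, mu1=1.0, sigma=1.0):
    """Two balanced 1-D Gaussian classes; the Bayes boundary is at (mu0+mu1)/2 = 0."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = random.gauss(mu1 if y else mu0, sigma)
        data.append((x, y))
    return data

def train_centroid(samples):
    """Stand-in for network training: threshold at the midpoint of the class means."""
    m0 = statistics.mean(x for x, y in samples if y == 0)
    m1 = statistics.mean(x for x, y in samples if y == 1)
    return (m0 + m1) / 2.0

def accuracy(threshold, samples):
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

# Three hypothetical selection schemes (illustrative only).
def select_random(pool, k):
    return random.sample(pool, k)

def select_stratified(pool, k):
    cls0 = [s for s in pool if s[1] == 0]
    cls1 = [s for s in pool if s[1] == 1]
    return random.sample(cls0, k // 2) + random.sample(cls1, k - k // 2)

def select_boundary(pool, k):
    # Prefer samples closest to the true decision boundary at x = 0.
    return sorted(pool, key=lambda s: abs(s[0]))[:k]

# Monte Carlo comparison: repeated subset selection + training, one held-out test set.
pool = make_data(2000)
test = make_data(2000)
results = {}
for name, select in [("random", select_random),
                     ("stratified", select_stratified),
                     ("boundary", select_boundary)]:
    accs = [accuracy(train_centroid(select(pool, 100)), test) for _ in range(20)]
    results[name] = statistics.mean(accs)
    print(f"{name:10s} mean test accuracy: {results[name]:.3f}")
```

Under this toy setup all three schemes land close to the Bayes limit despite selecting very differently distributed subsets, mirroring the paper's qualitative finding.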