To address the low training efficiency of least squares support vector regression (LS-SVR) on large-sample learning problems, a fast learning algorithm for LS-SVR is proposed. First, the Euclidean distance is generalized to define a similarity measure in the high-dimensional feature space of support vector regression. An unsupervised kernel clustering algorithm is then constructed to select support vectors, and the solution of the original LS-SVR learning problem is approximated by the Nyström method. Finally, the performance of the model is tested on the Sinc function and several data sets. The experimental results show that, with no significant loss of prediction accuracy, the proposed model avoids the out-of-memory errors that LS-SVR encounters on large-sample learning problems and significantly improves its learning efficiency.
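To make the pipeline described above concrete, the following is a minimal sketch of the three steps (feature-space distance, unsupervised kernel clustering for landmark/support-vector selection, and a Nyström-approximated LS-SVR solve). It assumes an RBF kernel, a k-medoid-style clustering rule, and a Woodbury-identity solve; these choices and all names (rbf_kernel, select_landmarks, nystrom_lssvr_fit, lam, gamma, m) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2)  (assumed kernel choice)
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def kernel_distance(X, Z, gamma=1.0):
    # generalized (feature-space) squared distance:
    # d^2(x, z) = k(x, x) - 2 k(x, z) + k(z, z) = 2 - 2 k(x, z) for the RBF kernel
    return 2.0 - 2.0 * rbf_kernel(X, Z, gamma)

def select_landmarks(X, m, gamma=1.0, n_iter=10, seed=0):
    # k-medoid-style clustering in feature space, used here as a stand-in
    # for the paper's unsupervised kernel clustering step (an assumption):
    # assign points to the nearest of m medoids, then recentre each medoid.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), m, replace=False)
    for _ in range(n_iter):
        assign = kernel_distance(X, X[idx], gamma).argmin(axis=1)
        for j in range(m):
            members = np.where(assign == j)[0]
            if len(members) == 0:
                continue
            Dm = kernel_distance(X[members], X[members], gamma)
            idx[j] = members[Dm.sum(axis=1).argmin()]
    return np.unique(idx)   # may return fewer than m if medoids coincide

def nystrom_lssvr_fit(X, y, landmarks, lam=1e-2, gamma=1.0):
    # LS-SVR dual system: (K + lam*I) alpha + b*1 = y, 1^T alpha = 0,
    # with K replaced by its Nystrom approximation C W^{-1} C^T and the
    # inverse applied through the Woodbury identity (lam = 1/gamma_reg).
    C = rbf_kernel(X, X[landmarks], gamma)              # n x m
    W = rbf_kernel(X[landmarks], X[landmarks], gamma)   # m x m
    W += 1e-8 * np.eye(len(landmarks))                  # jitter for stability

    def A_inv(v):
        # (lam*I + C W^{-1} C^T)^{-1} v without forming any n x n matrix
        inner = lam * W + C.T @ C
        return (v - C @ np.linalg.solve(inner, C.T @ v)) / lam

    ones = np.ones(len(X))
    s1, sy = A_inv(ones), A_inv(y)
    b = ones @ sy / (ones @ s1)
    alpha = sy - b * s1
    return alpha, b

def nystrom_lssvr_predict(Xtest, X, alpha, b, gamma=1.0):
    return rbf_kernel(Xtest, X, gamma) @ alpha + b

# Toy check on the Sinc function mentioned in the abstract (illustrative only)
X = np.linspace(-10, 10, 2000)[:, None]
y = np.sinc(X[:, 0] / np.pi)                    # sin(x)/x
lm = select_landmarks(X, m=50)
alpha, b = nystrom_lssvr_fit(X, y, lm)
yhat = nystrom_lssvr_predict(X, X, alpha, b)
```

The point of the Nyström/Woodbury step is that only m x m systems are solved and no n x n kernel matrix is ever formed, which is what lets this kind of scheme sidestep the memory overflow and cubic cost of exact LS-SVR on large samples.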