For text-independent speaker identification, which involves multi-class targets and large training sets, the classical logistic regression model is extended to a multinomial form, and an L2 penalty term is added to improve generalization. The negative-log-likelihood objective is dualized and the model is trained with a sequential minimal optimization (SMO) algorithm, which is faster than conventional multi-class kernel logistic regression training. Experiments show that the model is simple to construct, the training algorithm is fast, and its recognition rate exceeds that of "one-versus-one" multi-class schemes built from classical support vector machines and from binary kernel logistic regression models.
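For reference, the primal objective summarized above (an L2-regularized multinomial negative log-likelihood) can be sketched as follows; the symbols $w_k$, $\phi$, $\lambda$, $n$ and $K$ are generic notation introduced here, not taken from the paper:

\[
\min_{w_1,\dots,w_K}\; -\sum_{i=1}^{n}\log\frac{\exp\!\big(w_{y_i}^{\top}\phi(x_i)\big)}{\sum_{k=1}^{K}\exp\!\big(w_k^{\top}\phi(x_i)\big)} \;+\; \frac{\lambda}{2}\sum_{k=1}^{K}\lVert w_k\rVert^{2}
\]

Here $\phi$ is the (kernel-induced) feature map, $y_i\in\{1,\dots,K\}$ indexes the $K$ speakers, and $\lambda>0$ weights the L2 penalty. As described in the abstract, the paper works with the dual of this objective and solves it with an SMO-style decomposition rather than optimizing the primal directly.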