Knowledge base question answering aims to answer natural language questions by querying an external knowledge base, and has been widely applied in many real-world systems. Most existing methods are template-based or train BiLSTMs or CNNs on a task-specific dataset. However, hand-crafted templates are time-consuming to design and highly rigid, lacking generalization ability. At the same time, BiLSTMs and CNNs require large-scale training data, which is impractical in most cases. To solve these problems, we utilize the prevailing pre-trained BERT model, which leverages prior linguistic knowledge to obtain deep contextualized representations. Experimental results demonstrate that our model achieves state-of-the-art performance on the NLPCC-ICCPOL 2016 KBQA dataset, with an 84.12% averaged F1 score (a 1.65% absolute improvement).
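The abstract does not specify the implementation, but a minimal sketch of the core idea, obtaining deep contextualized representations of a question from a pre-trained BERT model, is shown below using the Hugging Face `transformers` library. The checkpoint name `bert-base-chinese` and the example question are assumptions (chosen because the NLPCC-ICCPOL 2016 KBQA dataset is in Chinese), not details taken from the paper.

```python
# A minimal sketch (not the authors' code): encode a question with a
# pre-trained BERT model to obtain deep contextualized token representations.
# The checkpoint "bert-base-chinese" is an assumption, since the
# NLPCC-ICCPOL 2016 KBQA dataset is Chinese.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

question = "姚明的身高是多少？"  # hypothetical example question

inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextualized vector per input token: shape (batch, seq_len, hidden=768)
token_representations = outputs.last_hidden_state
print(token_representations.shape)
```

In a KBQA pipeline of this kind, such per-token vectors would typically feed downstream components such as entity linking and predicate matching; the exact architecture here is not described in the excerpt.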