BERT-Based Model for Reading Comprehension Question Answering

Mosaed, Abdelrahman A.; Hindy, Hanan; Aref, M.

Abstract


Question Answering (QA) has been an open research topic in recent years due to its significance in different domains. Moreover, several challenges remain to be solved before models can mimic human reasoning and answer various questions and tasks efficiently. This paper proposes an implementation of a pre-trained BERT model that aims to answer reading comprehension questions. The proposed model can be applied to different downstream tasks whose input is either a single sentence, such as sentiment analysis, or a pair of sentences, such as question answering over a given context. The different input embedding representations and output representations allow the model to be adapted to any such downstream task. The proposed model is BERTBASE, with a total of 110M pre-trained parameters. The model has been fine-tuned and evaluated on the SQuAD 2.0 dataset, reaching an Exact Match score of 61.12 and an F1 score of 72.5.
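
The paper does not publish its implementation here, but the described setup (BERT-base with a span-prediction QA head, sentence-pair input of question and context) corresponds to the standard extractive-QA pattern. Below is a minimal illustrative sketch using the Hugging Face transformers library, assuming a generic bert-base-uncased checkpoint as a stand-in; the authors' actual fine-tuned SQuAD 2.0 weights are not available, so an untuned QA head like this one will produce arbitrary spans until fine-tuned.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Placeholder checkpoint; in practice a BERT-base model fine-tuned on
# SQuAD 2.0 would be loaded here (the paper's checkpoint is assumed,
# not published).
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

question = "What dataset was the model fine-tuned on?"
context = "The model was fine-tuned and evaluated using the SQuAD 2.0 dataset."

# Sentence-pair input: [CLS] question [SEP] context [SEP], with segment
# (token type) embeddings distinguishing the two sentences.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The QA head scores each token as a possible start/end of the answer
# span; the highest-scoring span is decoded back to text.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)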


Other data

Title BERT-Based Model for Reading Comprehension Question Answering
Authors Mosaed, Abdelrahman A.; Hindy, Hanan; Aref, M.
Keywords Natural Language Processing (NLP); Pre-trained models; Question Answering (QA); Reading Comprehension; SQuAD
Issue Date 1-Jan-2023
Conference Proceedings of the 11th IEEE International Conference on Intelligent Computing and Information Systems (ICICIS 2023)
ISBN 9798350322101
DOI 10.1109/ICICIS58388.2023.10391167
Scopus ID 2-s2.0-85184667480



