Abstractive Auto Text Summarization using deep learning techniques
Amr Mahmoud Zaki;
Abstract
AMR MAHMOUD ZAKI. Abstractive Auto Text Summarization using deep learning techniques. (Under the direction of Prof. Dr. Hazem M. Abbas and Prof. Dr. Mahmoud I. Khalil).
Text summarization is the task of generating a summary from a long text. Extractive methods were first proposed in the literature as statistical approaches to this task. However, these models lack some sophisticated summarization abilities, such as paraphrasing and generalization. Newer methods, called abstractive methods, are based on neural approaches: they actively try to understand the context of the text in order to generate novel summaries. In this work we study some of these approaches.
Multiple abstractive methods have recently been proposed in the literature; they rely on a basic framework named the encoder-decoder. Multiple baselines have been proposed for the encoder-decoder, namely seq2seq recurrent models and transformer models. Our research studies the problems this encoder-decoder framework suffers from. The seq2seq recurrent model was chosen as the baseline in our work, as it has been thoroughly studied in the literature with the aim of minimizing the encoder-decoder problems. This baseline is built using LSTMs in an encoder-decoder architecture with attention. However, this baseline still suffers from several problems, and this work goes through multiple models that try to solve them: beginning with the Pointer-Generator, then a curriculum learning approach called Scheduled Sampling, and finally newer approaches that combine reinforcement learning with seq2seq.
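The Scheduled Sampling idea mentioned above can be illustrated with a minimal sketch. This is not the thesis's implementation; it is a generic, framework-free illustration of the core mechanism: at each decoder step, feed the ground-truth token with probability epsilon (teacher forcing), otherwise feed the model's own previous prediction, and decay epsilon over training. The function names and the linear decay schedule are illustrative assumptions.

```python
import random

def scheduled_sampling_inputs(gold_tokens, predicted_tokens, epsilon):
    """Choose the next-step decoder inputs under scheduled sampling.

    With probability `epsilon` the gold (ground-truth) token is fed
    (teacher forcing); otherwise the model's own previous prediction
    is fed, exposing the decoder to its own errors during training.
    """
    inputs = []
    for gold, pred in zip(gold_tokens, predicted_tokens):
        if random.random() < epsilon:
            inputs.append(gold)   # teacher forcing: use the reference token
        else:
            inputs.append(pred)   # use the model's own prediction
    return inputs

def epsilon_at(step, total_steps, eps_min=0.0):
    """One common (assumed) schedule: linear decay from 1.0 to eps_min,
    so training starts fully teacher-forced and gradually shifts to
    conditioning on the model's own outputs."""
    return max(eps_min, 1.0 - step / total_steps)
```

With `epsilon = 1.0` this reduces to standard teacher forcing; with `epsilon = 0.0` the decoder always consumes its own predictions, matching inference-time behaviour.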
Other data
| Field | Value |
|---|---|
| Title | Abstractive Auto Text Summarization using deep learning techniques |
| Other Titles | التلخيص النصي التوضيحي باستخدام تقنيات التعلم العميق |
| Authors | Amr Mahmoud Zaki |
| Issue Date | 2020 |
Attached Files
| File | Size | Format |
|---|---|---|
| BB1898.pdf | 913.34 kB | Adobe PDF |
Similar Items from Core Recommender Database
Items in Ain Shams Scholar are protected by copyright, with all rights reserved, unless otherwise indicated.