Exploring Technologies for Extractive Text Summarization: A Review of Transformer and Reinforcement Learning Models

Year : 2025 | Volume : 15 | Issue : 01 | Page : 1-6
    By

  • Rajshree Tarapure,
  • Sharayu Mulay,
  • Ashlesha Kathe,
  • Prapti Jadhav,
  • Shalaka Deore,

  1. Student, Department of Computer Engineering, Modern Education Society’s College of Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India
  2. Student, Department of Computer Engineering, Modern Education Society’s College of Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India
  3. Student, Department of Computer Engineering, Modern Education Society’s College of Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India
  4. Student, Department of Computer Engineering, Modern Education Society’s College of Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India
  5. Student, Department of Computer Engineering, Modern Education Society’s College of Engineering, Savitribai Phule Pune University, Pune, Maharashtra, India

Abstract

In recent years, the amount of information on the Internet has grown exponentially, creating a need to condense large volumes of raw data into information the human brain can readily absorb. Automatic Text Summarization (ATS) is a branch of Natural Language Processing (NLP) that shortens long texts while preserving the most important information in a clear, easy-to-understand form. This review explores methods for extracting content from text and assesses their relevance. Reinforcement Learning is applied to extractive summarization to make summaries more relevant, coherent, and varied: the system learns to produce better summaries by receiving a reward when a summary meets these goals. According to the review, Transformer models excel at capturing how words in a sentence relate to one another and to the overall meaning, a capability that has proven highly useful for tasks such as text summarization. They handle long passages of text well and extract the most important ideas, so the system improves at selecting the right sentences and organizing them effectively. We examine the Transformer model, the implementation of BERT, and the integration of reinforcement learning. This study describes the main processes involved in content extraction and their important role in improving the quality and performance of text summarization.
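The two ideas sketched in the abstract, scoring sentences to select extracts and rewarding a summary for covering a reference, can be illustrated with a deliberately simplified, frequency-based sketch. This is not the BERT/Transformer scorer or the full RL setup the review surveys; the function names and the word-frequency scoring are illustrative stand-ins only:

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Select the k highest-scoring sentences, kept in original order.
    Scoring here is mean word frequency -- a crude baseline standing in
    for the Transformer-based sentence scorers discussed in the review."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sent):
        tokens = re.findall(r'\w+', sent.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return ' '.join(s for s in sentences if s in top)

def unigram_recall(candidate, reference):
    """A ROUGE-1-style recall: the fraction of reference words that the
    candidate covers. An RL agent could use a score like this as the
    reward signal that the abstract describes."""
    cand = set(re.findall(r'\w+', candidate.lower()))
    ref = re.findall(r'\w+', reference.lower())
    if not ref:
        return 0.0
    return sum(1 for t in ref if t in cand) / len(ref)
```

For example, `extractive_summary("The cat sat. The cat ran. Dogs bark loudly sometimes.", k=2)` keeps the two sentences built from the most frequent words, and `unigram_recall` rewards a candidate in proportion to how much of a reference summary it covers.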

Keywords: Extractive, single document, transformer model, reinforcement learning, ATS, NLP

[This article belongs to Current Trends in Signal Processing]

How to cite this article:
Rajshree Tarapure, Sharayu Mulay, Ashlesha Kathe, Prapti Jadhav, Shalaka Deore. Exploring Technologies for Extractive Text Summarization: A Review of Transformer and Reinforcement Learning Models. Current Trends in Signal Processing. 2025; 15(01):1-6.
How to cite this URL:
Rajshree Tarapure, Sharayu Mulay, Ashlesha Kathe, Prapti Jadhav, Shalaka Deore. Exploring Technologies for Extractive Text Summarization: A Review of Transformer and Reinforcement Learning Models. Current Trends in Signal Processing. 2025; 15(01):1-6. Available from: https://journals.stmjournals.com/ctsp/article=2025/view=197542




Regular Issue Subscription Review Article
Volume 15
Issue 01
Received 07/12/2024
Accepted 06/01/2025
Published 08/02/2025
Publication Time 63 Days

