Inproceedings

Are Your Keywords Like My Queries? A Corpus-Wide Evaluation of Keyword Extractors with Real Searches

Proceedings of the 31st International Conference on Computational Linguistics | pages 1943--1951, January 2025

Author

Galletti, Martina and Prevedello, Giulio and Brugnoli, Emanuele and Lo Sardo, Donald Ruggiero and Gravino, Pietro

Editor

Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven

Abstract

Keyword Extraction (KE) is essential in Natural Language Processing (NLP) for identifying key terms that represent the main themes of a text, and it is vital for applications such as information retrieval, text summarisation, and document classification. Despite the development of various KE methods --- including statistical approaches and advanced deep learning models --- evaluating their effectiveness remains challenging. Current evaluation metrics focus on keyword quality, balance, and overlap with annotations from authors and professional indexers, but neglect real-world information retrieval needs. This paper introduces a novel evaluation method designed to overcome this limitation by using real query data from Google Trends; the method can be applied to both supervised and unsupervised KE approaches. We applied this method to three popular KE approaches (YAKE, RAKE, and KeyBERT) and found that KeyBERT was the most effective in capturing users' top queries, with RAKE also showing surprisingly good performance. The code is open-access and publicly available.
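To illustrate the idea of scoring extractors against real searches, here is a minimal sketch of a query-coverage measure: the fraction of a text's top queries matched by at least one extracted keyword. The function name, the substring-matching rule, and all data are hypothetical illustrations, not the paper's actual metric.

```python
# Hypothetical sketch of a query-based evaluation: score an extractor by how
# many of a text's top real-search queries its keywords cover. All names and
# data are illustrative; the paper's actual metric may differ.

def query_coverage(keywords, top_queries):
    """Fraction of top queries matched by at least one extracted keyword.

    A query counts as matched if any extracted keyword appears in it
    (case-insensitive substring match -- a simplifying assumption).
    """
    if not top_queries:
        return 0.0
    kws = [k.lower() for k in keywords]
    matched = sum(
        1 for q in top_queries
        if any(k in q.lower() for k in kws)
    )
    return matched / len(top_queries)

# Illustrative data, not taken from the paper:
extracted = ["keyword extraction", "google trends", "evaluation"]
queries = ["what is keyword extraction", "google trends api", "best pizza"]
score = query_coverage(extracted, queries)  # matches 2 of the 3 queries
```

Such a coverage score can be computed for any extractor's output, supervised or unsupervised, since it only consumes the final keyword list.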
