Similar Text Fragments Extraction for Identifying Common Wikipedia Communities
Svitlana Petrasova, Nina Khairova, Włodzimierz Lewoniewski, Mamyrbayev Orken, Mukhsina Kuralai
Abstract: Extraction of similar text fragments from weakly formalized data is a task of natural language processing and intelligent data analysis, used to solve the problem of automatically identifying connected knowledge fields. In order to search for such common communities in Wikipedia, we propose to use a logical-algebraic model for similar collocation extraction as an additional stage. With the Stanford Part-Of-Speech tagger and the Stanford Universal Dependencies parser, we identify the grammatical characteristics of collocation words. With WordNet synsets, we choose their synonyms. Our dataset includes Wikipedia articles from different portals and projects. The experimental results show the frequencies of synonymous text fragments in Wikipedia articles that form common information spaces. The number of highly frequent synonymous collocations can serve as an indication of key common up-to-date Wikipedia communities.
|Journal series||Data, ISSN 2306-5729, (0 pkt, indicated Indexes)|
|Publication size in sheets||0.5|
|Keywords in Polish||Wikipedia, eksploracja danych, jakość informacji, informacja|
|Keywords in English||Wikipedia, data mining, information quality, information|
|Score||= 15.0, 11-09-2020, ArticleFromJournal|
|Publication indicators||= 0|
|Citation count*||2 (2020-09-16)|
|Notes||Special Issue: Data Stream Mining and Processing|
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.
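The synonymous-collocation matching step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual model: a small hand-written synonym lexicon stands in for WordNet synsets, and the grammatical checks performed with the Stanford POS tagger and Universal Dependencies parser are omitted.

```python
# Toy synonym lexicon standing in for WordNet synsets (hypothetical entries).
SYNSETS = {
    "data": {"data", "information"},
    "information": {"data", "information"},
    "mining": {"mining", "extraction"},
    "extraction": {"mining", "extraction"},
}

def words_synonymous(w1: str, w2: str) -> bool:
    """Two words match if they are identical or listed as each other's synonyms."""
    w1, w2 = w1.lower(), w2.lower()
    return w1 == w2 or w2 in SYNSETS.get(w1, set()) or w1 in SYNSETS.get(w2, set())

def collocations_similar(a: list[str], b: list[str]) -> bool:
    """Collocations (equal-length token lists) count as similar when every
    aligned word pair is synonymous."""
    return len(a) == len(b) and all(words_synonymous(x, y) for x, y in zip(a, b))

print(collocations_similar(["data", "mining"], ["information", "extraction"]))  # True
print(collocations_similar(["data", "mining"], ["information", "quality"]))     # False
```

In a real pipeline the lexicon lookup would be replaced by querying WordNet synsets, and candidate collocations would first be filtered by part-of-speech and dependency patterns, as the abstract describes.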