Natural language processing models are increasingly deployed in real-world applications across a variety of tasks. Recently, given that Graph Neural Networks (GNNs) have proven effective at handling complex structures and preserving global information, several researchers have explored these techniques for text classification, proposing them as an alternative to traditional feature representation models, such as vector-space representations, which often fail to capture the full richness of the text.
To date, a number of graph-based models for text representation, document summarization, and question answering have been proposed and have provided a considerable boost on those tasks. However, most of these strategies were designed for a specific domain and validated on narrow data collections with particular textual features, making them difficult to compare and to extend to new scenarios. Furthermore, the models proposed to date generally base their graph construction on term co-occurrence, leaving aside critical factors such as syntax and co-reference.
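To make the co-occurrence-based construction concrete, the following is a minimal illustrative sketch (not taken from any of the surveyed models): terms become nodes, and an edge is weighted by how often two terms appear within a fixed sliding window. The function name, window size, and adjacency-dict representation are our own assumptions for illustration.

```python
# Illustrative sketch of sliding-window term co-occurrence graph
# construction; names and representation are assumptions, not a
# reproduction of any specific model from the literature.
from collections import defaultdict

def cooccurrence_graph(tokens, window=3):
    """Return an undirected weighted graph as {term: {neighbor: count}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for i in range(len(tokens)):
        # Pair token i with every token inside its window to the right.
        for j in range(i + 1, min(i + window, len(tokens))):
            a, b = tokens[i], tokens[j]
            if a == b:
                continue  # skip self-loops
            graph[a][b] += 1
            graph[b][a] += 1
    return {term: dict(nbrs) for term, nbrs in graph.items()}

tokens = "graph models preserve global information in text".split()
g = cooccurrence_graph(tokens, window=3)
print(g["graph"])  # → {'models': 1, 'preserve': 1}
```

Note that such a graph encodes only adjacency statistics; syntactic relations or co-reference links, the factors the models above leave aside, would require edges derived from a parser or a co-reference resolver rather than from window counts.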
All of this indicates that the area of graph-based text representation requires further study and in-depth exploration.