Comparing contextual embeddings for semantic textual similarity in Portuguese

Andrade Junior, J. E., Cardoso-Silva, J. & Bezerra, L. C. (2021). Comparing contextual embeddings for semantic textual similarity in Portuguese. In Britto, A. & Valdivia Delgado, K. (Eds.), Intelligent Systems - 10th Brazilian Conference, BRACIS 2021, Proceedings, Part 2 (pp. 389-404). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-91699-2_27
Semantic textual similarity (STS) measures how semantically similar two sentences are. In the context of the Portuguese language, the STS literature is still incipient but includes important initiatives such as the ASSIN and ASSIN 2 shared tasks. The state of the art for those datasets is a contextual embedding produced by a BERT model pre-trained and fine-tuned for Portuguese. In this work, we investigate the application of Sentence-BERT (SBERT) contextual embeddings to these datasets. Compared to BERT, SBERT is a more computationally efficient approach, enabling its application to scalable unsupervised learning problems. Given the absence of SBERT models pre-trained in Portuguese and the computational cost of such training, we adopt multilingual models and also fine-tune them for Portuguese. Results showed that SBERT embeddings were competitive, especially after fine-tuning, numerically surpassing the results of BERT on ASSIN 2 and the results observed during the shared tasks for all datasets considered.
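The efficiency gain the abstract refers to comes from SBERT's bi-encoder design: each sentence is encoded once into a fixed-size vector, and similarity between any pair is then a cheap cosine computation, instead of running a full BERT cross-encoder pass per pair. A minimal sketch of this scoring step (the model name and the toy vectors below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# With the sentence-transformers library, the scoring would look like this
# (multilingual model choice is an assumption for illustration):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
#   emb = model.encode(["O gato dorme no sofá.",
#                       "Um gato está dormindo no sofá."])
#   score = cosine_similarity(emb[0], emb[1])

# Toy vectors standing in for two sentence embeddings:
a = np.array([0.2, 0.9, 0.1])
b = np.array([0.25, 0.85, 0.15])
print(round(cosine_similarity(a, b), 3))
```

Because each of n sentences is embedded only once, scoring all pairs costs n encoder passes plus O(n^2) vector comparisons, versus O(n^2) full transformer passes for a cross-encoder.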

Accepted Version
