Generative Pre-trained Transformer (GPT) Models for Irony Detection and Classification


Aytekin M., Erdem O. A.

4th International Informatics and Software Engineering Conference, Ankara, Türkiye, 21 - 22 December 2023, vol.1, no.22, pp.1-8

  • Publication Type: Conference Paper / Full Text Conference Paper
  • Volume: 1
  • DOI: 10.1109/iisec59749.2023.10391005
  • City: Ankara
  • Country: Türkiye
  • Pages: pp.1-8
  • Gazi University Affiliated: Yes

Abstract

The tasks of identifying and classifying ironic texts remain an ongoing challenge in NLP, necessitating continued exploration of improved solutions. This study assesses the effectiveness of Generative Pre-trained Transformer (GPT) models, which have emerged in recent years, on irony detection and classification in English texts through zero-shot and few-shot learning. Additionally, we compare GPT text embedding models with GloVe, an established text embedding model, using various machine learning and deep learning approaches. We employed the SemEval-2018 Task 3 dataset, curated as part of the Semantic Evaluation 2018 workshop. The best result in binary classification (irony detection) is an F1 score of 68.9%, attained by the text-davinci-003 model through few-shot learning with access to forty-two training samples. In multiclass classification (irony classification), the text-embedding-ada-002 embedding model combined with the Gaussian Naive Bayes algorithm attained the best result, with an F1 score of 48.5%. The best results obtained in this study are comparable to those reported in previous work.
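
As context for the few-shot setting described in the abstract, the following is a minimal sketch, not the paper's actual code or prompt template, of how few-shot irony detection with a GPT completion model might look. It assumes the openai Python client (>=1.0); the prompt wording and the example tweets are hypothetical placeholders, and text-davinci-003 has since been retired by OpenAI, so a currently available model would need to be substituted to run it today.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder labeled examples, not the actual forty-two SemEval-2018
# Task 3 training samples used in the study.
few_shot_examples = [
    ("What a great Monday morning, the train is cancelled again.", "ironic"),
    ("The museum opens at 9 am on weekdays.", "not ironic"),
]

def build_prompt(tweet: str) -> str:
    """Assemble a few-shot prompt: labeled examples followed by the query."""
    lines = ["Decide whether each tweet is ironic or not ironic.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Tweet: {text}\nLabel: {label}\n")
    lines.append(f"Tweet: {tweet}\nLabel:")
    return "\n".join(lines)

response = client.completions.create(
    model="text-davinci-003",  # model used in the study (now retired)
    prompt=build_prompt("Love waiting two hours for a five-minute appointment."),
    max_tokens=3,
    temperature=0,             # deterministic output for classification
)
print(response.choices[0].text.strip())
```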
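
Similarly, a minimal sketch of the embedding-plus-classifier pipeline the abstract reports for multiclass irony classification: text-embedding-ada-002 vectors fed to Gaussian Naive Bayes. It assumes scikit-learn, and the tweets and labels below are placeholders standing in for the SemEval-2018 Task 3 data.

```python
import numpy as np
from openai import OpenAI
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Return text-embedding-ada-002 vectors for a batch of texts."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in resp.data])

# Placeholder data; substitute the SemEval-2018 Task 3 tweets and their
# irony labels (four classes in the multiclass setting).
texts = [
    "so fun to be stuck in traffic for three hours",
    "the library closes at 8 pm today",
    "wow, another meeting that could have been an email",
    "it is raining in Ankara this afternoon",
]
labels = [1, 0, 1, 0]

X = embed(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels
)

clf = GaussianNB().fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```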