4th International Informatics and Software Engineering Conference - IEEE 2023, Ankara, Türkiye, 21-22 December 2023, vol. 3, no. 22, pp. 1-8 (Full Text Paper)
Identifying and classifying ironic texts remains an ongoing challenge in natural language processing (NLP), and improved solutions continue to be sought. This study assesses the effectiveness of Generative Pre-trained Transformer (GPT) models, which have emerged in recent years, on irony detection and irony classification in English texts using zero-shot learning and few-shot learning. Additionally, we compare GPT text embedding models with GloVe, an established text embedding model, using various machine learning and deep learning approaches. The study employs the SemEval-2018 Task 3 dataset, curated as part of the Semantic Evaluation 2018 workshop. The best result in binary classification (irony detection) is an F1 score of 68.9%, attained by the text-davinci-003 model through few-shot learning with forty-two training samples.
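The prompt-based setup can be sketched roughly as follows. This is a minimal illustration assuming the legacy openai-python (pre-1.0) Completions API; the prompt wording and the labelled examples are hypothetical placeholders rather than the exact configuration used in the study, and the zero-shot variant simply omits the labelled examples.

# Minimal sketch (not the study's exact setup): few-shot irony detection with the
# legacy openai-python (<1.0) Completions API. Prompt wording and the two
# labelled examples below are hypothetical placeholders.
import openai

few_shot_examples = [
    ("Great, another Monday. Just what I needed.", "ironic"),    # hypothetical example
    ("The concert starts at 8 pm tonight.", "not ironic"),       # hypothetical example
]

def build_prompt(text):
    # Task instruction, a handful of labelled tweets, then the tweet to classify.
    lines = ["Decide whether each tweet is ironic or not ironic.", ""]
    for tweet, label in few_shot_examples:
        lines.append(f"Tweet: {tweet}\nLabel: {label}\n")
    lines.append(f"Tweet: {text}\nLabel:")
    return "\n".join(lines)

response = openai.Completion.create(
    model="text-davinci-003",   # completion model reported in the study
    prompt=build_prompt("Love being stuck in traffic for two hours."),
    max_tokens=3,
    temperature=0,              # deterministic, label-only output
)
print(response["choices"][0]["text"].strip())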
For multiclass classification (irony classification), the text-embedding-ada-002 text embedding model combined with the Gaussian Naive Bayes algorithm achieved the best result, with an F1 score of 48.5%. The best results of this study are comparable to those reported in previous work.
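As a rough illustration of the embedding-based pipeline, the sketch below pairs text-embedding-ada-002 vectors with scikit-learn's Gaussian Naive Bayes classifier. The placeholder texts, labels, and preprocessing are assumptions rather than the study's actual data handling; GloVe vectors could be substituted for the embedding step when reproducing the comparison.

# Minimal sketch (assumed setup, not the paper's exact pipeline): GPT embeddings
# fed to Gaussian Naive Bayes for irony classification, using the legacy
# openai-python (<1.0) Embedding API and tiny placeholder data.
import numpy as np
import openai
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

def embed(texts):
    # One 1536-dimensional ada-002 vector per input text.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

# Placeholder tweets and labels standing in for the SemEval-2018 Task 3 splits.
train_texts = ["Oh great, it is raining again.", "The meeting starts at noon."]
y_train = [1, 0]
test_texts = ["What a lovely two-hour delay."]
y_test = [1]

clf = GaussianNB()
clf.fit(embed(train_texts), y_train)
pred = clf.predict(embed(test_texts))
print("macro F1:", f1_score(y_test, pred, average="macro"))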