Fine-Tuning Llama3-8B with LoRA for Emotion Text Classification


Atika Nishat
Areej Mustafa

Abstract

Emotion classification has become a critical component of Natural Language Processing (NLP), allowing machines to recognize human emotions and respond accordingly. In recent years, large language models such as Llama3-8B have demonstrated impressive performance across a wide range of NLP tasks. However, their substantial size and computational requirements make fine-tuning them challenging. In this paper, we explore fine-tuning Llama3-8B, a state-of-the-art language model, with Low-Rank Adaptation (LoRA), a technique that reduces the computational cost of fine-tuning by introducing trainable low-rank updates to the model's weight matrices. Specifically, we apply this approach to emotion text classification, where the task is to assign a given piece of text to one of several predefined emotion categories (e.g., joy, sadness, anger). We demonstrate that LoRA enables effective emotion classification without retraining the entire model, making fine-tuning more efficient while maintaining high accuracy. Through extensive experimentation, we show that LoRA-based fine-tuning achieves performance comparable to traditional full-model fine-tuning with significantly reduced computational overhead.
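As a rough illustration of the setup the abstract describes: LoRA freezes each pretrained weight matrix W and learns an additive update ΔW = BA, where B ∈ R^{d×r} and A ∈ R^{r×k} with rank r ≪ min(d, k), so only the small factors B and A are trained. The sketch below shows how such a configuration is commonly expressed with the Hugging Face peft library; the checkpoint id, rank, scaling, dropout, target modules, and six-way label set are illustrative assumptions, not values reported in the paper.

```python
# Illustrative sketch only: all hyperparameters below are assumptions,
# not the configuration reported in the paper.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Load Llama3-8B with a sequence-classification head
# (six emotion labels assumed, e.g. joy, sadness, anger, fear, love, surprise).
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # assumed checkpoint id
    num_labels=6,
)

# LoRA: freeze the base weights and train low-rank factors B and A
# injected into selected attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,                                 # rank of the update BA (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    lora_dropout=0.05,                    # dropout on the LoRA path (assumed)
    target_modules=["q_proj", "v_proj"],  # assumed injection points
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA factors are trainable
```

Training then proceeds with a standard classification objective; because the roughly 8B base-model weights stay frozen, only a small fraction of the parameters receive gradient updates, which is the source of the reduced computational overhead the abstract reports.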


How to Cite
Atika Nishat, & Areej Mustafa. (2024). Fine-Tuning Llama3-8B with LoRA for Emotion Text Classification. Pioneer Research Journal of Computing Science, 1(1), 93–99. Retrieved from http://prjcs.com/index.php/prjcs/article/view/21