Fine-tuning BERT-based NLP Models for Sentiment Analysis of Korean Reviews: Optimizing the sequence length
Sunga Hwang, Seyeon Park, Beakcheol Jang, Journal of Internet Computing and Services, Vol. 25, No. 4, pp. 47-56, Aug. 2024
DOI: 10.7472/jksii.2024.25.4.47
Keywords: BERT, hyperparameter fine-tuning, input sequence length, topic modeling, sentiment analysis, Korean review analysis
Cite this article
[APA Style]
Hwang, S., Park, S., & Jang, B. (2024). Fine-tuning BERT-based NLP Models for Sentiment Analysis of Korean Reviews: Optimizing the sequence length. Journal of Internet Computing and Services, 25(4), 47-56. DOI: 10.7472/jksii.2024.25.4.47.
[IEEE Style]
S. Hwang, S. Park, B. Jang, "Fine-tuning BERT-based NLP Models for Sentiment Analysis of Korean Reviews: Optimizing the sequence length," Journal of Internet Computing and Services, vol. 25, no. 4, pp. 47-56, 2024. DOI: 10.7472/jksii.2024.25.4.47.
[ACM Style]
Sunga Hwang, Seyeon Park, and Beakcheol Jang. 2024. Fine-tuning BERT-based NLP Models for Sentiment Analysis of Korean Reviews: Optimizing the sequence length. Journal of Internet Computing and Services, 25, 4, (2024), 47-56. DOI: 10.7472/jksii.2024.25.4.47.