nicolay-r
posted an update Jun 1
The most recent LLaMA-3-70B Instruct shows outstanding zero-shot performance in Targeted Sentiment Analysis (TSA) πŸ”₯πŸš€ In particular, we experiment with sentence-level analysis on sentences fetched from Wiki articles that make up the RuSentNE-2023 dataset.

The key takeaways from LLaMA-3-70B's performance on the original (πŸ‡·πŸ‡Ί) texts and on their English translations are as follows:
1. Outperforms ChatGPT-4 and all its predecessors on non-English texts (πŸ‡·πŸ‡Ί)
2. Surpasses ChatGPT-3.5 and performs nearly as well as ChatGPT-4 on English texts πŸ₯³
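The zero-shot setup boils down to asking the instruct model for the sentiment toward a named entity and mapping its free-form reply onto the three RuSentNE-2023 classes. Here is a minimal sketch of that idea; the prompt wording, function names, and fallback rule are my own illustrative assumptions, not the benchmark's exact code:

```python
# Hedged sketch of zero-shot targeted sentiment analysis (TSA):
# build a prompt for an instruct model, then parse its free-form reply
# into one of the three classes. Prompt text and fallback are assumptions.

def build_tsa_prompt(sentence: str, target: str) -> str:
    """Compose a zero-shot question about sentiment toward `target`."""
    return (
        f'What is the sentiment expressed toward the entity "{target}" '
        "in the following sentence? "
        "Answer with one word: positive, negative, or neutral.\n"
        f"Sentence: {sentence}"
    )

def parse_label(reply: str) -> str:
    """Map a free-form model reply onto the three TSA classes."""
    reply = reply.lower()
    for label in ("positive", "negative", "neutral"):
        if label in reply:
            return label
    return "neutral"  # fall back when the reply is off-format
```

The prompt string would be sent to Meta-Llama-3-70B-Instruct (e.g. via a chat-completion call), and `parse_label` applied to the generated text; a keyword scan like this tolerates verbose replies such as "The sentiment is negative."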

Benchmark: https://github.com/nicolay-r/RuSentNE-LLM-Benchmark
Model: meta-llama/Meta-Llama-3-70B-Instruct
Dataset: https://github.com/dialogue-evaluation/RuSentNE-evaluation
Related paper: Large Language Models in Targeted Sentiment Analysis (2404.12342)
Collection: https://huggingface.co./collections/nicolay-r/sentiment-analysis-665ba391e0eba729021ea101