A Comparative Study of Deep Learning Architectures for Aspect-Level Sentiment Analysis on Multivariate Feature Data
By: Nikhat Fatma Mumtaz Husain Shaikh
Pages: 28 - 35
Abstract
Aspect-level sentiment analysis (ALSA) is a challenging problem in natural language processing that involves accurate identification of sentiment toward a particular aspect in text. This paper offers an extensive comparison of three deep learning architectures: recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and transformer models, on multiclass ALSA with multivariate feature-enhanced data. We use a uniform preprocessing pipeline with sequence padding, aspect-position encoding, and feature selection (TFIDF, POS tags, dependency relations) prior to providing data to every model. All of the architectures use embedding layers and attention mechanisms, using the same training protocols (Adam optimizer, 5 epochs, batch size 64) for comparative assessment. Transformer uses a distilled BERT architecture with aspectspecific attention heads. Experimental evidence on benchmark datasets reveals that the Transformer model performs better in accuracy than LSTM and RNN, taking advantage of its self-attention. LSTMs are more efficient than transformers but have competitive accuracy in most aspects. RNNs provide the quickest inference faster than LSTM but have trouble with longrange dependencies on sophisticated sentences. Statistical tests verify these performance gaps to be significant. The feature ablation studies show multivariate features provide accuracy gains on all models, with the greatest benefit to Transformers coming from syntactic patterns. This gives us actionable advice: Transformers are best suited to accuracy-critical tasks, LSTMs provide equitable performance, and RNNs are still an option for low-latency systems. The study also shows that augmenting traditional features with deep learning frameworks provides consistent improvements over the ”pure” end-to-end methods. Sentiments are shown on a continuous five-point scale, and a perception score is derived for each review. 
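Two of the preprocessing steps named in the abstract, sequence padding and aspect-position encoding, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual pipeline; the helper names and the padding/encoding conventions (right-padding, signed token distance from the aspect term) are assumptions.

```python
# Hypothetical sketch of two ALSA preprocessing steps: sequence padding
# and aspect-position encoding (signed distance from the aspect term).

def pad_sequence(tokens, max_len, pad_token="<pad>"):
    """Truncate or right-pad a token list to a fixed length."""
    return (tokens + [pad_token] * max_len)[:max_len]

def aspect_position_encoding(tokens, aspect):
    """Relative position of each token with respect to the aspect term;
    0 marks the aspect itself. Falls back to all zeros if absent."""
    try:
        idx = tokens.index(aspect)
    except ValueError:
        return [0] * len(tokens)
    return [i - idx for i in range(len(tokens))]

tokens = "the battery life is great but the screen flickers".split()
padded = pad_sequence(tokens, 12)          # 9 tokens -> padded to 12
positions = aspect_position_encoding(tokens, "battery")
```

In practice these per-token position values would be fed to the models as an extra input channel alongside the word embeddings, which is one common way aspect location is made available to attention layers.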
DOI URL: https://doi.org/10.64820/AEPJCSER.22.28.35.122025





