How to Train a Neural Network for Sentiment Analysis

Sentiment analysis is becoming a cornerstone of modern applications, from e-commerce review systems to social media monitoring and customer service automation. Training your own neural network for sentiment analysis gives you control over the model’s behavior, allows customization for domain-specific language, and eliminates dependency on third-party APIs. In this post, we’ll walk through building a neural network from scratch using Python and TensorFlow, cover deployment strategies on server infrastructure, and tackle the common pitfalls that trip up most developers during implementation.

How Neural Networks Process Sentiment

Neural networks approach sentiment analysis by learning patterns in text data through multiple layers of interconnected nodes. The process starts with converting text into numerical representations (embeddings), then feeding these through hidden layers that identify increasingly complex patterns, finally outputting a probability score for sentiment classes.
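
Conceptually, the forward pass looks like this; a minimal sketch with toy shapes, not the full model we build later:

import tensorflow as tf

# "great movie" -> token ids -> embedding vectors -> hidden state -> probability
token_ids = tf.constant([[12, 74, 0, 0]])                      # padded sequence of word indices
embeddings = tf.keras.layers.Embedding(10000, 8)(token_ids)    # (1, 4, 8) dense word vectors
hidden = tf.keras.layers.LSTM(16)(embeddings)                  # (1, 16) context summary
prob = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)  # (1, 1) positive-class probability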

The most effective architectures for sentiment analysis include:

  • Recurrent Neural Networks (RNNs) – Process text sequentially, maintaining context from previous words
  • Long Short-Term Memory (LSTM) – Handle long-range dependencies better than standard RNNs
  • Convolutional Neural Networks (CNNs) – Identify local patterns and n-gram features in text
  • Transformer-based models – Use attention mechanisms to focus on relevant parts of the input

For most server deployments, LSTM networks provide the best balance of accuracy and computational efficiency. They’re particularly well-suited for applications running on VPS environments where memory and CPU resources need careful management.

Step-by-Step Implementation Guide

Let’s build a complete sentiment analysis system that you can deploy on your infrastructure. We’ll use the IMDB movie reviews dataset for training, which contains 50,000 labeled reviews.

Environment Setup

First, install the required dependencies on your server:

pip install tensorflow==2.13.0 pandas numpy scikit-learn matplotlib seaborn nltk flask gunicorn requests
python -c "import nltk; nltk.download('stopwords'); nltk.download('punkt'); nltk.download('wordnet')"

Data Preprocessing Pipeline

Create a robust preprocessing pipeline that handles real-world text variations:

import pandas as pd
import numpy as np
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

class TextPreprocessor:
    def __init__(self, max_features=10000, max_length=100):
        self.max_features = max_features
        self.max_length = max_length
        self.tokenizer = Tokenizer(num_words=max_features, oov_token="<OOV>")
        self.stop_words = set(stopwords.words('english'))
    
    def clean_text(self, text):
        # Remove HTML tags
        text = re.sub(r'<.*?>', '', text)
        # Remove special characters and digits
        text = re.sub(r'[^a-zA-Z\s]', '', text)
        # Convert to lowercase
        text = text.lower()
        # Remove extra whitespace
        text = re.sub(r'\s+', ' ', text).strip()
        return text
    
    def remove_stopwords(self, text):
        tokens = word_tokenize(text)
        filtered_tokens = [word for word in tokens if word not in self.stop_words]
        return ' '.join(filtered_tokens)
    
    def fit_transform(self, texts, labels):
        # Clean texts
        cleaned_texts = [self.remove_stopwords(self.clean_text(text)) for text in texts]
        
        # Fit tokenizer and convert to sequences
        self.tokenizer.fit_on_texts(cleaned_texts)
        sequences = self.tokenizer.texts_to_sequences(cleaned_texts)
        
        # Pad sequences
        padded_sequences = pad_sequences(sequences, maxlen=self.max_length)
        
        return padded_sequences, np.array(labels)
    
    def transform(self, texts):
        cleaned_texts = [self.remove_stopwords(self.clean_text(text)) for text in texts]
        sequences = self.tokenizer.texts_to_sequences(cleaned_texts)
        return pad_sequences(sequences, maxlen=self.max_length)
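
A quick smoke test of the pipeline on a toy pair of reviews (the shapes are what the model below expects):

preprocessor_demo = TextPreprocessor(max_features=10000, max_length=100)
X_demo, y_demo = preprocessor_demo.fit_transform(
    ["This movie was absolutely great!", "Terrible plot and worse acting."], [1, 0])
print(X_demo.shape)  # (2, 100) padded integer sequences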

Building the Neural Network Architecture

Here’s a production-ready LSTM model with dropout regularization and proper layer configuration:

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint

def create_sentiment_model(vocab_size, embedding_dim=128, lstm_units=64, max_length=100):
    model = Sequential([
        Embedding(vocab_size, embedding_dim, input_length=max_length),
        # Note: recurrent_dropout disables the fast cuDNN LSTM kernel on GPU;
        # drop it if training throughput matters more than the regularization
        Bidirectional(LSTM(lstm_units, dropout=0.3, recurrent_dropout=0.3, return_sequences=True)),
        Bidirectional(LSTM(lstm_units//2, dropout=0.3, recurrent_dropout=0.3)),
        Dense(32, activation='relu'),
        Dropout(0.5),
        Dense(16, activation='relu'),
        Dropout(0.3),
        Dense(1, activation='sigmoid')
    ])
    
    # Compile with appropriate optimizer and metrics; explicit metric objects
    # guarantee the history keys ('precision', 'recall') used in the plots below
    model.compile(
        optimizer=Adam(learning_rate=0.001),
        loss='binary_crossentropy',
        metrics=['accuracy',
                 tf.keras.metrics.Precision(name='precision'),
                 tf.keras.metrics.Recall(name='recall')]
    )
    
    return model

# Model configuration
VOCAB_SIZE = 10000
EMBEDDING_DIM = 128
LSTM_UNITS = 64
MAX_LENGTH = 100

model = create_sentiment_model(VOCAB_SIZE, EMBEDDING_DIM, LSTM_UNITS, MAX_LENGTH)
model.summary()

Training Configuration and Callbacks

Configure training with callbacks that prevent overfitting and save the best model:

# Training callbacks
callbacks = [
    EarlyStopping(
        monitor='val_loss',
        patience=5,
        restore_best_weights=True,
        verbose=1
    ),
    ReduceLROnPlateau(
        monitor='val_loss',
        factor=0.5,
        patience=3,
        min_lr=1e-7,
        verbose=1
    ),
    ModelCheckpoint(
        'best_sentiment_model.h5',
        monitor='val_accuracy',
        save_best_only=True,
        verbose=1
    )
]

# Training configuration
BATCH_SIZE = 32
EPOCHS = 50
VALIDATION_SPLIT = 0.2

# Load and preprocess your data
# Assuming you have texts and labels loaded
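# For illustration, assume a local CSV with "review" and "sentiment" columns
# (values "positive"/"negative"); adapt the loading to wherever your data lives
df = pd.read_csv('imdb_reviews.csv')
texts = df['review'].tolist()
labels = (df['sentiment'] == 'positive').astype(int).tolist()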
preprocessor = TextPreprocessor(max_features=VOCAB_SIZE, max_length=MAX_LENGTH)
X, y = preprocessor.fit_transform(texts, labels)

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model
history = model.fit(
    X_train, y_train,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_split=VALIDATION_SPLIT,
    callbacks=callbacks,
    verbose=1
)
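
After training completes, evaluate on the held-out test split and pickle the fitted preprocessor so the serving layer can reuse it (the filename matches the Flask example below):

import pickle

# Evaluate on the held-out test split
loss, accuracy, precision, recall = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {accuracy:.3f}, precision: {precision:.3f}, recall: {recall:.3f}")

# Save the fitted preprocessor for serving
with open('preprocessor.pkl', 'wb') as f:
    pickle.dump(preprocessor, f)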

Real-World Examples and Use Cases

Here are practical implementations I’ve seen work well in production environments:

E-commerce Review Analysis

For analyzing product reviews, you’ll want to adapt the model for multi-aspect sentiment analysis:

class MultiAspectSentimentAnalyzer:
    def __init__(self, model_path, preprocessor):
        self.model = tf.keras.models.load_model(model_path)
        self.preprocessor = preprocessor
        self.aspects = ['quality', 'price', 'shipping', 'service']
    
    def analyze_review(self, review_text):
        # Extract aspect-specific sentences
        sentences = self.extract_aspect_sentences(review_text)
        results = {}
        
        for aspect, aspect_sentences in sentences.items():
            if aspect_sentences:
                processed_text = self.preprocessor.transform(aspect_sentences)
                predictions = self.model.predict(processed_text)
                avg_sentiment = np.mean(predictions)
                results[aspect] = {
                    'sentiment_score': float(avg_sentiment),
                    # std across sentence scores: lower spread = more consistent signal
                    'score_spread': float(np.std(predictions))
                }
        
        return results
    
    def extract_aspect_sentences(self, text):
        # Simple keyword matching per sentence; swap in a proper aspect
        # extraction model for production use
        aspect_keywords = {
            'quality': ['quality', 'build', 'material', 'durability'],
            'price': ['price', 'cost', 'expensive', 'cheap', 'value'],
            'shipping': ['delivery', 'shipping', 'arrived', 'fast'],
            'service': ['service', 'support', 'help', 'staff']
        }
        sentences = re.split(r'(?<=[.!?])\s+', text)
        results = {aspect: [] for aspect in self.aspects}
        for sentence in sentences:
            lowered = sentence.lower()
            for aspect, keywords in aspect_keywords.items():
                if any(keyword in lowered for keyword in keywords):
                    results[aspect].append(sentence)
        return results
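
A quick usage sketch, reusing the fitted preprocessor from the training step (the review text is made up):

analyzer = MultiAspectSentimentAnalyzer('best_sentiment_model.h5', preprocessor)
print(analyzer.analyze_review(
    "Great build quality, but shipping took forever and the price is steep."))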

Social Media Monitoring System

For real-time sentiment monitoring, implement a streaming pipeline that processes tweets or posts as they arrive:

import asyncio
from datetime import datetime

class RealTimeSentimentMonitor:
    def __init__(self, model, preprocessor, threshold=0.7):
        self.model = model
        self.preprocessor = preprocessor
        self.threshold = threshold
        self.alert_callbacks = []
    
    async def process_stream(self, text_stream):
        results = []
        batch = []

        async for text in text_stream:
            batch.append(text)

            # Process in batches for efficiency
            if len(batch) >= 10:
                results.extend(await self._score_batch(batch))
                batch = []

        # Score any final partial batch so the tail of the stream isn't dropped
        if batch:
            results.extend(await self._score_batch(batch))

        return results

    async def _score_batch(self, batch):
        processed_batch = self.preprocessor.transform(batch)
        predictions = self.model.predict(processed_batch, verbose=0)

        batch_results = []
        for text, sentiment in zip(batch, predictions):
            result = {
                'text': text,
                'sentiment': float(sentiment[0]),
                'timestamp': datetime.utcnow().isoformat(),
                'classification': 'positive' if sentiment[0] > 0.5 else 'negative'
            }

            # Trigger alerts for extreme sentiments
            if sentiment[0] > self.threshold or sentiment[0] < (1 - self.threshold):
                await self.trigger_alert(result)

            batch_results.append(result)

        return batch_results
    
    async def trigger_alert(self, result):
        for callback in self.alert_callbacks:
            try:
                await callback(result)
            except Exception as e:
                print(f"Alert callback failed: {e}")

Architecture Comparisons

Different neural network architectures perform differently based on your specific requirements:

| Architecture | Accuracy | Training Time | Inference Speed | Memory Usage | Best Use Case |
|---|---|---|---|---|---|
| Simple RNN | 78-82% | Fast | Very Fast | Low | Resource-constrained environments |
| LSTM | 83-87% | Medium | Fast | Medium | General-purpose sentiment analysis |
| Bidirectional LSTM | 85-89% | Medium | Medium | Medium-High | High-accuracy applications |
| CNN + LSTM | 86-90% | Medium-Slow | Medium | High | Complex text patterns |
| Transformer (BERT-base) | 90-94% | Very Slow | Slow | Very High | Maximum accuracy requirements |

Deployment and Production Considerations

When deploying your sentiment analysis model on server infrastructure, several factors affect performance and reliability:

Server Resource Requirements

Based on testing across different server configurations, here are the recommended specifications:

| Request Volume | Model Type | RAM Required | CPU Cores | Storage | Recommended Server |
|---|---|---|---|---|---|
| < 100 req/min | LSTM | 2GB | 2 cores | 10GB | Basic VPS |
| 100-1000 req/min | Bidirectional LSTM | 4GB | 4 cores | 20GB | Performance VPS |
| 1000+ req/min | Multiple models | 8GB+ | 8+ cores | 50GB+ | Dedicated server |

Flask API Deployment

Create a production-ready API for your sentiment analysis model:

from flask import Flask, request, jsonify
import tensorflow as tf
import pickle
import logging
from functools import lru_cache
import time

app = Flask(__name__)

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class SentimentAPI:
    def __init__(self, model_path, preprocessor_path):
        self.model = tf.keras.models.load_model(model_path)
        with open(preprocessor_path, 'rb') as f:
            self.preprocessor = pickle.load(f)
        logger.info("Model and preprocessor loaded successfully")
    
    @lru_cache(maxsize=1000)
    def predict_sentiment(self, text):
        """Cached prediction to improve performance for repeated requests"""
        try:
            processed_text = self.preprocessor.transform([text])
            prediction = self.model.predict(processed_text, verbose=0)[0][0]
            
            return {
                'sentiment_score': float(prediction),
                'classification': 'positive' if prediction > 0.5 else 'negative',
                'confidence': abs(prediction - 0.5) * 2
            }
        except Exception as e:
            logger.error(f"Prediction error: {str(e)}")
            raise

# Initialize the sentiment analyzer
sentiment_analyzer = SentimentAPI('best_sentiment_model.h5', 'preprocessor.pkl')

@app.route('/analyze', methods=['POST'])
def analyze_sentiment():
    start_time = time.time()
    
    try:
        data = request.get_json()
        if not data or 'text' not in data:
            return jsonify({'error': 'Missing text field'}), 400
        
        text = data['text']
        if len(text.strip()) == 0:
            return jsonify({'error': 'Empty text provided'}), 400
        
        if len(text) > 10000:  # Limit text length
            return jsonify({'error': 'Text too long (max 10000 characters)'}), 400
        
        # Copy the cached result so per-request fields don't pollute the cache
        result = dict(sentiment_analyzer.predict_sentiment(text))
        result['processing_time'] = time.time() - start_time
        
        logger.info(f"Processed request in {result['processing_time']:.3f}s")
        return jsonify(result)
    
    except Exception as e:
        logger.error(f"API error: {str(e)}")
        return jsonify({'error': 'Internal server error'}), 500

@app.route('/health', methods=['GET'])
def health_check():
    return jsonify({'status': 'healthy', 'timestamp': time.time()})

@app.route('/batch', methods=['POST'])
def batch_analyze():
    try:
        data = request.get_json()
        if not data or 'texts' not in data:
            return jsonify({'error': 'Missing texts field'}), 400
        
        texts = data['texts']
        if not isinstance(texts, list):
            return jsonify({'error': 'texts must be a list'}), 400
        if len(texts) > 100:  # Limit batch size
            return jsonify({'error': 'Batch size too large (max 100)'}), 400
        
        results = []
        for text in texts:
            if isinstance(text, str) and len(text.strip()) > 0:
                result = sentiment_analyzer.predict_sentiment(text)
                results.append(result)
            else:
                results.append({'error': 'Invalid text'})
        
        return jsonify({'results': results})
    
    except Exception as e:
        logger.error(f"Batch API error: {str(e)}")
        return jsonify({'error': 'Internal server error'}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=False)
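
To sanity-check the endpoint, here's a minimal client call with the requests library (assuming the API is running locally on port 5000):

import requests

resp = requests.post('http://localhost:5000/analyze',
                     json={'text': 'The movie exceeded my expectations!'})
print(resp.json())  # e.g. {'sentiment_score': 0.91, 'classification': 'positive', ...}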

Docker Containerization

Package your application for consistent deployment across different environments:

# Dockerfile
FROM python:3.9-slim

WORKDIR /app

# Install system dependencies (curl is used by the compose healthcheck below)
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Download NLTK data
RUN python -c "import nltk; nltk.download('stopwords'); nltk.download('punkt')"

# Copy application code
COPY . .

# Create a non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 5000

# Use gunicorn for production
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "--timeout", "120", "app:app"]
# docker-compose.yml
version: '3.8'

services:
  sentiment-api:
    build: .
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=production
    volumes:
      - ./models:/app/models:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
  
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - sentiment-api
    restart: unless-stopped

Common Pitfalls and Troubleshooting

Here are the most frequent issues developers encounter and how to solve them:

Memory Issues During Training

If you're running out of memory during training, especially on VPS instances, implement these optimizations:

# Use data generators instead of loading everything into memory
from tensorflow.keras.utils import Sequence

class SentimentDataGenerator(Sequence):
    def __init__(self, texts, labels, batch_size, preprocessor, shuffle=True):
        self.texts = texts
        self.labels = labels
        self.batch_size = batch_size
        self.preprocessor = preprocessor
        self.shuffle = shuffle
        self.indices = np.arange(len(texts))
        if shuffle:
            np.random.shuffle(self.indices)
    
    def __len__(self):
        return len(self.texts) // self.batch_size
    
    def __getitem__(self, idx):
        batch_indices = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_texts = [self.texts[i] for i in batch_indices]
        batch_labels = [self.labels[i] for i in batch_indices]
        
        # Process batch
        X = self.preprocessor.transform(batch_texts)
        y = np.array(batch_labels)
        
        return X, y
    
    def on_epoch_end(self):
        if self.shuffle:
            np.random.shuffle(self.indices)

# Use mixed precision training to reduce memory usage (mainly helps on GPUs;
# if enabled, give the final Dense layer dtype='float32' for numeric stability)
from tensorflow.keras.mixed_precision import set_global_policy
set_global_policy('mixed_float16')

# Enable memory growth for GPU
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
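
With the generator in place, training streams batches on demand instead of holding one large array in memory (the train/validation text lists are assumed to come from your own data loading):

train_gen = SentimentDataGenerator(train_texts, train_labels, batch_size=32,
                                   preprocessor=preprocessor)
val_gen = SentimentDataGenerator(val_texts, val_labels, batch_size=32,
                                 preprocessor=preprocessor, shuffle=False)
history = model.fit(train_gen, validation_data=val_gen, epochs=EPOCHS, callbacks=callbacks)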

Model Overfitting

Overfitting is common with small datasets. Here's how to detect and prevent it:

import matplotlib.pyplot as plt

def plot_training_history(history):
    """Plot training metrics to identify overfitting"""
    fig, axes = plt.subplots(2, 2, figsize=(12, 8))
    
    # Accuracy
    axes[0, 0].plot(history.history['accuracy'], label='Training')
    axes[0, 0].plot(history.history['val_accuracy'], label='Validation')
    axes[0, 0].set_title('Model Accuracy')
    axes[0, 0].legend()
    
    # Loss
    axes[0, 1].plot(history.history['loss'], label='Training')
    axes[0, 1].plot(history.history['val_loss'], label='Validation')
    axes[0, 1].set_title('Model Loss')
    axes[0, 1].legend()
    
    # Precision
    axes[1, 0].plot(history.history['precision'], label='Training')
    axes[1, 0].plot(history.history['val_precision'], label='Validation')
    axes[1, 0].set_title('Model Precision')
    axes[1, 0].legend()
    
    # Recall
    axes[1, 1].plot(history.history['recall'], label='Training')
    axes[1, 1].plot(history.history['val_recall'], label='Validation')
    axes[1, 1].set_title('Model Recall')
    axes[1, 1].legend()
    
    plt.tight_layout()
    plt.savefig('training_history.png', dpi=300, bbox_inches='tight')
    plt.show()
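
# Visualize the run from the earlier model.fit() call to spot the point
# where validation loss starts diverging from training loss
plot_training_history(history)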

# Data augmentation techniques for text
def augment_text_data(texts, labels, augmentation_factor=2):
    """Simple text augmentation by synonym replacement (needs the NLTK wordnet corpus downloaded in the setup step)"""
    from nltk.corpus import wordnet
    import random
    
    augmented_texts = []
    augmented_labels = []
    
    for text, label in zip(texts, labels):
        augmented_texts.append(text)
        augmented_labels.append(label)
        
        # Create augmented versions
        for _ in range(augmentation_factor):
            words = text.split()
            new_words = []
            
            for word in words:
                # Randomly replace some words with synonyms
                if random.random() < 0.1:  # 10% chance
                    synonyms = []
                    for syn in wordnet.synsets(word):
                        for lemma in syn.lemmas():
                            synonyms.append(lemma.name().replace('_', ' '))
                    
                    if synonyms:
                        new_words.append(random.choice(synonyms))
                    else:
                        new_words.append(word)
                else:
                    new_words.append(word)
            
            augmented_text = ' '.join(new_words)
            augmented_texts.append(augmented_text)
            augmented_labels.append(label)
    
    return augmented_texts, augmented_labels

Slow Inference Performance

For production deployments, inference speed is critical. Here are optimization techniques:

# Model quantization for faster inference
def quantize_model(model_path, output_path):
    """Convert a Keras model to TensorFlow Lite for faster inference"""
    model = tf.keras.models.load_model(model_path)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    # If conversion fails on the recurrent layers, allow the TF-ops fallback:
    # converter.target_spec.supported_ops = [
    #     tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]

    # Optional: full integer quantization is faster still, but requires a
    # representative_dataset generator that yields typical model inputs
    # converter.representative_dataset = representative_dataset_gen
    # converter.inference_input_type = tf.int8
    # converter.inference_output_type = tf.int8

    tflite_model = converter.convert()

    with open(output_path, 'wb') as f:
        f.write(tflite_model)
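
# Running the converted model with the TFLite interpreter (a sketch; the
# .tflite path is whatever you passed as output_path above)
interpreter = tf.lite.Interpreter(model_path='sentiment_model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sequence = preprocessor.transform(["Fantastic film!"]).astype(input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], sequence)
interpreter.invoke()
score = float(interpreter.get_tensor(output_details[0]['index'])[0][0])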

# Batch processing for multiple requests
class BatchProcessor:
    def __init__(self, model, preprocessor, max_batch_size=32, max_wait_time=0.1):
        self.model = model
        self.preprocessor = preprocessor
        self.max_batch_size = max_batch_size
        self.max_wait_time = max_wait_time
        self.pending_requests = []
        self.processing = False
    
    async def process_request(self, text):
        """Add request to batch and wait for result"""
        future = asyncio.Future()
        self.pending_requests.append((text, future))
        
        if not self.processing:
            asyncio.create_task(self._process_batch())
        
        return await future
    
    async def _process_batch(self):
        """Process accumulated requests in batch"""
        self.processing = True
        await asyncio.sleep(self.max_wait_time)
        
        if not self.pending_requests:
            self.processing = False
            return
        
        # Extract texts and futures
        batch_texts = []
        futures = []
        
        for _ in range(min(len(self.pending_requests), self.max_batch_size)):
            text, future = self.pending_requests.pop(0)
            batch_texts.append(text)
            futures.append(future)
        
        try:
            # Process batch
            processed_texts = self.preprocessor.transform(batch_texts)
            predictions = self.model.predict(processed_texts, verbose=0)
            
            # Return results
            for future, prediction in zip(futures, predictions):
                result = {
                    'sentiment_score': float(prediction[0]),
                    'classification': 'positive' if prediction[0] > 0.5 else 'negative'
                }
                future.set_result(result)
        
        except Exception as e:
            # Handle errors
            for future in futures:
                future.set_exception(e)
        
        self.processing = False
        
        # Process remaining requests if any
        if self.pending_requests:
            asyncio.create_task(self._process_batch())
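
Each caller awaits its own future while the processor batches requests under the hood; a usage sketch, assuming the model and preprocessor from the training section:

async def main():
    processor = BatchProcessor(model, preprocessor)
    texts = ["Loved it", "Utterly disappointing", "It was fine"]
    results = await asyncio.gather(*(processor.process_request(t) for t in texts))
    print(results)

asyncio.run(main())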

Best Practices and Performance Optimization

Based on experience deploying sentiment analysis models in production, here are essential best practices:

Model Versioning and A/B Testing

class ModelManager:
    def __init__(self):
        self.models = {}
        self.current_model = None
        self.fallback_model = None
    
    def load_model(self, model_name, model_path, is_primary=False):
        """Load a model version"""
        try:
            model = tf.keras.models.load_model(model_path)
            self.models[model_name] = {
                'model': model,
                'loaded_at': datetime.utcnow(),
                'request_count': 0,
                'error_count': 0
            }
            
            if is_primary:
                self.current_model = model_name
            
            logger.info(f"Model {model_name} loaded successfully")
            return True
        except Exception as e:
            logger.error(f"Failed to load model {model_name}: {e}")
            return False
    
    def predict(self, model_input, model_name=None):
        """Run inference with the specified model (or the current primary).
        model_input must already be preprocessed into padded sequences."""
        target_model = model_name or self.current_model

        if target_model not in self.models:
            raise ValueError(f"Model {target_model} not found")

        model_info = self.models[target_model]
        try:
            prediction = model_info['model'].predict(model_input, verbose=0)
            model_info['request_count'] += 1
            return prediction
        except Exception as e:
            model_info['error_count'] += 1
            logger.error(f"Prediction failed for model {target_model}: {e}")

            # Fall back to the backup model
            if self.fallback_model and self.fallback_model != target_model:
                return self.predict(model_input, self.fallback_model)
            raise
    
    def get_model_stats(self):
        """Get performance statistics for all models"""
        stats = {}
        for name, info in self.models.items():
            stats[name] = {
                'request_count': info['request_count'],
                'error_count': info['error_count'],
                'error_rate': info['error_count'] / max(info['request_count'], 1),
                'loaded_at': info['loaded_at'].isoformat()
            }
        return stats
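
A simple A/B split on top of the manager; the model paths and the 10% traffic share are illustrative:

import random

manager = ModelManager()
manager.load_model('v1', 'models/sentiment_v1.h5', is_primary=True)
manager.load_model('v2', 'models/sentiment_v2.h5')
manager.fallback_model = 'v1'

# Route roughly 10% of traffic to the candidate model
chosen = 'v2' if random.random() < 0.1 else None
prediction = manager.predict(preprocessor.transform(["Solid product."]), model_name=chosen)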

Monitoring and Alerting

Implement comprehensive monitoring to track model performance in production:

import logging
from datetime import datetime, timedelta
import json

class SentimentMonitor:
    def __init__(self, alert_threshold=0.1):
        self.alert_threshold = alert_threshold
        self.metrics = {
            'total_requests': 0,
            'error_count': 0,
            'response_times': [],
            'sentiment_distributions': {'positive': 0, 'negative': 0},
            'hourly_stats': {}
        }
    
    def log_prediction(self, text, prediction, response_time, error=None):
        """Log prediction metrics"""
        current_hour = datetime.utcnow().strftime('%Y-%m-%d-%H')
        
        # Initialize hourly stats if needed
        if current_hour not in self.metrics['hourly_stats']:
            self.metrics['hourly_stats'][current_hour] = {
                'requests': 0,
                'errors': 0,
                'avg_response_time': 0,
                'sentiment_dist': {'positive': 0, 'negative': 0}
            }
        
        hour_stats = self.metrics['hourly_stats'][current_hour]
        
        # Update metrics
        self.metrics['total_requests'] += 1
        hour_stats['requests'] += 1
        
        if error:
            self.metrics['error_count'] += 1
            hour_stats['errors'] += 1
        else:
            # Track response time (bound the list so long-running processes don't leak)
            self.metrics['response_times'].append(response_time)
            if len(self.metrics['response_times']) > 10000:
                del self.metrics['response_times'][:-10000]
            hour_stats['avg_response_time'] = (
                (hour_stats['avg_response_time'] * (hour_stats['requests'] - 1) + response_time) 
                / hour_stats['requests']
            )
            
            # Track sentiment distribution
            sentiment = 'positive' if prediction > 0.5 else 'negative'
            self.metrics['sentiment_distributions'][sentiment] += 1
            hour_stats['sentiment_dist'][sentiment] += 1
        
        # Check for alerts
        self._check_alerts(hour_stats)
    
    def _check_alerts(self, hour_stats):
        """Check if alerts should be triggered"""
        if hour_stats['requests'] > 10:  # Only alert if we have enough data
            error_rate = hour_stats['errors'] / hour_stats['requests']
            
            if error_rate > self.alert_threshold:
                self._send_alert(f"High error rate: {error_rate:.2%}")
            
            if hour_stats['avg_response_time'] > 5.0:  # 5 second threshold
                self._send_alert(f"High response time: {hour_stats['avg_response_time']:.2f}s")
    
    def _send_alert(self, message):
        """Send alert (implement your notification system)"""
        logger.warning(f"ALERT: {message}")
        # Implement webhook, email, or Slack notification here
    
    def get_metrics_summary(self):
        """Get current metrics summary"""
        if not self.metrics['response_times']:
            avg_response_time = 0
        else:
            avg_response_time = sum(self.metrics['response_times']) / len(self.metrics['response_times'])
        
        error_rate = self.metrics['error_count'] / max(self.metrics['total_requests'], 1)
        
        return {
            'total_requests': self.metrics['total_requests'],
            'error_rate': error_rate,
            'avg_response_time': avg_response_time,
            'sentiment_distributions': self.metrics['sentiment_distributions'],
            'uptime_hours': len(self.metrics['hourly_stats'])
        }
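
Wiring the monitor into the request path is straightforward; a sketch logging one prediction, reusing the sentiment_analyzer from the API example:

monitor = SentimentMonitor(alert_threshold=0.1)

start = time.time()
result = sentiment_analyzer.predict_sentiment("Loved every minute of it!")
monitor.log_prediction("Loved every minute of it!", result['sentiment_score'],
                       time.time() - start)
print(monitor.get_metrics_summary())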

Successfully training and deploying neural networks for sentiment analysis requires careful attention to data preprocessing, model architecture, and production infrastructure. The key is starting with a solid foundation using proven architectures like LSTM, then iteratively optimizing based on your specific use case and performance requirements. With proper monitoring and gradual scaling, you can build robust sentiment analysis systems that handle real-world workloads effectively. Remember to benchmark your models against different server configurations to find the optimal balance between accuracy and computational cost for your deployment scenario.


