Explore sequential data processing with RNNs, LSTMs, and GRUs
Full RNN/LSTM/GRU training
Predict weather using RNN
How RNNs handle sequential data
Long Short-Term Memory cells
Gated Recurrent Units vs LSTM
Understanding the problem
Forecasting with RNNs
Train RNN, LSTM, or GRU networks with full control over the hyperparameters and watch sequence learning unfold in real time.
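A minimal sketch of what such a training loop can look like, here a vanilla numpy RNN learning next-step prediction on a toy sine "weather" signal. The hyperparameter names mirror the demo's controls, but the values, initialization scales, and gradient clipping are illustrative assumptions, not taken from the demo itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters mirroring the demo's controls (values assumed)
hidden_size = 16    # hidden state dimension
seq_len     = 20    # input sequence length
epochs      = 300   # training iterations
lr          = 0.05  # learning rate, controls convergence speed

# Toy "weather" signal: next-step prediction on a sine wave
t = np.linspace(0, 4 * np.pi, 200)
series = np.sin(t)

# Parameters of a single-layer vanilla RNN with scalar input and output
Wxh = rng.normal(0, 0.3, (hidden_size, 1))
Whh = rng.normal(0, 0.3, (hidden_size, hidden_size))
Why = rng.normal(0, 0.3, (1, hidden_size))
bh  = np.zeros((hidden_size, 1))
by  = np.zeros((1, 1))

def forward(xs, h):
    """Run the RNN over one window, returning all hidden states and outputs."""
    hs, ys = [h], []
    for x in xs:
        h = np.tanh(Wxh * x + Whh @ hs[-1] + bh)
        hs.append(h)
        ys.append((Why @ h + by).item())
    return hs, ys

losses = []
for epoch in range(epochs):
    # Sample a random training window and its next-step targets
    start = int(rng.integers(0, len(series) - seq_len - 1))
    xs = series[start:start + seq_len]
    targets = series[start + 1:start + seq_len + 1]
    hs, ys = forward(xs, np.zeros((hidden_size, 1)))
    losses.append(float(np.mean((np.array(ys) - targets) ** 2)))

    # Backpropagation through time for the MSE loss
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby = np.zeros_like(bh), np.zeros_like(by)
    dh_next = np.zeros((hidden_size, 1))
    for i in reversed(range(seq_len)):
        dy = 2 * (ys[i] - targets[i]) / seq_len
        dWhy += dy * hs[i + 1].T
        dby += dy
        dh = Why.T * dy + dh_next
        draw = (1 - hs[i + 1] ** 2) * dh   # tanh'(z) = 1 - tanh(z)^2
        dWxh += draw * xs[i]
        dWhh += draw @ hs[i].T
        dbh += draw
        dh_next = Whh.T @ draw

    # Clipped gradient descent step (clipping tames exploding gradients)
    for p, g in [(Wxh, dWxh), (Whh, dWhh), (Why, dWhy), (bh, dbh), (by, dby)]:
        p -= lr * np.clip(g, -1, 1)

print(f"mean loss, first 10 epochs: {np.mean(losses[:10]):.4f}, "
      f"last 10 epochs: {np.mean(losses[-10:]):.4f}")
```

The same loop generalizes to LSTM or GRU cells by swapping the `forward` step and its backward pass for the corresponding gated update.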
Recurrent cell architecture
Controls convergence speed
Hidden state dimension
Input sequence length
Training iterations
Sequences per batch
Prevents overfitting
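The controls above can be read as a single hyperparameter configuration. The mapping below is an assumption about which description belongs to which knob (e.g. "controls convergence speed" as the learning rate, "prevents overfitting" as dropout), and the default values are purely illustrative:

```python
# Hypothetical configuration mirroring the demo's controls;
# names and defaults are illustrative, not taken from the demo.
hyperparams = {
    "cell_type": "LSTM",    # recurrent cell architecture (RNN, LSTM, or GRU)
    "learning_rate": 1e-3,  # controls convergence speed
    "hidden_size": 64,      # hidden state dimension
    "seq_len": 32,          # input sequence length
    "epochs": 100,          # training iterations
    "batch_size": 16,       # sequences per batch
    "dropout": 0.2,         # prevents overfitting
}
print(sorted(hyperparams))
```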
Train Loss
0
Val Loss
0
Train Perplexity
0
Val Perplexity
0
Gradient Norm
0
| Epoch | Train Loss | Val Loss | Train PPL | Val PPL | Grad Norm |
|---|---|---|---|---|---|
Forget Gate: f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
Input Gate: i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
Cell State Update: C_t = f_t ⊙ C_{t-1} + i_t ⊙ tanh(W_C · [h_{t-1}, x_t] + b_C)
Output Gate: o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
Hidden State: h_t = o_t ⊙ tanh(C_t)
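The gate equations above can be sketched as a single numpy cell step. Stacking the four gates' weights into one matrix `W` over the concatenated `[h_prev, x]` is a common implementation choice assumed here, not something the page specifies:

```python
import numpy as np

def lstm_step(x, h_prev, C_prev, W, b):
    """One LSTM cell step following the gate equations above.

    W maps the concatenated [h_prev, x] to the four stacked
    gate pre-activations (forget, input, candidate, output)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = W @ np.concatenate([h_prev, x]) + b
    H = h_prev.shape[0]
    f = sigmoid(z[:H])               # forget gate f_t
    i = sigmoid(z[H:2 * H])          # input gate i_t
    C_tilde = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])           # output gate o_t
    C = f * C_prev + i * C_tilde     # cell state update
    h = o * np.tanh(C)               # hidden state
    return h, C

rng = np.random.default_rng(1)
H, D = 4, 3  # hidden size and input size, chosen for illustration
W = rng.normal(0, 0.1, (4 * H, H + D))
b = np.zeros(4 * H)
h, C = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, b)
print(h.shape, C.shape)  # → (4,) (4,)
```

Because the cell state update is additive, gradients flow through C_t without repeated matrix multiplication, which is what lets LSTMs avoid the vanishing gradients that plague vanilla RNNs.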
Cross-Entropy Loss: L = −(1/N) Σ_i log p(y_i)
Perplexity: PPL = exp(L)
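Perplexity follows directly from the cross-entropy loss: it is the exponential of the average negative log-probability the model assigns to the true targets. A quick check with made-up token probabilities:

```python
import math

# Model probability assigned to each true token (illustrative values)
probs = [0.5, 0.25, 0.125, 0.25]

# Cross-entropy loss: mean negative log-probability
loss = -sum(math.log(p) for p in probs) / len(probs)

# Perplexity: exp of the loss; a PPL of 4 means the model is as
# uncertain as a uniform choice among 4 tokens at each step
ppl = math.exp(loss)
print(round(loss, 4), round(ppl, 4))  # → 1.3863 4.0
```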