What I got from this:
This is by far the clearest explanation of backpropagation through time that I have read. It does a great job outlining the vanishing and exploding gradient problems, and explaining how LSTMs reduce the risk of the former, all while keeping the explanation concise.
This paper also gives solid, concise overviews of popular RNN variants and advancements: deep RNNs, bidirectional RNNs, LSTMs, encoder-decoder and seq2seq models, Transformers, and pointer networks.
The sections on auto-encoders and Transformers were the most beneficial to me. With all the work being done in the data science world around Transformers, it's always great to make sure I have a solid base in the low-level mechanics and functionality of these models.
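The vanishing/exploding gradient behavior the paper explains can be seen numerically. Here is a toy sketch (my own illustration, not code from the paper): in BPTT the gradient at an early time step is a product of per-step Jacobians, so its norm shrinks or grows geometrically with sequence length depending on the weight scale.

```python
import numpy as np

# Toy illustration (my own, not from the paper): the BPTT gradient is a
# product of per-step Jacobians W^T. If their spectral radius is below 1
# the product vanishes; above 1 it explodes.
def gradient_norm_through_time(weight_scale, steps):
    rng = np.random.default_rng(0)
    W = weight_scale * rng.standard_normal((4, 4)) / np.sqrt(4)
    grad = np.eye(4)
    for _ in range(steps):
        grad = W.T @ grad  # chain rule: one more Jacobian factor per step
    return np.linalg.norm(grad)

print(gradient_norm_through_time(0.5, 50))  # shrinks toward 0 (vanishing)
print(gradient_norm_through_time(2.0, 50))  # blows up (exploding)
```

LSTMs address the vanishing case by routing the gradient through an additive cell-state update, so it is not forced through this multiplicative chain at every step.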
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Long Short-Term Memory
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Attention Is All You Need
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Auto-Encoder: What Is It? And What Is It Used For?
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

5 Ways to Detect Outliers/Anomalies That Every Data Scientist Should Know
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Random Forest in Python
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Convolutional Neural Networks Explained
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

The Unreasonable Effectiveness of Recurrent Neural Networks
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

LSTMs Explained: A Complete, Technically Accurate, Conceptual Guide with Keras
TODO: ~~~Understand why this is an important improvement over RNNs~~~ Here is a summary from what I read. To see more on what I think, read my blog post here:

DCGAN, cGAN and SAGAN & the CIFAR-10 dataset
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Introduction to Diffusion Models for Machine Learning
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

An Overview of ResNet and its Variants
~~~ understand why these are important in terms of information flow. Relay it to GANs. ~~~
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

The Illustrated Transformer
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Solving Math Word Problems
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Summarizing Books with Human Feedback
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Multimodal Neurons in Artificial Neural Networks
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

AI and Efficiency
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Deep Double Descent
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Generative Modeling with Sparse Transformers
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Dreamento: An open-source dream engineering toolbox utilizing sleep wearable
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Anomaly Detection with Machine Learning
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Diffusion Models for Video Modeling
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here:

Title
TODO: Here is a summary from what I read. To see more on what I think, read my blog post here: