Links to useful online tutorials, blog posts, images, etc., all about deep learning and machine learning. This is a work in progress and I will continue to add to this list as and when I find useful things.
If you have any nice tutorials or links you think I should add, please submit them as an issue and I will look at putting them in the list. Also, if any of the links are broken, please let me know so I can remove them or find alternatives.
- Introduction to Machine Learning
- Deep Learning
- Neural Networks
- Large Language Models
- Convolutional Neural Networks
- Normalization
- Attention and Transformers
- Recurrent Neural Networks
- TensorFlow
- TensorFlow 2/Keras API
- Pytorch
- Linear Algebra
- Optimisation
- Generative Models
- Natural Language Processing
- Computer Vision
- Action recognition
- Object Detection
- Reinforcement Learning
- Anomaly Detection
- Diffusion models
- Projects
- Research papers
- Datasets
- Andrew Ng https://www.coursera.org/learn/machine-learning
- Elements of Statistical Learning (Ch.1-4/7) http://statweb.stanford.edu/%7Etibs/ElemStatLearn/printings/ESLII_print10.pdf
- Machine/Deep Learning cheatsheets https://github.com/kailashahirwar/cheatsheets-ai
- Geoffrey Hinton Neural Networks for Machine Learning https://www.coursera.org/learn/neural-networks
- Andrew Ng (2017) https://www.coursera.org/specializations/deep-learning
- Visual proof of NN universal approximation http://neuralnetworksanddeeplearning.com/chap4.html
- Rotary Positional Embeddings (RoPE) https://medium.com/ai-insights-cobet/rotary-positional-embeddings-a-detailed-look-and-comprehensive-understanding-4ff66a874d83
- Rotary Positional Embeddings (RoPE) https://blog.eleuther.ai/rotary-embeddings/
- CNN basics explained well (explaining kernels) https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/
- Explaining different convolution types https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d
- Receptive field calculation https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807
- Calculate output size for Transpose Convolution https://www.quora.com/How-do-you-calculate-the-output-dimensions-of-a-deconvolution-network-layer
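The arithmetic behind the receptive-field and transpose-convolution links above can be condensed into two formulas. A minimal pure-Python sketch (function names are illustrative, not from any library; dilation is assumed to be 1):

```python
# Output-size arithmetic for standard and transpose ("de") convolutions.

def conv_out_size(n, kernel, stride=1, pad=0):
    """Spatial output size of a standard convolution."""
    return (n + 2 * pad - kernel) // stride + 1

def transpose_conv_out_size(n, kernel, stride=1, pad=0, output_pad=0):
    """Spatial output size of a transpose convolution (the inverse mapping)."""
    return (n - 1) * stride - 2 * pad + kernel + output_pad

# A 3x3/stride-2 conv maps a 7x7 input to 3x3; the matching
# transpose conv maps 3 back to 7.
assert conv_out_size(7, kernel=3, stride=2) == 3
assert transpose_conv_out_size(3, kernel=3, stride=2) == 7
```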
- Experiments on placement of batchnorm in Resnets http://torch.ch/blog/2016/02/04/resnets.html
- Different normalization methods explained https://mlexplained.com/2018/11/30/an-overview-of-normalization-methods-in-deep-learning/
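The key difference between the normalization methods in the overview above is which axis the statistics are computed over. A minimal pure-Python sketch under that framing (batch norm normalizes each feature across the batch; layer norm normalizes each sample across its features):

```python
import math

def normalize(values, eps=1e-5):
    """Zero-mean, unit-variance normalization of a list of numbers."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + eps) for v in values]

# A batch of 2 samples x 3 features.
batch = [[1.0, 2.0, 3.0],
         [3.0, 4.0, 5.0]]

# Batch norm: normalize each feature column across the batch dimension.
features = list(zip(*batch))
batch_norm = list(zip(*[normalize(list(f)) for f in features]))

# Layer norm: normalize each sample across its own features.
layer_norm = [normalize(sample) for sample in batch]
```

(The trainable scale/shift parameters and running statistics of the real layers are omitted here.)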
- Soft attention for images https://jhui.github.io/2017/03/15/Soft-and-hard-attention/
- seq2seq Encoder decoder attention explained https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/
- Transformers explained http://jalammar.github.io/illustrated-transformer/
- Transformers explained https://towardsdatascience.com/transformers-141e32e69591
- Transformers explained http://mlexplained.com/2017/12/29/attention-is-all-you-need-explained/
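The core operation the Transformer articles above keep coming back to is scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal pure-Python sketch (lists of lists stand in for tensors; names are illustrative):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that strongly matches the first key pulls out (almost exactly) the first value row.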
- LSTM basics http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- CTC loss https://towardsdatascience.com/intuitively-understanding-connectionist-temporal-classification-3797e43a86c
- Beam search decoding https://towardsdatascience.com/beam-search-decoding-in-ctc-trained-neural-networks-5a889a3d85a7
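The collapse rule at the heart of the CTC links above (merge repeated labels, then drop blanks) can be sketched in a few lines; this is the greedy best-path special case of the beam-search decoding (beam width 1):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best path: merge repeats, then drop blanks."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Frames: a a - a b b -  (0 = blank) decodes to labels [a, a, b] = [1, 1, 2];
# the blank between the two a's keeps them from being merged.
assert ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0]) == [1, 1, 2]
```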
- Freezing/saving and serving a model https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
- Various tutorials/ipynb https://github.com/aymericdamien/TensorFlow-Examples
- Various resources https://github.com/astorfi/Awsome-TensorFlow-Resources
- Various resources https://github.com/jtoy/awesome-tensorflow
- Various resources https://github.com/astorfi/TensorFlow-World-Resources
- TFRecord example http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/
- TFRecord for images explained https://planspace.org/20170403-images_and_tfrecords/
- Visualise TF graphs in ipynb https://blog.jakuba.net/2017/05/30/tensorflow-visualization.html
- Profiling/tracing TF https://medium.com/towards-data-science/howto-profile-tensorflow-1a49fb18073d
- Eager mode (dynamic graphs) https://medium.com/@yaroslavvb/tensorflow-meets-pytorch-with-eager-mode-714cce161e6c
- Dataset api buffer_size meaning https://stackoverflow.com/questions/46444018/meaning-of-buffer-size-in-dataset-map-dataset-prefetch-and-dataset-shuffle
- Import a .pb model to tensorboard https://medium.com/@daj/how-to-inspect-a-pre-trained-tensorflow-model-5fd2ee79ced0
- Different Dataset API Iterators explained https://towardsdatascience.com/how-to-use-dataset-in-tensorflow-c758ef9e4428
- Scoping and sharing variables https://jhui.github.io/2017/03/08/TensorFlow-variable-sharing/
- Quantization aware training https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/quantize/README.md
- TFLite conversion https://github.com/tensorflow/tensorflow/blob/865b2783aad179eaf06161d016a42fbd4c18bfb4/tensorflow/contrib/lite/g3doc/convert/cmdline_examples.md
- tf.data https://dominikschmidt.xyz/tensorflow-data-pipeline/
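The `buffer_size` semantics asked about in the Stack Overflow link above can be simulated without TensorFlow: `Dataset.shuffle` keeps a fixed-size buffer and emits a random element from it as each new element streams in, so a small buffer only shuffles locally. A pure-Python sketch of that behaviour (function name is illustrative):

```python
import random

def buffered_shuffle(stream, buffer_size, seed=0):
    """Simulate tf.data's Dataset.shuffle(buffer_size): maintain a buffer
    and yield a uniformly random buffered element as new ones arrive.
    buffer_size >= dataset size gives a full uniform shuffle; a small
    buffer can only reorder elements that are close together."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the remaining buffered elements
        yield buf.pop(rng.randrange(len(buf)))

# With buffer_size=1 the first output must be one of the first two items.
assert list(buffered_shuffle(range(5), buffer_size=1))[0] in (0, 1)
```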
- Image classification basics https://lambdalabs.com/blog/tensorflow-2-0-tutorial-01-image-classification-basics/
- Word embeddings in Keras https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/
- Symbolic (functional) vs imperative (subclassing) api https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0-dfccecb01021
- Overview of TF 2.0 high level apis https://medium.com/tensorflow/standardizing-on-keras-guidance-on-high-level-apis-in-tensorflow-2-0-bad2b04c819a
- Reinforcement learning in TensorFlow 2.0 http://inoryy.com/post/tensorflow2-deep-reinforcement-learning/
- Quantization https://gist.github.com/NobuoTsukamoto/0470fa22f3808f305db1fd4fbe01e3e4
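The word-embedding tutorial linked above boils down to one idea: an embedding layer is a trainable lookup table, a (vocab_size × dim) matrix indexed by token id. A minimal pure-Python sketch of that lookup (vocabulary and names are made up for illustration):

```python
import random

random.seed(0)
vocab = {"the": 0, "cat": 1, "sat": 2}
dim = 4

# The "layer" is just this matrix; training adjusts its rows.
embedding_matrix = [[random.uniform(-0.05, 0.05) for _ in range(dim)]
                    for _ in vocab]

def embed(token_ids):
    """Look up one embedding row per token id."""
    return [embedding_matrix[i] for i in token_ids]

sentence = [vocab[w] for w in ["the", "cat", "sat"]]
vectors = embed(sentence)  # 3 vectors of length 4
```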
- Variety of pretrained models and training scripts https://github.com/aaron-xichen/pytorch-playground
- Intro to PyTorch for Kaggle competitions https://github.com/bfortuner/pytorch-kaggle-starter
- Cheat sheet https://github.com/Tgaaly/pytorch-cheatsheet/blob/master/README.md
- Cheat sheet with examples https://github.com/bfortuner/pytorch-cheatsheet/blob/master/pytorch-cheatsheet.ipynb
- Migrating to PyTorch 0.4 https://pytorch.org/blog/pytorch-0_4_0-migration-guide/
- Dealing with variable length sequences https://medium.com/@sonicboom8/sentiment-analysis-with-variable-length-sequences-in-pytorch-6241635ae130
- Linear algebra cheat sheet for deep learning https://medium.com/towards-data-science/linear-algebra-cheat-sheet-for-deep-learning-cd67aba4526c#.nfoveoe56
- The matrix calculus you need for deep learning http://parrt.cs.usfca.edu/doc/matrix-calculus/index.html
- Momentum http://distill.pub/2017/momentum/
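The update rule animated in the Distill momentum article above is only two lines: the velocity accumulates an exponentially decaying average of past gradients, and the weights follow the velocity. A minimal pure-Python sketch (hyperparameter values are illustrative):

```python
def sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """One SGD-with-momentum update: v <- beta*v - lr*grad; w <- w + v."""
    v = beta * v - lr * grad
    return w + v, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5; the iterates
# oscillate but spiral in to the minimum at 0.
w, v = 5.0, 0.0
for _ in range(300):
    w, v = sgd_momentum_step(w, v, grad=2 * w)
assert abs(w) < 1e-3
```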
- Overview of differences in losses for different GANs https://github.com/hwalsuklee/tensorflow-generative-model-collections
- 17 hacks for training GANs https://github.com/soumith/ganhacks
- Picture of why GANs are cool https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/546b3592f59b3445ef12fba506b729c832198c33/2-Figure3-1.png
- CS224n https://www.youtube.com/watch?v=OQQ-W_63UgQ&list=PL3FW7Lu3i5Jsnh1rnUwq_TcylNr7EkRe6
- CS224n cheat sheet https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
- Word2Vec http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/
- Alternatives to RNN for encoding https://hanxiao.github.io/2018/06/25/4-Encoding-Blocks-You-Need-to-Know-Besides-LSTM-RNN-in-Tensorflow/
- Some PyTorch examples of basic NLP tasks https://github.com/lyeoni/nlp-tutorial
- Calculating MFCC http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/
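The skip-gram model in the Word2Vec tutorial above is trained on (center, context) word pairs taken from a sliding window. A minimal pure-Python sketch of that pair generation step (function name is illustrative):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for skip-gram: for each
    center word, pair it with every word within `window` positions."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

assert skipgram_pairs(["the", "cat", "sat"], window=1) == [
    ("the", "cat"), ("cat", "the"), ("cat", "sat"), ("sat", "cat")]
```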
- CS231n https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
- CS231n Cheat sheet https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks#filter
- Overview of architectures https://medium.com/@arthur_ouaknine/review-of-deep-learning-algorithms-for-image-semantic-segmentation-509a600f7b57
- Summary of state of the art http://blog.qure.ai/notes/deep-learning-for-videos-action-recognition-review
- Intro to super resolution with cnn https://medium.com/@hirotoschwert/introduction-to-deep-super-resolution-c052d84ce8cf
- Yolov3 https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/
- Yolov3 code http://leiluoray.com/2018/11/10/Implementing-YOLOV3-Using-PyTorch/
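A quantity the YOLO tutorials above lean on constantly is intersection-over-union (IoU) between boxes, used for matching predictions to ground truth and for non-max suppression. A minimal pure-Python sketch with (x1, y1, x2, y2) corner boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # 0 if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 square: intersection 1, union 7.
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7
```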
- David Silver Deepmind lectures https://www.youtube.com/playlist?list=PLeJKOhW5z62XKURemUDc3N92Min9yaR12
- Intro to RL algorithms https://medium.com/@huangkh19951228/introduction-to-various-reinforcement-learning-algorithms-i-q-learning-sarsa-dqn-ddpg-72a5e0cb6287
- Intro to RL https://medium.freecodecamp.org/an-introduction-to-reinforcement-learning-4339519de419
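The tabular Q-learning update covered in the intro articles above is a single line of arithmetic: Q(s,a) ← Q(s,a) + α·(r + γ·maxₐ′ Q(s′,a′) − Q(s,a)). A minimal pure-Python sketch (the table layout and hyperparameters are illustrative):

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update; q is a list of per-state action values."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Two states, two actions, all values initialised to zero.
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
assert q[0][1] == 0.5  # alpha * reward, since Q(next_state) is still zero
```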
- Autoencoder for fraud detection https://medium.com/@curiousily/credit-card-fraud-detection-using-autoencoders-in-keras-tensorflow-for-hackers-part-vii-20e0c85301bd
- Cheat code for diffusion models https://sander.ai/2022/05/26/guidance.html
- Illustrated stable diffusion http://jalammar.github.io/illustrated-stable-diffusion/
- Stable diffusion using hugging face https://towardsdatascience.com/stable-diffusion-using-hugging-face-501d8dbdd8
- Stable diffusion using hugging face - variants https://towardsdatascience.com/stable-diffusion-using-hugging-face-variations-of-stable-diffusion-56fd2ab7a265
- Why do we need unconditional embedding https://forums.fast.ai/t/why-do-we-need-the-unconditioned-embedding/101134/11
- Stable diffusion code deep dive https://github.com/fastai/diffusion-nbs/blob/master/Stable%20Diffusion%20Deep%20Dive.ipynb
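The reason for the unconditional embedding (the fast.ai thread above) is classifier-free guidance, discussed in the "cheat code" post: at each denoising step the model predicts noise both with and without the prompt, and the two are combined by pushing the conditional prediction away from the unconditional one. A minimal pure-Python sketch of just that combination step (lists stand in for noise tensors):

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale=7.5):
    """Classifier-free guidance: eps = eps_u + g * (eps_c - eps_u)."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(eps_uncond, eps_cond)]

# guidance_scale = 1 recovers the plain conditional prediction;
# larger scales exaggerate the direction the prompt pulls in.
assert cfg_combine([0.0, 0.0], [1.0, 2.0], guidance_scale=1.0) == [1.0, 2.0]
assert cfg_combine([0.0, 0.0], [1.0, 2.0], guidance_scale=2.0) == [2.0, 4.0]
```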
- Intro to recommendation systems https://medium.com/datadriveninvestor/how-to-built-a-recommender-system-rs-616c988d64b2
- Collection of links https://github.com/terryum/awesome-deep-learning-papers/blob/master/README.md
- Collection of links https://github.com/sbrugman/deep-learning-papers
- Pose estimation https://arxiv.org/abs/1312.4659
- Object detection https://arxiv.org/abs/1311.2524
- Object detection https://arxiv.org/abs/1312.2249
- Object detection https://arxiv.org/abs/1412.1441
- Object detection https://arxiv.org/abs/1506.01497 (Faster R-CNN)
- Object detection https://arxiv.org/abs/1506.02640 (YOLO)
- Object detection https://arxiv.org/abs/1512.02325 (SSD)
- Object detection https://arxiv.org/abs/1611.10012 (Object detection comparison)
- Semantic segmentation https://arxiv.org/abs/1511.00561 (Segnet)
- Semantic segmentation https://arxiv.org/abs/1505.07293 (Segnet)
- Semantic segmentation https://arxiv.org/abs/1511.02680 (Bayesian Segnet)
- Network architectures https://arxiv.org/abs/1602.07360 (SqueezeNet)
- Network architectures https://arxiv.org/abs/1606.00373 (Fully Convolutional ResNet)
- Re Identification https://arxiv.org/abs/1703.07737 (In defence of triplet loss)
- Depth estimation https://arxiv.org/abs/1406.2283 (Depth map prediction, multi-scale deep network)
- Depth estimation https://arxiv.org/abs/1411.4734 (Depth, surface normals & semantic labels with common multi scale conv.)
- Depth estimation https://arxiv.org/abs/1411.6387 (Deep conv neural fields for depth estimation from a single image)
- Network understanding https://arxiv.org/abs/1701.04128 (Understanding effective receptive field in deep CNN)
- Self driving cars https://arxiv.org/pdf/1704.07911.pdf (Explaining How a DNN can steer a car NVIDIA)
- Resnet paper https://arxiv.org/pdf/1512.03385.pdf
- Quantization https://arxiv.org/pdf/1502.02551.pdf
- Quantization http://proceedings.mlr.press/v48/linb16.pdf
- Quantization https://www.microsoft.com/en-us/research/wp-content/uploads/2017/04/FxpNet-submitted.pdf
- Quantization https://arxiv.org/pdf/1703.03073.pdf
- Quantization https://arxiv.org/pdf/1605.06402.pdf (Ristretto)
- Super Resolution with GANs https://arxiv.org/pdf/1609.04802.pdf
- Instance segmentation https://arxiv.org/abs/1703.06870 (Mask R-CNN)
- EfficientNets (how to scale up network architectures) https://arxiv.org/pdf/1905.11946.pdf
- CNNs trained on ImageNet learn textures rather than shapes https://arxiv.org/pdf/1811.12231.pdf
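The loss from the "In defence of the triplet loss" re-identification paper listed above is simple enough to state inline: max(d(a,p) − d(a,n) + margin, 0), pulling an anchor towards a positive of the same identity and away from a negative. A minimal pure-Python sketch with Euclidean distance (the margin value is illustrative):

```python
import math

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: max(d(a, p) - d(a, n) + margin, 0)."""
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

# Positive closer than negative by more than the margin: zero loss.
assert triplet_loss([0.0, 0.0], [0.1, 0.0], [5.0, 0.0]) == 0.0
```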
- NLP https://github.com/niderhoff/nlp-datasets
- MNIST png format https://github.com/myleott/mnist_png