The Fascinating No-Gradient Approach to Neural Net Optimization

From: https://towardsdatascience.com/the-fascinating-no-gradient-approach-to-neural-net-optimization-abb287f88c97 Gradient descent is one of the most important ideas in machine learning: given some cost function to minimize, the algorithm iteratively takes steps down the steepest slope, theoretically landing in a minimum after a sufficient number of iterations. It was first discovered by Cauchy in 1847 and later expanded upon by Haskell Curry for non-linear …
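The iterative "step down the steepest slope" idea from the excerpt can be sketched in a few lines; the learning rate, step count, and the quadratic example below are illustrative choices, not taken from the article.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient of the cost function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward 3.0
```

The update shrinks the distance to the minimum by a constant factor each step, which is why a sufficient number of iterations lands arbitrarily close to it.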

PyTorch layer dimensions: what size and why?

From: https://towardsdatascience.com/pytorch-layer-dimensions-what-sizes-should-they-be-and-why-4265a41e01fd Preface This article covers defining tensors, properly initializing neural network layers in PyTorch, and more! Introduction You might be asking: “How do I initialize my layer dimensions in PyTorch without getting yelled at?” Is it all just trial and error? No, really… What are they supposed to be? For starters, did you …
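Most of the bookkeeping behind PyTorch layer sizes reduces to one formula: a Conv2d output dimension is floor((in + 2·padding − kernel) / stride) + 1. A minimal plain-Python sketch of that arithmetic (the function name and the 28×28 example are illustrative, not from the article):

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Output size of one spatial dimension after a Conv2d layer."""
    return (size + 2 * padding - kernel) // stride + 1

# A 28x28 input through a 3x3 kernel, stride 1, no padding -> 26x26,
# so a following nn.Linear would need in_features = channels * 26 * 26.
print(conv2d_out(28, 3))  # 26
```

Running this before defining a model is one way to avoid the shape-mismatch errors the article alludes to.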

Denoising Lung CT Scans using Neural Networks with Interactive Code — Part 4, Convolutional Residual Neural Networks

From: https://towardsdatascience.com/denosing-lung-ct-scans-using-neural-networks-with-interactive-code-part-4-convolutional-resnet-74335714a4ae Another attempt to denoise CT scans of lungs; this time we are going to use a more sophisticated convolutional ResNet architecture. Specifically, we are going to use the architecture proposed in the paper “Deep Residual Learning for Image Recognition”. Also, as usual, let’s do manual back propagation to compare our results. Network Architecture (Image Form) Image …
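The core idea of the cited paper can be shown in a toy sketch: a residual block learns a residual mapping f(x) and adds it back to its input, so the block computes x + f(x). The list-based function below is a conceptual stand-in for the paper's convolutional layers, not the article's implementation.

```python
def residual_block(x, f):
    """Identity shortcut: output = input + learned residual f(input)."""
    return [xi + fi for xi, fi in zip(x, f(x))]

# With f as a small perturbation, the output stays close to x, which is
# what makes very deep stacks of such blocks trainable.
out = residual_block([1.0, 2.0], lambda x: [0.1 * xi for xi in x])
print(out)  # [1.1, 2.2]
```

Because the shortcut passes the input through unchanged, gradients flow directly through the addition during back propagation, which is the property the article's manual back propagation exercises.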

Denoising Lung CT Scans using Neural Networks with Interactive Code — Part 2, Convolutional Neural Network

From: https://towardsdatascience.com/only-numpy-medical-denosing-lung-ct-scans-using-neural-networks-with-interactive-code-part-2-6def73cabba5 So today, I will continue the image denoising series, and fortunately I found the paper “Low-dose CT denoising with convolutional neural network” (in Biomedical Imaging) by Hu Chen. So let’s take a dive into their implementation and see what results we get. Finally, for fun, let’s use a different type of back propagation to compare …

Denoising Lung CT Scans using Neural Networks with Interactive Code — Part 1, Vanilla Auto Encoder Model

From: https://towardsdatascience.com/only-numpy-medical-denosing-lung-ct-scans-using-auto-encoders-with-interactive-code-part-1-a6c3f9400246 Image from Pixel Bay. My passion lies in Artificial Intelligence, and I want my legacy to be in the field of health care, using AI. So, in hopes of making my dream come true, as well as to practice an OOP approach to implementing neural networks, I will start the first part of a long series …

How to Control the Stability of Training Neural Networks With the Batch Size

From: https://machinelearningmastery.com/how-to-control-the-speed-and-stability-of-training-neural-networks-with-gradient-descent-batch-size/ Neural networks are trained using gradient descent, where the estimate of the error used to update the weights is calculated from a subset of the training dataset. The number of examples from the training dataset used in the estimate of the error gradient is called the batch size, and it is an important hyperparameter that …
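The mechanism the excerpt describes can be sketched directly: the gradient used for a weight update is the average of per-example gradients over a sampled batch, so smaller batches give noisier (less stable) estimates of the full-dataset gradient. The toy gradient values below are illustrative, not from the article.

```python
import random

def batch_gradient(per_example_grads, batch_size, rng):
    """Estimate the error gradient by averaging over a sampled batch."""
    batch = rng.sample(per_example_grads, batch_size)
    return sum(batch) / batch_size

rng = random.Random(0)
per_example = [0.9, 1.1, 1.0, 0.8, 1.2, 1.05, 0.95, 1.0]  # toy gradients
full = sum(per_example) / len(per_example)   # batch gradient descent: exact
mini = batch_gradient(per_example, 2, rng)   # mini-batch: noisy estimate
print(full, mini)
```

Increasing the batch size pulls each estimate toward the full-dataset average, trading update noise for computation per step.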

Noise: It’s not always annoying

From: https://towardsdatascience.com/noise-its-not-always-annoying-1bd5f0f240f One of the first concepts you learn when you begin to study neural networks is the meaning of overfitting and underfitting. Sometimes it is a challenge to train a model that generalizes to your data perfectly, especially when you have a small dataset, because when you train a neural network with a small dataset, the network generally memorizes …

An Easy Guide to Gauge Equivariant Convolutional Networks

From: https://towardsdatascience.com/an-easy-guide-to-gauge-equivariant-convolutional-networks-9366fb600b70 Geometric deep learning is a very exciting new field, but its mathematics is slowly drifting into the territory of algebraic topology and theoretical physics. This is especially true for the paper “Gauge Equivariant Convolutional Networks and the Icosahedral CNN” by Cohen et al. (https://arxiv.org/abs/1902.04615), which I want to explore in this article. The paper uses …

Bayesian Convolutional Neural Networks with Bayes by Backprop

From: https://medium.com/neuralspace/bayesian-convolutional-neural-networks-with-bayes-by-backprop-c84dcaaf086e So far, we have elaborated on how Bayes by Backprop works on a simple feedforward neural network. In this post, I will explain how you can apply exactly this framework to any convolutional neural network (CNN) architecture you like. You might have seen Gal and Ghahramani’s (2015) publication of a Bayesian CNN, but that’s an entirely different approach …