Autoencoders have been in the deep learning literature for a long time now, most popularly for data compression tasks. With their simple structure and relatively uncomplicated underlying mathematics, they became one of the first choices for dimensionality reduction on simple data. However, basic fully connected layers fail to capture the patterns in pixel data since they do not preserve neighborhood information. To capture image data well in the latent variables, convolutional layers are therefore usually used in autoencoders.
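As a toy illustration of the encode/decode structure (not a trained model, and the sizes here are made up), the sketch below downsamples an image with 2x2 mean pooling and upsamples it back. Both operations act on local pixel neighborhoods, which is exactly the information a flattened fully connected layer discards:

```python
import numpy as np

def encode(img):
    """Stride-2 2x2 mean pooling: a crude stand-in for a learned
    convolutional encoder; it keeps spatial neighborhoods together."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(latent):
    """2x nearest-neighbour upsampling: a stand-in for a learned decoder."""
    return latent.repeat(2, axis=0).repeat(2, axis=1)

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # smooth toy "image"
z = encode(img)        # latent: 4x4, a 4x compression
recon = decode(z)      # reconstruction: back to 8x8
print(z.shape, recon.shape)                    # (4, 4) (8, 8)
print(float(np.abs(recon - img).mean()))       # small, since the image is smooth
```

Because neighboring pixels are similar in natural images, even this unlearned local scheme reconstructs a smooth image reasonably well; a learned convolutional autoencoder does the same thing with trainable filters.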


For the previous post on Autoencoders, please visit:



Autoencoders are unsupervised neural network models that are designed to learn to represent multi-dimensional data with fewer parameters. Data compression algorithms have been known for a long time; however, learning the nonlinear operations that map data into lower dimensions has been the contribution of autoencoders to the literature.
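A minimal sketch of such a learned nonlinear mapping, assuming a tiny two-layer autoencoder with hand-derived gradients (all sizes, the tanh choice, and the learning rate are illustrative, not from any specific paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 samples in 4-D that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can represent them well.
latent_true = rng.normal(size=(200, 2))
X = latent_true @ rng.normal(size=(2, 4))

W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder: 4 -> 2
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder: 2 -> 4

lr = 0.01
mse_history = []
for _ in range(2000):
    H = np.tanh(X @ W_enc)       # nonlinear latent codes
    X_hat = H @ W_dec            # reconstruction
    err = X_hat - X
    mse_history.append(float((err ** 2).mean()))
    # Gradients of the mean squared error, backpropagated by hand.
    gW_dec = H.T @ err / len(X)
    gH = err @ W_dec.T * (1 - H ** 2)   # tanh derivative
    gW_enc = X.T @ gH / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

print(mse_history[0], mse_history[-1])  # reconstruction error drops with training
```

The training target is the input itself, which is what makes the model unsupervised: no labels are needed, only the reconstruction error.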

A general scheme of autoencoders (Figure is taken from[1])

Autoencoders provide a very basic approach to extracting the most important features of data by removing redundancy. Redundancy occurs when multiple pieces of a dataset (a column in a .csv file or a pixel location in an image dataset) are highly correlated with one another. …
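A quick way to see this kind of redundancy is to measure column correlations. The hypothetical sensor columns below are my own toy example, not from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
temp_c = rng.normal(20.0, 5.0, size=500)   # hypothetical temperature column (Celsius)
temp_f = temp_c * 9 / 5 + 32               # same reading in Fahrenheit: fully redundant
noise = rng.normal(size=500)               # an unrelated column

data = np.column_stack([temp_c, temp_f, noise])
corr = np.corrcoef(data, rowvar=False)     # columns as variables
print(corr.round(2))
# temp_c and temp_f correlate perfectly, so one of the two columns
# carries no extra information: exactly the redundancy an
# autoencoder's bottleneck can squeeze out.
```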


Convolutional Neural Networks, also called ConvNets or simply CNNs, are a family of neural networks that are commonly implemented in computer vision tasks, though their use cases are not limited to that. Today, CNNs are employed in deep learning research and practical artificial intelligence applications ranging from autonomous driving to medical imaging.

Before diving into the fundamental glossary and the chronological evolution of CNNs, I will rewind the tape to the mid-1900s and briefly mention the story of human-like learning attempts in computer science.

The first model to mimic the biological nervous system was developed by…


ResNet owes its name to its residual blocks with skip connections, which enable the model to be extremely deep. Even though skip connections are a common idea in the community now, they were a revolutionary architectural choice that allowed ResNet to reach up to 152 layers with no vanishing or exploding gradient problems during training.
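The skip-connection idea can be sketched with dense layers (the real ResNet blocks use convolutions and batch normalization; the layer sizes and weight scales below are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = F(x) + x: the skip connection adds the input back,
    so the block only has to learn the residual F."""
    return relu(x @ W1) @ W2 + x

d = 8
x = rng.normal(size=d)
# Near-zero weights: the block then behaves almost like the identity,
# which is why very deep stacks of such blocks still train well.
W1 = rng.normal(scale=1e-3, size=(d, d))
W2 = rng.normal(scale=1e-3, size=(d, d))

y = residual_block(x, W1, W2)
print(float(np.abs(y - x).max()))  # tiny: the block is close to the identity
```

Without the `+ x` term, a block with small weights would erase its input; with it, depth comes almost for free.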

ResNet 34 Architecture (Illustration is taken from the original paper [1])

For the previous posts, please visit:

With the developments in hardware technology and the variety of design techniques in deep learning, deeper and deeper models became popular in the ImageNet competition. Unlike LeNet and AlexNet, VGG and GoogLeNet managed to deal with larger structures. However…


InceptionV1, better known as GoogLeNet, is one of the most successful models of the early years of convolutional neural networks. Szegedy et al. from Google Inc. published the model in their paper Going Deeper with Convolutions[1] and won ILSVRC-2014 by a large margin. The name signifies both the affiliation of most of the contributing scholars with Google and a reference to the LeNet[2] model.
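The defining trick of GoogLeNet's Inception module, parallel branches concatenated along the channel axis, can be sketched as follows. The branch widths are made up, and each branch here is reduced to a random 1x1 convolution just to get the shapes right (real branches use 1x1, 3x3, and 5x5 convolutions plus pooling):

```python
import numpy as np

rng = np.random.default_rng(0)
# A feature map of shape (channels, height, width) entering the module.
x = rng.normal(size=(16, 28, 28))

def branch(x, out_channels):
    """Stand-in for one Inception branch: a random 1x1 convolution,
    i.e. a per-pixel linear mix of the input channels."""
    W = rng.normal(size=(out_channels, x.shape[0]))
    return np.tensordot(W, x, axes=1)   # -> (out_channels, 28, 28)

# Parallel branches with different output widths (illustrative counts),
# then depth-wise concatenation, as in the Inception module.
outs = [branch(x, c) for c in (8, 16, 4, 4)]
y = np.concatenate(outs, axis=0)
print(y.shape)  # (32, 28, 28)
```

The key point is that every branch preserves the spatial size, so their outputs can be stacked along the channel dimension and fed to the next module.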

GoogLeNet architecture, forward propagation from right to left (Illustration is taken from [3].)

For the previous posts, please visit:

After analyzing and implementing VGG16[7] (the runner-up of ILSVRC-2014), it is now time for the winner of the competition, GoogLeNet. As the name of the paper[1] implies…


VGG owes its name to the Visual Geometry Group at Oxford University. After its submission to ILSVRC in 2014[1], the VGGNet model became as popular as the group itself. It is mostly considered one step beyond AlexNet owing to its deeper architecture and smaller kernel sizes.
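The payoff of those smaller kernels is easy to verify with arithmetic: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer weights (a standard observation; the channel count below is an arbitrary example, and biases are ignored):

```python
# For C input channels and C output channels per layer:
C = 64
params_5x5 = 5 * 5 * C * C               # one 5x5 convolution layer
params_3x3_pair = 2 * (3 * 3 * C * C)    # two stacked 3x3 layers
print(params_5x5, params_3x3_pair)       # 102400 73728
print(params_3x3_pair / params_5x5)      # 0.72 -> 28% fewer parameters
```

The stacked version also inserts an extra nonlinearity between the two 3x3 layers, which makes the composed mapping more expressive than a single 5x5 layer.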

VGGNet Architecture (Illustration is taken from [2].)

For the previous posts, please visit:

Years after the release of LeNet[3], the AlexNet paper[4] was published in 2012 and drew attention back to CNNs. Subsequent models focused on their predecessors' deficiencies and aimed for higher accuracy in more efficient ways. VGGNet borrows a lot from AlexNet, yet it is…


AlexNet is an important milestone in visual recognition tasks, in terms of both hardware utilization and several architectural choices. After its publication in 2012 by Alex Krizhevsky et al.[1], the popularity of deep learning and, in particular, of CNNs grew drastically. Below you can see the architecture of AlexNet:

AlexNet Architecture (The figure is truncated at the top in the original paper as well.)

For the previous post, please visit:

After the big success of LeNet[5] in handwritten digit recognition, computer vision applications using deep learning came to a halt. Instead of learning features through filter convolutions, researchers preferred more handcrafted image processing techniques such as wavelets, Gabor filters, and many…


LeNet is considered to be the ancestor of convolutional neural networks and is a well-known model among the computer vision community.

LeNet for Digit Recognition

LeNet is one of the most fundamental deep learning models, primarily used to classify handwritten digits. Proposed by Yann LeCun[1] in 1989, LeNet is one of the earliest neural networks to employ the convolution operation. By combining the newly developed back-propagation algorithm with convolutional networks, LeCun et al. became pioneers of image classification using deep learning. The name LeNet is mostly used interchangeably with LeNet-5, the fifth and best-known iteration of the LeNet series.
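A minimal "valid" convolution makes the layer arithmetic concrete: LeNet-5 takes 32x32 inputs, and its first layer's 5x5 kernels produce 28x28 feature maps. The averaging kernel below is just a stand-in for a learned filter:

```python
import numpy as np

def conv2d(img, kernel):
    """Plain 'valid' 2-D convolution (cross-correlation, as in CNNs):
    slide the kernel over the image, no padding."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

img = np.ones((32, 32))          # LeNet-5 takes 32x32 inputs
kernel = np.ones((5, 5)) / 25.0  # 5x5 averaging mask (stand-in for a learned filter)
out = conv2d(img, kernel)
print(out.shape)                 # (28, 28): 32 - 5 + 1 on each axis
```

The same output-size rule, input minus kernel plus one, determines the feature-map dimensions throughout the network.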

This tutorial is intended…

mrgrhn

Boğaziçi Üniversitesi ’20 Electrical & Electronics Engineering — Physics | Articles on various Deep Learning topics
