Deep Learning and Reinforcement Learning | Coursera (IBM)
This course provides an introduction to two highly sought-after fields in Machine Learning: Deep Learning and Reinforcement Learning. Deep Learning, a subset of Machine Learning, is utilized in both Supervised and Unsupervised Learning, powering many of the AI applications we use daily. The course begins by covering the theory behind Neural Networks, the foundation of Deep Learning, and explores various modern architectures. After building several Deep Learning models, the focus shifts to Reinforcement Learning, a Machine Learning approach that has gained significant attention recently. While Reinforcement Learning currently has limited practical applications, it is a promising research area in AI with potential future relevance.
Upon completing this course, and if you have followed the IBM Specialization sequence, you will have substantial practice and a solid understanding of the main types of Machine Learning: Supervised Learning, Unsupervised Learning, Deep Learning, and Reinforcement Learning. By the end of this course, you should be able to:
- Identify problems suitable for Unsupervised Learning
- Explain the curse of dimensionality and its impact on clustering with many features
- Describe and apply common clustering and dimensionality-reduction algorithms
- Perform clustering where appropriate and compare the performance of per-cluster models
- Understand metrics for characterizing clusters
This course is designed for aspiring data scientists looking to gain practical experience with Deep Learning and Reinforcement Learning. To benefit fully from this course, you should be familiar with programming in a Python development environment and have a fundamental understanding of Data Cleaning, Exploratory Data Analysis, Unsupervised Learning, Supervised Learning, Calculus, Linear Algebra, Probability, and Statistics.
Notice!
Always refer to the module on your course for the most accurate and up-to-date information.
Attention!
If you have any questions that are not covered in this post, please feel free to leave them in the comments section below. Thank you for your engagement.
WEEK 1 QUIZ
1. What is another name for the “neuron” on which all neural networks are based?
- perceptron
- A network of neurons can represent a non-linear decision boundary.
- 8
- 3 x 4 matrix.
- Two
- Eight
- A single-layer Neural Network can be parameterized to generate results equivalent to Linear or Logistic Regression.
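The Week 1 answers above hinge on the idea that a single neuron (perceptron) with a sigmoid activation computes the same function as Logistic Regression. Below is a minimal NumPy sketch of that claim; the weights, bias, and input vector are arbitrary values chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation used by both a single "neuron" and logistic regression
    return 1.0 / (1.0 + np.exp(-z))

# A single-layer "neuron": output = sigmoid(w . x + b),
# which is exactly the logistic regression model.
w = np.array([0.5, -1.2, 0.8])   # example weights (arbitrary, for illustration)
b = 0.1                          # example bias
x = np.array([1.0, 2.0, -0.5])   # one input vector with 3 features

print(sigmoid(np.dot(w, x) + b))  # probability-like output in (0, 1)
```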
WEEK 2 QUIZ
1. The backpropagation algorithm updates which of the following?
- The parameters only.
- They add non-linearity into the model, allowing the model to learn complex patterns.
- The actual output is determined by computing the output of neurons in each hidden layer.
1. Use fit() and specify the number of epochs to train the model for.
2. Create a Sequential model with the relevant layers.
3. Normalize the features with layers.Normalization() and apply adapt().
4. Compile using model.compile() with specified optimizer and loss.
- ANSWER: 3, 2, 4, 1 (a Keras sketch of this workflow, in that order, follows this quiz block)
- False
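For the ordering question above (answer 3, 2, 4, 1), here is a hedged Keras sketch of that workflow: adapt a Normalization layer, build a Sequential model, compile, then fit. The synthetic data, layer sizes, and epoch count are assumptions made only to keep the snippet runnable.

```python
import numpy as np
import tensorflow as tf

# Synthetic data just to make the sketch runnable
X = np.random.rand(200, 4).astype("float32")
y = np.random.rand(200, 1).astype("float32")

# Step 3: normalize the features with layers.Normalization() and adapt() to the data
norm = tf.keras.layers.Normalization()
norm.adapt(X)

# Step 2: create a Sequential model with the relevant layers
model = tf.keras.Sequential([
    norm,
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Step 4: compile with a specified optimizer and loss
model.compile(optimizer="adam", loss="mse")

# Step 1 (done last): fit() for a chosen number of epochs
model.fit(X, y, epochs=5, verbose=0)
```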
WEEK 3 QUIZ
1. What is the main function of backpropagation when training a Neural Network?
- Make adjustments to the weights
- True
- True
- Leaky hyperbolic tangent
- Cases in which explainability is the main objective
- Pruning
- False
- online learning
- True
- Keras
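Since Week 3 turns on backpropagation "making adjustments to the weights", a tiny NumPy sketch of one gradient step for a single sigmoid neuron may help. The input, target, learning rate, and squared-error loss are illustrative choices, not the course's exact setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example and one sigmoid neuron (values are illustrative)
x, y_true = np.array([0.5, -1.0]), 1.0
w, b, lr = np.array([0.1, 0.2]), 0.0, 0.5

# Forward pass
z = np.dot(w, x) + b
y_pred = sigmoid(z)

# Backward pass for squared error L = 0.5 * (y_pred - y_true)^2:
# dL/dw = (y_pred - y_true) * sigmoid'(z) * x, and sigmoid'(z) = y_pred * (1 - y_pred)
grad_z = (y_pred - y_true) * y_pred * (1.0 - y_pred)
w -= lr * grad_z * x   # backpropagation's job: adjust the weights...
b -= lr * grad_z       # ...and the bias, in the direction that reduces the loss
```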
WEEK 4 QUIZ
1. What is the main function of backpropagation when training a Neural Network?
- Make adjustments to the weights
- True
- True
- Sigmoid
- Hyperbolic tangent
- Leaky hyperbolic tangent
- ReLU
- Cases in which explainability is the main objective
- Pruning
- False
- online learning
- True
- Improving the speed at which large models can be trained from scratch
- Pooling can reduce both computational complexity and overfitting.
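The last answer above notes that pooling reduces both computational complexity and overfitting. The sketch below shows this in a small Keras CNN: each MaxPooling2D layer halves the spatial dimensions without adding parameters, which keeps the downstream Flatten/Dense part small. The input shape, filter counts, and class count are illustrative assumptions.

```python
import tensorflow as tf

# A tiny CNN where pooling halves the spatial dimensions,
# shrinking the activations and the parameters of later Dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),   # 32x32 -> 16x16, no extra parameters
    tf.keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),   # 16x16 -> 8x8
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # the pooled feature maps keep the Flatten/Dense part small
```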
WEEK 5 QUIZ
1. (True/False) In Keras, the Dropout layer has an argument called rate, which is a probability that represents how often we want to invoke the layer in the training.
- False
- Save early layers for generalization before re-training later layers for specific applications.
- freeze the layers such that their weights don’t update during training.
2. Improve the model by fine-tuning.
3. Train the model with a new output layer in place.
4. Select a pre-trained model as the base of our training.
- ANSWER: 4, 1, 3, 2 (a transfer-learning sketch following this order appears after this quiz block)
- 30000. There is one feature per pixel of each channel, so 100 × 100 × 3 = 30000.
- Dense layer with the number of units corresponding to the number of classes
- filters applied
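For the transfer-learning ordering above (4, 1, 3, 2) and the Dropout rate question, here is a hedged Keras sketch: pick a pre-trained base (MobileNetV2 is just one possible choice and requires downloading ImageNet weights), freeze it, train a new output layer, then unfreeze and fine-tune with a small learning rate. The 100 × 100 × 3 input shape matches the feature-count answer above; the class count is illustrative and the dataset variable train_ds is a placeholder.

```python
import tensorflow as tf

# Step 4 (done first): select a pre-trained model as the base of our training
base = tf.keras.applications.MobileNetV2(
    input_shape=(100, 100, 3), include_top=False, weights="imagenet")

# Step 1: keep the early, general-purpose layers by freezing them
base.trainable = False   # the base's weights won't update during training

# Step 3: train the model with a new output layer in place
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(rate=0.2),   # rate = probability of dropping a unit, not of invoking the layer
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes, for illustration
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_ds, epochs=3)   # train the new head on your data (train_ds is a placeholder)

# Step 2 (last): improve the model by fine-tuning with a low learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy")
# model.fit(train_ds, epochs=3)
```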
WEEK 6 QUIZ
1. (True/False) RNN models are mostly used in the fields of natural language processing and speech recognition.
- True
- True
- True
- True
- False
- The Greedy Search algorithm selects one best candidate as an input sequence for each time step, while Beam Search produces multiple different hypotheses based on conditional probability.
- GRUs
- Speech Recognition
- Machine Translation
- Image Captioning
- Generating Images
- Anomaly Detection
- Robotic Control
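Week 6 centres on RNNs for sequence data such as text and speech. Below is a minimal GRU-based Keras classifier as a sketch; the vocabulary size, sequence length, embedding width, GRU units, and binary output are all illustrative assumptions rather than the course's exact model.

```python
import tensorflow as tf

vocab_size, seq_len = 1000, 20   # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=32),  # token ids -> vectors
    tf.keras.layers.GRU(64),                                         # gated recurrent layer
    tf.keras.layers.Dense(1, activation="sigmoid"),                  # e.g. a binary sentiment label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```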
WEEK 7 QUIZ
1. Select the correct option:
Statement 1: Autoencoders are a supervised learning technique.
Statement 2: Autoencoder’s output is exactly the same as the input.
- Both statements are false.
Statement 1: Autoencoders can be viewed as a generalization of PCA that discovers a lower-dimensional representation of complex data.
Statement 2: We can implement an overcomplete autoencoder by constraining the number of units present in the hidden layers of the neural network.
- Statement 1 is true, statement 2 is false.
- True
- True
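Week 7's answers describe autoencoders as learning a lower-dimensional representation of the input (a PCA-like view). The sketch below is a minimal undercomplete autoencoder in Keras; the 784-dimensional input and 32-unit bottleneck are illustrative assumptions.

```python
import tensorflow as tf

# Undercomplete autoencoder: the bottleneck forces a lower-dimensional
# representation of the input, loosely analogous to a nonlinear PCA.
input_dim, code_dim = 784, 32   # e.g. flattened 28x28 images; sizes are illustrative

inputs = tf.keras.Input(shape=(input_dim,))
encoded = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)        # encoder
decoded = tf.keras.layers.Dense(input_dim, activation="sigmoid")(encoded)   # decoder

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# Trained with the input as its own target: autoencoder.fit(X, X, epochs=...)
```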
WEEK 8 QUIZ
1. Select the right assertion:
- Autoencoders learn from a compressed representation of the data, while variational autoencoders learn from a probability distribution representing the data.
- True
- Add layers and epochs
- True
- True
- generator and discriminator
- True
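The GAN answer above names the two networks involved: a generator and a discriminator. Here is a minimal sketch of that pair in Keras, with the adversarial training loop omitted; the latent and data dimensions are illustrative assumptions.

```python
import tensorflow as tf

latent_dim, data_dim = 16, 784   # illustrative sizes (e.g. flattened 28x28 images)

# Generator: maps random noise to a fake sample
generator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(data_dim, activation="sigmoid"),
])

# Discriminator: scores how likely a sample is to be real
discriminator = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(data_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
# The adversarial training loop (alternating generator/discriminator updates) is omitted here.
```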
WEEK 9 QUIZ
1. (True/False) Simulation is a common approach for Reinforcement Learning applications that are complex or computing intensive.
- True
- False
- True
- True
- Convolutional Neural Network
- Recurrent Neural Network
- Autoencoders
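Week 9's first answer notes that RL agents are commonly trained against simulations. The sketch below runs tabular Q-learning on a toy hand-rolled simulator (a five-state corridor); the environment, hyperparameters, and episode count are all illustrative assumptions.

```python
import numpy as np

# Tiny simulated environment: states 0..4 on a line; action 0 = left, 1 = right.
# Reaching state 4 gives reward 1 and ends the episode. This is only a toy
# stand-in for the kind of simulator RL agents are commonly trained against.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for _ in range(500):                    # episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action choice; explore when the Q-values tie
        if rng.random() < epsilon or Q[s].max() == Q[s].min():
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)   # learned action values: moving right should dominate
```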