Key Points
- Introduction to Machine Learning & Deep Learning
- Deep Learning to Identify Smartphone Applications
- Overview of Deep Neural Network Concepts
- An Introduction to Keras with a Binary Classification Task
- Classifying Smartphone Apps with Keras
- Tuning Neural Network Models for Better Accuracy
- Effective Deep Learning Workflow on HPC
- Post-Analysis for Model Tuning Experiments
- Using Multicore CPUs or GPUs for Keras Computation
- Dealing with Issues in Data and Model Training
- Deep Learning in the Real World
Glossary
- Activation Function
- A mathematical function that introduces non-linearity into a neural network (NN) model.
- Backpropagation
- An algorithm that updates/corrects the weights so as to bring the predicted outcome closer to the expected outcome.
- Features
- The attributes of data used for training and testing ML algorithms.
- Forward Propagation
- Calculates the output based on the current weights (and biases).
- Hidden Layer
- A layer of (artificial) neurons between the input and output layers. Hidden layers introduce non-linearity to the model.
- Hyperparameters
- “Settings” or variables that are set beforehand and that control the learning process of a model. Hyperparameters govern how the model learns and adjusts the parameters. Hyperparameters for a neural network include the number of layers, the number of neurons in each layer, the learning rate, the activation function, the optimizer, and many more (several of these appear together in the Keras sketch after this glossary).
- Inference
- Using a trained model to make predictions, or “inferences,” on new, unseen data; in other words, deploying the model.
- Input Layer
- The first layer of a neural network that receives the input data and passes it to the next layer.
- Loss Function
- Measures how well a model’s (current) output/predictions compare with the actual target values (also referred to as ground-truth labels). Also known as a cost or error function.
- Optimizer
- An algorithm used to iteratively improve the model during training. It adjusts the parameters (weights and biases) according to the loss.
- Output Layer
- The final layer in a neural network that generates the prediction or output.
- Parameters
- Internal variables that are adjusted during training; refers to the weights and biases.
- Shape
- The shape of a tensor or vector refers to its dimensionality. For a 1-D tensor (a vector), the shape is the sequence length. Also referred to as “size.”
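Many of these terms come together in just a few lines of Keras. The following is a minimal sketch rather than course code; the synthetic data, layer sizes, and the choice of the Adam optimizer are assumptions made for illustration.

```python
import numpy as np
from tensorflow import keras

# Synthetic data: 1000 samples with 20 features each (illustrative only).
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")    # binary ground-truth labels

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),              # input layer: shape matches the features
    keras.layers.Dense(16, activation="relu"),    # hidden layer with a non-linear activation
    keras.layers.Dense(1, activation="sigmoid"),  # output layer for binary classification
])

# The optimizer adjusts the parameters (weights and biases) according to the loss function.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training: forward propagation computes predictions, the loss function scores them
# against the labels, and backpropagation updates the weights. The number of epochs
# and the batch size are hyperparameters.
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Inference: apply the trained model to new, unseen data.
predictions = model.predict(X[:5])
```

Here, model.summary() would also report the total number of trainable parameters, i.e., the weights and biases of the two Dense layers.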
Further Reading
Other Courses
- Howard, Jeremy. “Practical Deep Learning for Coders.” 2022. https://course.fast.ai/.
  A course that teaches how to apply deep learning and machine learning to practical problems. This course assumes some coding experience.
- Ng, Andrew. “AI for Everyone.” Coursera. https://www.coursera.org/learn/ai-for-everyone/.
  A good overview course on AI that is not too technical.
Building Neural Networks
- “Pipelines, Mind Maps and Convolutional Neural Networks.” Towards Data Science. https://towardsdatascience.com/pipelines-mind-maps-and-convolutional-neural-networks-34bfc94db10c.
  This article describes the discipline a data scientist exercised over himself and his impulses to reach his end goal more efficiently. He used a “mind map” to keep track of what he had changed in the network (as well as the effect of each change) to achieve a better-performing network.
Optimizers
- Amananandrai. “10 Famous Machine Learning Optimizers.” Dev, May 3, 2024. https://dev.to/amananandrai/10-famous-machine-learning-optimizers-1e22.
- Zohrevand, A., and Z. Imani. “An Empirical Study of the Performance of Different Optimizers in the Deep Neural Networks.” 2022 International Conference on Machine Vision and Image Processing (MVIP), Ahvaz, Iran, 2022, pp. 1-5. doi: 10.1109/MVIP53647.2022.9738743. https://ieeexplore.ieee.org/document/9738743.
- “Linear Regression: Gradient Descent.” Google Machine Learning Education: Machine Learning Crash Course, 8 Nov. 2024. https://developers.google.com/machine-learning/crash-course/linear-regression/gradient-descent.
- Ruder, Sebastian. “An Overview of Gradient Descent Optimization Algorithms.” Ruder.io, 2016. arXiv preprint arXiv:1609.04747. https://www.ruder.io/optimizing-gradient-descent/#adagrad.
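As a complement to the references above, here is a minimal sketch of the plain gradient descent update rule they discuss, applied to a one-variable linear regression; the toy data, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 100)

w, b = 0.0, 0.0       # parameters: weight and bias
learning_rate = 0.1   # a hyperparameter

for _ in range(2000):
    y_pred = w * x + b                  # forward pass
    error = y_pred - y
    grad_w = 2.0 * np.mean(error * x)   # gradient of mean squared error w.r.t. w
    grad_b = 2.0 * np.mean(error)       # gradient of mean squared error w.r.t. b
    w -= learning_rate * grad_w         # step against the gradient
    b -= learning_rate * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")      # should approach 2 and 1
```

The optimizers surveyed in these readings (Adagrad, Adam, and others) refine this same loop, mainly by adapting the learning rate or adding momentum.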