Effective Deep Learning Workflow on HPC
Overview
Teaching: 15 min
Exercises: 50 min
Questions
How do we train and tune deep learning models effectively using HPC?
How do we convert a Jupyter notebook to a Python script?
How can we perform post-analysis of HPC computations using Jupyter?
Objectives
Switch the mindset from a single-tasked model development workflow to the ‘dispatch and analyze’ mode which offloads heavy-duty computations to HPC.
Perform a full conversion of a Jupyter Notebook to a Python script.
Analyze and aggregate results from HPC model tuning jobs on Jupyter.
Introduction
Motivation
In the previous episode, we introduced the model tuning procedure in a Jupyter notebook. As you may recall, the process was painfully time-consuming, because we had to wait for one model training to complete before we could start another. Not only does this lead to long wait times to finish all the required trainings, but the approach also becomes impractical in real-world deep learning, where each model training could take hours or even days to complete. In this episode, we introduce an alternative workflow to tune a deep learning model on an HPC system. With HPC, model trainings can be submitted and executed in parallel. We will show that this approach greatly reduces the total human time needed to try out all the various hyperparameter combinations.
There are several potential issues with using Jupyter Notebook, especially for well-established neural network code.
One is that the code must be executed one cell at a time; the ability to execute blocks/cells of code non-linearly is more helpful when developing and testing code than when running established code.
Another is that the code must be re-run from the beginning every time you close and reopen the notebook, since variable values are not saved between sessions.
A third is that Jupyter Notebook cannot run multiple runs (e.g. network trainings) in parallel: each run has to be queued one after another, even though there is no dependency relationship between the runs.
For these reasons, it is often more efficient to use Python and batch scripting instead of relying on Jupyter Notebook.
Specifically, switching to a Python script will improve throughput and turnaround time for the Baseline_Model notebook introduced in a prior episode.
Before switching, make sure to establish a working machine-learning pipeline.
Batch Scheduler
One huge benefit of converting a Jupyter notebook into a Python script is that it enables batch (non-interactive) scheduling of the script: the user can launch neural network trainings through a batch script. Real machine-learning work requires many repetitive experiments, each of which may take a long time to complete. The batch scheduler allows many experiments to be carried out in parallel, yielding more results, faster.
HPC is well suited for this type of workflow; in fact, it is at its most effective when used this way. Here are the key components of the "batch" way of working:
- A job scheduler (such as the SLURM job scheduler on HPC) to manage the jobs and run them on the appropriate resources;
- The machine learning script written in Python, which will read inputs from files and write outputs to files and/or standard output;
- The job script to launch the machine learning script in the non-interactive environment (e.g. an HPC compute node);
- A way to systematically repeat the experiments with some variations. This can be done by adding command-line arguments for the (hyper)parameters that will be varied for each experiment.
Since the jobs (where each job is one or more experiments) will be run in parallel, keep in mind the following:
- The Python scripts will read the same input file(s), but each job must work (and write its output) in its own directory. This keeps the results organized for post-processing and post-analysis, and ensures that parallel jobs/experiments do not clash.
- Use proper job names. Each experiment should be assigned a unique job name, which is very useful for organization, troubleshooting, and post-processing analysis.
The SLURM script can be modified to combine these two ideas: it can pass the unique job name to the Python script, which then creates a working directory that includes the (unique) job name.
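As a minimal sketch of this idea (the output file name below is illustrative), the Python script can pick up the job name set by the batch script from SLURM's standard environment variables and create a dedicated working directory for its outputs:
import os

# SLURM exports the job name (from `sbatch -J <name>` or `#SBATCH -J <name>`)
# as the environment variable SLURM_JOB_NAME; fall back to a default when
# running interactively outside the scheduler.
job_name = os.environ.get("SLURM_JOB_NAME", "interactive-test")

# Give each experiment its own working directory named after the job, so that
# parallel jobs never clash and results stay organized for post-processing.
work_dir = os.path.join(os.getcwd(), job_name)
os.makedirs(work_dir, exist_ok=True)

# All outputs then go under this directory (file name here is illustrative).
history_path = os.path.join(work_dir, "model_history.csv")
print("Results will be written to:", work_dir)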
A Baseline Model to Tune on HPC
In this episode, we will demonstrate the process of converting a Jupyter notebook
to a Python script using the Baseline neural network model for the sherlock_18apps
classification.
We have a Jupyter notebook prepared, Baseline_Model.ipynb, which contains a complete machine-learning pipeline: data loading and preparation, followed by neural network model definition, training, and saving.
The code in this notebook is essentially the same as the code that defines the Baseline Model in the previous episode of this module ("Tuning Neural Network Models for Better Accuracy").
The saved model can be reloaded later to deploy it for the actual application.
The Baseline Model
As a reminder, the Baseline Model for tuning the
sherlock_18apps
classifier is defined with the following hyperparameters:
- one hidden layer with 18 neurons
- learning rate of 0.0003
- batch size of 32
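The NN_Model_1H function itself is provided by the sherlock_ML_toolbox module used throughout this module, so its exact definition is not repeated here. As a rough sketch of what it builds (mirroring the NN_Model_XH variant shown later in this episode, and assuming 19 input features and 18 output classes), it looks roughly like this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

def NN_Model_1H(hidden_neurons, learning_rate):
    """Sketch of the one-hidden-layer dense classifier for sherlock_18apps."""
    model = Sequential([
        # 19 input features (as prepared by the toolbox), one dense hidden layer
        Dense(hidden_neurons, activation='relu', input_shape=(19,)),
        # 18 output classes, one per application
        Dense(18, activation='softmax'),
    ])
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Baseline hyperparameters: 18 hidden neurons, learning rate 0.0003
model_1H = NN_Model_1H(18, 0.0003)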
Steps to Convert a Jupyter Notebook to a Python Script
The first step is to convert the Jupyter notebook to a Python script. There are several ways to convert a Jupyter notebook into a Python script:
- Manual process: Go cell-by-cell through the Jupyter interface, copying and pasting the relevant code cells into a blank Python script. (Both JupyterLab and Jupyter Notebook support editing a Python script.) This can be especially useful when there is a lot of convoluted code or when there are multiple iterations of the same code in the same notebook. While this allows for very intentional and precise selection of code segments, it can be time consuming and prone to manual errors.
- Automatic conversion: Use the jupyter nbconvert command. This command extracts all the code in a given notebook into a Python script. The script will generally need to be edited to account for the differences between the interactive Jupyter platform and non-interactive execution in Python.
Using nbconvert
Ensure that nbconvert is available as a library/package. On Wahab, make sure that a module providing nbconvert is loaded (you can module load tensorflow-cpu/2.6.0).
This is an example that converts Baseline_Model.ipynb to Baseline_Model.py using nbconvert.
crun jupyter nbconvert --to script Baseline_Model.ipynb
Cleaning up the Code
If selecting to use the nbconvert
option, make sure to make adjustments to clean up the
code and make corrections.
- Remove comments such as # In[1]:, # In[2]:, etc., which mark the separation of the cells.
- Decide which comments to retain: nbconvert also converts the text (Markdown) blocks of the notebook into comments, so verify that the ones you keep are still useful as documentation.
- Remove any unnecessary (code) cells that have been commented out.
- Remove any Jupyter-notebook-exclusive commands/code, such as %matplotlib inline.
- Retain any commands used in the notebook to view information, but wrap them in print(). Commands such as head() and tail() display their output automatically in Jupyter, but they will not print anything from a Python script unless surrounded by print().
Also, note that the previously saved cell outputs are not included (not even as a comment). This is fine, since it is the output from a previous run.
Editing and Adjusting the Code
Remove all interactive and GUI input/output. Input prompts should be modified to read the input file.
Any outputs should be saved to a unique working directory and/or be uniquely named.
This mindfulness will assist in allowing the output to be machine processable later.
This includes images - changing matplotlib.show()
to savefig()
and other valuable outputs (e.g. tables) - saved as files (e.g. CSV).
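For example, here is a minimal sketch of how interactive inspection and plotting calls can be converted (the output file names and the CPU_USAGE column choice are illustrative):
import pandas as pd
import matplotlib
matplotlib.use('Agg')            # use a non-GUI backend on a compute node
import matplotlib.pyplot as plt

df = pd.read_csv("sherlock/sherlock_18apps.csv")

# In Jupyter, `df.head()` displays by itself; in a script it must be printed.
print(df.head())

# In Jupyter: plt.show()  ->  in a script: save the figure to a file instead.
df["CPU_USAGE"].plot(kind="hist")
plt.savefig("cpu_usage_hist.png")

# Valuable tables should likewise be written to files (e.g. CSV) for later use.
df.describe().to_csv("feature_summary.csv")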
Example Using nbconvert: Leading to Baseline_Model.py
Exercise: Converting Baseline_Model.ipynb to Baseline_Model.py
1. Utilize the nbconvert command explained above.
Solution
crun jupyter nbconvert --to script Baseline_Model.ipynb
2. Header and Importing Python Libraries Sections
Remove all unnecessary comments in comment header and import statement sections. Also, remove the unnecessary Jupyter notebook lines.
Solution
#!/usr/bin/env python
# coding: utf-8

# # Demo Notebook for Converting a Jupyter Notebook to a Python Script
#
# This notebook is intended to demonstrate the conversion of itself to a full-fledged Python script that can be submitted to an HPC job scheduler.
#
# This notebook contains a full code to train the "baseline model" in the model tuning process.
#
# The code in this notebook will train the neural network model to classify among 18 different mobile phone apps using the `sherlock_18apps` dataset. The baseline model has one hidden layer with 18 neurons, with learning rate 0.0003 and batch size 32.

# ## 1. Loading Python Libraries

import os
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

# CUSTOMIZATIONS (optional)
np.set_printoptions(linewidth=1000)

# tools for deep learning:
import tensorflow as tf
import tensorflow.keras as keras

# Add to the sys path the path to ML toolbox files
current_dir = os.getcwd()
parent_dir = os.path.dirname(current_dir).split("-devel/devel/")[0]
parent_dir = os.path.dirname(parent_dir + '-devel/devel/sherlock/19F17C/')
sys.path.append(parent_dir)

# Import said ML toolbox files
from sherlock_ML_toolbox import *
3. Clean up the Loading Sherlock Applications Data section.
Remove the unnecessary comments to clean up the code (the cell-separation comments and the RUNIT comments). Also, wrap the data-inspection commands (e.g. head(), tail(), info()) in print() so that their output appears when the script runs.
Solution
# # 2. Loading Sherlock Applications Data

# Load in the pre-processed SherLock data and then split it into train and validation datasets
datafile = "sherlock/sherlock_18apps.csv"
df_orig, df, labels, df_labels_onehot, df_features = load_prep_data_18apps(datafile, print_summary=False)
train_features, val_features, train_labels, val_labels, train_L_onehot, val_L_onehot = split_data_18apps(df_features, labels, df_labels_onehot)

# Though we verified the loading and splitting of the pre-processed data, the following cells are just additional verification.
print(df.head(10))
print(df.tail(10))
print(df_features.info())
print(df_features.head(10))
print("- training dataset: %d records" % (len(train_features),))
print("- validation dataset: %d records" % (len(val_features),))
sys.stdout.flush()

print("Now the feature matrix is ready for machine learning!")
print(train_features.head(10))

app_counts = df.groupby('ApplicationName')['CPU_USAGE'].count()
print(app_counts)
print("Num of applications:", len(app_counts))

print(train_L_onehot.head())
4. Defining the Baseline Model (NN_Model_1H method), Training, Graphing Results, and Saving.
Remove the unnecessary cell separation comments.
Solution
# ## 3. The Baseline Model

model_1H = NN_Model_1H(18, 0.0003)

model_1H_history = model_1H.fit(train_features,
                                train_L_onehot,
                                epochs=10, batch_size=32,
                                validation_data=(val_features, val_L_onehot),
                                verbose=2)

history_file = 'model_1H_history.csv'
plot_file = 'loss_acc_plot.png'
model_file = 'model_1H.h5'

history_df = pd.DataFrame(model_1H_history.history)
history_df.to_csv(history_file, index=False)

combine_plots(model_1H_history,
              plot_loss, plot_acc,
              loss_epoch_shifts=(0, 1), loss_show=False,
              acc_epoch_shifts=(0, 1), acc_show=False,
              save_path=plot_file)

model_1H.save(model_file)
SLURM Batch Script Review
To create the SLURM batch script, we need to define the SBATCH directives, the module loading, any environment variables, and then the command(s) to execute.
The SBATCH directives
This is the section where every line starts with #SBATCH.
These are the first lines of the script (not including the line #!/bin/bash).
We set the job name to Baseline_Model, and the output file name (specified using -o) is composed of two SLURM filename patterns: %x for the job name and %j for the job allocation number (job ID) of the running job.
The partition (-p) is set to main (the default partition, see Partition Policies).
The last two lines of the section, #SBATCH -N 1 and #SBATCH -n 1, specify the computing resources: the first requests one node, and the second requests a single task.
The job is given a maximum wall time of 1 hour (#SBATCH -t 1:00:00).
Aside from the #SBATCH directives, anything else starting with # is treated as a comment, much like in Python.
Module Loading and Environment Variables
This is the section for loading modules (i.e. software packages).
In this case, we load the default container_env and tensorflow-gpu/2.6.0 modules.
Variables work like any other variable in Linux and must be referenced using a $.
The CRUN and CRUN_ENVS_FLAGS variables are defined to make the executable line easier to read.
Save the script below as Baseline_Model.slurm.
#!/bin/bash
#SBATCH -J Baseline_Model
#SBATCH -o %x.out%j
#SBATCH -p main
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
module load container_env
module load tensorflow-gpu/2.6.0
CRUN=crun.tensorflow-gpu
CRUN_ENVS_FLAGS="-p $HOME/envs/default-tensorflow-gpu-2.6.0"
$CRUN $CRUN_ENVS_FLAGS python3 Baseline_Model.py
Setting up the Environment (Non-Wahab)
Using HPC resources other than ODU's Wahab will require changes to parts of the SLURM file above. For example, consult your HPC team about the correct partition and about any default/pre-made modules.
Running the SBATCH Script
Run the script from the terminal using the sbatch command with the file name.
sbatch path/to/file/Baseline_Model.slurm
Post-Analysis on Baseline_Model.py
After running Baseline_Model.slurm, use post_analysis_Baseline_Model.ipynb (recreated below) to run analysis on the results.
Step 0: Import modules
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Step 1: Discovery of all the results
## We know all of the output file names (because we set them)
## So for now, just use the set file paths
csvFile = './model_1H_history.csv'
imgPath = "./loss_acc_plot.png"
modelPath = "./model_1H.h5"
Step 2: Load the results
# Load in the csv file containing the loss, accuracy, val_loss, and val_accuracy
df = pd.read_csv(csvFile)
print(df)
loss accuracy val_loss val_accuracy
0 1.065620 0.706739 0.529197 0.889391
1 0.390929 0.915839 0.302021 0.933847
2 0.258330 0.938712 0.224719 0.942636
3 0.201338 0.950046 0.180583 0.956881
4 0.166800 0.962922 0.152426 0.967940
5 0.142645 0.968859 0.131611 0.970174
6 0.124644 0.972938 0.115774 0.975703
7 0.110696 0.977213 0.104889 0.977919
8 0.099604 0.979328 0.094146 0.979823
9 0.090591 0.980184 0.085793 0.980354
# Collect the plots for each job.
# Here, we can use matplotlib to import the saved images.
from matplotlib import pyplot as plt
from matplotlib import image as mpimg
image = mpimg.imread(imgPath)
plt.title("Baseline_Model")
fig = plt.imshow(image)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
plt.show()
## Load in the model, though for this case, we do not need this
# from tensorflow import keras
# model_1H = keras.models.load_model("./model_1H.h5")
Step 3: Validation phase: visualization of training data
1) Inspect whether the training went as expected
2) Visually inspect for anomalies
3) Visually or numerically check for convergence (e.g. check the last 4-5 epochs, what the slope is like; any fluctuations?)
Most of Step 3 can be validated using the graphs:
The training did behave as expected.
The training loss is decreasing as the number of epochs increases, while the accuracy increases.
There are no major anomalies (such as any random spikes or dips).
It looks like both the accuracy and loss functions are starting to converge.
Therefore, this model behaves as expected.
Step 4: Analysis phase: visualizing the results
The post-analysis phase is for comparing among experiments. Since this post-analysis script is for one model’s results, this step can be skipped.
Running Experiments Utilizing Command Line Arguments (NN_Model_1H)
We just created a Python and SLURM script that runs one model with fixed hyperparameters. However, we want to be able to utilize batch scripts to run multiple experiments at the same time.
One way to accomplish this is by utilizing command line arguments. Command line arguments will add the capability of defining hyperparameters in the SLURM file (or on the terminal). This will allow the user to create one (flexible) Python script with multiple SLURM scripts.
With this one-to-many convention, we must establish an organization method. We can use the naming convention established in the previous episode:
MODEL_DIR = model_1H<HIDDEN_NEURONS>N_lr<LEARNING_RATE>_bs<BATCH_SIZE>_e<EPOCHS>
The python script will create an output directory and give it this MODEL_DIR name.
This output directory will contain loss_acc_plot.png
, model_history.csv
, model_metadata.json
and model_weights.h5
.
The JSON file will be explained shortly, but for now,
just assume it is another (unique) output for one experiment
(i.e. run of the Python script).
The MODEL_DIR name will also be used to name the SLURM output file, along with the job number.
So, each experiment/run will have its own corresponding output directory and SLURM output file, named according to the hyperparameters used.
Implementing Command Line Arguments
Duplicate Baseline_Model.py
and rename the new copy to be NN_Model_1H.py
.
Make the following changes to the code.
0) (Optional) Change the heading of the file.
"""
NN_Model_1H.py
Python script for model tuning experiments.
Running this script requires four arguments on the command line:
python3 NN_Model_1H.py HIDDEN_NEURONS LEARNING_RATE BATCH_SIZE EPOCHS
"""
1) Import the additional libraries.
import time
import json
2) Define the hyperparameters at the top and assign them
the command line argument values using sys.argv
.
We create a standard model output directory name that
is based on the hyperparameter values.
As explained above, having a standard name assists in maintaining the
experiments and during post-analysis and post-processing.
Next, print the hyperparameters to the output file.
HIDDEN_NEURONS = int(sys.argv[1])
LEARNING_RATE = float(sys.argv[2])
BATCH_SIZE = int(sys.argv[3])
EPOCHS = int(sys.argv[4])
# Create model output directory
MODEL_DIR = "model_1H" + str(HIDDEN_NEURONS) + "N_lr" + str(LEARNING_RATE) + "_bs" + str(BATCH_SIZE) + "_e" + str(EPOCHS)
if not os.path.exists(MODEL_DIR):
    os.makedirs(MODEL_DIR)
print()
print("Hyperparameters for the training:")
print(" - hidden_neurons:", HIDDEN_NEURONS)
print(" - learning_rate: ", LEARNING_RATE)
print(" - batch_size: ", BATCH_SIZE)
print(" - epochs: ", EPOCHS)
print()
3) Then, change all (variable) references to the hardcoded hyperparameters.
model_1H = NN_Model_1H(HIDDEN_NEURONS, LEARNING_RATE)
model_1H_history = model_1H.fit(train_features,
train_L_onehot,
epochs=EPOCHS, batch_size=BATCH_SIZE,
validation_data=(val_features, val_L_onehot),
verbose=2)
4) Next, change the output file names using the model directory path created above. This will allow all of the output files to share a common name and be contained in their respective model directory.
history_file = os.path.join(MODEL_DIR, 'model_history.csv')
plot_file = os.path.join(MODEL_DIR, 'loss_acc_plot.png')
model_file = os.path.join(MODEL_DIR, 'model_weights.h5')
metadata_file = os.path.join(MODEL_DIR, 'model_metadata.json')
5) Then, add the additional step of creating the metadata. Using JSON allows the user to easily read and write the structured metadata. It saves name-value pairs that can be queried.
# Because of the terseness of Keras API, we create our own definition
# of a model metadata.
# timestamp of the results (at the time of saving)
model_1H_timestamp = time.strftime('%Y-%m-%dT%H:%M:%S%z')
# last epoch results is a key-value pair (i.e. a Series)
last_epoch_results = history_df.iloc[-1]
model_1H_metadata = {
# Our own information
'dataset': 'sherlock_18apps',
'keras_version': tf.keras.__version__,
'SLURM_JOB_ID': os.environ.get('SLURM_JOB_ID', None),
'timestamp': model_1H_timestamp,
'model_code': MODEL_DIR,
'optimizer': 'Adam',
# the number of hidden layers will be deduced from the length
# of the hidden_neurons array:
'hidden_neurons': [HIDDEN_NEURONS],
'learning_rate': LEARNING_RATE,
'batch_size': BATCH_SIZE,
'epochs': EPOCHS,
# Some results
'last_results': {
'loss': round(last_epoch_results['loss'], 8),
'accuracy': round(last_epoch_results['accuracy'], 8),
'val_loss': round(last_epoch_results['val_loss'], 8),
'val_accuracy': round(last_epoch_results['val_accuracy'], 8),
}
}
with open(metadata_file, 'w') as F:
    json.dump(model_1H_metadata, F, indent=2)
Metadata
Recall from the previous episode that metadata should be collected during each experiment. What metadata to collect is decided by the user and should contain information the user finds helpful. This should include information about the compute environment and information to help with troubleshooting, such as keras_version and timestamp. It should also contain information about the experiment itself, such as the hyperparameters. It can also contain "summary" information, such as last_results, which saves the results of the last epoch.
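Because the metadata is plain JSON, it can be read back and queried with a few lines; a minimal sketch, assuming a metadata file produced by the script above (the directory path is illustrative):
import json

# Path is illustrative; each experiment writes its own model_metadata.json.
with open("model_1H18N_lr0.0003_bs32_e10/model_metadata.json") as f:
    metadata = json.load(f)

print("Model code:          ", metadata["model_code"])
print("Hidden neurons:      ", metadata["hidden_neurons"])
print("Learning rate:       ", metadata["learning_rate"])
print("Batch size / epochs: ", metadata["batch_size"], "/", metadata["epochs"])
print("Final val. accuracy: ", metadata["last_results"]["val_accuracy"])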
Creating SLURM Script with Command Line Arguments
First, duplicate the Baseline_Model.slurm
file and rename it to NN_Model_1H.slurm
.
Then, change the name of the job:
#SBATCH -J model-tuning-1H
Then, add the block that will take in the command-line arguments.
Note that the actual values are not set here: they can be set in a separate submission script, or passed with the sbatch command (on the terminal).
Make sure that the order of the arguments in the Python and SLURM scripts match!
Syntactically, the values can be passed in a couple of different ways, for example as named flags (--variableName value) or, as done here, as positional arguments ($1, $2, ...).
# HYPERPARAMS_LIST contains four arguments,
# which must be given in the command-line argument:
# * the number of hidden neurons
# * the learning rate
# * the batch size
# * the number of epochs to train
HIDDEN_NEURONS=$1
LEARNING_RATE=$2
BATCH_SIZE=$3
EPOCHS=$4
Then, add the hyperparameters to the last line.
$CRUN $CRUN_ENVS_FLAGS python3 NN_Model_1H.py "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
Run NN_Model_1H.slurm by passing the arguments on the command line.
In the terminal, submit NN_Model_1H.slurm. In this example, the number of hidden neurons in the first layer is 18, the learning rate is 0.0003, the batch size is 32, and the number of epochs is 10.
sbatch NN_Model_1H.slurm 18 0.0003 32 10
Since these were the same hyperparameters used in
Baseline_Model.py
, the results should look the same.
Using a Master Launcher Script to Launch a Series of Jobs for a Hyperparameter Scanning Task
We can use a master launcher script with a "for loop" to launch a series of jobs for a certain hyperparameter scanning task. This can be used to replicate the experiments from the earlier episode, Tuning Neural Network Models for Better Accuracy.
Create another directory named scan-hidden-neurons-02
and create a submit-scan-hidden-neurons.sh
file that will submit multiple jobs that vary the number of hidden neurons
(in the first layer).
Make sure to also copy the NN_Model_1H.slurm
and NN_Model_1H.py
files.
Note, for the NN_Model_1H.py
file, change the metadata section’s model code to
'model_code': "1H"+str(HIDDEN_NEURONS)+"N",
since this is the hyperparameter that is changing.
To do so, define the hyperparameters that do not vary at the top of the script.
Then, create a for loop over the values of the varying hyperparameter.
Next, define JOBNAME, which will be used as the job name and hence the name of the SLURM output file; it follows the same naming convention as the output directory names.
Then, use the sbatch command to call NN_Model_1H.slurm
with the given variable values.
#!/bin/bash
#HIDDEN_NEURONS= -- varied
LEARNING_RATE=0.0003
BATCH_SIZE=32
EPOCHS=30
for HIDDEN_NEURONS in 1 2 4 8 12 18 40 80; do
JOBNAME=model_1H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
echo "Training for hyperparams:" "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
sbatch -J "$JOBNAME" NN_Model_1H.slurm "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
done
For this script, 8 different jobs will be spawned, each with a
0.0003 learning rate, batch size of 32, and 30 epochs.
Each of the 8 jobs will have a different number of hidden neurons (in the first layer).
So, the first job will call NN_Model_1H.slurm with a 0.0003 learning rate, batch size of 32, 30 epochs, and 1 hidden neuron;
the second job will call NN_Model_1H.slurm with the same learning rate, batch size, and epochs, but with 2 hidden neurons; and so on.
Run the experiments.
Additional Epochs
To get a better understanding of how the hyperparameters affect the accuracy of the model, additional epochs were added: the number of epochs increases from 10 to 30. Note that this change is already reflected in the script above.
Additional Hidden Neurons
Perform the following additional experiments: use NN_Model_1H.py to create two different models/experiments with 256 and 512 hidden neurons.
Solution
There are two different ways to do this. You can use the command line:
sbatch NN_Model_1H.slurm 256 0.0003 32 30
sbatch NN_Model_1H.slurm 512 0.0003 32 30
OR
Modify the for loop in submit-scan-hidden-neurons.sh to also run 256 and 512 hidden neurons.
for HIDDEN_NEURONS in 256 512; do
Post-Processing Results from submit-scan-hidden-neurons.sh (post_processing.ipynb)
After running submit-scan-hidden-neurons.sh, use post_processing.ipynb (also reproduced below) to analyze the results.
Step 0: Import modules
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import glob # For directory/file discovery
import json # For reading in the model's metadata stored in JSON files
from itertools import chain # used for flattening list of lists into 1-D list
%matplotlib inline
Step 1: Discover the results: Discovering output directories using glob
# Get the directories that contain the results for each model.
# The output directory will match the dirPath.
# If the model ran correctly, it should contain a .json file in it.
dirListJSON = [] # a list that contains the folders with json metadata
dirPath = "../model_tuning_batch_hpc/scan-*/model_*H*N*/"
# Glob will traverse the directories and search for a directory
# that matches the dirPath.
# In this case, one that is in model_tuning_batch_hpc parent directory
# and is contained in one of the experimental directories (that start with `scan-`)
# and is contained in a directory that starts with `model_` and contains
# `H` and `N`.
# Then, it only adds the directory path if the model produced output (contains a json file).
dirListTemp = glob.glob(dirPath)
for dirPathI in dirListTemp:
    if glob.glob(dirPathI + "model_metadata.json"):
        dirListJSON.append(dirPathI)
print(len(dirListJSON))
Step 2: Load the results:
Step 2.1: Create the DataFrame and populate it with the model metadata and model history
# Read in Json files and load metadata.
numRows = [] # used to hold the number of epochs per experiment
metaData = []
history = []
for dirI in dirListJSON:
    # For each directory, read in the json file and get the necessary metadata information
    with open(dirI + "model_metadata.json") as f:
        data = json.load(f)
    hidden_neurons = data['hidden_neurons']
    learning_rate = data['learning_rate']
    batch_size = data['batch_size']
    epochs = data['epochs']
    jobID = data['SLURM_JOB_ID']
    model_type = data['model_code']
    if 'neuron' in dirI:
        model_type += "-neuron"
    elif 'learning' in dirI:
        model_type += "-lr"
    elif 'scan-batch' in dirI:
        model_type += "-batch"
    elif 'layer' in dirI:
        model_type += "-layers"
    numRows.append(int(epochs))

    # Create a list with the metadata for model_type, jobID, hidden neurons list (one value for the number of neurons in
    # each layer), learning rate, and batch size.
    toAdd = [str(model_type), int(jobID), tuple(hidden_neurons), float(learning_rate), int(batch_size)]
    for i in range(epochs):
        metaData.append(toAdd)

    # Now, get the model history
    history.append(pd.read_csv(dirI + "/model_history.csv"))
# Initialize a pre-allocated DataFrame and assign the values.
df = pd.DataFrame(np.zeros((sum(numRows), 10), dtype=float), columns = ['Model_Type', 'job_ID', 'neurons', 'learning_rate', 'batch_size','epoch', "loss", "accuracy", "val_loss", "val_accuracy"])
# Now, assign the values in the DataFrame.
# The first 5 columns are the metadata information.
df['Model_Type'] = [sublist[0] for sublist in metaData] # the ModelType column is the first element in the metaData list
df['job_ID'] = [sublist[1] for sublist in metaData]
df['neurons'] = [sublist[2] for sublist in metaData]
df['learning_rate'] = [sublist[3] for sublist in metaData]
df['batch_size'] = [sublist[4] for sublist in metaData]
# The 6th column is the epochs.
tempEpochs = [list(range(x)) for x in numRows] # for each number of epochs, create a ranged list
df['epoch'] = list(chain.from_iterable(tempEpochs)) # flatten that list into a 1-D list
# the 7th-10th columns are the history information from the CSV file
lossTemp = [list(sublist['loss']) for sublist in history] # the loss column is the first element in the history list
df["loss"] = list(chain.from_iterable(lossTemp)) # flatten the list
accTemp = [list(sublist['accuracy']) for sublist in history] # the accuracy column is the 2nd element in the history list
df["accuracy"] = list(chain.from_iterable(accTemp)) # flatten the list
valLossTemp = [list(sublist['val_loss']) for sublist in history]
df["val_loss"] = list(chain.from_iterable(valLossTemp)) # flatten the list
valAccTemp = [list(sublist['val_accuracy']) for sublist in history]
df["val_accuracy"] = list(chain.from_iterable(valAccTemp)) # flatten the list
# Make sure the data types are correct
df['epoch'] = df['epoch'].astype(int)
print(df)
Step 2.2: Save the new DataFrame
df.to_csv("post_processing_all_hpc_batch.csv") # save the csv file
Step 2.3: Recreate the model history graphs (to be used for validation)
for i in range(len(history)):
    model_historyI = history[i]
    print(model_historyI)
    # metaData holds one row per epoch, so the rows for experiment i start
    # at index sum(numRows[:i]); use that row to label the plot.
    meta_row = metaData[sum(numRows[:i])]
    nEpochs = len(model_historyI)
    plt.figure()
    plt.plot(range(0, nEpochs), model_historyI.iloc[:, 1], label='train_accuracy')
    plt.plot(range(1, nEpochs + 1), model_historyI.iloc[:, 3], label='val_accuracy')
    plt.title('Model Accuracy: ' + meta_row[0])
    plt.ylabel('Accuracy')
    plt.xlabel('Epochs')
    plt.legend()
    plt.show()
Run post_processing.ipynb
This should result in a CSV file (post_processing_all_hpc_batch.csv) and a printed DataFrame like the one below.
Solution
       Model_Type   job_ID neurons  learning_rate  batch_size  epoch      loss  accuracy  val_loss  val_accuracy
0    1H18N-layers  4274921    [18]         0.0003          32      0  1.063700  0.705481  0.491797      0.891515
1    1H18N-layers  4274921    [18]         0.0003          32      1  0.372994  0.915958  0.298905      0.933023
2    1H18N-layers  4274921    [18]         0.0003          32      2  0.261755  0.938790  0.233927      0.941592
3    1H18N-layers  4274921    [18]         0.0003          32      3  0.213864  0.948938  0.197564      0.953841
4    1H18N-layers  4274921    [18]         0.0003          32      4  0.183789  0.956024  0.172288      0.956313
..            ...      ...     ...            ...         ...    ...       ...       ...       ...           ...
745   bs128-batch  4279395    [18]         0.0003         128     25  0.081204  0.982661  0.080848      0.981324
746   bs128-batch  4279395    [18]         0.0003         128     26  0.078460  0.983745  0.078027      0.981635
747   bs128-batch  4279395    [18]         0.0003         128     27  0.075721  0.984633  0.075335      0.985590
748   bs128-batch  4279395    [18]         0.0003         128     28  0.073381  0.985320  0.073213      0.981764
749   bs128-batch  4279395    [18]         0.0003         128     29  0.071049  0.985741  0.070776      0.985224
Step 3: Validation Phase: visualization of training data
1) Inspect whether the trainings went as expected
2) Visually inspect for anomalies
3) Visually or numerically check for convergence (e.g. check the last 4-5 epochs, what the slope is like; any fluctuations?)
This is the step where one can easily look at the graphs and identify hyperparameters that produce bad results. For example, the graph produced by the model with a 0.1 learning rate does not look good: both the training loss and validation loss fail to exhibit the typical trend and do not appear to stabilize at all during the 30 epochs, suggesting that this is not a good model.
Learning Rate and Batch Size Experiments
Learning Rate Experiment
To recreate the experiment (test with learning rates of 0.0003, 0.001, 0.01, and 0.1), create a directory named scan-learning-rate.
Make sure that this directory includes the sherlock data directory, NN_Model_1H.slurm, and NN_Model_1H.py.
Next, create submit-scan-learning-rate.sh.
This script will be similar to submit-scan-hidden-neurons.sh.
#!/bin/bash
HIDDEN_NEURONS=18
#LEARNING_RATE= -- varied
BATCH_SIZE=32
EPOCHS=30
for LEARNING_RATE in 0.0003 0.001 0.01 0.1; do
#JOBNAME=model-tuning-1H${HIDDEN_NEURONS}N
#OUTDIR=model_1H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
JOBNAME=model_1H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
echo "Training for hyperparams:" "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
sbatch -J "$JOBNAME" NN_Model_1H.slurm "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
done
Change the Python script’s model_1H_metadata model code line to the following.
'model_code': "lr" + str(LEARNING_RATE),
Batch Size Experiment
The batch size experiment will be conducted similarly to the learning rate experiment.
To recreate the experiment (test with batch sizes of 16, 32, 64, 128, 512, and 1024), create a directory named scan-batch-size.
Make sure that this directory includes the sherlock data directory, NN_Model_1H.slurm, and NN_Model_1H.py.
Next, create submit-scan-batch-size.sh.
This script will be similar to submit-scan-hidden-neurons.sh.
#!/bin/bash
HIDDEN_NEURONS=18
LEARNING_RATE=0.0003
#BATCH_SIZE= --varied
EPOCHS=30
for BATCH_SIZE in 16 32 64 128 512 1024; do
#JOBNAME=model-tuning-1H${HIDDEN_NEURONS}N
#OUTDIR=model_1H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
JOBNAME=model_1H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
echo "Training for hyperparams:" "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
sbatch -J "$JOBNAME" NN_Model_1H.slurm "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
done
Also, change the model code in the model_1H_metadata section to be
'model_code': "bs"+str(BATCH_SIZE),
Multiple Layer Experiment
The multiple layer experiment requires more changes
to the Python script.
Make a copy of the previous NN_Model_1H.py
and rename it to NN_Model_XH.py
.
This new script will be flexible enough to accept a list of hidden neurons
with a comma separating the number of neurons in each layer.
So, for example, the user can input 18,18,18
which will create a model
with three layers, each containing 18 hidden neurons.
The naming convention stays the same, except that the number before the H represents the number of layers, followed by the number of neurons in each layer separated by N.
So, in the prior example, the output directory would be named model_3H18N18N18N_lr0.0003_bs32_e30.
The Python code will change in the following ways.
After reading in the command-line arguments, create two variables, MULT_LAYERS and NumLayers.
The first is a boolean indicating whether there are multiple layers, and the second is the number of layers.
Note that HIDDEN_NEURONS must now be kept as a string (HIDDEN_NEURONS = sys.argv[1], without the int() conversion used earlier), so that a comma-separated list of layer sizes can be detected and split.
MULT_LAYERS = False
NumLayers = 1
# Create model output directory
# If it has a "," in it, it will have multiple layers
if "," in HIDDEN_NEURONS:
    MULT_LAYERS = True
    HIDDEN_NEURONS_TEMP = HIDDEN_NEURONS.split(',')
    toAdd = "N"
    HIDDEN_NEURONS_DIR = toAdd.join(HIDDEN_NEURONS_TEMP)
    NumLayers = len(HIDDEN_NEURONS_TEMP)
else:
    HIDDEN_NEURONS_DIR = HIDDEN_NEURONS
MODEL_DIR = "model_" + str(NumLayers) + "H" + str(HIDDEN_NEURONS_DIR) + "N_lr" + str(LEARNING_RATE) + "_bs" + str(BATCH_SIZE) + "_e" + str(EPOCHS)
Then, change the NN_Model_1H
function to NN_Model_XH
.
If there are multiple layers, iterate through the list
and create a Dense
layer with that number of hidden neurons.
Note, the first layer needs to be given a specific input shape.
def NN_Model_XH(hidden_neurons, learning_rate):
    """Definition of deep learning model with one or more dense hidden layer(s)"""
    # Use TensorFlow random normal initializer
    random_normal_init = tf.random_normal_initializer(mean=0.0, stddev=0.05)
    model = Sequential()
    if MULT_LAYERS:
        hidden_neurons = hidden_neurons.split(',')
        print("Number of layers: " + str(NumLayers))
        # The first layer should have a given input shape
        model.add(Dense(int(hidden_neurons[0]), activation='relu', input_shape=(19,),
                        kernel_initializer=random_normal_init))  # Hidden Layer
        for i in range(NumLayers - 1):
            model.add(Dense(int(hidden_neurons[i + 1]), activation='relu',
                            kernel_initializer=random_normal_init))  # Hidden Layer
    else:
        model.add(Dense(int(hidden_neurons), activation='relu', input_shape=(19,),
                        kernel_initializer=random_normal_init))  # Hidden Layer
    model.add(Dense(18, activation='softmax',
                    kernel_initializer=random_normal_init))  # Output Layer

    adam_opt = Adam(learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, amsgrad=False)
    model.compile(optimizer=adam_opt,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
Since this uses a XH model, make sure to update the variable names.
model_XH = NN_Model_XH(HIDDEN_NEURONS, LEARNING_RATE)
model_XH_history = model_XH.fit(train_features,
train_L_onehot,
epochs=EPOCHS, batch_size=BATCH_SIZE,
validation_data=(val_features, val_L_onehot),
verbose=2)
history_df = pd.DataFrame(model_XH_history.history)
history_df.to_csv(history_file, index=False)
combine_plots(model_XH_history,
plot_loss, plot_acc,
loss_epoch_shifts=(0, 1), loss_show=False,
acc_epoch_shifts=(0, 1), acc_show=False,
save_path=plot_file)
model_XH.save(model_file)
# Because of the terseness of Keras API, we create our own definition
# of a model metadata.
# timestamp of the results (at the time of saving)
model_XH_timestamp = time.strftime('%Y-%m-%dT%H:%M:%S%z')
Then, change the model_XH_metadata.
'timestamp': model_XH_timestamp,
'model_code': str(NumLayers)+"H"+str(HIDDEN_NEURONS)+"N",
However, HIDDEN_NEURONS is still a comma-separated string at this point, while the hidden_neurons metadata should be a list of integers; the following code fixes the hidden_neurons entry (and the commas in model_code).
# Make sure the hidden neurons metadata is saved correctly!
# Make sure it is a list of integers
if MULT_LAYERS:
    HIDDEN_NEURONS = HIDDEN_NEURONS.split(',')
    HIDDEN_NEURONS = [int(item) for item in HIDDEN_NEURONS]
    model_XH_metadata['hidden_neurons'] = HIDDEN_NEURONS
    model_XH_metadata['model_code'] = model_XH_metadata['model_code'].replace(",", "N")
else:
    model_XH_metadata['hidden_neurons'] = [int(HIDDEN_NEURONS)]
Next, make sure to change NN_Model_1H.slurm
to NN_Model_XH.slurm
and
fix the Python file pointer and job name.
#SBATCH -J model-tuning-XH
$CRUN $CRUN_ENVS_FLAGS python3 NN_Model_XH.py "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
Finally, create submit-scan-layers.sh.
First, change the for loop to test a model with a single layer with 18 hidden neurons, a model with two layers, each with 18 hidden neurons, a model with three layers, each with 18 hidden neurons, and finally a model with four layers, each with 18 hidden neurons.
for HIDDEN_NEURONS in "18" "18,18" "18,18,18" "18,18,18,18"; do
Second, use the following Linux commands to get the number of layers (used to build the correct JOBNAME). They determine the number of layers by changing the commas into newlines and then counting the resulting lines. This goes inside the for loop body, right after the for line.
# get the number of layers: changes the commas into new lines, then counts that number of new lines
H=$(echo $HIDDEN_NEURONS | sed -e 's/,/\n/g' | wc -l)
Then, set the JOBNAME correctly.
# So for example, "18,18,18,18" will produce an output file called model_4H18N18N18N18N_lr0.0003_bs32_e30
JOBNAME=model_${H}H${HIDDEN_NEURONS}N_lr${LEARNING_RATE}_bs${BATCH_SIZE}_e${EPOCHS}
echo "Training for hyperparams:" "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
# remove the , and then add the N after it for the Hidden Neurons
JOBNAME=$(printf '%s' "$JOBNAME" | sed 's/,/N/g')
Finally, update the file pointer to be NN_Model_XH.slurm
.
sbatch -J "$JOBNAME" NN_Model_XH.slurm "$HIDDEN_NEURONS" "$LEARNING_RATE" "$BATCH_SIZE" "$EPOCHS"
Note: Make sure to re-run the post-processing notebook after completing all of the experiments!
Summary
We learned how to utilize the batch scheduler to more effectively train and tune models: first by converting the Baseline_Model Jupyter notebook into a Python script, and then by recreating the experiments from the Tuning Neural Network Models for Better Accuracy lesson using Python and SLURM scripts.
Introducing command-line arguments allowed the hyperparameters to be defined in the SLURM scripts (instead of the Python scripts).
Using a master launcher script to launch a series of jobs for a certain hyperparameter scanning task allows the HPC system to run the jobs in parallel.
The outputs from the experiments are compiled, saved, and studied during the post-processing phase. The next phase is the post-analysis phase, in which the user loads the data from the post-processing phase and plots the relationship between one hyperparameter and the (last-epoch) accuracy of the model.
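As a preview, here is a minimal sketch of such a post-analysis plot, assuming the post_processing_all_hpc_batch.csv file produced above and restricting attention to the hidden-neuron scan (column names follow the post-processing DataFrame):
import pandas as pd
import matplotlib.pyplot as plt

# Load the aggregated results saved during post-processing.
df = pd.read_csv("post_processing_all_hpc_batch.csv")

# Keep only the hidden-neuron scan, then the last epoch of each experiment.
scan = df[df["Model_Type"].str.endswith("-neuron")]
last = scan.loc[scan.groupby("job_ID")["epoch"].idxmax()].copy()

# The 'neurons' column holds a list/tuple such as [18]; extract the value.
last["n_neurons"] = (last["neurons"].str.strip("[](), ")
                                    .str.split(",").str[0].astype(int))
last = last.sort_values("n_neurons")

plt.plot(last["n_neurons"], last["val_accuracy"], marker="o")
plt.xscale("log", base=2)
plt.xlabel("Hidden neurons (first layer)")
plt.ylabel("Validation accuracy (last epoch)")
plt.title("Effect of hidden-layer width on accuracy")
plt.savefig("scan_hidden_neurons_accuracy.png")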
Key Points
Scripting works by converting the notebook into a Python script and job scripts.
Build a simple toolset/skillset to create, launch, and manage multiple batch jobs.
Use this toolset to obtain the big-picture result by analyzing the entire set of calculation results.
Use a Jupyter notebook as the workflow driver instead of using it to do the heavy-lifting computations.