Keras for Beginners: Building Your First Neural Network
Keras is a simple-to-use but powerful deep learning library for Python. In this post, we’ll see how easy it is to build a feedforward neural network and train it to solve a real problem with Keras.
This post is intended for complete beginners to Keras but does assume a basic background knowledge of neural networks. My introduction to Neural Networks covers everything you need to know (and more) for this post — read that first if necessary.
Let’s get started!
The Problem: MNIST digit classification
We’re going to tackle a classic machine learning problem: MNIST handwritten digit classification. It’s simple: given an image, classify it as a digit.
Each image in the MNIST dataset is 28x28 and contains a centered, grayscale digit. We’ll flatten each 28x28 into a 784 dimensional vector, which we’ll use as input to our neural network. Our output will be one of 10 possible classes: one for each digit.
1. Setup
I’m assuming you already have a basic Python installation ready (you probably do). Let’s first install some packages we’ll need:
```bash
$ pip install keras tensorflow numpy mnist
```
Note: We need to install tensorflow because we're going to run Keras on a TensorFlow backend (i.e. TensorFlow will power Keras).
You should now be able to import these packages and poke around the MNIST dataset:
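For example, a quick sanity check might look like this (a sketch using the mnist helper package we just installed; the variable names are our own):

```python
import mnist

# The mnist package downloads and caches the dataset on first use.
train_images = mnist.train_images()  # numpy array of shape (60000, 28, 28)
train_labels = mnist.train_labels()  # numpy array of shape (60000,)

print(train_images.shape)  # (60000, 28, 28)
print(train_labels.shape)  # (60000,)
```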
2. Preparing the Data
As mentioned earlier, we need to flatten each image before we can pass it into our neural network. We’ll also normalize the pixel values from [0, 255] to [-0.5, 0.5] to make our network easier to train (using smaller, centered values is often better).
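Here's one way to do that, continuing with the mnist package from above:

```python
import mnist

train_images = mnist.train_images()
train_labels = mnist.train_labels()
test_images = mnist.test_images()
test_labels = mnist.test_labels()

# Normalize the pixel values from [0, 255] to [-0.5, 0.5].
train_images = (train_images / 255) - 0.5
test_images = (test_images / 255) - 0.5

# Flatten each 28x28 image into a 784-dimensional vector.
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))

print(train_images.shape)  # (60000, 784)
print(test_images.shape)   # (10000, 784)
```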
We’re ready to start building our neural network!
3. Building the Model
Every Keras model is either built using the Sequential class, which represents a linear stack of layers, or the functional Model class, which is more customizable. We'll be using the simpler Sequential model, since our network is indeed a linear stack of layers.
We start by instantiating a Sequential model:
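A bare skeleton looks something like this:

```python
from keras.models import Sequential

# An empty Sequential model; we'll add layers next.
model = Sequential([
  # layers go here...
])
```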
The Sequential constructor takes an array of Keras Layers. Since we're just building a standard feedforward network, we only need the Dense layer, which is your regular fully-connected (dense) network layer.
Let's throw in 3 Dense layers:
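With the imports from above, that might look like:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
  Dense(64, activation='relu'),
  Dense(64, activation='relu'),
  Dense(10, activation='softmax'),
])
```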
The first two layers have 64 nodes each and use the ReLU activation function. The last layer is a Softmax output layer with 10 nodes, one for each class.
The last thing we always need to do is tell Keras what our network's input will look like. We can do that by specifying an input_shape to the first layer in the Sequential model:
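```python
model = Sequential([
  # input_shape tells Keras each input is a 784-dimensional vector.
  Dense(64, activation='relu', input_shape=(784,)),
  Dense(64, activation='relu'),
  Dense(10, activation='softmax'),
])
```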
Once the input shape is specified, Keras will automatically infer the shapes of inputs for later layers. We’ve finished defining our model! Here’s where we’re at:
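Putting the data preparation and model definition together, a sketch of the script so far:

```python
import mnist
from keras.models import Sequential
from keras.layers import Dense

# Prepare the data (see Section 2).
train_images = (mnist.train_images() / 255) - 0.5
test_images = (mnist.test_images() / 255) - 0.5
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))

# Build the model.
model = Sequential([
  Dense(64, activation='relu', input_shape=(784,)),
  Dense(64, activation='relu'),
  Dense(10, activation='softmax'),
])
```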
4. Compiling the Model
Before we can begin training, we need to configure the training process. We decide 3 key factors during the compilation step:
- The optimizer. We’ll stick with a pretty good default: the Adam gradient-based optimizer. Keras has many other optimizers you can look into as well.
- The loss function. Since we're using a Softmax output layer, we'll use the Cross-Entropy loss. Keras distinguishes between binary_crossentropy (2 classes) and categorical_crossentropy (>2 classes), so we'll use the latter. See all Keras losses.
- A list of metrics. Since this is a classification problem, we'll just have Keras report on the accuracy metric.
Here’s what that compilation looks like:
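Using the choices above (Keras accepts all three as string shortcuts):

```python
model.compile(
  optimizer='adam',
  loss='categorical_crossentropy',
  metrics=['accuracy'],
)
```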
Onwards!
5. Training the Model
Training a model in Keras literally consists only of calling fit() and specifying some parameters. There are a lot of possible parameters, but we'll only manually supply a few:
- The training data (images and labels), commonly known as X and Y, respectively.
- The number of epochs (iterations over the entire dataset) to train for.
- The batch size (number of samples per gradient update) to use when training.
Here’s what that looks like:
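Using the variable names from Section 2, a first attempt:

```python
model.fit(
  train_images,  # training data
  train_labels,  # training targets
  epochs=5,
  batch_size=32,
)
```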
This doesn’t actually work yet, though — we overlooked one thing. Keras expects the training targets to be 10-dimensional vectors, since there are 10 nodes in our Softmax output layer, but we’re instead supplying a single integer representing the class for each image.
Conveniently, Keras has a utility method that fixes this exact issue: to_categorical. It turns our array of class integers into an array of one-hot vectors instead. For example, 2 would become [0, 0, 1, 0, 0, 0, 0, 0, 0, 0] (it's zero-indexed).
We can now put everything together to train our network:
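```python
from keras.utils import to_categorical

model.fit(
  train_images,
  to_categorical(train_labels),  # convert class integers to one-hot vectors
  epochs=5,
  batch_size=32,
)
```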
Running that code gives us something like this:
Epoch 1/5
60000/60000 [==============================] - 2s 35us/step - loss: 0.3772 - acc: 0.8859
Epoch 2/5
60000/60000 [==============================] - 2s 31us/step - loss: 0.1928 - acc: 0.9421
Epoch 3/5
60000/60000 [==============================] - 2s 31us/step - loss: 0.1469 - acc: 0.9536
Epoch 4/5
60000/60000 [==============================] - 2s 31us/step - loss: 0.1251 - acc: 0.9605
Epoch 5/5
60000/60000 [==============================] - 2s 31us/step - loss: 0.1079 - acc: 0.9663
We reached 96.6% training accuracy after 5 epochs! This doesn’t tell us much, though — we may be overfitting. The real challenge will be seeing how our model performs on our test data.
6. Testing the Model
Evaluating the model is pretty simple:
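Something like this, reusing to_categorical for the test labels:

```python
model.evaluate(
  test_images,
  to_categorical(test_labels),
)
```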
Running that gives us:
10000/10000 [==============================] - 0s 15us/step
[0.10821614159140736, 0.965]
evaluate() returns an array containing the test loss followed by any metrics we specified. Thus, our model achieves a 0.108 test loss and 96.5% test accuracy! Not bad for your first neural network.
7. Using the Model
Now that we have a working, trained model, let’s put it to use. The first thing we’ll do is save it to disk so we can load it back up anytime:
```python
model.save_weights('model.h5')
```
We can now reload the trained model whenever we want by rebuilding it and loading in the saved weights:
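For example:

```python
# Rebuild the same architecture...
model = Sequential([
  Dense(64, activation='relu', input_shape=(784,)),
  Dense(64, activation='relu'),
  Dense(10, activation='softmax'),
])

# ...then load the saved weights back in.
model.load_weights('model.h5')
```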
Using the trained model to make predictions is easy: we pass an array of inputs to predict() and it returns an array of outputs. Keep in mind that the output of our network is 10 probabilities (because of softmax), so we'll use np.argmax() to turn those into actual digits.
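For example, predicting on the first 5 test images might look like this:

```python
import numpy as np

# Predict on the first 5 test images.
predictions = model.predict(test_images[:5])  # shape: (5, 10)

# Print our model's predictions (the index of the highest probability).
print(np.argmax(predictions, axis=1))  # e.g. [7 2 1 0 4]

# Check our predictions against the ground truths.
print(test_labels[:5])  # [7 2 1 0 4]
```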
8. Extensions
What we've covered so far was but a brief introduction — there's much more we can do to experiment with and improve this network. I've included a few examples below:
Tuning Hyperparameters
A good hyperparameter to start with is the learning rate for the Adam optimizer. What happens when you increase or decrease it?
What about the batch size and number of epochs?
Network Depth
What happens if we remove or add more fully-connected layers? How does that affect training and/or the model’s final performance?
Activations
What if we use an activation other than ReLU, e.g. sigmoid?
Dropout
What if we tried adding Dropout layers, which are known to prevent overfitting?
Validation
We can also use the testing dataset for validation during training. Keras will evaluate the model on the validation set at the end of each epoch and report the loss and any metrics we asked for. This allows us to monitor our model’s progress over time during training, which can be useful to identify overfitting and even support early stopping.
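For example, we can pass the test set to fit() via the validation_data parameter:

```python
model.fit(
  train_images,
  to_categorical(train_labels),
  epochs=5,
  batch_size=32,
  validation_data=(test_images, to_categorical(test_labels)),
)
```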
Conclusion
You’ve implemented your first neural network with Keras! We achieved a test accuracy of 96.5% on the MNIST dataset after 5 epochs, which is not bad for such a simple network. I’ll include the full source code again below for your reference.
If you want to learn about more advanced techniques to approach MNIST, I recommend checking out my introduction to Convolutional Neural Networks (CNNs). In it, we see how to achieve much higher (>99%) accuracies on MNIST using more complex networks. I also recommend my guide on implementing a CNN with Keras, which is similar to this post.
Further reading you might be interested in:
- The official getting started with Keras guide.
- This introduction to CNNs with Keras.
- A collection of Keras examples.
Thanks for reading this post! The full source code is below.
The Full Code
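Here's a consolidated sketch of everything above (your exact numbers will vary slightly from run to run):

```python
import numpy as np
import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

train_images = mnist.train_images()
train_labels = mnist.train_labels()
test_images = mnist.test_images()
test_labels = mnist.test_labels()

# Normalize the images.
train_images = (train_images / 255) - 0.5
test_images = (test_images / 255) - 0.5

# Flatten the images.
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))

# Build the model.
model = Sequential([
  Dense(64, activation='relu', input_shape=(784,)),
  Dense(64, activation='relu'),
  Dense(10, activation='softmax'),
])

# Compile the model.
model.compile(
  optimizer='adam',
  loss='categorical_crossentropy',
  metrics=['accuracy'],
)

# Train the model.
model.fit(
  train_images,
  to_categorical(train_labels),
  epochs=5,
  batch_size=32,
)

# Evaluate the model.
model.evaluate(
  test_images,
  to_categorical(test_labels),
)

# Save the model's weights to disk.
model.save_weights('model.h5')

# Predict on the first 5 test images.
predictions = model.predict(test_images[:5])
print(np.argmax(predictions, axis=1))  # e.g. [7 2 1 0 4]
print(test_labels[:5])                 # [7 2 1 0 4]
```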
Originally published at https://victorzhou.com.