Using Flux DataLoader
In this tutorial, we show how to load image data in Flux DataLoader and process it in mini-batches. We use the DataLoader type to handle iteration over mini-batches of data. For this example, we load the MNIST dataset using the MLDatasets package.
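Before turning to MNIST, here is a minimal sketch of how a DataLoader behaves on made-up toy data (the arrays below are for illustration only):

```julia
using Flux.Data: DataLoader

X = rand(Float32, 4, 10)   # 10 observations with 4 features each
Y = rand(0:1, 10)          # 10 toy labels

loader = DataLoader((X, Y); batchsize=3)
for (x, y) in loader
    # x holds up to 3 observations (the last batch is partial);
    # y holds the matching labels
    println(size(x), " ", size(y))
end
```

The DataLoader batches along the last dimension of each array, so the feature matrix and label vector stay aligned within every mini-batch.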
Before we start, make sure you have installed the following packages: Flux and MLDatasets. To install them, run `] add Flux MLDatasets` in the Julia REPL.
Load the packages we'll need:
```julia
using MLDatasets: MNIST
using Flux.Data: DataLoader
using Flux: onehotbatch
```
We load the MNIST train and test data from MLDatasets:
```julia
train_x, train_y = MNIST(:train)[:]
test_x, test_y = MNIST(:test)[:]
```
This code loads the MNIST train and test images as Float32 arrays, along with their labels. The array `train_x` has dimensions 28×28×60000: it contains 60000 grayscale 28×28 images of handwritten digits, where each entry is a pixel value representing the amount of light at that position. Likewise, `train_y` is a 60000-element vector, and each element is the label or actual value (0 to 9) of the corresponding handwritten digit.
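We can verify these dimensions directly (assuming the data loaded in the previous step):

```julia
eltype(train_x)   # Float32
size(train_x)     # (28, 28, 60000)
size(train_y)     # (60000,)
train_y[1]        # the label (0 to 9) of the first image
```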
Before we pass the data to a DataLoader, we need to reshape it so that it has the correct shape for Flux. For this example, the MNIST train and test data must match the dimensions of our model's input and output layers.

For example, if our model's input layer expects a 28×28×1 multi-dimensional array, we reshape the train and test data as follows:
```julia
train_x = reshape(train_x, 28, 28, 1, :)
test_x = reshape(test_x, 28, 28, 1, :)
```
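A quick shape check after reshaping (the test set holds 10000 images):

```julia
size(train_x)  # (28, 28, 1, 60000)
size(test_x)   # (28, 28, 1, 10000)
```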
Also, each MNIST label must be encoded as a vector whose length equals the number of categories (unique handwritten digits) in the data set. To encode the labels, we use Flux's `onehotbatch` function:
```julia
train_y, test_y = onehotbatch(train_y, 0:9), onehotbatch(test_y, 0:9)
```
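To see what `onehotbatch` produces, we can encode a couple of labels by hand (a small illustration, independent of the MNIST arrays):

```julia
using Flux: onehotbatch

onehotbatch([0, 3], 0:9)
# a 10×2 one-hot matrix: column 1 has its 1 in row 1 (label 0),
# column 2 has its 1 in row 4 (label 3)
```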
Note: For more information on other encoding methods, see Handling Data in Flux.
Now, we load the train images and their labels into a DataLoader object:

```julia
data_loader = DataLoader((train_x, train_y); batchsize=128, shuffle=true)
```
Notice that we set the DataLoader `batchsize` to 128. This enables us to iterate over the data in batches of size 128. Also, by setting `shuffle=true`, the DataLoader shuffles the observations each time iteration is restarted.
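To illustrate the effect of `shuffle` on toy data (not part of the MNIST pipeline): with `shuffle=false` the batches always arrive in the same order, while `shuffle=true` reorders the observations on every pass.

```julia
using Flux.Data: DataLoader

toy = collect(1:6)

ordered = DataLoader(toy; batchsize=2, shuffle=false)
collect(ordered)   # always [[1, 2], [3, 4], [5, 6]]

shuffled = DataLoader(toy; batchsize=2, shuffle=true)
# each full iteration over `shuffled` visits the observations
# in a different random order
```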
Finally, we can iterate over the 60000 MNIST train observations in mini-batches (most of them of size 128) using the DataLoader that we created in the previous step. Each element of the DataLoader is a tuple `(x, y)` in which `x` is a 28×28×1×batchsize array of images and `y` is a 10×batchsize matrix of the corresponding one-hot-encoded labels.
```julia
for (x, y) in data_loader
    @assert size(x) == (28, 28, 1, 128) || size(x) == (28, 28, 1, 96)
    @assert size(y) == (10, 128) || size(y) == (10, 96)
    ...
end
```
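As a sanity check on the batch sizes above: 60000 observations split into 468 full batches of 128 plus one final, partial batch of 96, which the DataLoader yields by default.

```julia
nbatches, remainder = divrem(60000, 128)  # (468, 96)
# so the loader yields 468 + 1 == 469 batches,
# and length(data_loader) reports the same count
```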
Now, we can create a model and train it using the `data_loader` we just created. For more information on building models in Flux, see Model-Building Basics.
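As a rough sketch of that final step (the layer sizes and optimiser below are our own illustrative choices, not prescribed by this tutorial):

```julia
using Flux

# 28×28×1×N images are flattened to 784×N before the dense layers
model = Chain(Flux.flatten, Dense(784, 32, relu), Dense(32, 10))

loss(x, y) = Flux.logitcrossentropy(model(x), y)
opt = Descent(0.1)

# one epoch over the mini-batches produced by the DataLoader
Flux.train!(loss, Flux.params(model), data_loader, opt)
```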