Until now, we have mostly used custom functions to build and train models. However, building more complicated neural networks with custom functions quickly becomes tedious. Luckily, TensorFlow 2.0 offers a much simpler way to build and train neural network models.
In this chapter, you will learn how to use TensorFlow more efficiently for building and training deep learning models. We will develop a neural network that learns the following relationship between two variables $x$ and $y$:
$$y = 2x + 5$$
1. Preparing the data
Let us start by generating dummy data to train our model.
```python
# Generating random values for x and y
import tensorflow as tf

x = tf.random.normal((100,), dtype=tf.dtypes.float32, seed=20)
y = x * 2 + 5
```
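Before moving on, it can be useful to sanity-check the generated data. The following quick inspection is an optional addition (not part of the original walkthrough) and assumes the snippet above has already run:

```python
# A minimal sanity check on the dummy data
print(x.shape, y.shape)   # (100,) (100,)
print(x[:3].numpy())      # three sample inputs
print(y[:3].numpy())      # should equal 2*x + 5 for those inputs
```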
2. Model Building
For this chapter, we will build a very simple neural network architecture with three layers:
- Input layer with a single node.
- Hidden layer with 5 nodes.
- Output layer with only a single node.
Here is a visual representation of the model we will be building:

[Figure: diagram of the 1-5-1 network architecture described above]
We will use the Keras Sequential API to define the network:
```python
# Defining the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1], name="input_layer"),
    tf.keras.layers.Dense(5, activation='relu', name="hidden_layer"),
    tf.keras.layers.Dense(1, name="output_layer")
])
```
The first layer of the model is the input layer, which takes an input of shape (1) and has a single node. The second layer is a hidden layer with 5 nodes and a ReLU activation function. The final layer is the output layer with a single node.
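If you prefer, the same three-layer network can also be expressed with the Keras functional API. The following is a rough equivalent sketch for comparison only; it is not required for the rest of the chapter:

```python
# A rough functional-API equivalent of the Sequential model above (optional)
inputs = tf.keras.Input(shape=(1,), name="inputs")
h = tf.keras.layers.Dense(1, name="input_layer")(inputs)
h = tf.keras.layers.Dense(5, activation='relu', name="hidden_layer")(h)
outputs = tf.keras.layers.Dense(1, name="output_layer")(h)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)
```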
3. Model Training
Next, we compile the model using the compile() function, passing the names of the optimizer and the loss function. Calling summary() then prints a layer-by-layer overview of the model we just built.
```python
model.compile(optimizer='sgd', loss='mean_squared_error')
print(model.summary())
```
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_layer (Dense) (None, 1) 2 _________________________________________________________________ hidden_layer (Dense) (None, 5) 10 _________________________________________________________________ output_layer (Dense) (None, 1) 6 ================================================================= Total params: 18 Trainable params: 18 Non-trainable params: 0 _________________________________________________________________ None
Finally, we train the model for 100 epochs by passing the $x$ and $y$ data to the fit() function of the model.
```python
history = model.fit(x, y, epochs=100)
```
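Once training finishes, a quick way to check what the network learned is to compare its predictions against the true relationship $y = 2x + 5$ on a few new inputs. This is an optional sanity check, not part of the original walkthrough:

```python
import numpy as np

# Predict on a few new inputs and compare with the true line y = 2x + 5
x_test = np.array([[0.0], [1.0], [2.0]], dtype=np.float32)
y_pred = model.predict(x_test)
for xi, yi in zip(x_test.ravel(), y_pred.ravel()):
    print(f"x={xi:.1f}  predicted={yi:.3f}  true={2 * xi + 5:.1f}")
```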
4. Visualizing the results
The loss value computed at each epoch is stored in the History object returned by fit(). We can simply plot these values to see how well the model is training.
```python
# Visualizing the loss from the model history
import matplotlib.pyplot as plt

plt.plot(history.history['loss'])
plt.xlabel('Num of epochs')
plt.ylabel('Loss')
plt.show()
```
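It can also be instructive to plot the model's predictions on top of the training data to see how closely the network recovered the line $y = 2x + 5$. Here is one possible sketch, again an optional addition:

```python
# Overlay the model's predictions on the training data (optional visual check)
import numpy as np

x_sorted = np.sort(x.numpy())
plt.scatter(x.numpy(), y.numpy(), s=10, label='training data')
plt.plot(x_sorted, model.predict(x_sorted.reshape(-1, 1)).ravel(),
         color='red', label='model predictions')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```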
This is how easy it is to build a Neural Network using TensorFlow 2.0. In the next chapter, you will learn to build and train Convolutional Neural Networks using TensorFlow 2.0 for classifying images.