Linear Regression in TensorFlow 2.0

In this chapter, we will be using TensorFlow 2.0 to implement one of the most fundamental machine learning algorithms: Linear Regression.

The formula for linear regression with a single independent variable is,

$$y = W*x + b$$

where,
$y$ is the dependent variable,
$x$ is the independent variable,
$W$ is the weight, and
$b$ is the bias.

We will be using TensorFlow to find the values of the weight $W$ and bias $b$ that best predict $y$ given $x$.

For this problem, we will be using a dummy dataset. We will create random values of $x$ using TensorFlow's random module and compute $y$ by multiplying $x$ by a weight of $1$ and adding a bias of $0.5$.

import tensorflow as tf

# Initializing x with random data
x = tf.random.normal((10,), dtype=tf.dtypes.float32, seed=20)

# True weight = 1 and true bias = 0.5 used to generate y
W = 1
b = 0.5
y = W*x + b

print(f'x: {x.numpy()}')
print(f'y: {y.numpy()}')
x: [ 0.8500154   0.63634557  0.5312669   0.16741174  0.44453236  0.61807156
 -0.11082884 -0.0360144  -2.4177778  -1.9543582 ]
y: [ 1.3500154   1.1363456   1.0312669   0.66741174  0.9445324   1.1180716
  0.38917115  0.4639856  -1.9177778  -1.4543582 ]

1. Building the model

To implement linear regression using TensorFlow 2.0, we will define a Model class that contains two methods: __init__ and __call__.

class Model:
    def __init__(self):
        # Initializing the variables weight (W) and bias (b)
        self.W = tf.Variable(16.0)
        self.b = tf.Variable(10.0)

    def __call__(self, x):
        return self.W * x + self.b

The __init__ method initializes the weight and bias of the linear regression model, and the __call__ method returns the predicted value as per the equation $y = W*x + b$.
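
Because the class defines __call__, a Model instance can be called like a function to produce predictions. As a quick illustration (a minimal sketch reusing the x tensor created earlier):

model = Model()

# Calling the instance invokes __call__, i.e. computes W*x + b with W=16.0 and b=10.0
y_pred = model(x)
print(y_pred.numpy())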

2. Loss function and training module

In the above model, the weight and bias are initialized to arbitrary values (16.0 and 10.0). We will be training this model to determine the best possible values for the weight and bias. For that, we need a loss function that indicates how well the model is performing. We will be using the mean squared error (MSE) as the loss function.
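
For $n$ samples, the mean squared error between the true values $y_i$ and the predictions $\hat{y}_i$ is

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$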

def loss(y, y_pred):
    return tf.reduce_mean(tf.square(y - y_pred))
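
As a quick sanity check (a minimal sketch, assuming the x, y, Model, and loss defined above), the untrained model with $W=16.0$ and $b=10.0$ should give a much larger loss than the true parameters do:

model = Model()

# Loss of the untrained model vs. loss of the true parameters (W=1, b=0.5)
print(f'Untrained loss: {loss(y, model(x)).numpy()}')
print(f'Loss with true parameters: {loss(y, 1.0*x + 0.5).numpy()}')  # essentially zero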

Now, for each iteration (epoch) during the model training, we need to:

  • Compute the gradients of the loss with respect to the model parameters: the tf.GradientTape() context records all the operations executed inside it, which is what allows the gradients to be computed.
  • Update the model parameters: after computing the gradients of $W$ and $b$, we multiply each gradient by a learning rate and subtract the result from the current values of $W$ and $b$, as written out below.
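
For a learning rate $\alpha$ (the lr argument in the code below), one such gradient descent step is

$$W \leftarrow W - \alpha \frac{\partial L}{\partial W}, \qquad b \leftarrow b - \alpha \frac{\partial L}{\partial b}$$

where $L$ is the mean squared error loss defined above.
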
def train(model, x, y, lr=0.1):
    with tf.GradientTape() as t:
        # Get the prediction
        y_hat = model(x)
        # Compute the loss
        current_loss = loss(y, y_hat)

    # Find the gradients of the loss with respect to W and b
    grad_W, grad_b = t.gradient(current_loss, [model.W, model.b])

    # Update the weight
    model.W.assign_sub(lr * grad_W)

    # Update the bias
    model.b.assign_sub(lr * grad_b)

3. Training the model

Next, the model is initialized and trained for 60 iterations (epochs). At each epoch, we record the current loss before applying one training step.

# Initialize the model
model = Model()
epochs = 60
losses = []

for epoch_count in range(epochs):
    current_loss = loss(y, model(x))
    losses.append(current_loss)
    
    # Train the model
    train(model, x, y)
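
If you would like to watch the optimization converge while it runs, the training loop can also print the loss at intervals. A minimal variant of the loop above (purely optional, not required for training):

for epoch_count in range(epochs):
    current_loss = loss(y, model(x))
    losses.append(current_loss)

    # Train the model
    train(model, x, y)

    # Report progress every 10 epochs
    if epoch_count % 10 == 0:
        print(f'Epoch {epoch_count}: loss = {current_loss.numpy():.5f}')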

4. Visualizing the results

Finally, we can view the values of the weight ($W$) and bias ($b$) after training and compare them against the true values ($W=1$ and $b=0.5$). We can also plot the loss recorded at each epoch using the matplotlib library to see how it decreases over the course of training.

Note: Running the training for a higher number of epochs may decrease the loss even further. So feel free to try it out!

print(f'Value of W: {model.W.numpy()}')
print(f'Value of b: {model.b.numpy()}')

import matplotlib.pyplot as plt

# Plot the loss function
plt.plot(losses)
plt.xlabel('Num of epochs')
plt.ylabel('Loss')
plt.show()
Value of W: 1.0000309944152832
Value of b: 0.5000572800636292

With this, you have learned to implement one of the most fundamental regression algorithms using TensorFlow 2.0. In the next chapter, you will learn how to implement a classification algorithm using TensorFlow 2.0.



