
Semantic and Instance Segmentation on Videos using PixelLib in Python


Learn to perform semantic and instance segmentation on videos with a few lines of code using PixelLib in Python.

What is PixelLib?

PixelLib is a library for performing image and video segmentation with a few lines of code. It is designed to be flexible, allowing easy integration of image and video segmentation into software solutions.

You can perform two different kinds of segmentation using PixelLib:

  • Semantic Segmentation – Objects belonging to the same class are segmented with the same colormap.
Semantic Segmentation in PixelLib
  • Instance Segmentation – Each instance of the same object class is segmented with a different colormap.
Instance Segmentation in PixelLib

Installing and Importing PixelLib

The basic requirements for PixelLib are as follows:

  • Python 3.5-3.7
  • pip version >= 19.0
  • TensorFlow 2.0 or above
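
Before installing, it can help to confirm that your interpreter falls in the supported range. A minimal stdlib sketch (the version bounds are the ones listed above; the helper name is ours, not part of PixelLib):

```python
import sys

# Supported Python range from the requirements above
MIN_VERSION = (3, 5)
MAX_VERSION = (3, 7)

def python_version_ok(major, minor):
    """Return True if (major, minor) is within PixelLib's supported range."""
    return MIN_VERSION <= (major, minor) <= MAX_VERSION

if __name__ == "__main__":
    if not python_version_ok(sys.version_info.major, sys.version_info.minor):
        print("Warning: this Python version is outside PixelLib's supported range.")
```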

To install the libraries necessary for this tutorial, you can run the following command through the command line/terminal on your machine:

pip install tensorflow pixellib

To import the library in Python, you can simply write:

import pixellib

Performing Semantic Segmentation using PixelLib

We will be performing semantic segmentation on this sample video:

To perform semantic segmentation using PixelLib, we need a DeepLabV3+ model with an Xception backbone, pre-trained on the ADE20K dataset. You can download the pre-trained model from here and keep it in the root directory of your codebase.

The code for performing semantic segmentation using PixelLib is as follows:

# Importing the semantic_segmentation object type from pixellib.semantic
from pixellib.semantic import semantic_segmentation

# Setting the file path for the input video
input_video_file_path = "videos/test_video.mp4"

# Creating a semantic_segmentation object
segment_video = semantic_segmentation()

# Loading the Xception model trained on ade20k dataset
segment_video.load_ade20k_model("deeplabv3_xception65_ade20k.h5")

# Processing the video
segment_video.process_video_ade20k(
    input_video_file_path, 
    frames_per_second=30,
    output_video_name="videos/semantic_segmentation_output.mp4",
)

The output video looks like this:

Semantic Segmentation using PixelLib in Python
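
If you need to segment several clips, loading the model once and reusing it avoids repeated start-up cost. A sketch under that assumption (segment_videos and output_name_for are hypothetical helpers of ours, not part of PixelLib's API):

```python
from pathlib import Path

def output_name_for(input_path, suffix="_semantic"):
    """Derive an output file name next to the input video (hypothetical helper)."""
    p = Path(input_path)
    return str(p.with_name(p.stem + suffix + p.suffix))

def segment_videos(paths, model_path="deeplabv3_xception65_ade20k.h5", fps=30):
    """Run ADE20K semantic segmentation over a batch of videos with one model load."""
    # Imported lazily so the path helper above works without PixelLib installed
    from pixellib.semantic import semantic_segmentation

    seg = semantic_segmentation()
    seg.load_ade20k_model(model_path)  # load the model once, reuse for every clip
    for path in paths:
        seg.process_video_ade20k(
            path,
            frames_per_second=fps,
            output_video_name=output_name_for(path),
        )
```

For example, `segment_videos(["videos/test_video.mp4"])` would write the segmented result to `videos/test_video_semantic.mp4`.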

Performing Instance Segmentation using PixelLib

We will be performing instance segmentation on the same sample video as before.

To perform instance segmentation using PixelLib, we need a Mask R-CNN model pre-trained on the COCO dataset. You can download the pre-trained model from here and keep it in the root directory of your codebase.

The code for performing instance segmentation using PixelLib is as follows:

# Importing the instance_segmentation object type from pixellib.instance
from pixellib.instance import instance_segmentation

# Setting the file path for the input video
input_video_file_path = "videos/test_video.mp4"

# Creating an instance_segmentation object
segment_video = instance_segmentation()

# Loading the Mask R-CNN model trained on the COCO dataset
segment_video.load_model("mask_rcnn_coco.h5")

# Processing the video
segment_video.process_video(
    input_video_file_path, 
    show_bboxes=True, 
    extract_segmented_objects=True, 
    save_extracted_objects=False,
    frames_per_second=30,
    output_video_name="videos/instance_segmentation_output.mp4",
)

The output video looks like this:

Instance Segmentation in Python
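
Beyond writing an annotated video, you may also want to know what was detected. PixelLib's image-level methods (such as segmentImage) return results that include the detected class ids; a small hypothetical helper can tally those into per-class counts (the class-id mapping below is an abbreviated excerpt of the COCO label map):

```python
from collections import Counter

# Abbreviated excerpt of the COCO label map (id 0 is background)
COCO_NAMES = {1: "person", 2: "bicycle", 3: "car", 4: "motorcycle"}

def count_instances(class_ids, names=COCO_NAMES):
    """Tally detected instances per class name from a sequence of class ids."""
    return dict(Counter(names.get(cid, "unknown") for cid in class_ids))
```

For example, a frame containing two people and a car would yield `{'person': 2, 'car': 1}`.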

Next Step: Build a video background removal tool

Now that you know how to perform semantic and instance segmentation on a video, you can build a video background removal tool by altering the pixels of the video based on the segmentation output.
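
The core idea is per-pixel selection: wherever the segmentation mask marks foreground, keep the original frame; everywhere else, substitute a background. A minimal NumPy sketch of that step (frame, mask, and background are assumed inputs you would obtain from the segmentation results and your own background image):

```python
import numpy as np

def replace_background(frame, mask, background):
    """Per-pixel compositing: foreground from `frame`, the rest from `background`.

    frame, background: H x W x 3 uint8 arrays of the same shape.
    mask: H x W boolean array, True where the segmented object is.
    """
    mask3 = np.repeat(mask[:, :, None], 3, axis=2)  # broadcast mask to 3 channels
    return np.where(mask3, frame, background)
```

Applied frame by frame while reading the video (for example with OpenCV) and writing the composited frames out, this gives a basic background replacement tool.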


If you are interested in more data science content, The Click Reader provides a large collection of data science resources to study from; check it out by clicking here.
