When working with depth visualization using an Intel RealSense camera in Python, there are several ways to capture and display a color-mapped depth image. In this article, we will explore three different options to solve this problem.
Option 1: Using the pyrealsense2 library
The pyrealsense2 library provides a Python interface for Intel RealSense cameras. To use this library, you need to install it first by running the following command:
pip install pyrealsense2
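To confirm that the installation works and a camera is detected, you can run a quick check like the one below (a minimal sketch; the device info fields printed are just examples):
import pyrealsense2 as rs
# List all connected RealSense devices
ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name), dev.get_info(rs.camera_info.serial_number))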
Once the library is installed, you can use the following code to achieve advanced depth visualization:
import pyrealsense2 as rs
import numpy as np
import cv2
# Initialize the camera
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
# Create a colorizer object
colorizer = rs.colorizer()
while True:
    # Wait for a new frame
    frames = pipeline.wait_for_frames()
    # Get the depth frame and skip iterations where it is missing
    depth_frame = frames.get_depth_frame()
    if not depth_frame:
        continue
    # Colorize the depth frame
    colorized_depth_frame = colorizer.colorize(depth_frame)
    # Convert the colorized depth frame to a numpy array
    depth_image = np.asanyarray(colorized_depth_frame.get_data())
    # Display the depth image
    cv2.imshow('Depth Image', depth_image)
    # Exit the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Stop the pipeline and close all windows
pipeline.stop()
cv2.destroyAllWindows()
This code uses the pyrealsense2 library to initialize the camera, capture depth frames, colorize them, and display the resulting depth image. It also includes a loop to continuously update the depth image until the user presses ‘q’ to exit.
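The default colorizer settings can also be tuned before the loop starts. The sketch below assumes your librealsense build exposes these options, and the values shown are only illustrative:
# Optional: configure the colorizer instead of using the defaults
colorizer = rs.colorizer()
colorizer.set_option(rs.option.color_scheme, 2)                    # e.g. white-to-black
colorizer.set_option(rs.option.histogram_equalization_enabled, 0)  # use a fixed range
colorizer.set_option(rs.option.min_distance, 0.3)                  # near clip, in meters
colorizer.set_option(rs.option.max_distance, 4.0)                  # far clip, in meters
Disabling histogram equalization makes the colors correspond to a fixed distance range, which keeps the color mapping stable from frame to frame.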
Option 2: Using the OpenCV library
If you prefer to use only the OpenCV library, you can produce a similar-looking colorized image from an ordinary camera feed with the following code. Note that this approach applies a colormap to grayscale intensity rather than to true depth data, so it only imitates a depth visualization:
import cv2
# Initialize the camera
cap = cv2.VideoCapture(0)
while True:
    # Read a frame from the camera
    ret, frame = cap.read()
    if not ret:
        break
    # Convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Apply a colormap to the grayscale image
    depth_image = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
    # Display the colorized image
    cv2.imshow('Depth Image', depth_image)
    # Exit the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Release the camera and close all windows
cap.release()
cv2.destroyAllWindows()
This code uses the OpenCV library to initialize a standard camera, capture frames, convert them to grayscale, apply a colormap, and display the result. Keep in mind that the colors encode pixel brightness rather than measured distance, so this is only a stand-in for true depth visualization. The loop updates the image continuously until the user presses 'q' to exit.
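If you do have a genuine 16-bit depth array (for example from a RealSense frame), the same applyColorMap call works once the values are scaled down to 8 bits. A minimal sketch using a random array as a stand-in for real depth data:
import numpy as np
import cv2
# Hypothetical 16-bit depth values standing in for a real depth frame
depth_image = np.random.randint(0, 5000, size=(480, 640), dtype=np.uint16)
# Normalize the full value range to 0-255, then colorize
depth_8bit = cv2.normalize(depth_image, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
depth_colormap = cv2.applyColorMap(depth_8bit, cv2.COLORMAP_JET)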
Option 3: Using the pyrealsense2 and OpenCV libraries together
If you want to combine the functionalities of both libraries, you can use the following code:
import pyrealsense2 as rs
import numpy as np
import cv2
# Initialize the camera
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
while True:
    # Wait for a new frame
    frames = pipeline.wait_for_frames()
    # Get the depth frame and skip iterations where it is missing
    depth_frame = frames.get_depth_frame()
    if not depth_frame:
        continue
    # Convert the depth frame to a numpy array
    depth_image = np.asanyarray(depth_frame.get_data())
    # Apply a colormap to the depth image (alpha scales the 16-bit depth down to 8 bits)
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
    # Display the depth image
    cv2.imshow('Depth Image', depth_colormap)
    # Exit the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Stop the pipeline and close all windows
pipeline.stop()
cv2.destroyAllWindows()
This code combines the functionalities of the pyrealsense2 and OpenCV libraries to initialize the camera, capture depth frames, convert them to a numpy array, apply a colormap, and display the resulting depth image. It also includes a loop to continuously update the depth image until the user presses ‘q’ to exit.
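The fixed alpha=0.03 in Option 3 implicitly assumes the default depth scale of roughly 1 mm per raw unit. A slightly more robust variation queries the device's depth scale and clips to a chosen distance. The sketch below grabs a single frame, and the 3-meter clipping distance is just an illustrative value:
import pyrealsense2 as rs
import numpy as np
import cv2
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
# Depth scale converts raw 16-bit units to meters (typically 0.001)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
max_distance_m = 3.0  # illustrative far-clipping distance
frames = pipeline.wait_for_frames()
depth_image = np.asanyarray(frames.get_depth_frame().get_data())
depth_m = depth_image * depth_scale
# Map 0..max_distance_m to 0..255 before applying the colormap
depth_8bit = np.clip(depth_m / max_distance_m * 255, 0, 255).astype(np.uint8)
depth_colormap = cv2.applyColorMap(depth_8bit, cv2.COLORMAP_JET)
pipeline.stop()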
After exploring these three options, Option 3, which combines the pyrealsense2 and OpenCV libraries, provides the most complete solution for advanced depth visualization with an Intel RealSense camera in Python: it exposes the raw depth values as a NumPy array, giving you more flexibility and control over how the depth image is processed and displayed, which makes it the better choice for this task.