How to read VideoDevice into 1D array?

2 views (last 30 days)
Jeremy on 24 April 2024
Answered: prabhat kumar sharma on 6 May 2024
We have a custom frame grabber that is recognized as a video device by
v=imaq.VideoDevice("winvideo",DeviceID)
The ReturnedDataType defaults to 'single' but can be set to 'uint16'. The frame grabber outputs 16 bits per image sensor pixel.
The ReturnedColorSpace does not show 'bayer' as an option, only 'rgb', 'grayscale', and 'YCbCr'. But the frame grabber outputs the 2D image sensor pixels in row-major order (i.e., row 1, then row 2, etc.), which can be transposed and then demosaiced using a 'gbrg' BayerSensorAlignment.
The ROI defaults to the [1, 1, Height, Width] of the sensor.
It seems that step(v) reshapes the data in column-major order.
Since ReturnedColorSpace does not offer 'bayer' as an option, and step(v) seems to default to column-major reshaping of its output, is there a way to execute step(v) such that the output is a 1D vector with length = Height*Width? This would allow the image data to be reshaped in row-major order into a 2D Bayer image, transposed, and then demosaiced.
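The recovery described above can be sketched with synthetic data. A minimal plain-Python illustration, assuming a hypothetical 2x3 sensor with made-up pixel values:

```python
# Hypothetical 2x3 sensor; the grabber streams pixels row-major:
# row 1, then row 2.
H, W = 2, 3
flat = [11, 12, 13, 21, 22, 23]  # raw 1D stream of length Height*Width

# A column-major reshape to [Width, Height] followed by a transpose
# (what reshape(frame, [Width, Height]).' does in MATLAB) recovers the
# image; equivalently, element (r, c) of the recovered image is
# flat[r*W + c].
recovered = [[flat[r * W + c] for c in range(W)] for r in range(H)]
print(recovered)  # [[11, 12, 13], [21, 22, 23]]
```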
For reference, a video capture object can be generated in Python with
v = cv2.VideoCapture(DeviceID)
and the automatic color conversion of the captured frames can be disabled using
v.set(cv2.CAP_PROP_CONVERT_RGB, 0)
v.read() then returns the raw data as a 1D vector (although the length is then 2*Height*Width of 'uint8' values, which can be typecast to uint16)
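The uint8-to-uint16 typecast mentioned above can be sketched with the Python standard library. The byte values and the little-endian byte order here are assumptions; check the frame grabber's documentation for the actual pixel packing:

```python
import struct

# Hypothetical raw capture: 2*Height*Width uint8 values, two bytes per
# 16-bit pixel, assumed little-endian.
H, W = 1, 4
raw = bytes([0x34, 0x12, 0x00, 0x10, 0xFF, 0x00, 0x01, 0x00])
assert len(raw) == 2 * H * W

# Reinterpret each byte pair as one unsigned 16-bit pixel value.
pixels = struct.unpack('<%dH' % (H * W), raw)
print(pixels)  # (4660, 4096, 255, 1)
```

With NumPy available, the same reinterpretation is a one-liner via frombuffer with a uint16 dtype.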

Answer (1)

prabhat kumar sharma on 6 May 2024
Hi Jeremy,
In this documentation for imaq.VideoDevice: https://www.mathworks.com/help/imaq/imaq.videodevice.html#btb2zi3
The ReturnedColorSpace property specifies the color space of the returned image. The default value depends on the device and the selected video format. Possible values are {rgb|grayscale|YCbCr} when the default returned color space for the device is not grayscale, and {rgb|grayscale|YCbCr|bayer} when the default returned color space for the device is grayscale.
As a workaround, you might consider fetching the data in the closest available format ('uint16', as you mentioned) and then manually adjusting it to suit your requirements. Unfortunately, without a direct option to fetch the data as a 1D vector or in a raw Bayer format, this workaround involves additional steps:
  1. Fetch the Frame in the Closest Available Format: Fetch the frame in 'uint16' format and the closest available color space, likely 'grayscale' if you want to avoid unnecessary color space conversion computations.
  2. Reshape and Transpose: Since MATLAB uses column-major order but the image sensor pixels are in row-major order, after fetching the frame, you may need to reshape and transpose the data to correctly represent the sensor's pixel layout.
  3. Manual Demosaicing: After reshaping, you can then apply a demosaicing algorithm based on the 'gbrg' Bayer pattern.
Here is a template you can use for reference:
% Assuming DeviceID is defined and corresponds to your frame grabber
DeviceID = 1; % Example device ID
v = imaq.VideoDevice("winvideo", DeviceID, ...
    'ReturnedDataType', 'uint16', 'ReturnedColorSpace', 'grayscale');
% Set the ROI if necessary (assuming Height and Width are known)
% v.ROI = [1, 1, Width, Height];
% Acquire a frame
frame = step(v);
% step(v) reshapes the row-major sensor stream in column-major order, so
% undo that by reshaping to [Width, Height] and transposing.
% If step(v) does not reshape as expected, adjust the following line accordingly.
reshapedFrame = reshape(frame, [Width, Height]).';
% reshapedFrame is now single-channel Bayer data in the correct orientation.
% Demosaic it: the demosaic function (Image Processing Toolbox) accepts
% uint16 input and supports the 'gbrg' sensor alignment directly.
rgbFrame = demosaic(reshapedFrame, 'gbrg');
release(v);
I hope it helps!

Release

R2022a
