How to build a Simulink model that can generate HDL code to resize images or videos?

I want to resize images or videos on an FPGA. I only know of the Resize block, but it cannot generate HDL code. Can anybody teach me or give me some ideas?

Accepted Answer

Seth Popinchalk, 25 March 2011
If you are scaling down, a simple method is to re-sample the signal: you can resize to 1/2, 1/3, etc. by simply selecting every 2nd or every 3rd sample.
To achieve ratios like 2/3 or 4/5, you can implement an interpolation algorithm that picks points in the signal and calculates an interpolant. This can be computationally expensive, depending on the algorithm chosen.
Another option is to use the Direct Lookup Table (n-D) block. With some cleverness, you should be able to achieve both downsampling and upsampling behavior from the block.
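The every-nth-sample idea above can be sketched in a couple of lines. NumPy is used here purely for illustration; an HDL implementation would instead gate samples with a counter:

```python
import numpy as np

def downsample(image, factor):
    """Resize down by an integer factor by keeping every factor-th
    sample in each dimension (nearest-neighbor decimation)."""
    return image[::factor, ::factor]

img = np.arange(36).reshape(6, 6)
half = downsample(img, 2)   # keeps every 2nd row and column -> 3x3
```

This is exactly the "select every 2nd sample" case; non-integer ratios need the interpolation approach instead.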
  2 Comments
smiczh ZHOU, 26 March 2011
Thanks for your comprehensive answer.
I implemented the bilinear interpolation algorithm. I will try the other methods you mentioned later.
Ashlesha S, 2 March 2019
How do you implement bilinear interpolation?


More Answers (3)

Huibao, 30 March 2011
The Resize block in the Video and Image Processing Blockset does not generate HDL code, but you can use Simulink HDL Coder to implement the algorithm yourself.
If you want to resize a static image, i.e., your input image is stored in memory and the output image will be saved back to memory, the design is simpler. You need just a few steps.
  1. Compute the locations of the output pixels. If your scaling factor is s, then the (i, j)th output pixel will be located at (i*s, j*s).
  2. Compute the interpolation weights. There are several methods to pick from, e.g., nearest neighbor, bilinear, or bicubic. As an example, the bilinear weights in the first dimension are (ceil(i*s) − i*s) and (i*s − floor(i*s)).
  3. Convolve the interpolation weights with the image to get the output pixel values.
Note that the output pixel locations and interpolation weights repeat in the horizontal and vertical directions, so you can reduce computation by processing the whole image in one direction and then in the other. As Seth pointed out, you can use a lookup table to speed up the computation. You can also generate code from the Resize block to see the implementation details.
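The three static-image steps above can be sketched as follows. This is NumPy for illustration only; the function name and the clamping at image borders are my own choices, and a hardware version would use fixed-point weights rather than floats:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear resize of a 2-D grayscale image, following the
    location -> weights -> weighted-sum steps."""
    in_h, in_w = img.shape
    s_i, s_j = in_h / out_h, in_w / out_w   # scaling factor per axis
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Step 1: output pixel location in input coordinates
            y = min(i * s_i, in_h - 1)
            x = min(j * s_j, in_w - 1)
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            # Step 2: bilinear weights, i.e. the fractional parts
            # (i*s - floor(i*s)) and its complement
            wy, wx = y - y0, x - x0
            # Step 3: weighted sum of the four neighbors
            out[i, j] = ((1 - wy) * (1 - wx) * img[y0, x0]
                         + (1 - wy) * wx * img[y0, x1]
                         + wy * (1 - wx) * img[y1, x0]
                         + wy * wx * img[y1, x1])
    return out

img = np.arange(16.0).reshape(4, 4)
same = bilinear_resize(img, 4, 4)   # s = 1 reproduces the input
```

Since the weights depend only on i (or j), not on the pixel values, they can be precomputed per row and column, which is what makes the lookup-table speedup possible.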
If you want to resize an image stream, i.e., you receive a pixel of the input image at a certain clock rate and output a pixel of the resized image at a different clock rate, the design is much more challenging. Because you mentioned FPGA, I guess this is the design you want to achieve. For this design you can use the same weights computed for the static image, but you need to be careful about the clock rates, buffering, and pixel locations. You can try the following steps.
  1. At the input clock rate, save a few lines of pixels of the input image. The number of buffered lines is the same as your filter length.
  2. At an intermediate clock rate, interpolate the pixels in the horizontal direction. The intermediate clock rate can be computed from your input clock rate and the scaling factor, e.g., s*clk_in. The results are saved to another line buffer.
  3. At the output clock rate, interpolate the pixels in the vertical direction.
What I didn't mention above is that you need to count the pixels in order to know where the current pixel is, and you need to implement a read/write controller for the buffers.
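The separable two-pass structure described above (horizontal pass into a line buffer, then a vertical pass) can be modeled behaviorally like this. The helper names are hypothetical, and the sketch models only the arithmetic, not the clock rates or buffer controller:

```python
import numpy as np

def separable_bilinear(img, out_h, out_w):
    """Two-pass bilinear resize: rows first (the 'line buffer' pass),
    then columns, reusing one set of 1-D weights per direction."""
    in_h, in_w = img.shape

    def weights(n_in, n_out):
        # For each output index: the two source indices and the
        # fractional weight between them.
        pos = np.minimum(np.arange(n_out) * n_in / n_out, n_in - 1)
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, n_in - 1)
        return lo, hi, pos - lo

    # Horizontal pass: each input line is resized to out_w and buffered
    lo, hi, f = weights(in_w, out_w)
    buf = img[:, lo] * (1 - f) + img[:, hi] * f
    # Vertical pass: combine buffered lines into output lines
    lo, hi, f = weights(in_h, out_h)
    return buf[lo, :] * (1 - f[:, None]) + buf[hi, :] * f[:, None]

img = np.arange(16.0).reshape(4, 4)
same = separable_bilinear(img, 4, 4)   # identity resize
```

In hardware, the vertical pass only ever needs the two buffered lines indexed by `lo` and `hi`, which is why a filter-length-deep line buffer is enough.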

Kiran Kintali, 4 March 2019
Sharing an HDL Coder-friendly model that converts a 720x1280 HDMI input signal to a 240x320 video signal on the Zynq PL side of a ZC702 board and transfers the resized image to the ARM core for further image processing; the resized image is therefore only available on the ARM side. The model uses the reference design in the CVST support package for Zynq, which requires the same image size for HDMI input and HDMI output.
Hope this is helpful.
  1 Comment
sukumar nagineni, 12 May 2019
How do you read an image into a Simulink model while also using HDL Coder blocks?



Steve Kuznicki, 5 March 2019
For HDL code generation, it really depends on whether you want to resize up or down. If you are using a live video signal (with horizontal and vertical blanking), then you will also need to regenerate these control signals appropriately; in this case you can't just re-sample the input as was suggested. As also mentioned above, you need to determine whether you need to 1) maintain the same frame rate (fps), and 2) maintain the same output frame size (zoom in/out). If you need to resize down by an integer factor, you can just adjust your "valid" control signals (assert valid on every nth pixel and every mth line). Resizing up is a bit more difficult: it requires you to buffer lines and perform interpolation. It would help if you could share the starting resolution and target resolution.
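The integer-factor case above (keep every nth pixel of every mth line by gating the valid signal) can be modeled like this; representing the stream as (pixel, valid) tuples is an assumption made purely for illustration:

```python
def decimate_stream(pixels, width, n, m):
    """Model of valid-signal decimation for a raster-order pixel
    stream: clear 'valid' except on every n-th pixel of every
    m-th line. Pixels arrive one per clock."""
    out = []
    for idx, p in enumerate(pixels):
        line, col = divmod(idx, width)
        valid = (line % m == 0) and (col % n == 0)
        out.append((p, valid))
    return out

# 4x4 frame, keep every 2nd pixel of every 2nd line -> 2x2 output
out = decimate_stream(list(range(16)), width=4, n=2, m=2)
```

In real hardware the same decision comes from pixel and line counters driven by the blanking/control signals rather than from an index.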
