faster lean bilinear imresize / improved gpuArray/imresize
Hi,
I'm currently processing lots of images in a convolutional neural network, and imresize.m is the major bottleneck (Googling turned up a few other people complaining about imresize as well). Digging into the code, roughly 60% of the function's runtime is overhead from argument checking and extra function calls, so I made a leaner version. However, it requires access to the private imresizemex function. Would it be possible in a future release to, for example:
- create a lean imresize_bilinear function (as attached here; a rough sketch of the idea follows after this list)?
- move imresizemex out of the private directory so it can be called directly? (I work on several different servers, often with different MATLAB versions, so copying imresizemex around does not work well for me.)
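The attached imresize_bilinear is not reproduced on this page, so here is a minimal sketch of the kind of lean path meant above: no argument checking, plain two-tap bilinear interpolation done dimension by dimension in MATLAB. The function name, the pixel-centre mapping, and the omission of antialiasing are my assumptions, not the attached code; imresize's own bilinear path antialiases when shrinking, so results will differ somewhat on downscaling.

function out = imresize_bilinear_lean(in, outSize)
% Hypothetical lean bilinear resize: no input validation, no antialiasing.
% outSize = [numRows numCols]; works on 2-D or 3-D (e.g. M-by-N-by-3) arrays.
inClass = class(in);
out = double(in);
for dim = 1:2
    n = size(out, dim);               % input length along this dimension
    m = outSize(dim);                 % output length along this dimension
    scale = m / n;
    % Pixel-centre mapping (same convention imresize uses): output x -> input u
    u  = (1:m)'/scale + 0.5*(1 - 1/scale);
    u  = min(max(u, 1), n);           % clamp to the valid index range
    i0 = floor(u);                    % left neighbour
    i1 = min(i0 + 1, n);              % right neighbour
    w  = u - i0;                      % weight of the right neighbour
    if dim == 1
        out = bsxfun(@times, 1 - w, out(i0, :, :)) + ...
              bsxfun(@times, w,     out(i1, :, :));
    else
        out = bsxfun(@times, 1 - w', out(:, i0, :)) + ...
              bsxfun(@times, w',     out(:, i1, :));
    end
end
out = cast(out, inClass);
end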
Related:
- Are you working on making gpuArray/imresize support the form out = imresize(im, [numRows numCols])?
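Until that form is supported, one workaround (my suggestion, not from the original post) is to gather the image, resize on the CPU, and push the result back to the device. It is correct but pays two host/device transfers; the image size and target size below are just placeholders.

targetSize = [227 227];                          % e.g. a CNN input size (assumption)
imGpu      = gpuArray(rand(480, 640, 3, 'single'));
% Resize on the CPU, then move the result back to the GPU.
outGpu     = gpuArray(imresize(gather(imGpu), targetSize, 'bilinear'));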
Example code is attached. Output:
>> testImresize
original bilinear resize: 2.364755
lean bilinear resize: 0.922745
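The actual testImresize attachment is not shown on this page; the snippet below is only a hypothetical reconstruction of how such a timing comparison could be set up, using the imresize_bilinear_lean sketch above in place of the attached function. Image size and iteration count are assumptions.

im      = rand(480, 640, 3, 'single');
outSize = [227 227];
nIter   = 100;

tic;
for k = 1:nIter
    a = imresize(im, outSize, 'bilinear');       % stock imresize
end
fprintf('original bilinear resize: %f\n', toc);

tic;
for k = 1:nIter
    b = imresize_bilinear_lean(im, outSize);     % lean sketch above
end
fprintf('lean bilinear resize: %f\n', toc);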
Thanks, Jasper