Arrayfun/gpuArray CUDA kernel needs to be able to remember previous steps
Background
- The problem can be separated into a large number of independent sub-problems.
- All sub-problems share the same matrix parameters.
- Each sub-problem needs to remember the indices it has visited so far.
- The goal is to process the sub-problems in parallel on the GPU.
Array indexing and memory allocation are not supported in this context. Is this kind of functionality achievable?
Answer (1)
Joss Knight
29 Mar 2024
This is a bit too vague to answer. Without indexing, how can each sub-problem retrieve its subset of the data? If you just mean that indexed assignment is not allowed, then sure, you could perhaps write an arrayfun that solves some independent problem for a subset of an array, as long as all the operations are scalar and the output is scalar. It wouldn't work if the sub-problems are completely different algorithms, though.
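Purely as a minimal sketch of the kind of pattern I mean (the names walkChains and walkOne and the "walk" itself are made up, since I don't know what your sub-problems actually do): the shared parameters sit in one N-by-N gpuArray, the kernel is a nested function that only reads that array (no indexed assignment), and each thread's "memory" is folded into a few scalar variables rather than a per-thread array of visited indices.

function out = walkChains(A, startIdx, nSteps)
% A        - N-by-N gpuArray of shared parameters (read-only in the kernel)
% startIdx - gpuArray of starting indices (values in 1..N), one per sub-problem
% nSteps   - number of steps each sub-problem performs
% Returns one scalar result per sub-problem.

    N = size(A, 1);                      % capture as a plain scalar for the kernel

    out = arrayfun(@walkOne, startIdx);  % runs element-wise on the GPU

    function total = walkOne(idx)
        % Per-thread state is just a few scalars (current index, previous
        % index, running total). There is no per-thread array in which to
        % record every visited index, so any "memory" of the walk has to
        % be folded into scalar accumulators like these.
        total = 0;
        prev  = idx;
        for step = 1:nSteps
            % Read-only indexing into the up-level gpuArray A is allowed
            % because walkOne is a nested function; indexed assignment
            % into A would not be.
            total = total + A(prev, idx);
            nxt   = mod(idx + step, N) + 1;   % pick the next index to visit
            prev  = idx;
            idx   = nxt;
        end
    end
end

You would call it with something like out = walkChains(gpuArray.rand(1000), gpuArray(randi(1000, 1e5, 1)), 50); but again, this is only a sketch of the structure, not a solution to your actual problem.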
Anyway, sorry, but not enough information to help.