Yes, it is possible, but much of the time it is not beneficial.
You need the Parallel Computing Toolbox. You would create a parpool with two workers, and then use one of:
- spmd
- parfor
- parfeval or parfevalOnAll
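Whichever construct you choose, the starting point is the same. A minimal sketch (the pool size of 2 is just the two-worker case being discussed):

```matlab
% Start a pool with two workers (requires the Parallel Computing Toolbox).
pool = parpool(2);

% ... use spmd / parfor / parfeval here ...

delete(pool);   % shut the pool down when you are finished with it
```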
If you use spmd then the two workers can communicate with each other using labSend() and labReceive(), but you cannot communicate with the controller until the spmd block completely finishes.
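For example, a minimal spmd sketch in which worker 1 sends a vector to worker 2; the variable names are arbitrary. Note that the result only becomes visible on the controller, as a Composite object, after the block ends:

```matlab
spmd
    if labindex == 1
        data = rand(1, 5);
        labSend(data, 2);        % send to worker 2
    elseif labindex == 2
        data = labReceive(1);    % receive from worker 1
    end
end
% Back on the controller: data is a Composite; index it by worker.
disp(data{2})
```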
If you use parfor then the workers cannot directly communicate with each other. You can create a parallel data queue to send results from the workers back to the controller, and it is possible (but usually awkward) to have the workers create pollable data queues and send those back to the controller, which then permits the controller to send data to the workers. Even then the workers cannot talk to each other directly: one worker would have to send data to the controller, and the controller would have to forward it to the other worker. Normal execution in the controller does not resume until the parfor has completely finished on all workers.
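The common worker-to-controller direction can be sketched with a parallel.pool.DataQueue; the callback runs on the controller each time a worker calls send():

```matlab
% Worker-to-controller messaging during a parfor.
q = parallel.pool.DataQueue;
afterEach(q, @(x) fprintf('received %g\n', x));  % runs on the controller

parfor k = 1:4
    send(q, k^2);   % each worker pushes an intermediate result back
end
```

The reverse (controller-to-worker) direction is the awkward one: each worker would have to construct a parallel.pool.PollableDataQueue, send it back over a queue like the one above, and then poll it, which is rarely worth the trouble inside a parfor.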
If you use parfeval or parfevalOnAll, execution is asynchronous: the controller can continue on while a worker processes the task. However, it becomes awkward to communicate with the workers during execution.
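The asynchronous pattern looks like this; magic(3) is just a stand-in for the real task:

```matlab
pool = gcp();                        % current pool (starts one if needed)
f = parfeval(pool, @magic, 1, 3);    % run magic(3) on a worker; 1 output

% ... the controller is free to do other work here ...

M = fetchOutputs(f);                 % blocks until the result is ready
```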
A lot of the time, the overhead of sending data to the workers, getting results back, and any communication in between adds up to make parallel processing slower overall. Also, if the individual tasks involve heavy mathematical calculation, then unless you allocate several cores to each worker, the workers default to running on a single core each, and so cannot take advantage of the high-performance multithreaded built-in operations, such as matrix multiplication that is tuned to be cache-friendly.
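If you do want multithreaded workers, one way (assuming your release supports the NumThreads cluster property and you have enough physical cores, e.g. 8 or more for the numbers below) is to set it on the cluster profile before creating the pool:

```matlab
% Give each worker several computational threads so multithreaded
% built-ins (e.g. large matrix multiply) can use them.
c = parcluster('Processes');   % local process-based cluster profile
c.NumThreads = 4;              % assumption: at least 8 physical cores
pool = parpool(c, 2);          % two workers x four threads each
```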
Only one worker at a time can use a GPU, and a worker can only use one GPU at a time. If you have only one GPU and both workers tried to access it, MATLAB would need to keep reassigning the device from one worker to the other, forcing a full state synchronization each time, which is one of the most expensive GPU operations.
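If you do have one GPU per worker, you can avoid that contention by pinning each worker to its own device, for example (a sketch assuming at least two GPUs are present):

```matlab
spmd
    gpuDevice(labindex);            % worker 1 -> GPU 1, worker 2 -> GPU 2
    A = rand(1000, 'gpuArray');     % computation stays on that worker's GPU
    s = gather(sum(A, 'all'));      % bring the scalar result back to the CPU
end
```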