Channel: Intel® Many Integrated Core Architecture

multiple asynchronous computational kernel offload launches


In Nvidia CUDA, the Kepler GPU can support concurrent execution of 16 kernels. One can use multiple streams to feed kernels to the GPU and let the hardware schedule the work. One can also use the grid and block dimensions of the kernel launch to roughly control how much of the device a kernel uses. For example, a [1x1x1] grid of [512x1x1] threads makes the kernel execute on a single SMX unit.
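For concreteness, here is a minimal host-side sketch of that CUDA pattern (hypothetical device pointers dA, dB, dC, and assuming nstreams <= 16): each SGEMM is enqueued on its own stream, and the hardware is free to overlap them.

#include <cuda_runtime.h>
#include <cublas_v2.h>

void concurrent_sgemms(float *dA[], float *dB[], float *dC[], int n, int nstreams)
{
    cudaStream_t   streams[16];
    cublasHandle_t handles[16];
    float alpha = 1.0f, beta = 0.0f;

    for (int i = 0; i < nstreams; ++i) {
        cudaStreamCreate(&streams[i]);
        cublasCreate(&handles[i]);
        cublasSetStream(handles[i], streams[i]);   /* bind each handle to its own stream */
    }

    /* each SGEMM is enqueued asynchronously on its own stream */
    for (int i = 0; i < nstreams; ++i)
        cublasSgemm(handles[i], CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA[i], n, dB[i], n, &beta, dC[i], n);

    cudaDeviceSynchronize();                       /* wait for all streams to finish */

    for (int i = 0; i < nstreams; ++i) {
        cublasDestroy(handles[i]);
        cudaStreamDestroy(streams[i]);
    }
}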



I am wondering whether, and how, one might achieve something similar on the MIC.



Can multiple host threads issue separate non-blocking, asynchronous offload commands, each with its own signal, to run code or to call MKL BLAS on the coprocessor?
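As a point of reference, here is a minimal sketch (placeholder matrix names and sizes) of what I understand a single non-blocking offload with a signal to look like, using the compiler-assisted offload pragmas with an MKL dgemm inside the offloaded region:

#include <mkl.h>

void dgemm_on_mic(double *A, double *B, double *C, int n)
{
    char sig;   /* the address of this variable serves as the signal tag */

    /* enqueue the dgemm on the coprocessor and return to the host immediately */
    #pragma offload target(mic:0) signal(&sig) \
            in(A, B : length(n*n)) out(C : length(n*n))
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    }

    /* ... the host thread is free to do other work here ... */

    /* block until the offload tagged with &sig has completed */
    #pragma offload_wait target(mic:0) wait(&sig)
}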

Is there a way to control how many computing resources an offloaded MKL BLAS call uses? For simplicity, assume the data is already resident on the MIC.
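The only knob I am aware of is the OpenMP/MKL thread count seen by the offloaded region, e.g. something like the sketch below (20 threads is just an example, and the nocopy clause assumes the matrices were kept resident by an earlier offload with free_if(0)). Whether this actually confines the work to a particular set of cores, rather than merely limiting the number of threads, is part of my question.

#include <omp.h>
#include <mkl.h>

void dgemm_resident(double *A, double *B, double *C, int n)
{
    /* data assumed already on the coprocessor; reuse it without copying,
       and leave the result C resident there as well */
    #pragma offload target(mic:0) \
            nocopy(A, B, C : length(n*n) alloc_if(0) free_if(0))
    {
        /* limit the OpenMP team used by this BLAS call to 20 threads;
           mkl_set_num_threads_local(20) would be an alternative */
        omp_set_num_threads(20);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    }
}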



If each offload command already uses all 240 virtual cores, then there may not be much performance gain from issuing multiple asynchronous offload commands.

 

The idea is that there may be many matrix operations, but each matrix block is not large enough on its own to saturate the MIC. If we can issue multiple concurrent non-blocking offload operations, that may be a way to make effective use of the MIC.

This approach might also be relevant to mapping a computation whose dependencies are described by a Directed Acyclic Graph (DAG) onto the MIC.

 

 

For example, launch 12 concurrent threads on the host and have each thread perform asynchronous offload launches of MKL BLAS, each with a different signal and each using 20 (separate) hyper-threads on the MIC.

 

 

Certainly we do NOT want all 12 concurrent threads to use the SAME 20 hyper-threads. Perhaps the runtime system on the MIC will "do the right thing" and schedule the work on idle or available cores when separate asynchronous offload launches with signals are issued?
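Here is a rough sketch of the scheme I have in mind: 12 host OpenMP threads, each issuing its own signalled offload and requesting 20 threads inside its offload region (block sizes and names are placeholders). How, or whether, each offload's 20 threads can be pinned to a distinct set of MIC cores is exactly the open question.

#include <omp.h>
#include <mkl.h>

#define NTASKS 12   /* host threads, each driving its own offload */

void run_blocks(double *A[], double *B[], double *C[], int n)
{
    char sig[NTASKS];   /* one signal tag per host thread */

    #pragma omp parallel num_threads(NTASKS)
    {
        int t = omp_get_thread_num();
        double *a = A[t], *b = B[t], *c = C[t];

        /* non-blocking launch from this host thread */
        #pragma offload target(mic:0) signal(&sig[t]) \
                in(a, b : length(n*n)) out(c : length(n*n))
        {
            /* ask for ~20 MIC threads for this dgemm; which cores they
               land on is up to the coprocessor-side runtime */
            omp_set_num_threads(20);
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        n, n, n, 1.0, a, n, b, n, 0.0, c, n);
        }

        /* ... other host-side work could overlap here ... */

        #pragma offload_wait target(mic:0) wait(&sig[t])
    }
}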

 

In the forum (see the link below) there is an example of using multiple "-env MIC_KMP_AFFINITY" options on the mpiexec command line to associate or "pin" different MPI ranks to specific cores on the MIC, but it is not clear to me how to achieve something similar with host threads.

 

https://software.intel.com/en-us/forums/topic/360754

 

 

