Hello,
I'm doing some financial computations (Monte Carlo, a massively parallel algorithm) as a benchmark case, and I wanted to analyze the potential difference in computation time between single and double precision. My problem is that I don't observe any difference at all between float and double. So my question is: should there be a real difference, or am I just doing something wrong?
The computations run both on the host server and on the Xeon Phi, and I get the same result in terms of performance: float ~ double. The loops are vectorized and parallelized with OpenMP, and they operate over large aligned arrays (a few arithmetic operations plus one exp per iteration). A simplified sketch of the kind of kernel I'm timing is below.
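To make the question concrete, here is a stripped-down sketch of the kind of loop I am comparing. The array names, the formula, and the build line are placeholders rather than my actual Monte Carlo code, but the structure is the same: one parallel, vectorized loop over 64-byte-aligned arrays with a few arithmetic operations and one exp per iteration, timed once in float and once in double.

```c
/*
 * Simplified sketch, not my real kernel: placeholder names and formula.
 * Host build is roughly: icc -std=c99 -O3 -qopenmp sketch.c
 */
#define _POSIX_C_SOURCE 200112L
#include <math.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 24)   /* large arrays, as in the benchmark */

/* 64-byte aligned allocation, as in the real code. */
static void *xalloc(size_t bytes)
{
    void *p = NULL;
    if (posix_memalign(&p, 64, bytes)) {
        fprintf(stderr, "allocation failed\n");
        exit(1);
    }
    return p;
}

/* Single precision: a few arithmetic ops + one expf per iteration. */
static void kernel_float(const float *restrict in, float *restrict out,
                         float a, float b)
{
    #pragma omp parallel for simd aligned(in, out : 64)
    for (long i = 0; i < N; ++i)
        out[i] = a * expf(b * in[i]) + in[i];
}

/* Double precision: the same loop body, with exp instead of expf. */
static void kernel_double(const double *restrict in, double *restrict out,
                          double a, double b)
{
    #pragma omp parallel for simd aligned(in, out : 64)
    for (long i = 0; i < N; ++i)
        out[i] = a * exp(b * in[i]) + in[i];
}

int main(void)
{
    float  *fin  = xalloc(N * sizeof *fin);
    float  *fout = xalloc(N * sizeof *fout);
    double *din  = xalloc(N * sizeof *din);
    double *dout = xalloc(N * sizeof *dout);

    for (long i = 0; i < N; ++i) {
        fin[i] = i * 1e-6f;
        din[i] = i * 1e-6;
    }

    double t0 = omp_get_wtime();
    kernel_float(fin, fout, 0.5f, -1.0f);
    double t1 = omp_get_wtime();
    kernel_double(din, dout, 0.5, -1.0);
    double t2 = omp_get_wtime();

    printf("float: %.3f s   double: %.3f s\n", t1 - t0, t2 - t1);

    free(fin); free(fout); free(din); free(dout);
    return 0;
}
```

With this kind of kernel I would naively expect the float version to run faster (twice the SIMD lanes per vector, half the memory traffic), which is why the float ~ double result surprises me.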
I'm using the Intel compiler v15 update 1 and MPSS 3.2.1.
Thanks