I'm working on a statistical application that has an array containing approximately 10 to 30 million floating point values.
Several methods perform different, but independent, calculations on the array in nested loops, for example:
Dictionary<float, int> noOfNumbers = new Dictionary<float, int>();

for (float x = 0f; x < 100f; x += 0.0001f)
{
    int noOfOccurrences = 0;

    foreach (float y in largeFloatingPointArray)
    {
        if (x == y)
        {
            noOfOccurrences++;
        }
    }

    noOfNumbers.Add(x, noOfOccurrences);
}
The current application is written in C#, runs on an Intel CPU and needs several hours to complete. I have no knowledge of GPU programming concepts and APIs, so my questions are:
Using the CUDA Toolkit you can accelerate your C or C++ applications by updating the computationally intensive portions of your code to run on GPUs. To accelerate your applications, you can call functions from drop-in libraries as well as develop custom applications using languages including C, C++, Fortran and Python.
It is technically possible to do that. However, GPUs are designed to run many similar calculations in parallel, and since compiling a program is typically more of a serial process than a parallel one, the GPU would likely not be an efficient use of hardware.
The GPU has no direct access to any memory that is mapped by the OS to be accessed from client code (i.e. code executed in user mode, with the instructions running on the CPU). In addition, the GPU is not meant to perform work like this; it is aimed at performing floating point arithmetic at high speed.
Thus, running a Python script on the GPU can prove to be comparatively faster than on the CPU. However, note that to process a data set on the GPU, the data first has to be transferred to the GPU's memory, which takes additional time, so if the data set is small the CPU may perform better than the GPU.
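To make that transfer cost concrete, here is a minimal sketch (mine, not from the answers above; names like h_data/d_data are just illustrative) that times the host-to-device copy of an array of that size with CUDA events. Compile it with nvcc.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const int size = 30 * 1000 * 1000;         // ~30 million floats, about 114 MB
    float *h_data = new float[size];           // host copy (fill with your data)
    float *d_data = NULL;

    cudaMalloc(&d_data, size * sizeof(float)); // allocate GPU global memory

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(d_data, h_data, size * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host -> device copy of %d floats took %.2f ms\n", size, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    delete[] h_data;
    return 0;
}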
UPDATE GPU Version
__global__ void hash(float *largeFloatingPointArray, int largeFloatingPointArraySize,
                     int *dictionary, int size, int num_blocks)
{
    int x = (threadIdx.x + blockIdx.x * blockDim.x); // Each thread of each block will
    float y;                                         // compute one (or more) dictionary entries
    int noOfOccurrences = 0;
    int a;

    while( x < size )              // While there is work to do, each thread will:
    {
        dictionary[x] = 0;         // Initialize the position it will work on
        noOfOccurrences = 0;

        for(int j = 0; j < largeFloatingPointArraySize; j++) // Search for floats
        {                                                    // that map to this entry
            y = largeFloatingPointArray[j]; // Take a candidate from the floats array
            y *= 10000;                     // e.g. if y = 0.0001f, y becomes 1
            a = y + 0.5;                    // round to the nearest integer index

            if (a == x)
                noOfOccurrences++;
        }

        dictionary[x] += noOfOccurrences; // Update in the dictionary the number of
                                          // times that the float appears

        x += blockDim.x * gridDim.x;      // Move to the next position this thread will work on
    }
}
I have only tested this one with smaller inputs, because I am testing on my laptop. Nevertheless, it is working; more tests are needed though.
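For completeness, here is a minimal host-side driver for the kernel above (my own sketch, not the author's code). It assumes h_data already holds your floats on the host, h_dictionary has room for 1,000,000 ints, and the function lives in the same .cu file as the kernel; the block/grid sizes are plausible starting values, not tuned ones, and error checking is omitted.

void run_hash(float *h_data, int largeFloatingPointArraySize, int *h_dictionary)
{
    const int dictionarySize = 1000000;   // one slot per 0.0001 step in [0, 100)

    float *d_data;
    int *d_dictionary;

    cudaMalloc(&d_data, largeFloatingPointArraySize * sizeof(float));
    cudaMalloc(&d_dictionary, dictionarySize * sizeof(int));

    cudaMemcpy(d_data, h_data, largeFloatingPointArraySize * sizeof(float),
               cudaMemcpyHostToDevice);

    int threadsPerBlock = 256;
    int numBlocks = 128;   // each thread then strides over several dictionary slots

    hash<<<numBlocks, threadsPerBlock>>>(d_data, largeFloatingPointArraySize,
                                         d_dictionary, dictionarySize, numBlocks);

    cudaMemcpy(h_dictionary, d_dictionary, dictionarySize * sizeof(int),
               cudaMemcpyDeviceToHost);

    cudaFree(d_data);
    cudaFree(d_dictionary);
}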
UPDATE Sequential Version
I just did this naive version that executes your algorithm for an array with 30,000,000 elements in less than 20 seconds (including the time taken by the function that generates the data).
This naive version first sorts your array of floats. Afterwards, it goes through the sorted array, counts the number of times a given value appears, and then puts that value in a dictionary along with its count.
You can use a sorted map instead of the unordered_map that I used.
Here's the code:
#include <stdio.h>
#include <stdlib.h>
#include "cuda.h"
#include <algorithm>
#include <string>
#include <iostream>
#include <tr1/unordered_map>

typedef std::tr1::unordered_map<float, int> Mymap;

void generator(float *data, long int size)
{
    float LO = 0.0;
    float HI = 100.0;

    for(long int i = 0; i < size; i++)
        data[i] = LO + (float)rand()/((float)RAND_MAX/(HI-LO));
}

void print_array(float *data, long int size)
{
    for(long int i = 2; i < size; i++)
        printf("%f\n",data[i]);
}

std::tr1::unordered_map<float, int> fill_dict(float *data, int size)
{
    float previous = data[0];
    int count = 1;
    std::tr1::unordered_map<float, int> dict;

    for(long int i = 1; i < size; i++)
    {
        if(previous == data[i])
            count++;
        else
        {
            dict.insert(Mymap::value_type(previous,count));
            previous = data[i];
            count = 1;
        }
    }
    dict.insert(Mymap::value_type(previous,count)); // add the last member

    return dict;
}

void printMAP(std::tr1::unordered_map<float, int> dict)
{
    for(std::tr1::unordered_map<float, int>::iterator i = dict.begin(); i != dict.end(); i++)
    {
        std::cout << "key(string): " << i->first << ", value(int): " << i->second << std::endl;
    }
}

int main(int argc, char** argv)
{
    int size = 1000000;

    if(argc > 1) size = atoi(argv[1]);
    printf("Size = %d\n",size);

    // Allocate on the heap: 30 million floats would overflow the stack as a local array
    float *data = new float[size];

    std::tr1::unordered_map<float, int> dict;

    generator(data,size);
    std::sort(data, data + size);
    dict = fill_dict(data,size);

    delete[] data;
    return 0;
}
If you have the Thrust library installed on your machine, you should use this:
#include <thrust/sort.h>

thrust::sort(data, data + size);
instead of this:

std::sort(data, data + size);

It will surely be faster.
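If you want the sort itself to run on the GPU rather than through host pointers, here is a sketch using thrust::device_vector (assuming the same data and size variables as in the code above):

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>

thrust::device_vector<float> d_data(data, data + size); // copy the floats to the GPU
thrust::sort(d_data.begin(), d_data.end());             // sort on the device
thrust::copy(d_data.begin(), d_data.end(), data);       // copy the sorted result back to the host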
Original Post
I'm working on a statistical application which has a large array containing 10 to 30 million floating point values.
Is it possible (and does it make sense) to utilize a GPU to speed up such calculations?
Yes, it is. A month ago, I ran an entire Molecular Dynamics simulation on a GPU. One of the kernels, which calculated the force between pairs of particles, received as parameters 6 arrays, each with 500,000 doubles, for a total of 3 million doubles (22 MB).
So if you are planning to put 30 million floating point values there, which is about 114 MB of global memory, it will not be a problem.
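If you want to check that figure against your own card, here is a quick sketch (mine, with no error handling) using cudaMemGetInfo, which reports the free and total global memory in bytes:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);

    size_t needed = 30UL * 1000 * 1000 * sizeof(float);  // ~114 MB for 30 million floats

    printf("Free: %zu MB, total: %zu MB, needed: %zu MB\n",
           freeBytes >> 20, totalBytes >> 20, needed >> 20);
    return 0;
}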
In your case, can the number of calculations be an issue? Based on my experience with Molecular Dynamics (MD), I would say no. The sequential MD version takes about 25 hours to complete, while the GPU version took 45 minutes. You said your application took a couple of hours; also, based on your code example, it looks lighter than the MD.
Here's the force calculation example:
__global__ void add(double *fx, double *fy, double *fz,
                    double *x, double *y, double *z,...){

    int pos = (threadIdx.x + blockIdx.x * blockDim.x);

    ...

    while(pos < particles)
    {
        for (i = 0; i < particles; i++)
        {
            if( /* inside of the same radius */ )
            {
                // calculate force
            }
        }
        pos += blockDim.x * gridDim.x;
    }
}
A simple example of CUDA code could be the sum of two arrays:
In C:
for(int i = 0; i < N; i++)
    c[i] = a[i] + b[i];
In CUDA:
__global__ void add(int *c, int *a, int *b, int N)
{
    int pos = threadIdx.x + blockIdx.x * blockDim.x;

    for (; pos < N; pos += blockDim.x * gridDim.x)
        c[pos] = a[pos] + b[pos];
}
In CUDA you basically take each for iteration and assign it to a thread:

1) threadIdx.x + blockIdx.x * blockDim.x;

Each block has an ID from 0 to B-1 (B being the number of blocks in the grid), and each block has X threads, with IDs from 0 to X-1.

Each thread computes its position from its thread ID and the ID of the block it belongs to; blockDim.x is the number of threads a block has. So if you have 2 blocks, each one with 10 threads, and N = 40, then:
Thread 0 Block 0 will execute pos 0
Thread 1 Block 0 will execute pos 1
...
Thread 9 Block 0 will execute pos 9
Thread 0 Block 1 will execute pos 10
...
Thread 9 Block 1 will execute pos 19
Thread 0 Block 0 will execute pos 20
...
Thread 0 Block 1 will execute pos 30
Thread 9 Block 1 will execute pos 39
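You can see that mapping on a real device with a throwaway kernel like this one (my addition, just to illustrate the table above). Launched as mapping<<<2, 10>>>(40), each thread prints the positions it handles, although the print order is not guaranteed:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void mapping(int N)
{
    int pos = threadIdx.x + blockIdx.x * blockDim.x;

    for (; pos < N; pos += blockDim.x * gridDim.x)
        printf("Thread %d Block %d will execute pos %d\n",
               threadIdx.x, blockIdx.x, pos);
}

int main()
{
    mapping<<<2, 10>>>(40);
    cudaDeviceSynchronize();  // flush the device-side printf output
    return 0;
}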
Looking at your current code, I have made this draft of what your code could look like in CUDA:
__global__ void hash(float *largeFloatingPointArray, int largeFloatingPointArraySize,
                     int *dictionary)
{
    // You can turn the dictionary into one array of int:
    // here each position will represent a float.
    // Since x = 0f; x < 100f; x += 0.0001f,
    // you can associate each x with a different position
    // in the dictionary:
    // pos 0 has the same meaning as 0f,
    // pos 1 means the float 0.0001f,
    // pos 2 means the float 0.0002f, etc.
    // Then you use the int at each position
    // to count how many times that "float" has appeared.

    int x = blockIdx.x;   // Each block will take a different x to work on
    float y;

    while( x < 1000000 )  // x < 100f (for an incremental step of 0.0001f)
    {
        int noOfOccurrences = 0;
        float z = converting_int_to_float(x); // This function will convert the index x to
                                              // the float it stands for (x * 0.0001)

        // Each thread of each block
        // will take a y from largeFloatingPointArray
        for(int j = threadIdx.x; j < largeFloatingPointArraySize; j += blockDim.x)
        {
            y = largeFloatingPointArray[j];

            if (z == y)
            {
                noOfOccurrences++;
            }
        }

        // Every thread adds its own count to the shared entry
        atomicAdd(&dictionary[x], noOfOccurrences);
        __syncthreads();

        x += gridDim.x;   // This block moves on to its next x
    }
}
You have to use atomicAdd because the threads of a block all add their counts to the same dictionary[x] entry concurrently, so you have to ensure mutual exclusion.
This is just one approach; you can even assign the iterations of the outer loop to the threads instead of the blocks.
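Another variation, sketched below on my own initiative (not from the original answer): keep the block-per-x layout but reduce the per-thread counts in shared memory first, so each block issues a single atomicAdd instead of one per thread. It assumes the kernel is launched with BLOCK_SIZE threads per block (a power of two) and that dictionary was zeroed beforehand, e.g. with cudaMemset.

#define BLOCK_SIZE 256

__global__ void hash_reduced(float *largeFloatingPointArray,
                             int largeFloatingPointArraySize,
                             int *dictionary)
{
    __shared__ int counts[BLOCK_SIZE];

    int x = blockIdx.x;        // each block takes one (or more) x values
    while (x < 1000000)        // 100f / 0.0001f slots
    {
        int noOfOccurrences = 0;

        for (int j = threadIdx.x; j < largeFloatingPointArraySize; j += blockDim.x)
        {
            int a = (int)(largeFloatingPointArray[j] * 10000.0f + 0.5f);
            if (a == x)
                noOfOccurrences++;
        }

        counts[threadIdx.x] = noOfOccurrences;
        __syncthreads();

        // Tree reduction in shared memory
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2)
        {
            if (threadIdx.x < stride)
                counts[threadIdx.x] += counts[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0)
            atomicAdd(&dictionary[x], counts[0]);  // one atomic per block per x
            // (strictly, only this block ever touches dictionary[x] here,
            //  so a plain write would also work)

        x += gridDim.x;
        __syncthreads();
    }
}

// Launch, e.g.: hash_reduced<<<128, BLOCK_SIZE>>>(d_data, size, d_dictionary);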
Tutorials
The Dr. Dobb's Journal series CUDA: Supercomputing for the Masses by Rob Farber is excellent and covers just about everything in its fourteen installments. It also starts rather gently and is therefore fairly beginner-friendly.
and others:
Take a look at the last item; you will find many links for learning CUDA.
OpenCL: OpenCL Tutorials | MacResearch
I don't know much of anything about parallel processing or GPGPU, but for this specific example, you could save a lot of time by making a single pass over the input array rather than looping over it a million times. With large data sets you will usually want to do things in a single pass if possible. Even if you're doing multiple independent computations, if it's over the same data set you might get better speed doing them all in the same pass, as you'll get better locality of reference that way. But it may not be worth it for the increased complexity in your code.
In addition, you really don't want to repeatedly add a small amount to a floating point number like that; the rounding error will add up and you won't get what you intended. I've added an if statement to my sample below to check whether inputs match your pattern of iteration, but omit it if you don't actually need that.
I don't know any C#, but a single pass implementation of your sample would look something like this:
Dictionary<float, int> noOfNumbers = new Dictionary<float, int>();

foreach (float x in largeFloatingPointArray)
{
    if (Math.Truncate(x / 0.0001f) * 0.0001f == x)
    {
        if (noOfNumbers.ContainsKey(x))
            noOfNumbers[x] = noOfNumbers[x] + 1;
        else
            noOfNumbers.Add(x, 1);
    }
}
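As a side note to the rounding-error warning above, here is a tiny standalone check (plain C++, only to illustrate the numeric behaviour, not part of the C# answer): accumulating 0.0001f a million times drifts visibly away from 100, while computing the value with a single multiplication stays within one float ulp of it.

#include <cstdio>

int main()
{
    float accumulated = 0.0f;
    for (int i = 0; i < 1000000; i++)
        accumulated += 0.0001f;        // rounding error piles up on every addition

    float scaled = 1000000 * 0.0001f;  // one rounding step, no accumulation

    printf("accumulated = %f\n", accumulated); // noticeably off from 100
    printf("scaled      = %f\n", scaled);      // within one ulp of 100
    return 0;
}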
Hope this helps.