Stochastic GPU-based Multithread Implementation of Multiple Back-Propagation



Graphics Processing Units (GPUs) have evolved into highly parallel, multi-threaded, many-core processors with enormous computational power. The GPU is especially well suited to pattern recognition problems that can be expressed as data-parallel computations. It therefore provides a viable alternative to dedicated hardware in the neural network (NN) field, where long training times have always been a major drawback. In this paper, we propose a GPU implementation of the online (stochastic) training mode of the Multiple Back-Propagation (MBP) algorithm and compare it with the corresponding standalone CPU version and with the batch training mode GPU implementation. For a fair and unbiased comparison, we run experiments on benchmarks from the machine learning and pattern recognition fields and show that the GPU outperforms the CPU, particularly on highly complex problems.
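The distinction between the two training modes compared in the paper can be illustrated with a minimal sketch. This is a hypothetical single-neuron example, not the paper's MBP or GPU implementation: in online (stochastic) mode, weights are updated after each training pattern; in batch mode, gradients are accumulated over the whole training set and applied once per epoch.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): online vs. batch
# gradient-descent training of one sigmoid neuron on a toy problem.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_online(X, y, w, lr=1.0, epochs=500):
    """Online (stochastic) mode: update weights after every pattern."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            out = sigmoid(xi @ w)
            grad = (out - yi) * out * (1.0 - out) * xi
            w -= lr * grad            # immediate per-pattern update
    return w

def train_batch(X, y, w, lr=1.0, epochs=500):
    """Batch mode: accumulate gradients, one update per epoch."""
    w = w.copy()
    for _ in range(epochs):
        out = sigmoid(X @ w)
        grad = ((out - y) * out * (1.0 - out))[:, None] * X
        w -= lr * grad.sum(axis=0)    # single update per epoch
    return w

# Toy AND problem; last input column is a constant bias term.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])
w0 = np.zeros(3)

w_online = train_online(X, y, w0)
w_batch = train_batch(X, y, w0)
print(sigmoid(X @ w_online))  # predictions after online training
```

Batch mode maps naturally onto GPU data parallelism (all patterns processed at once), while the per-pattern dependency of online mode is what makes its GPU implementation the more challenging case studied here.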


GPU Computing, Parallel Programming, Neural Networks




Second International Conference on Agents and Artificial Intelligence (ICAART 2010), pp. 271-276, January 2010
