CUDA Installation Guide for Microsoft Windows

The installation instructions for the CUDA Toolkit on Microsoft Windows systems.

Introduction

CUDA® is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

CUDA was developed with several design goals in mind:

- Provide a small set of extensions to standard programming languages, like C, that enable a straightforward implementation of parallel algorithms. With CUDA C/C++, programmers can focus on the task of parallelization of the algorithms rather than spending time on their implementation.

- Support heterogeneous computation where applications use both the CPU and GPU. Serial portions of applications are run on the CPU, and parallel portions are offloaded to the GPU. As such, CUDA can be incrementally applied to existing applications. The CPU and GPU are treated as separate devices that have their own memory spaces. A brief code sketch below illustrates this workflow.
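To make these goals concrete, the following is a minimal illustrative sketch, not taken from this guide: the kernel name vecAdd, the array size, and the launch configuration are assumptions chosen for the example. The serial setup runs on the CPU in ordinary C, the parallel portion is offloaded to the GPU through the __global__ qualifier and the <<<...>>> launch syntax, and data is copied explicitly between the separate host and device memory spaces.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parallel portion: runs on the GPU. __global__ is one of the small
       set of extensions CUDA adds to standard C. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;              /* illustrative problem size */
        size_t bytes = n * sizeof(float);

        /* Serial portion: ordinary C code running on the CPU. */
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        /* The GPU has its own memory space, so device buffers are
           allocated separately and data is copied across explicitly. */
        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        /* Offload the parallel portion to the GPU. */
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

        /* Copy the result back into host memory. */
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

A program like this is compiled with nvcc, the compiler driver that ships with the CUDA Toolkit installed by this guide.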