A Comprehensive Guide
The Central Processing Unit (CPU) handles all the tasks required for the software on a computer or server to run correctly. A Graphics Processing Unit (GPU), on the other hand, offloads work from the CPU by performing many calculations concurrently. A GPU can complete simple, repetitive tasks much faster because it breaks each task into smaller pieces and finishes them in parallel. GPUs were initially designed to process images, video game graphics, and other visual data. General-Purpose Graphics Processing Units (GPGPUs) have since been adopted to accelerate other computational workloads, such as transformer models and deep learning. More recently, AI workloads have driven the addition of GPU tensor cores, which achieve significantly higher throughput than traditional cores. The course comprises over 160 informative slides and several programming exercises using NVIDIA CUDA, the parallel computing platform and application programming interface (API) that allows software developers to use GPGPUs for general-purpose processing.
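To make the idea concrete, splitting a repetitive task into many small pieces that run in parallel is exactly what a CUDA kernel expresses. The sketch below is not taken from the course materials; the `vectorAdd` kernel, array size, and launch configuration are illustrative choices. It adds two large vectors element-wise, with one GPU thread per element, and assumes a standard CUDA toolkit installation.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: each thread handles one element, so the repetitive
// task (element-wise addition) is broken into many pieces that run in parallel.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float* h_a = (float*)malloc(bytes);
    float* h_b = (float*)malloc(bytes);
    float* h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) buffers, plus host-to-device copies of the inputs.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check one value (expected: 3.0).
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with `nvcc vector_add.cu -o vector_add`, the work that a single CPU core would do in a million-iteration loop instead runs as roughly 4,096 blocks of 256 GPU threads.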
What you’ll learn
- Study GPGPU internal architecture.
- Review scientific problems GPGPUs solve well.
- Understand the graphics pipeline and the steps to construct a scene.
- Study how GPGPUs are applied to neural networks and video decoding.
- Learn GPGPU memory structure and optimization techniques.
- Learn the principles of parallelizing practical algorithm implementations.
- Be able to write C/C++, Fortran, and MATLAB simulation code that executes on a CUDA GPGPU for a specific application.
- Be cognizant of CUDA GPGPU programming quirks.
Course Content
- Introduction –> 1 lecture • 12min.
- Graphics Pipeline and GPU Internals –> 1 lecture • 19min.
- GPGPU Applications –> 1 lecture • 14min.
- NVIDIA CUDA and GPU Kernels –> 1 lecture • 10min.
- GPU Memory Structure –> 1 lecture • 5min.
- CUDA Programming Languages –> 1 lecture • 16min.
- GPU Optimization –> 1 lecture • 8min.
- Parallelization Techniques and Programming Exercises –> 1 lecture • 2min.
About the Instructor
The instructor holds six U.S. video-technology patents that have been licensed to industry and received the National Association of Broadcasters Technology Innovation Award for demonstrations of advanced media technologies. He has taught many custom video courses for Comcast/NBC, Qualcomm, Motorola, universities, and the IEEE, and has served as an expert witness on cable TV and video streaming.