In this tutorial, I’ll show you everything you need to know about CUDA programming so that you can take advantage of GPU parallelization through simple modifications of your existing code that currently runs on a plain CPU. The tutorial was recorded on NVIDIA’s Jetson Orin supercomputer. CUDA stands for Compute Unified Device Architecture; it is a parallel computing platform and application programming interface that lets software use certain types of graphics processing units for general-purpose processing, an approach known as general-purpose computing on GPUs (GPGPU).
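Before diving in, here is a minimal sketch (not shown in the video) of how you can check from Python that CUDA sees your GPU. I am assuming the Numba library for GPU access, as in the sketches further down this description; if you use a different CUDA binding, the check will look different.

```python
# Minimal check that a CUDA-capable GPU is visible from Python.
# Assumes the Numba library (an assumption, not stated in the video).
from numba import cuda

if cuda.is_available():
    cuda.detect()   # prints the CUDA devices Numba can find
else:
    print("No CUDA-capable GPU detected; the GPU examples will not run.")
```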
First, I will start by writing a simple function that performs an element-wise vector multiplication on a CPU. Then we get the same job done using CUDA parallelization on a GPU. Keep in mind that GPUs have far more cores than CPUs, so when it comes to parallel computation over large amounts of data, GPUs perform exceptionally better, even though they have lower clock speeds and lack several of the core-management features of CPUs. In the example, running 64 million multiplications takes about 31.4 seconds on the CPU, while the GPU version finishes roughly 50x faster, thanks to parallelization across such a huge number of cores. Amazing! This means that a complex program that takes about a month on a CPU could be executed in about 14 hours, and it could be even faster with more cores.
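As a rough sketch of what that benchmark looks like in code, here is a hedged example using NumPy and Numba (my assumption for the CUDA binding; the exact code from the video may differ). The array size of 64 million comes from the text; the timings you get will depend on your hardware.

```python
# Sketch of the CPU-vs-GPU element-wise multiplication benchmark.
# Assumes Numba for CUDA access; names and block sizes are illustrative.
import numpy as np
from numba import cuda
from timeit import default_timer as timer

N = 64_000_000

def multiply_cpu(a, b, out):
    # Plain sequential loop on the CPU.
    for i in range(a.size):
        out[i] = a[i] * b[i]

@cuda.jit
def multiply_gpu(a, b, out):
    i = cuda.grid(1)        # global thread index
    if i < a.size:          # guard against threads past the end of the array
        out[i] = a[i] * b[i]

a = np.random.rand(N).astype(np.float32)
b = np.random.rand(N).astype(np.float32)
out = np.zeros_like(a)

start = timer()
multiply_cpu(a, b, out)
print("CPU time:", timer() - start, "s")

threads_per_block = 256
blocks_per_grid = (N + threads_per_block - 1) // threads_per_block

start = timer()
multiply_gpu[blocks_per_grid, threads_per_block](a, b, out)
cuda.synchronize()          # wait for the kernel to finish before stopping the timer
print("GPU time:", timer() - start, "s")
```
Note that the first GPU call also pays a one-off kernel-compilation and data-transfer cost, so the speed-up is best measured on a second run.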
Then, I’ll show you the gains when filling arrays in Python on a CPU versus on a GPU; once again, the GPU version completes in a fraction of the CPU time. The last fundamental section of this video shows the gains in rendering images (or videos) in Python, and demonstrates why some film producers and movie makers render and edit their content on a GPU. GPU rendering does the work on a graphics card rather than a CPU, which can substantially speed up the rendering process because GPUs are primarily built for fast image rendering; in fact, GPUs were developed in response to graphically intensive applications that taxed CPUs and slowed processing. I use the Mandelbrot set to compare CPU and GPU power. This example needs only 1.4 seconds of execution on the GPU as opposed to 110 seconds on the CPU, a 78x gain. This simply means that instead of rendering a 4K-resolution video over a week on a CPU, you could get the same video in 8K resolution rendered in 2 hours on a GPU, using 32 threads. So imagine what happens if you double the threads and blocks involved in the GPU optimization.
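For reference, here is a hedged sketch of a Mandelbrot kernel in Python with Numba’s CUDA support (again my assumption for the library; the image size, iteration limit and thread/block layout below are illustrative, not the values used in the video).

```python
# Sketch of rendering the Mandelbrot set on the GPU with a Numba CUDA kernel.
# Image size, iteration limit and block layout are illustrative choices.
import numpy as np
from numba import cuda

@cuda.jit(device=True)
def mandel(x, y, max_iters):
    # Escape-time iteration count for c = x + iy.
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4.0:
            return i
    return max_iters

@cuda.jit
def mandel_kernel(min_x, max_x, min_y, max_y, image, max_iters):
    height, width = image.shape
    x, y = cuda.grid(2)                     # 2-D grid: one thread per pixel
    if x < width and y < height:
        real = min_x + x * (max_x - min_x) / width
        imag = min_y + y * (max_y - min_y) / height
        image[y, x] = mandel(real, imag, max_iters)

image = np.zeros((1024, 1536), dtype=np.uint32)
threads_per_block = (16, 16)
blocks_per_grid = ((image.shape[1] + 15) // 16, (image.shape[0] + 15) // 16)

mandel_kernel[blocks_per_grid, threads_per_block](-2.0, 1.0, -1.0, 1.0, image, 255)
cuda.synchronize()
# image now holds iteration counts that can be colour-mapped and saved, e.g. with matplotlib.
```
The CPU version is the same escape-time loop over every pixel in plain Python, which is the kind of comparison behind the 78x figure quoted above.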
⏲Outline⏲
00:00 Introduction
00:33 Multiplication gains on GPUs vs CPUs
08:31 Filling an array on GPUs vs CPUs
11:55 Rendering gains on GPU vs CPU
12:35 What is a Mandelbrot set?
13:39 Mandelbrot set rendering on CPU
17:01 Mandelbrot set rendering on GPU
20:54 Outro
📚Related Lectures
Jetson Orin Supercomputer -
Quick Deploy: Object Detection via NGC on Vertex AI Workbench Google Cloud -
Voice Swap using NVIDIA’s NeMo -
🔴 Subscribe for more videos on CUDA programming
👍 Smash that like button if you find this tutorial useful.
👁🗨 Speak up and comment, I am all ears.
💰 Donate to help the channel
Patreon -
BTC wallet - 3KnwXkMZB4v5iMWjhf1c9B9LMTKeUQ5viP
ETH wallet - 0x44F561fE3830321833dFC93FC1B29916005bC23f
DOGE wallet - DEvDM7Pgxg6PaStTtueuzNSfpw556vXSEW
API3 wallet - 0xe447602C3073b77550C65D2372386809ff19515b
DOT wallet - 15tz1fgucf8t1hAdKpUEVy8oSR8QorAkTkDhojhACD3A4ECr
ARPA wallet - 0xf54bEe325b3653Bd5931cEc13b23D58d1dee8Dfd
QNT wallet - 0xDbfe00E5cddb72158069DFaDE8Efe2A4d737BBAC
AAVE wallet - 0xD9Db74ac7feFA7c83479E585d999E356487667c1
AGLD wallet - 0xF203e39cB3EadDfaF3d11fba6dD8597B4B3972Be
AERGO wallet - 0xd847D9a2EE4a25Ff7836eDCd77E5005cc2E76060
AST wallet - 0x296321FB0FE1A4dE9F33c5e4734a13fe437E55Cd
DASH wallet - XtzYFYDPCNfGzJ1z3kG3eudCwdP9fj3fyE
#cuda #cudaprogramming #gpu