Tim Besard - GPU Programming in Julia: What, Why and How?

This talk introduces the audience to GPU programming in Julia. It explains why GPUs can be useful for scientific computing, and how Julia makes it easy to use them. The talk is aimed at people who are familiar with Julia and want to learn how to use GPUs in their Julia code.

Resources:
AMDGPU.jl package repository: https://github.com/JuliaGPU/AMDGPU.jl
CUDA.jl package repository: https://github.com/JuliaGPU/CUDA.jl
Metal.jl package repository: https://github.com/JuliaGPU/Metal.jl
oneAPI.jl package repository: https://github.com/JuliaGPU/oneAPI.jl

Contents:
00:00 Introduction
00:31 Back to basics: what are GPUs?
01:26 Why you should use GPUs
02:01 All toolkits provided by vendors use low-level languages, so it's time to switch to Julia
02:20 We now have Julia packages for programming GPUs from all major vendors
02:48 Founding principles of the JuliaGPU ecosystem
03:23 Principle 1: user-friendliness
04:54 Principle 2: multiple programming interfaces
05:24 The main interface for programming GPUs: GPU arrays
06:43 The main power of Julia comes from higher-order abstractions; this is also true on GPUs
07:47 Array programming is powerful
08:23 Kernel programming gives us performance and flexibility
09:30 Why we don't want to put too many abstractions into kernel code
10:04 We want to keep consistency across the Julia GPU ecosystem
10:47 Kernel programming features that we support
11:24 Support for more advanced features
11:37 What is the JIT doing behind the scenes?
12:37 Benchmarking and profiling
12:51 How to benchmark your GPU code correctly
13:46 You can't profile GPU code using standard methods; you must use vendor-specific tools
14:24 How do we ACTUALLY use all this?
15:15 Why we don't need to use `CUDA.@sync`
15:32 We disable scalar iteration
16:09 Optimizing array operations for the GPU
17:13 Pro tip: write generic array code!
18:21 Contrived example of using generic code
19:05 Let's write a kernel
19:36 Writing fast GPU code isn't trivial
21:02 Let's write a PORTABLE kernel
21:36 Pros and cons of kernel abstractions
22:07 Kernel abstractions and high-performance code
22:35 Conclusion
24:07 Q&A: Did you implement a dummy GPU type that actually runs on the GPU?
25:51 Q&A: What about support for vendor-agnostic backends like Vulkan?
27:12 Q&A: What is the status of projects like OpenCL?
28:45 Q&A: How easy is it to use multiple GPUs at once?
29:45 Closing applause

Want to help add timestamps to our YouTube videos to help with discoverability? Find out more here: https://github.com/JuliaCommunity/You...
Interested in improving the auto-generated captions? Get involved here: https://github.com/JuliaCommunity/You...
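To give a flavor of the two programming styles the talk contrasts (array programming around 05:24, kernel programming around 19:05), here is a minimal sketch using CUDA.jl. This is not code from the talk itself, and it assumes CUDA.jl is installed and an NVIDIA GPU is available; the `vadd!` kernel is a hypothetical example name.

```julia
# Minimal sketch, assuming CUDA.jl and an NVIDIA GPU are available.
using CUDA

# Array programming: broadcasting over CuArrays runs on the GPU,
# and the dotted operations are fused into a single kernel.
a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = a .+ b

# Kernel programming: explicit control over the thread index.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

# Launch with enough threads/blocks to cover the array.
@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```

The array-programming style is usually the right starting point; dropping down to a hand-written kernel, as the talk discusses, trades convenience for control over performance.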
