QLoRA Is More Than Memory Optimization. Train Your Models With 10% of the Data for More Performance.

Today we explore a groundbreaking innovation in fine-tuning large language models: QLoRA, or Quantized Low-Rank Adapters. We delve into its rationale, the underlying mathematics, and the advantages it holds over standard LoRA, presenting a comprehensive guide to understanding and leveraging this technique. We start with a quick recap of LoRA and its role in making training more accessible, then examine its limitations, particularly with larger models. We then introduce QLoRA and its potential to further reduce memory requirements and improve performance. The video provides an in-depth look at the statistical concepts underpinning QLoRA, including mean, median, standard deviation, quartiles, quantiles, and the importance of zero-centered, normally distributed weight matrices. We also cover how QLoRA works and how double quantization can be used to reduce memory usage and increase the portability of models. Stay tuned for our next video, where we use QLoRA to fine-tune and train a large language model. Remember to like, comment, and subscribe for more updates and deep dives into the world of AI!

#QLoRA #FineTuning #LargeLanguageModels #LanguageModels #Quantization #LoRA

0:00 Intro
0:24 Rationale Behind QLoRA
3:13 Statistics and Special Case Matrices
6:29 The K-Bit NormalFloat
8:49 Double Quantization
12:16 What QLoRA Gives Us
14:25 Outro

QLoRA Paper: https://arxiv.org/abs/2305.14314
QLoRA Repo: https://github.com/artidoro/qlora
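For viewers who want to try this before the follow-up video, the sketch below shows one common way to set up QLoRA-style fine-tuning with the Hugging Face transformers, peft, and bitsandbytes libraries: the base model is loaded in 4-bit NF4 precision with double quantization of the quantization constants, and trainable low-rank adapters are attached on top of the frozen quantized weights. This is a minimal sketch, not code from the video; the model name, rank, and target modules are placeholder choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-1.3b"  # placeholder; any causal LM works the same way

# 4-bit NF4 quantization of the base weights, with double quantization
# so the per-block quantization constants are themselves quantized.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,                                  # adapter rank (placeholder)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (placeholder)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a tiny fraction of the total
```

The base model stays frozen in 4-bit form, so memory usage is dominated by the quantized weights plus the small bfloat16 adapters, which is what makes fine-tuning large models feasible on a single GPU.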

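To make the quantile discussion concrete, here is a small NumPy sketch of quantile-based 4-bit quantization for zero-centered, normally distributed weights. It illustrates the idea behind the k-bit NormalFloat covered at 6:29, but it is not the exact NF4 construction from the paper (which reserves an exact zero code and builds the positive and negative halves asymmetrically); the function names here are illustrative only.

```python
import numpy as np
from statistics import NormalDist

def normal_quantile_levels(bits: int = 4) -> np.ndarray:
    """Quantization levels placed at evenly spaced quantiles of N(0, 1),
    rescaled into [-1, 1]. Sketch of the k-bit NormalFloat idea."""
    n_levels = 2 ** bits
    # Offset the probabilities by half a step so the extreme quantiles stay finite.
    probs = (np.arange(n_levels) + 0.5) / n_levels
    levels = np.array([NormalDist().inv_cdf(p) for p in probs])
    return levels / np.abs(levels).max()

def quantize_block(weights: np.ndarray, levels: np.ndarray):
    """Absmax-scale one block of weights, then snap each value to the nearest
    quantile level. Returns the codes plus the per-block scale -- the constant
    that double quantization would itself quantize."""
    scale = np.abs(weights).max()
    codes = np.abs(weights[:, None] / scale - levels[None, :]).argmin(axis=1)
    return codes.astype(np.uint8), scale

def dequantize_block(codes: np.ndarray, scale: float, levels: np.ndarray) -> np.ndarray:
    return levels[codes] * scale

# Example: quantize a block of zero-centered, normally distributed weights.
levels = normal_quantile_levels(4)
w = np.random.default_rng(0).normal(size=64).astype(np.float32)
codes, scale = quantize_block(w, levels)
w_hat = dequantize_block(codes, scale, levels)
print("max abs error:", np.abs(w - w_hat).max())
```

Because the levels are quantiles of a zero-mean normal, each of the 16 codes is used roughly equally often for normally distributed weights, which is why this spends the 4 bits more effectively than evenly spaced levels would.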