PEFT LoRA Finetuning With Oobabooga! How To Configure Other Models Than Alpaca/LLaMA Step-By-Step.

This is my most requested video to date: a more detailed walk-through of how to perform LoRA finetuning! In this comprehensive tutorial, we delve into the nitty-gritty of leveraging LoRAs (Low-Rank Adaptation) to fine-tune large language models, using Oobabooga and focusing on models like Alpaca and StableLM.

The video begins with a discussion of the important considerations to bear in mind before starting the fine-tuning process. It then transitions to the practical side, starting with transforming datasets (both structured and unstructured) into a format suitable for fine-tuning. Through detailed examples, we show how to structure the fine-tuning file, select an appropriate language model, and preprocess various data types, along with the challenges each type brings. This includes using regex, OCR, and other tools to extract data from unstructured sources.

The tutorial then moves on to a real-time demonstration using the MedQuAD medical Q&A dataset, where the host explains how to convert the XML data into the Alpaca-supported JSON structure, how to upload the dataset for training, and ultimately how to train the LoRAs using Lambda Labs. The video concludes with a discussion of setting up training for different large language models and how to prepare data for them.

This video serves as a guide for those interested in harnessing the power of LoRAs for fine-tuning their own language models, with a particular emphasis on practical application and a step-by-step approach. The next video will explore the 'Hyena' paper and its potential impact on the world of large language models.

0:00 Intro
0:27 How to Choose a LLM
2:15 Preparing Data For Finetuning
4:03 Creating a Dataset
4:57 LoRA Training with Oobabooga
7:24 Validating Chat Results
9:00 Setting up Different LLMs
10:04 Outro

Open-source LLMs: https://docs.google.com/spreadsheets/...
Dataset example: https://github.com/Aemon-Algiz/LoRA-F...
StableLM Training Documentation: https://replicate.com/stability-ai/st...
Q&A Datasets: https://github.com/ad-freiburg/large-...
OCR Dataset: https://www.kaggle.com/datasets/preat...
Unstructured Dataset: https://pubmed.ncbi.nlm.nih.gov/downl...

#AI #MachineLearning #LanguageModels #FineTuning #LoRAs #AlpacaModel #StableLM #LambdaLabs #Oobabooga #DataPreprocessing #LargeLanguageModels
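The XML-to-Alpaca conversion demonstrated in the video can be sketched as a short Python script. This is a minimal sketch, not the exact code from the video: the XML tag names (`QAPair`, `Question`, `Answer`) are illustrative assumptions, so adjust them to match the actual MedQuAD schema before using it.

```python
import json
import xml.etree.ElementTree as ET

# Illustrative stand-in for a MedQuAD-style XML file; the tag names here
# are assumptions and may differ from the real dataset's schema.
SAMPLE_XML = """
<Document>
  <QAPairs>
    <QAPair>
      <Question>What is glaucoma?</Question>
      <Answer>Glaucoma is a group of diseases that damage the optic nerve.</Answer>
    </QAPair>
  </QAPairs>
</Document>
"""

def xml_to_alpaca(xml_text):
    """Convert Q&A pairs into Alpaca-style instruction/input/output records."""
    root = ET.fromstring(xml_text)
    records = []
    for pair in root.iter("QAPair"):
        question = pair.findtext("Question", default="").strip()
        answer = pair.findtext("Answer", default="").strip()
        if question and answer:
            records.append({
                "instruction": question,
                "input": "",       # plain Q&A pairs carry no extra context
                "output": answer,
            })
    return records

if __name__ == "__main__":
    records = xml_to_alpaca(SAMPLE_XML)
    # Write a single JSON file containing the list of records, which is the
    # shape the Alpaca-format training loaders expect.
    print(json.dumps(records, indent=2, ensure_ascii=False))
```

The resulting JSON file can then be uploaded as a training dataset in Oobabooga's LoRA training tab, as shown in the video.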
