Accelerate Transformer inference on CPU with Optimum and ONNX

In this video, I show you how to accelerate Transformer inference with Optimum, an open-source library by Hugging Face, and ONNX. I start from a DistilBERT model fine-tuned for text classification, export it to ONNX format, then optimize it, and finally quantize it. Running benchmarks on an AWS c6i instance (Intel Ice Lake architecture), we speed up the original model more than 2.5x and divide its size by two, with just a few lines of simple Python code and without any accuracy drop!

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos ⭐️⭐️⭐️

⭐️⭐️⭐️ Want to buy me a coffee? I can always use more :) https://www.buymeacoffee.com/julsimon ⭐️⭐️⭐️

Optimum: https://github.com/huggingface/optimum
Optimum docs: https://huggingface.co/docs/optimum/o...
ONNX: https://onnx.ai/
Original model: https://huggingface.co/juliensimon/di...
Code: https://gitlab.com/juliensimon/huggin...
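The exact code lives in the linked GitLab repository (URL truncated above), but the export → optimize → quantize pipeline described here can be sketched with Optimum's ONNX Runtime API along the following lines. This is a minimal sketch, not the video's code: the model ID and save directories are placeholders, and Optimum's API has shifted slightly across versions (older releases used `from_transformers=True` where newer ones use `export=True`).

```python
# Minimal sketch of the workflow described above, using Optimum's
# ONNX Runtime integration. Model ID and directories are placeholders,
# not the exact names used in the video.
from optimum.onnxruntime import (
    ORTModelForSequenceClassification,
    ORTOptimizer,
    ORTQuantizer,
)
from optimum.onnxruntime.configuration import (
    AutoQuantizationConfig,
    OptimizationConfig,
)

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint

# Step 1: export the fine-tuned Transformers model to ONNX format.
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
ort_model.save_pretrained("onnx")

# Step 2: apply ONNX Runtime graph optimizations (operator fusion, etc.).
optimizer = ORTOptimizer.from_pretrained(ort_model)
optimization_config = OptimizationConfig(optimization_level=99)  # all optimizations
optimizer.optimize(save_dir="onnx-optimized", optimization_config=optimization_config)

# Step 3: dynamic int8 quantization targeting AVX-512 VNNI, the
# instruction set available on the Ice Lake CPUs in AWS c6i instances.
quantizer = ORTQuantizer.from_pretrained("onnx-optimized")
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx-quantized", quantization_config=qconfig)
```

Dynamic quantization (weights stored as int8, activations quantized on the fly) is what makes the roughly 2x size reduction possible without a calibration dataset, which fits the "no accuracy drop, few lines of code" framing above.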
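To reproduce the kind of latency comparison mentioned above, a simple benchmark loop can be used. This is a rough sketch under assumed names (the test sentence, iteration counts, and directory are arbitrary); the tokenizer is loaded from the original checkpoint since the quantized directory may not include tokenizer files.

```python
# Rough latency benchmark sketch: mean per-inference time for a model
# directory produced by the pipeline above. Not the video's exact code.
import time

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

def benchmark(model_dir: str, sentence: str, iterations: int = 100) -> float:
    # Placeholder checkpoint for the tokenizer; swap in your own model ID.
    tokenizer = AutoTokenizer.from_pretrained(
        "distilbert-base-uncased-finetuned-sst-2-english"
    )
    model = ORTModelForSequenceClassification.from_pretrained(model_dir)
    classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
    for _ in range(10):  # warm up the ONNX Runtime session
        classifier(sentence)
    start = time.perf_counter()
    for _ in range(iterations):
        classifier(sentence)
    return (time.perf_counter() - start) / iterations

sentence = "This movie was absolutely wonderful!"  # arbitrary test input
print(f"mean latency: {benchmark('onnx-quantized', sentence) * 1000:.1f} ms")
```

Running the same loop against the exported, optimized, and quantized directories is enough to see where the speedup comes from at each stage.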
