How to solve Classification Problems in Deep Learning with Tensorflow & Keras

Access all tutorials at https://www.muratkarakaya.net
Code: https://colab.research.google.com/dri...
Classification tutorials: Classification with Keras / Tensorflow
Keras tutorials: Keras Tutorials

In this tutorial, we will focus on how to solve classification problems in deep learning with TensorFlow & Keras. When we design a model with deep neural networks, we need to know how to select the proper label encoding, activation function, and loss function, along with the right accuracy metric, according to the classification task at hand. Thus, in this tutorial, we will first investigate the types of classification problems. Then, we will see the most frequently used label encodings in Keras. We will learn how to select activation and loss functions according to the given classification type and label encoding. Moreover, we will examine the details of the accuracy metrics in TensorFlow / Keras. At the end of the tutorial, I hope that we will have a good understanding of these concepts and their implementation in Keras.

References
- Keras API reference / Losses / Probabilistic losses
- Keras Activation Functions
- TensorFlow Data pipeline (tf.data) guide
- How does tensorflow sparsecategoricalcrossentropy work?
- Cross-entropy vs sparse-cross-entropy: when to use one over the other
- Why binary_crossentropy and categorical_crossentropy give different performances for the same problem?

Types of Classification Tasks

In general, there are three main types/categories of classification tasks in machine learning:

A. Binary classification: two target classes. Is there a dog in the picture? Is it a dog or a cat in the picture?
B. Multi-class classification: more than two mutually exclusive targets; only one class can be assigned to an input. Which animal is in the picture: cat, dog, lion, or horse?
C. Multi-label classification: more than two non-exclusive targets; one input can be labeled with multiple target classes. Which animals are in the picture: cat, dog, lion, horse?

Types of Label Encoding

In general, we can use different encodings for the true (actual) labels (y values):

- a single floating-point number (e.g., in binary classification: 1.0 or 0.0)
- one-hot encoding (e.g., in multi-class classification: [0, 0, 1, 0])
- multi-hot encoding (e.g., in multi-label classification: [1, 0, 1, 0])
- integers (e.g., in multi-class classification: 2)

A short code sketch of these encodings appears below, after the activation functions.

Types of Activation Functions for Classification Tasks

In Keras, there are several activation functions. Below I summarize the two most relevant ones:

Sigmoid (logistic) activation function: the sigmoid function maps any input to an output ranging from 0 to 1. Each number in the vector is handled independently. For small values (< -5), sigmoid returns a value close to zero, and for large values (> 5) the result of the function gets close to 1. Sigmoid is equivalent to a 2-element softmax, where the second element is assumed to be zero. Sigmoid is mostly used for binary or multi-label classification.

Example: assume the last layer of the model is:

    outputs = keras.layers.Dense(1, activation=tf.keras.activations.sigmoid)(x)

Let's see how this layer functions as Keras operations; a sketch follows below.

Softmax function: softmax converts a real vector to a vector of categorical probabilities. The elements of the output vector are in the range (0, 1) and sum to 1. Each vector is handled independently. Softmax is often used to convert the last layer's outputs into a probability distribution. Since the converted vector sums to 1, at most one output can be greater than 0.5 after applying softmax. Softmax is mostly used for multi-class classification.
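Here is the label-encoding sketch promised above. It is a minimal illustration, assuming a hypothetical 4-class animal problem (cat=0, dog=1, lion=2, horse=3); the class names and label values are made up for the example:

    import numpy as np
    import tensorflow as tf

    # Integer encoding (multi-class): one class index per sample.
    integer_labels = np.array([2, 0, 3])   # lion, cat, horse

    # One-hot encoding (multi-class): exactly one 1 per sample.
    one_hot_labels = tf.keras.utils.to_categorical(integer_labels, num_classes=4)
    # [[0. 0. 1. 0.]
    #  [1. 0. 0. 0.]
    #  [0. 0. 0. 1.]]

    # Binary classification: a single float per sample.
    binary_labels = np.array([1.0, 0.0, 1.0])

    # Multi-hot encoding (multi-label): several 1s allowed per sample.
    multi_hot_labels = np.array([[1, 0, 1, 0],   # cat and lion in the picture
                                 [0, 1, 0, 1]])  # dog and horse in the picture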
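And the sigmoid sketch: a minimal example applying tf.keras.activations.sigmoid to a few hand-picked values (the inputs are illustrative), plus the Dense(1) output layer from the example above made self-contained:

    import tensorflow as tf

    x = tf.constant([-20.0, -1.0, 0.0, 1.0, 20.0])
    print(tf.keras.activations.sigmoid(x).numpy())
    # ~[0.  0.269  0.5  0.731  1.] -- each element is mapped independently to (0, 1)

    # The Dense(1, activation=sigmoid) last layer produces one probability per sample
    # (the input shape of 10 features is an assumption for the sketch).
    inputs = tf.keras.Input(shape=(10,))
    outputs = tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid)(inputs)
    model = tf.keras.Model(inputs, outputs)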
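A matching softmax sketch, converting one row of logits into a probability distribution that sums to 1 (the logits are hand-picked for illustration):

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])
    probs = tf.keras.activations.softmax(logits)
    print(probs.numpy())                  # [[0.659 0.242 0.099]]
    print(tf.reduce_sum(probs).numpy())   # 1.0 (up to float rounding);
                                          # at most one entry can exceed 0.5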
Types of Loss Functions and Accuracy Metrics for Classification Tasks

BinaryCrossentropy: computes the cross-entropy loss between true labels and predicted labels. We use this cross-entropy loss:
- when there are only two classes (assumed to be 0 and 1); for each sample, there should be a single floating-point value per prediction;
- when there are two or more labels with multi-hot encoded labels; for each sample, there should be a single floating-point value per label.

CategoricalCrossentropy: computes the cross-entropy loss between the labels and the predictions. We use this cross-entropy loss function when there are two or more label classes. We expect labels to be provided in a one-hot representation: there should be # classes floating-point values per sample. If you want to provide labels as integers, please use the SparseCategoricalCrossentropy loss.

SparseCategoricalCrossentropy: computes the cross-entropy loss between the labels and the predictions. We use this cross-entropy loss function when there are two or more label classes. We expect labels to be provided as integers: there should be # classes floating-point values per sample for y_pred and a single floating-point value per sample for y_true. If you want to provide labels in a one-hot representation, please use the CategoricalCrossentropy loss.

CategoricalAccuracy: calculates how often predictions match one-hot labels.

Short sketches of each loss and of the accuracy metric follow below.
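A minimal BinaryCrossentropy sketch; the labels and the sigmoid-style predictions are hand-picked for illustration:

    import tensorflow as tf

    y_true = [[0.0], [1.0]]               # one float per sample
    y_pred = [[0.1], [0.8]]               # sigmoid outputs
    bce = tf.keras.losses.BinaryCrossentropy()
    print(bce(y_true, y_pred).numpy())    # ~0.164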
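A minimal CategoricalCrossentropy sketch with one-hot labels (the values are illustrative):

    import tensorflow as tf

    y_true = [[0, 1, 0], [0, 0, 1]]                  # one-hot labels
    y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]    # one probability per class
    cce = tf.keras.losses.CategoricalCrossentropy()
    print(cce(y_true, y_pred).numpy())               # ~1.177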
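The same predictions scored with SparseCategoricalCrossentropy and integer labels; note the result matches the one-hot version above, since only the label encoding changed:

    import tensorflow as tf

    y_true = [1, 2]                                   # integer labels
    y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]     # same predictions as above
    scce = tf.keras.losses.SparseCategoricalCrossentropy()
    print(scce(y_true, y_pred).numpy())               # ~1.177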
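Finally, a minimal CategoricalAccuracy sketch; the metric compares the argmax of each prediction against the one-hot label (the values are illustrative):

    import tensorflow as tf

    m = tf.keras.metrics.CategoricalAccuracy()
    m.update_state([[0, 0, 1], [0, 1, 0]],            # one-hot labels
                   [[0.1, 0.1, 0.8], [0.05, 0.95, 0.0]])
    print(m.result().numpy())                         # 1.0 -- argmax matches both labels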
