Ido Nachum - A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs

Presented on Thursday, January 4th, 2024, 10:30 AM, room C221

Speaker: Ido Nachum (EPFL, University of Haifa)

Title: A Johnson--Lindenstrauss Framework for Randomly Initialized CNNs

Abstract: Fix a dataset $\{ x_i \}_{i=1}^n \subset R^d$. The celebrated Johnson--Lindenstrauss (JL) lemma shows that a random projection preserves geometry: with high probability, $(x_i, x_j) \approx (W \cdot x_i, W \cdot x_j)$. How does the JL lemma relate to neural networks? A neural network is a sequential application of a projection followed by a non-linearity $\sigma: R \rightarrow R$ (applied coordinate-wise). For example, a fully connected network (FCN) with two hidden layers has the form $N(x) = W_3 \cdot \sigma( W_2 \cdot \sigma( W_1 \cdot x ) )$. By the JL lemma, any layer of a random linear FCN ($\sigma(x) = x$) essentially preserves the original geometry of the dataset. How does the quantity $(\sigma( W \cdot x_i ), \sigma( W \cdot x_j ))$ change with other non-linearities, or with a convolution ($*$) in place of matrix multiplication ($\cdot$)? For FCNs with the prevalent ReLU activation ($ReLU(x) := \max\{ x, 0 \}$), the angle between two inputs contracts according to a known mapping. The question for non-linear convolutional neural networks (CNNs) is much more intricate. To answer it, we introduce a geometric framework. For linear CNNs, we show that the Johnson--Lindenstrauss lemma continues to hold, namely, that the angle between two inputs is preserved. For CNNs with ReLU activation, on the other hand, the behavior is richer: the angle between the outputs contracts, and the level of contraction depends on the nature of the inputs. In particular, after one layer, the geometry of natural images is essentially preserved, whereas for Gaussian correlated inputs, CNNs exhibit the same contracting behavior as FCNs with ReLU activation.

Bio: Beginning in Spring 2024, Ido Nachum will start as a senior lecturer in the Department of Statistics at the University of Haifa. He began his career as an aerospace engineer at RAFAEL Ltd. (Atuda military service) while completing his MSc in pure mathematics, studying measured group theory. He continued in pure mathematics for his PhD, studying learning-theory questions from an information-theory perspective, and was then a postdoctoral researcher in the School of Computer Science at EPFL, focusing on mathematical questions that arise from artificial neural computation.

Link for the Panopto Meeting: https://huji.cloud.panopto.eu/Panopto...
Link to past lectures: / @hujimachinelearningclub8982
Online Calendar: Learning Club @ HUJI https://www.google.com/calendar/embed...
Calendar ID: [email protected]
Mailing List subscription-manager: http://mailman.cs.huji.ac.il/mailman/...
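
As a complement to the abstract, the following is a minimal NumPy sketch (not taken from the talk or the underlying paper; the dimensions, the test angle, and the setup are illustrative assumptions) of the two fully connected phenomena described above: a random Gaussian projection essentially preserves the inner product between two inputs, while one random ReLU layer contracts their angle. For ReLU FCNs, one standard form of the "known mapping" is the degree-one arccosine-kernel formula $\cos\theta \mapsto (\sin\theta + (\pi - \theta)\cos\theta)/\pi$; the CNN results of the talk are not reproduced here.

# Minimal sketch (illustrative assumptions: d, k, theta) of
# (1) Johnson--Lindenstrauss preservation under a random linear map, and
# (2) angle contraction after one random ReLU layer, compared with the
#     arccosine-kernel prediction cos(theta) -> (sin(theta) + (pi - theta) cos(theta)) / pi.
import numpy as np

rng = np.random.default_rng(0)
d, k = 2000, 500                      # input dimension, projected dimension (assumed)

# Two unit vectors in R^d with a prescribed angle theta between them.
theta = np.pi / 3
x = np.zeros(d); x[0] = 1.0
y = np.zeros(d); y[0] = np.cos(theta); y[1] = np.sin(theta)

# Random Gaussian matrix W, scaled so that E[(W x, W y)] = (x, y).
W = rng.normal(size=(k, d)) / np.sqrt(k)

# (1) Linear layer: the inner product (hence the angle) is essentially preserved.
print(f"(x, y)     = {x @ y:.3f}")
print(f"(W x, W y) = {(W @ x) @ (W @ y):.3f}")          # close to cos(theta) = 0.5

# (2) ReLU layer: the angle contracts by a predictable amount.
relu = lambda z: np.maximum(z, 0.0)
u, v = relu(W @ x), relu(W @ y)
empirical_cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
predicted_cos = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

print(f"cos(angle) before the ReLU layer   = {np.cos(theta):.3f}")
print(f"empirical cos after the ReLU layer = {empirical_cos:.3f}")
print(f"arccosine-kernel prediction        = {predicted_cos:.3f}")

Running the sketch, the linear-layer inner product stays near cos(theta) = 0.5, while the post-ReLU cosine is larger (the angle shrinks), matching the arccosine-kernel value.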
