Deep-Learning-in-Hebrew

למידת מכונה ולמידה עמוקה בעברית (Machine Learning and Deep Learning in Hebrew): a complete book, written in Hebrew, covering machine learning and deep learning.

Add a star if the repository helped you :)

[Book cover image: MDLH.png]

Authors

Avraham Raviv

Mike Erlihson

Nava (Reinitz) Leibovich

Ron Levy

Table of contents

Part I:

Part II:

Part III:



1. Introduction to Machine Learning

1.1 What is Machine Learning?

  • 1.1.1 The Basic Concept

  • 1.1.2 Data, Tasks and Learning

1.2 Applied Math

  • 1.2.1 Linear Algebra

  • 1.2.2 Calculus

  • 1.2.3 Probability

2. Machine Learning Algorithms

2.1 Supervised Learning Algorithms

  • 2.1.1 Support Vector Machines (SVM)

  • 2.1.2 Naive Bayes

  • 2.1.3 K-nearest neighbors (K-NN)

  • 2.1.4 Linear Discriminant Analysis (LDA)

2.2 Unsupervised Learning Algorithms

  • 2.2.1 K-means

  • 2.2.2 Mixture Models

  • 2.2.3 Expectation-Maximization (EM)

  • 2.2.4 Hierarchical Clustering

  • 2.2.5 Local Outlier Factor

2.3 Dimensionality Reduction

  • 2.3.1 Principal Components Analysis (PCA)

  • 2.3.2 t-distributed Stochastic Neighbor Embedding (t-SNE)

  • 2.3.3 Uniform Manifold Approximation and Projection (UMAP)

2.4 Ensemble Learning

  • 2.4.1 Introduction to Ensemble Learning

  • 2.4.2 Basic Ensemble Techniques

  • 2.4.3 Stacking and Blending

  • 2.4.4 Bagging

  • 2.4.5 Boosting

  • 2.4.6 Algorithms based on Bagging

  • 2.4.7 Algorithms based on Boosting

3. Linear Neural Networks (Regression problems)

3.1 Linear Regression

  • 3.1.1 The Basic Concept

  • 3.1.2 Gradient Descent

  • 3.1.3 Regularization and Cross Validation

  • 3.1.4 Linear Regression as a Classifier

3.2 Softmax Regression

  • 3.2.1 Logistic Regression

  • 3.2.2 Cross Entropy and Gradient Descent

  • 3.2.3 Optimization

  • 3.2.4 Softmax Regression – Multi-Class Logistic Regression

  • 3.2.5 Softmax Regression as a Neural Network

4. Deep Neural Networks

4.1 MLP – Multilayer Perceptrons

  • 4.1.1 From a Single Neuron to a Deep Neural Network

  • 4.1.2 Activation Function

  • 4.1.3 XOR

4.2 Computational Graphs and Propagation

  • 4.2.1 Computational Graphs

  • 4.2.2 Forward and Backward propagation

4.3 Optimization

  • 4.3.1 Data Normalization

  • 4.3.2 Weight Initialization

  • 4.3.3 Batch Normalization

  • 4.3.4 Mini Batch

  • 4.3.5 Gradient Descent Optimization Algorithms

4.4 Generalization

  • 4.4.1 Regularization

  • 4.4.2 Weight Decay

  • 4.4.3 Model Ensembles and Dropout

  • 4.4.4 Data Augmentation

5. Convolutional Neural Networks

5.1 Convolutional Layers

  • 5.1.1 From Fully-Connected Layers to Convolutions

  • 5.1.2 Padding, Stride and Dilation

  • 5.1.3 Pooling

  • 5.1.4 Training

  • 5.1.5 Convolutional Neural Networks (LeNet)

5.2 CNN Architectures

  • 5.2.1 AlexNet

  • 5.2.2 VGG

  • 5.2.3 GoogLeNet

  • 5.2.4 Residual Networks (ResNet)

  • 5.2.5 Densely Connected Networks (DenseNet)

  • 5.2.6 U-Net

  • 5.2.7 Transfer Learning

6. Recurrent Neural Networks

6.1 Sequence Models

  • 6.1.1 Recurrent Neural Networks

  • 6.1.2 Learning Parameters

6.2 RNN Architectures

  • 6.2.1 Long Short-Term Memory (LSTM)

  • 6.2.2 Gated Recurrent Units (GRU)

  • 6.2.3 Deep RNN

  • 6.2.4 Bidirectional RNN

  • 6.2.5 Sequence to Sequence Learning

7. Deep Generative Models

7.1 Variational AutoEncoder (VAE)

  • 7.1.1 Dimensionality Reduction

  • 7.1.2 Autoencoders (AE)

  • 7.1.3 Variational AutoEncoders (VAE)

7.2 Generative Adversarial Networks (GANs)

  • 7.2.1 Generator and Discriminator

  • 7.2.2 DCGAN

  • 7.2.3 Pix2Pix

  • 7.2.4 CycleGAN

  • 7.2.5 StyleGAN

  • 7.2.6 Wasserstein GAN

7.3 Auto-Regressive Generative Models

  • 7.3.1 PixelRNN

  • 7.3.2 PixelCNN

  • 7.3.3 Gated PixelCNN

  • 7.3.4 PixelCNN++

8. Attention Mechanism

8.1 Sequence to Sequence Learning and Attention

  • 8.1.1 Attention in Seq2Seq Models

  • 8.1.2 Bahdanau Attention and Luong Attention

8.2 Transformer

  • 8.2.1 Positional Encoding

  • 8.2.2 Self-Attention Layer

  • 8.2.3 Multi-Head Attention

  • 8.2.4 Transformer End to End

  • 8.2.5 Transformer Applications

9. Computer Vision

9.1 Classification

9.2 Segmentation

9.3 Object Detection

  • 9.3.1 R-CNN

  • 9.3.2 You Only Look Once (YOLO)

  • 9.3.3 DETR: End-to-End Object Detection with Transformers

9.4 Image Captioning

9.5 Pose Estimation and Face Recognition

10. Natural Language Processing

10.1 Text Classification

10.2 Sequence Tagging

10.3 Seq2Seq

10.4 Misc.

11. Reinforcement Learning



References

Stanford CS231n

Machine Learning - Andrew Ng

Dive into Deep Learning

Deep Learning Book

© All rights reserved