AvrahamRaviv / Deep-Learning-in-Hebrew
A complete book in Hebrew on the topics of machine learning and deep learning.
Machine Learning and Deep Learning in Hebrew
Add a star if the repository helped you :)
Part I:
Part II:
Part III:
1.1.1 The Basic Concept
1.1.2 Data, Tasks and Learning
1.2.1 Linear Algebra
1.2.2 Calculus
1.2.3 Probability
2.1.1 Support Vector Machines (SVM)
2.1.2 Naive Bayes
2.1.3 K-Nearest Neighbors (K-NN)
2.1.4 Linear Discriminant Analysis (LDA)
2.2.1 K-means
2.2.2 Mixture Models
2.2.3 Expectation–Maximization (EM)
2.2.4 Hierarchical Clustering
2.2.5 Local Outlier Factor
2.3.1 Principal Components Analysis (PCA)
2.3.2 t-distributed Stochastic Neighbor Embedding (t-SNE)
2.3.3 Uniform Manifold Approximation and Projection (UMAP)
2.4.1 Introduction to Ensemble Learning
2.4.2 Basic Ensemble Techniques
2.4.3 Stacking and Blending
2.4.4 Bagging
2.4.5 Boosting
2.4.6 Algorithms based on Bagging
2.4.7 Algorithms based on Boosting
3.1.1 The Basic Concept
3.1.2 Gradient Descent
3.1.3 Regularization and Cross Validation
3.1.4 Linear Regression as a Classifier
3.2.1 Logistic Regression
3.2.2 Cross Entropy and Gradient Descent
3.2.3 Optimization
3.2.4 SoftMax Regression – Multi-Class Logistic Regression
3.2.5 SoftMax Regression as Neural Network
4.1.1 From a Single Neuron to Deep Neural Network
4.1.2 Activation Function
4.1.3 XOR
4.2.1 Computational Graphs
4.2.2 Forward and Backward Propagation
4.3.1 Data Normalization
4.3.2 Weight Initialization
4.3.3 Batch Normalization
4.3.4 Mini Batch
4.3.5 Gradient Descent Optimization Algorithms
4.4.1 Regularization
4.4.2 Weight Decay
4.4.3 Model Ensembles and Dropout
4.4.4 Data Augmentation
5.1.1 From Fully-Connected Layers to Convolutions
5.1.2 Padding, Stride and Dilation
5.1.3 Pooling
5.1.4 Training
5.1.5 Convolutional Neural Networks (LeNet)
5.2.1 AlexNet
5.2.2 VGG
5.2.3 GoogLeNet
5.2.4 Residual Networks (ResNet)
5.2.5 Densely Connected Networks (DenseNet)
5.2.6 U-Net
5.2.7 Transfer Learning
6.1.1 Recurrent Neural Networks
6.1.2 Learning Parameters
6.2.1 Long Short-Term Memory (LSTM)
6.2.2 Gated Recurrent Units (GRU)
6.2.3 Deep RNN
6.2.4 Bidirectional RNN
6.2.5 Sequence to Sequence Learning
7.1.1 Dimensionality Reduction
7.1.2 Autoencoders (AE)
7.1.3 Variational AutoEncoders (VAE)
7.2.1 Generator and Discriminator
7.2.2 DCGAN
7.2.3 Pix2Pix
7.2.4 CycleGAN
7.2.5 StyleGAN
7.2.6 Wasserstein GAN
7.3.1 PixelRNN
7.3.2 PixelCNN
7.3.3 Gated PixelCNN
7.3.4 PixelCNN++
8.1.1 Attention in Seq2Seq Models
8.1.2 Bahdanau Attention and Luong Attention
8.2.1 Positional Encoding
8.2.2 Self-Attention Layer
8.2.3 Multi-Head Attention
8.2.4 Transformer End-to-End
8.2.5 Transformer Applications
9.3.1 R-CNN
9.3.2 You Only Look Once (YOLO)
9.3.3 DE⫶TR: End-to-End Object Detection with Transformers
All rights reserved @