My deep learning skills are growing every day.
Nice. Do you prefer deep learning over machine learning?
so u claim
u didnt execute my plan
oh no, you too? Did the lowiq bug get you as well?
I spent like 3 hours last week trying to understand back propagation lol
You get used to it. Try watching the 3blue1brown videos on neural networks.
Basically, we have a loss function, right? We want to minimize the loss. If we imagine the loss as a hilly terrain, then we want to find the low points in the terrain (local minima). To find the direction we should move in, we need the gradient of the loss with respect to the parameters; the gradient points uphill, so we step in the opposite direction. This is where the chain rule comes in: as we backprop from the output back towards the input, we get the gradient for each layer.
Then with SGD, you simply update the parameters W of a layer by W := W - learning_rate * dW, where dW is the gradient of the loss with respect to W.
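To make it concrete, here's a rough NumPy sketch of one training loop doing exactly that. The layer sizes, data, and variable names are all made up just for illustration; the point is the chain rule in the backward pass and the W -= learning_rate * dW update:

```python
# Tiny 2-layer net: forward pass, backprop via the chain rule, SGD update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))          # 4 samples, 3 features (made up)
y = rng.standard_normal((4, 1))          # 4 targets (made up)

W1 = rng.standard_normal((3, 5)) * 0.1   # layer 1 weights
W2 = rng.standard_normal((5, 1)) * 0.1   # layer 2 weights
learning_rate = 0.1

for step in range(100):
    # forward pass
    h = np.maximum(0, X @ W1)            # ReLU hidden layer
    y_hat = h @ W2                       # linear output
    loss = np.mean((y_hat - y) ** 2)     # mean squared error

    # backward pass: chain rule, layer by layer
    d_y_hat = 2 * (y_hat - y) / len(y)   # dLoss / dy_hat
    dW2 = h.T @ d_y_hat                  # dLoss / dW2
    d_h = d_y_hat @ W2.T                 # dLoss / dh
    d_h[h <= 0] = 0                      # ReLU derivative
    dW1 = X.T @ d_h                      # dLoss / dW1

    # SGD update: W := W - learning_rate * dW
    W1 -= learning_rate * dW1
    W2 -= learning_rate * dW2

    if step % 20 == 0:
        print(step, loss)                # loss should go down over the steps
```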
I used to watch everything from 3blue1brown, he puts a lot of effort into his videos.
I understood it a little better after his videos and a few from Udacity. I guess I had a few problems with the backprop calc.
Keep at it man. When I first tried learning backprop, I was like wtf? Now it makes complete sense to me, but it took a while. You should try the Coursera course by Andrew Ng. Even in that course Ng says he sometimes felt like he didn't understand backprop. Of course by now he probably understands it completely. I think this is a very common thing among beginner ML practitioners.
yup, was planning to after I refreshed on linear algebra
dropout = ?
i still dont understand kek
A technique where some neurons are probabilistically turned off during training time. This has the effect of helping a neural network generalize better.
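If it helps, here's roughly what that looks like in code. This is the "inverted dropout" variant, and the drop probability and shapes here are just made-up numbers for illustration:

```python
# Inverted dropout: zero random activations while training, do nothing at test time.
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    """Zero each activation with probability p during training.

    Survivors are scaled by 1/(1-p) ("inverted dropout"), so no extra
    rescaling is needed at test time.
    """
    if not training:
        return h                       # at test time, all neurons stay on
    mask = rng.random(h.shape) >= p    # keep each unit with probability 1-p
    return h * mask / (1.0 - p)

h = np.ones((2, 4))                    # pretend these are hidden activations
print(dropout(h, p=0.5))               # some entries zeroed, survivors scaled to 2.0
print(dropout(h, training=False))      # unchanged at inference time
```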