
Why don’t gradient descent methods always converge to the same point?

Gradient descent methods don't always converge to the same point because, on a non-convex loss surface, they can settle into a local optimum (or a saddle point) rather than the global minimum. Which point a run reaches depends on the data, the learning rate, and the initialization, i.e., the starting point of the descent.
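As a minimal sketch of this behavior (the function, learning rate, and starting points below are illustrative assumptions, not part of the original question), two runs of plain gradient descent on the same non-convex function end at different minima purely because they start from different points:

```python
def grad(x):
    # Gradient of the non-convex function f(x) = x**4 - 3*x**2 + x,
    # which has two distinct local minima.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=500):
    # Plain gradient descent: repeatedly step against the gradient.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Same method, same learning rate -- only the initialization differs.
print(gradient_descent(-2.0))  # converges near the left minimum (~ -1.30)
print(gradient_descent(2.0))   # converges near the right minimum (~ 1.13)
```

With a convex loss, both runs would reach the same (global) minimum; the split here comes entirely from the non-convexity of the function.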
