The first issue in the Foundations of Machine Learning series, exploring the most basic concepts in the field.
Very easy to understand, even though the subject is far from simple. Nicely done, Daniel and Ale!
This was great, very well explained, and from a point of view I hadn't read before.
Just a thought before I forget to mention it, something that could be interesting to discuss in the future. I would like to see, at some point, a discussion of how the theory of ML fits with complexity theory. For instance, most of us believe P != NP. However, if general TSP had a polynomial-time constant-factor approximation algorithm, then P = NP (Metric TSP, by contrast, already admits a polynomial-time 1.5-approximation via Christofides' algorithm, though it has no PTAS unless P = NP).
So a question I often ask myself is: what happens if we find an ML algorithm that approximates TSP amazingly well? It still cannot be classified as a polynomial-time algorithm for the problem itself unless it is a decider, which means 100% accuracy.
So I always wonder: is it possible to reach 100% accuracy on a problem like this (probably not), and if not, why? What would a mathematical proof of that look like, again from a theoretical point of view, assuming we have all the data and all the features we might need?
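To make the distinction concrete, here is a minimal sketch (the instance and helper names are my own, purely illustrative) contrasting an exact brute-force TSP solver with a fast heuristic. The heuristic runs in polynomial time but only approximates the optimum; an exact "decider" guarantees the optimal tour but takes exponential time. An ML model that predicts good tours sits in the heuristic camp unless it is provably always optimal.

```python
# Illustrative sketch: exact (exponential-time) vs heuristic (polynomial-time)
# Metric TSP on a tiny Euclidean instance. Points and names are hypothetical.
import itertools
import math

# Points in the plane: Euclidean distance obeys the triangle inequality,
# so this is an instance of Metric TSP.
points = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 5)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Length of the closed tour visiting points in the given order.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact answer by brute force over all tours: 100% "accuracy",
# but (n-1)! work -- not polynomial time.
opt = min(tour_length((0,) + p)
          for p in itertools.permutations(range(1, len(points))))

def nearest_neighbor(start=0):
    # Greedy nearest-neighbor heuristic: polynomial time, no optimality guarantee.
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tuple(order)

heur = tour_length(nearest_neighbor())

# The heuristic tour can never beat the true optimum; any gap is its
# approximation error, which is exactly what a decider is not allowed to have.
print(f"optimal: {opt:.3f}  heuristic: {heur:.3f}  ratio: {heur / opt:.3f}")
```

The point of the sketch: even when the heuristic happens to find the optimum on a given instance, that is an empirical observation, not a proof, and without a proof of optimality on all instances it says nothing about P vs NP.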
Thank you for such a great article.
Truly masterful. It provides a genuinely intuitive approach to understanding the basics of ML.