One may choose to use the following set of rules to make a decision:
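As a minimal sketch, such a rule set can be written as nested yes/no checks on individual features. The feature names below (outlook, humidity, windy) are hypothetical stand-ins for illustration, not the rules from the slide:

```python
def decide(outlook: str, humidity: float, windy: bool) -> str:
    # Hypothetical rule set: each branch asks one yes/no question
    # about a single feature, exactly like a path through a decision tree.
    if outlook == "sunny":
        if humidity > 70:
            return "no"
        return "yes"
    if outlook == "rainy":
        if windy:
            return "no"
        return "yes"
    return "yes"  # e.g. overcast
```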

Consider this dataset:
Fit a decision tree?

The following decision tree is obtained:

The rationale is perfectly captured by a decision tree with depth=7
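A minimal sketch of fitting such a tree with scikit-learn; the dataset below is a synthetic stand-in, since the slide's data is not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the slide's dataset
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A depth-7 tree is assumed sufficient to capture the rule-based structure
tree = DecisionTreeClassifier(max_depth=7, random_state=0)
tree.fit(X, y)
print(tree.score(X, y))  # training accuracy
```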
Consider this dataset:

A depth of 7 is found sufficient to capture the structure of the dataset.
Decision trees make the following presumption about the structure of the data:
The class can be figured out from a series of binary (yes/no) questions on individual features
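This "series of binary questions" is literally what a fitted tree stores; a sketch of inspecting it, again on assumed synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=7, random_state=0).fit(X, y)

# Each printed line is one yes/no threshold test on a single feature
print(export_text(tree, feature_names=[f"x{i}" for i in range(5)]))
```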
Inductive bias: anything that makes an algorithm learn one pattern over another
In our dataset, we define similarity to be inversely proportional to the distance between datapoints, i.e.,
the closer the datapoints, the more similar they are.
The following decision boundaries are obtained by the KNN algorithm:
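A sketch of obtaining such boundaries with scikit-learn's KNeighborsClassifier; the two-feature blob dataset and the choice k=5 are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 2-D stand-in for the slide's dataset
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Similarity = closeness: each point is labelled by its k nearest neighbours
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

# Evaluate the boundary on a grid (plotting omitted)
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
Z = knn.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
```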

Consider the following dataset:

KNN results in the following decision boundary:
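Since similarity is taken to be inversely proportional to distance, neighbours can also be weighted by 1/distance rather than counted equally; a sketch of that variant, again on assumed synthetic data:

```python
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# weights="distance" scores each neighbour by 1/distance,
# so closer (more similar) points count for more
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X, y)
print(knn.score(X, y))
```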

