Map of Italy example - choosing the value of k

In our data, we are given some points (about 1%) from the map of Italy and its surroundings. The blue points represent water and the green points represent land; white points are not known. From the partial information given, we would like to predict whether there is water or land in the white areas.

If we drew only 1% of the map data, it would be almost invisible in the picture. If, instead, we were given about 33 times more data from the map of Italy and its surroundings and drew it, the picture would look like the one below:

Analysis:

For this problem, we will use the k-NN algorithm - k here means that we will look at the k closest neighbors. Given a white point, it will be classified as water if the majority of its k closest neighbors are in the water area, and classified as land if the majority of its k closest neighbors are in the land area. We will use the Euclidean metric for the distance: given two points X = [x0, x1] and Y = [y0, y1], their Euclidean distance is defined as d_Euclidean(X, Y) = sqrt((x0 - y0)^2 + (x1 - y1)^2).

The Euclidean distance is the most common metric. Given two points on a piece of paper, their Euclidean distance is simply the length of the straight line between them, as measured by a ruler, as shown in the diagram:
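As a minimal sketch of this rule, assuming the known points are given as (x, y) coordinates paired with a "water" or "land" label (the function and variable names below are illustrative, not part of the original problem):

import math
from collections import Counter

def euclidean_distance(p, q):
    """Ruler (straight-line) distance between two 2-D points."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def knn_classify(point, labelled_points, k):
    """Label `point` as 'water' or 'land' by majority vote of its
    k nearest labelled neighbours under the Euclidean metric."""
    nearest = sorted(labelled_points,
                     key=lambda item: euclidean_distance(point, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Example: three known points, one unknown point, k = 3.
known = [((0.0, 0.0), "water"), ((0.2, 0.1), "water"), ((1.0, 1.0), "land")]
print(knn_classify((0.1, 0.0), known, k=3))  # -> water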

To apply the k-NN algorithm to an incomplete map, we have to choose the value of k. Since the resulting class of a point is the class of the majority of its k closest neighbors, k should be odd so that a tie between the two classes cannot occur. Let us apply the algorithm for the values of k = 1, 3, 5, 7, 9.
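A rough sketch of this experiment, reusing the hypothetical knn_classify function from above and assuming the unknown points are available as a list called white_points (an assumed name, not from the original text):

def complete_map(white_points, labelled_points, k):
    """Predict a label for every unknown (white) point of the map."""
    return {point: knn_classify(point, labelled_points, k)
            for point in white_points}

# One completed map for each value of k we want to compare
# (white_points and known would come from the partial map data):
# completed_maps = {k: complete_map(white_points, known, k)
#                   for k in (1, 3, 5, 7, 9)}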

Applying this algorithm to every white point of the incomplete map will result in the following completed maps:

As you will notice, higher values of k result in completed maps with smoother boundaries. The actual complete map of Italy is shown here:

We can use this real complete map to calculate the percentage of incorrectly classified points for the various values of k, and thus determine the accuracy of the k-NN algorithm for each of them (a sketch of this computation follows the table):

k    % of incorrectly classified points
1    2.97
3    3.24
5    3.29
7    3.40
9    3.57
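Assuming the true labels of the formerly white points are stored in a dictionary such as true_map (a hypothetical name), the percentages above could be obtained by comparing each completed map against it:

def error_rate(predictions, true_map):
    """Percentage of points whose predicted class differs from the true class."""
    wrong = sum(1 for point, label in predictions.items()
                if true_map[point] != label)
    return 100.0 * wrong / len(predictions)

# for k, predictions in completed_maps.items():
#     print(k, error_rate(predictions, true_map))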

Thus, for this particular type of classification problem, the k-NN algorithm achieves the highest accuracy (lowest error rate) for k = 1.

However, in real-life problems we usually would not have the complete data or solution. In such scenarios, we need to choose a value of k appropriate to the partially available data. For this, consult problem 1.4.
