(1)

Lazy Learners (or Learning from Your Neighbors)

Ronnie Alves (alvesrco@gmail.com)

Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign & Simon Fraser University

©2011 Han, Kamber & Pei. All rights reserved.

Slides adapted from: https://sites.google.com/site/alvesrco/

(2)

Machine Learning: The Basics

• Understand the domain and goals
• Data integration, selection, and cleaning
• Learning models
• Interpreting results, aligned with the goals
• Deploying models

LEARNING = REPRESENTATION + EVALUATION + OPTIMIZATION

(3)

Goal: to distinguish between sequences that contain highly imperfect TRs and aperiodic sequences without TRs

Learning classification models

Domain > Data!

Deploy a score!

(4)

Process (1): Model Construction

Training Data:

NAME   RANK             YEARS   TENURED
Mike   Assistant Prof   3       no
Mary   Assistant Prof   7       yes
Bill   Professor        2       yes
Jim    Associate Prof   7       yes
Dave   Assistant Prof   6       no
Anne   Associate Prof   3       no

Classification Algorithms

Classifier (Model):
IF rank = 'professor' OR years > 6
THEN tenured = 'yes'

(5)

Process (2): Using the Model in Prediction

Classifier

Testing Data:

NAME     RANK             YEARS   TENURED
Tom      Assistant Prof   2       no
Merlisa  Associate Prof   7       no
George   Professor        5       yes
Joseph   Assistant Prof   7       yes

Unseen Data: (Jeff, Professor, 4)

Tenured?
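As a minimal sketch (the function and object names below are illustrative, not from the slides), the IF-THEN rule learned on the previous slide can be written directly in R and applied to the unseen tuple (Jeff, Professor, 4):

# Sketch: the rule from Process (1), applied to the unseen tuple from Process (2)
tenure_rule <- function(rank, years) {
  ifelse(rank == "Professor" | years > 6, "yes", "no")
}

jeff <- data.frame(name = "Jeff", rank = "Professor", years = 4)
tenure_rule(jeff$rank, jeff$years)   # "yes": the rule predicts Jeff is tenured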

(6)

Process (3): Model Evaluation

Holdout method
• Given data is randomly partitioned into two independent sets
  • Training set (e.g., 2/3) for model construction
  • Test set (e.g., 1/3) for accuracy estimation
• Random sampling: a variation of holdout
  • Repeat holdout k times; accuracy = avg. of the accuracies obtained

Cross-validation (k-fold, where k = 10 is most popular)
• Randomly partition the data into k mutually exclusive subsets, each of approximately equal size
• At the i-th iteration, use Di as the test set and the others as the training set
• Leave-one-out: k folds where k = # of tuples, for small-sized data
• Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data
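A minimal sketch of the two partitioning schemes in base R, using the built-in iris data (the variable names here are illustrative, not from the slides):

# Holdout: 2/3 for model construction, 1/3 for accuracy estimation
set.seed(1)
n <- nrow(iris)
train_idx <- sample(n, size = round(2/3 * n))
train_set <- iris[train_idx, ]
test_set  <- iris[-train_idx, ]

# 10-fold cross-validation: k mutually exclusive folds of roughly equal size
k <- 10
fold <- sample(rep(1:k, length.out = n))
for (i in 1:k) {
  cv_train <- iris[fold != i, ]   # all folds except Di: training set
  cv_test  <- iris[fold == i, ]   # Di: test set at the i-th iteration
  # ... fit a classifier on cv_train, estimate accuracy on cv_test,
  #     then average the k accuracy estimates at the end
}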

(7)

Process (3): Model Evaluation

Given m classes, an entry CM(i,j) in a confusion matrix indicates the number of tuples in class i that were labeled by the classifier as class j. It may have extra rows/columns to provide totals.

Confusion Matrix:

Actual class\Predicted class   C1                     ¬C1
C1                             True Positives (TP)    False Negatives (FN)
¬C1                            False Positives (FP)   True Negatives (TN)

Example of Confusion Matrix:

Actual class\Predicted class   buy_computer = yes   buy_computer = no   Total
buy_computer = yes             6954                 46                  7000
buy_computer = no              412                  2588                3000
Total                          7366                 2634                10000

(8)

Process (3): Model Evaluation

Classifier accuracy, or recognition rate: percentage of test set tuples that are correctly classified
• Accuracy = (TP + TN) / All
• Error rate: 1 - accuracy, or Error rate = (FP + FN) / All

Class Imbalance Problem:
• One class may be rare, e.g. fraud, or HIV-positive
• Significant majority of the negative class and minority of the positive class
• Sensitivity: True Positive recognition rate; Sensitivity = TP/P
• Specificity: True Negative recognition rate; Specificity = TN/N

A\P   C    ¬C
C     TP   FN   P
¬C    FP   TN   N
      P'   N'   All
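As a small worked check, these measures can be computed directly from the buy_computer confusion matrix on the previous slide (a minimal R sketch; the variable names are illustrative):

TP <- 6954; FN <- 46; FP <- 412; TN <- 2588
All <- TP + FN + FP + TN          # 10000
P <- TP + FN                      # actual positives: 7000
N <- FP + TN                      # actual negatives: 3000

accuracy    <- (TP + TN) / All    # 0.9542
error_rate  <- (FP + FN) / All    # 0.0458
sensitivity <- TP / P             # ~0.9934
specificity <- TN / N             # ~0.8627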

(9)

Bias and Variance

Bias is a learner’s tendency to consistently learn the same wrong thing.

Variance is the tendency to learn random things irrespective of the real signal.

Very different decision boundaries can yield similar class predictions.

A more powerful learner is not necessarily better than a less powerful one

(10)

Over-fitting vs Under-fitting

ELABORATE A PROPER VALIDATION SCHEME TO BEST UNDERSTAND BIAS AND VARIANCE.

(11)

Lazy Learner: Instance-Based Methods

Instance-based learning:
Store training examples and delay the processing ("lazy evaluation") until a new instance must be classified

Typical approaches
• k-nearest neighbor approach
  Instances represented as points in a Euclidean space
• Locally weighted regression
  Constructs local approximation
• Case-based reasoning
  Uses symbolic representations and knowledge-based inference

(12)

Lazy Learner: Iris Data set

[Photos of the three species: Iris setosa, Iris versicolor, Iris virginica]

The Iris flower data set or Fisher's Iris data set is a multivariate data set introduced by Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis.[1]

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor).

Four features were measured from each sample:

(13)

Lazy Learner: Iris Data set

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor).

Four features were measured from each sample:

the length and the width of the sepals and petals, in centimetres.

[Histograms and boxplots of the four features, by species]

(14)

Lazy Learner: Iris Data set

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor).

Four features were measured from each sample:

the length and the width of the sepals and petals, in centimetres.

(15)

Lazy Learner: Instance-Based Methods

[Scatter plots of sepal and petal measurements, with an unknown flower marked X]

What kind of flower (X) is this?
(i) setosa, (ii) versicolor, (iii) virginica

(16)

Lazy Learner: Nearest-neighbor classifiers

Fix and Hodges introduced a non-parametric method for pattern classification that has since become known as the k-nearest neighbor rule (1951). Nearest neighbors have been used in statistical estimation and pattern recognition since the beginning of the 1970s (non-parametric techniques). Dynamic Memory: A Theory of Reminding and Learning in Computers and People (Schank, 1982). Nearest-neighbor classifiers are based on learning by analogy, that is, by comparing a given test tuple with training tuples that are similar to it.

(17)

Lazy Learner: Nearest-neighbor classifiers

The training tuples are described by n attributes. Each tuple represents a point in an n-dimensional pattern space. Thus, all of the training tuples are stored in an n-dimensional pattern space. When given an unknown tuple, a k-nearest-neighbor classifier searches the pattern space for the training tuples that are closest to the unknown tuple. These k training tuples are the k "nearest neighbors" of the unknown tuple.

(18)

Lazy Learner: Closeness

Closeness is defined in terms of a distance metric, such as Euclidean distance.

Instance x (often called a feature vector): <a1(x), a2(x), ..., an(x)>

Distance between two instances xi and xj (Euclidean):
dist(xi, xj) = sqrt( sum over r = 1..n of (ar(xi) - ar(xj))^2 )
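A minimal sketch of this distance in R (xi and xj below are illustrative numeric vectors; the values are two iris tuples' four measurements):

euclidean <- function(xi, xj) sqrt(sum((xi - xj)^2))   # Euclidean distance between feature vectors

xi <- c(5.1, 3.5, 1.4, 0.2)   # e.g. a setosa tuple
xj <- c(6.3, 3.3, 6.0, 2.5)   # e.g. a virginica tuple
euclidean(xi, xj)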

(19)

Lazy Learner: Discrete-Valued Target Function

Training algorithm:
• For each training example <x, class(x)>, add the example to the list Training

Classification algorithm:
• Let V = {v1, ..., vl} be a set of classes
• Given a query instance xq to be classified:
  • Let X = {x1, ..., xk} denote the k instances from Training that are nearest to xq
  • Return vi such that |votei| is largest
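A from-scratch sketch of this majority-vote rule (function and argument names are illustrative; the knn() function from the class package, used later in these slides, implements the same idea):

knn_classify <- function(train_x, train_class, xq, k = 3) {
  d <- apply(train_x, 1, function(x) sqrt(sum((x - xq)^2)))  # distance from xq to every stored example
  nearest <- order(d)[1:k]                                   # indices of the k nearest neighbors
  votes <- table(train_class[nearest])                       # |vote_i| for each class v_i
  names(which.max(votes))                                    # return the v_i with the largest vote
}

# Example: classify one iris tuple against the remaining 149
knn_classify(iris[-1, 1:4], iris$Species[-1], as.numeric(iris[1, 1:4]), k = 5)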

(20)

Lazy Learner: Continuous-valued target function

The algorithm computes the mean value of the k nearest training tuples rather than the most common value. Replace the final line of the previous algorithm with: return the mean of the k nearest neighbors' target values, f(xq) = (1/k) * (f(x1) + ... + f(xk)).

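A minimal sketch of this continuous-valued variant, reusing the same neighbor search as above (names are illustrative; e.g. the FNN package's knn.reg offers a packaged alternative):

knn_regress <- function(train_x, train_y, xq, k = 3) {
  d <- apply(train_x, 1, function(x) sqrt(sum((x - xq)^2)))
  mean(train_y[order(d)[1:k]])   # mean value of the k nearest training tuples
}

# Example: predict Petal.Width for one iris tuple from the other three measurements
knn_regress(iris[-1, 1:3], iris$Petal.Width[-1], as.numeric(iris[1, 1:3]), k = 5)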

(21)

Lazy Learner: Missing values

If the value of a given attribute A is missing in tuple X1 and/or in tuple X2, we assume the maximum possible difference. Suppose attributes have been mapped to the range [0, 1].

For categorical variables: we take the difference value to be 1 if either one or both of the corresponding values of A is missing.

(22)

Lazy Learner: Missing values

If the value of a given attribute A is missing in tuple X1 and/or in tuple X2, we assume the maximum possible difference. Suppose attributes have been mapped to the range [0, 1].

For numeric variables: if values are missing in both X1 and X2, the difference is 1. If only one value is missing and the other (which we'll call v') is present and normalized, then we take the difference to be either |1 - v'| or |0 - v'|, whichever is greater.
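These two rules can be captured in a small per-attribute difference function; a minimal sketch (the function name is illustrative), for one attribute already scaled to [0, 1]:

attr_diff <- function(v1, v2, categorical = FALSE) {
  if (is.na(v1) && is.na(v2)) return(1)                 # both missing: maximum difference
  if (is.na(v1) || is.na(v2)) {
    if (categorical) return(1)                          # one missing: difference taken as 1
    v <- if (is.na(v1)) v2 else v1                      # v' = the value that is present
    return(max(abs(1 - v), abs(0 - v)))                 # whichever of |1 - v'|, |0 - v'| is greater
  }
  if (categorical) return(as.numeric(v1 != v2))         # 0 if equal, 1 otherwise
  abs(v1 - v2)                                          # ordinary difference for numeric values
}

attr_diff(0.8, NA)   # 0.8, since |0 - 0.8| > |1 - 0.8|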

(23)

The k-Nearest Neighbor Algorithm

All instances correspond to points in the n-D space

The nearest neighbors are defined in terms of Euclidean distance, dist(X1, X2)

Target function could be discrete- or real- valued

For discrete-valued targets, k-NN returns the most common value among the k training examples nearest to xq

Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples


(24)

Discussion on the k-NN Algorithm

k-NN for real-valued prediction for a given unknown tuple

Returns the mean values of the k nearest neighbors

Distance-weighted nearest neighbor algorithm

Weight the contribution of each of the k neighbors according to their distance to the query xq

•  Give greater weight to closer neighbors
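A minimal sketch of distance-weighted voting (names are illustrative; a common choice, assumed here, is to weight each neighbor by the inverse squared distance, w_i = 1 / d(xq, xi)^2):

knn_weighted <- function(train_x, train_class, xq, k = 5) {
  d <- apply(train_x, 1, function(x) sqrt(sum((x - xq)^2)))
  nearest <- order(d)[1:k]
  w <- 1 / (d[nearest]^2 + 1e-12)                 # small constant guards against d = 0
  scores <- tapply(w, train_class[nearest], sum)  # summed weight per class
  names(which.max(scores))                        # class with the largest total weight
}

knn_weighted(iris[-1, 1:4], iris$Species[-1], as.numeric(iris[1, 1:4]))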

(25)

Discussion on the k-NN Algorithm

Robust to noisy data by averaging k-nearest neighbors

Curse of dimensionality: distance between neighbors could be dominated by irrelevant attributes

To overcome this, stretch the axes or eliminate the least relevant attributes

(26)

Lazy Learner: KNN

[Scatter plots of sepal and petal measurements, with an unknown flower marked X]

What kind of flower is this?
(i) setosa, (ii) versicolor, (iii) virginica

(27)

Lazy Learner: 1-NN

library(class)   # provides the knn() function
summary(iris)    # summary statistics of the four measurements and the species
head(iris)       # first rows of the data set

(28)

Lazy Learner: 1-NN

pairs(iris[1:4], main = "Iris Data", panel = panel.smooth, pch = 23,
      bg = c("red", "green", "black")[unclass(iris$Species)], lwd = 2, lty = 2)


Which pairs produce the best decision boundaries?

(29)

Lazy Learner: 1-NN

testidx <- which(1:length(iris[, 1]) %% 5 == 0)   # every 5th tuple goes to the test set
iristrain <- iris[-testidx, ]                     # 120 training tuples
iristest  <- iris[testidx, ]                      # 30 test tuples
train_input  <- as.matrix(iristrain[, -5])        # feature vectors
train_output <- as.vector(iristrain[, 5])         # class vector
test_input   <- as.matrix(iristest[, -5])
test_output  <- as.vector(iristest[, 5])

(30)

Lazy Learner: 1-NN

prediction <- knn(train_input, test_input, train_output, k = 1)   # knn() function from the class package
table(prediction, test_output)                                    # confusion matrix

##             test_output
## prediction   setosa versicolor virginica
##   setosa         10          0         0
##   versicolor      0         10         1
##   virginica       0          0         9

misclass <- (nrow(test_input) - sum(diag(table(prediction, test_output)))) / nrow(test_input)

Misclassification error rate ≈ 0.03 (1 misclassified tuple out of 30)

(31)

Lazy Learner: How can I determine a good number of neighbors?

Algorithm: starting with k = 1, we use a test set to estimate the misclassification error rate. Repeat this process n times, incrementing k to allow one more neighbor each time. Return the k with the lowest error rate. The larger the number of training tuples, the larger the value of k will be. If the number of points is fairly large, then the error rate of Nearest Neighbor is less than twice the Bayes error rate.

(32)

Lazy Learner: How can I determine a good number of neighbors?

k <- 1:9
p <- rep(0, 9)
summary <- cbind(k, p)
colnames(summary) <- c("k", "Percent misclassified")
for (i in 1:9) {
  result <- knn(train_input, test_input, train_output, k = i)
  # note: the error rate must be computed from 'result', the predictions for this k,
  # not from the earlier 'prediction' object
  summary[i, 2] <- (nrow(test_input) - sum(diag(table(result, test_output)))) / nrow(test_input)
}
plot(summary)

(33)

Lazy Learner: Distance-based comparison

Nearest-neighbor classifiers intrinsically assign equal weight to each attribute. Thus, they can suffer from poor accuracy when given noisy or irrelevant attributes. The choice of a distance metric can be critical.

TASK-1: List other distance measurements and a potential scenario of application for each (practical examples).

(34)

Lazy Learner: Complexity

If D is a training database of |D| tuples and k = 1, then O(|D|) comparisons are required in order to classify a given test tuple. By presorting and arranging the stored tuples into search trees, the number of comparisons can be reduced to O(log|D|). Parallel implementation can reduce the running time to O(1).

TASK-2: List other techniques which could speed up kNN running time.

(35)

Lazy Learner: Performance assessment

Explore kNN with the Wine data set: http://archive.ics.uci.edu/ml/datasets/Wine
These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines.

TASK-3: Explore an optimal k value for the Wine data set through five-fold cross-validation training.
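One possible starting sketch for TASK-3 (the exact file URL, column layout, and object names below are assumptions; check them against the data set page linked above, where the first column of wine.data is documented as the cultivar label):

library(class)
wine <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data",
                 header = FALSE)
x <- scale(wine[, -1])       # 13 constituents, standardized
y <- factor(wine[, 1])       # three cultivars

set.seed(1)
fold <- sample(rep(1:5, length.out = nrow(wine)))
err <- sapply(1:15, function(k) {
  mean(sapply(1:5, function(i) {
    pred <- knn(x[fold != i, ], x[fold == i, ], y[fold != i], k = k)
    mean(pred != y[fold == i])    # misclassification rate on fold i
  }))
})
which.min(err)                    # candidate optimal k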

(36)

Lazy Learner: Summary


(37)

Lazy Learner: Summary

• kNN is conceptually simple, yet able to solve complex problems
• Can work with relatively little information
• Learning is simple (no learning at all!)
• Memory and CPU cost
• Feature selection problem; sensitive to representation
