(1)

Decision Tree : Pruning

Ronnie Alves (alvesrco@gmail.com)

Jiawei Han, Micheline Kamber, and Jian Pei

University of Illinois at Urbana-Champaign &

Simon Fraser University

©2011 Han, Kamber & Pei. All rights reserved.

Adapted slides: https://sites.google.com/site/alvesrco/

(2)

Decision tree: An Example


> iris[1:4,]

  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
4          4.6         3.1          1.5         0.2  setosa

(Figure: a decision tree grown on the iris data, with a root node, internal test-on-attribute nodes, and class labels at the leaves.)

> table(subset(iris, iris$Petal.Length < 2.45)$Species)

    setosa versicolor  virginica
        50          0          0

(3)

(Figure: photo of an iris flower with its sepals and petals labelled.)

What kind of flower is this?

(i) setosa, (ii) versicolor, (iii) virginica

(4)

Decision Tree:

tree

R Package

####### Decision trees for classification

####### By Ronnie Alves

library(tree)

attach(iris)

iris[1:4,]

xtabs(~Species,data=iris)

> xtabs(~Species, data = iris)
Species
    setosa versicolor  virginica
        50         50         50

> iris[1:4,]
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
4          4.6         3.1          1.5         0.2  setosa

(5)

Decision Tree:

tree

R Package

dt1 = tree(Species ~ ., iris, split="gini")

dt1

node), split, n, deviance, yval, (yprob)
      * denotes terminal node

 1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
   2) Petal.Length < 1.35 11 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
   3) Petal.Length > 1.35 139 303.600 versicolor ( 0.28058 0.35971 0.35971 )
     6) Sepal.Width < 2.35 7 5.742 versicolor ( 0.00000 0.85714 0.14286 ) *
     7) Sepal.Width > 2.35 132 288.900 virginica ( 0.29545 0.33333 0.37121 )
      14) Sepal.Width < 2.55 11 14.420 versicolor ( 0.00000 0.63636 0.36364 )
        28) Petal.Length < 4.25 6 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
        29) Petal.Length > 4.25 5 5.004 virginica ( 0.00000 0.20000 0.80000 ) *
      15) Sepal.Width > 2.55 121 265.000 virginica ( 0.32231 0.30579 0.37190 )
        30) Petal.Width < 0.25 26 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
        31) Petal.Width > 0.25 95 188.700 virginica ( 0.13684 0.38947 0.47368 )
          62) Petal.Width < 1.75 52 79.640 versicolor ( 0.25000 0.69231 0.05769 )
           124) Petal.Length < 2.7 13 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
           125) Petal.Length > 2.7 39 21.150 versicolor ( 0.00000 0.92308 0.07692 )
             250) Petal.Length < 4.95 34 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
             251) Petal.Length > 4.95 5 6.730 virginica ( 0.00000 0.40000 0.60000 ) *
          63) Petal.Width > 1.75 43 9.499 virginica ( 0.00000 0.02326 0.97674 )
           126) Sepal.Length < 6.05 7 5.742 virginica ( 0.00000 0.14286 0.85714 ) *
           127) Sepal.Length > 6.05 36 0.000 virginica ( 0.00000 0.00000 1.00000 ) *

(6)

Decision Tree:

tree

R Package

summary(dt1)

plot(dt1)

text(dt1)

> summary(dt1)

Classification tree:
tree(formula = Species ~ ., data = iris, split = "gini")
Number of terminal nodes:  10
Residual mean deviance:  0.1658 = 23.22 / 140
Misclassification error rate: 0.03333 = 5 / 150

(7)

Decision Tree:

tree

R Package

summary(dt1)

plot(dt1)

text(dt1)

(8)

Decision Tree:

tree

R Package

dt2 = tree(Species ~ ., iris, split="deviance")

summary(dt2)

plot(dt2)

text(dt2)

Classification tree:
tree(formula = Species ~ ., data = iris, split = "deviance")
Variables actually used in tree construction:
[1] "Petal.Length" "Petal.Width"  "Sepal.Length"
Number of terminal nodes:  6
Residual mean deviance:  0.1253 = 18.05 / 144
Misclassification error rate: 0.02667 = 4 / 150

The deviance criterion comes from the log-likelihood: the deviance of a fitted model is twice the gap between the log-likelihood of the saturated model and that of the fitted model,

D = 2 ( LogL_saturated − LogL_model ) = 2 ( Σ_{i=1}^{n} log f_s(y_i) − Σ_{i=1}^{n} log f(y_i) ),

where f(y_i) is the fitted model's likelihood contribution for observation i and f_s(y_i) that of the saturated model.
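For intuition, and as a small addition not on the original slide, the per-node deviance printed by the tree package can be recomputed from the class counts in a node as -2 Σ_k n_k log(p_k); the sketch below checks the root-node value of 329.600 shown in the earlier output.

# Sketch: recompute a node's deviance, -2 * sum_k n_k * log(p_k),
# from the class counts in that node (here, the root node of the iris tree)
n_k <- table(iris$Species)       # 50 50 50
p_k <- n_k / sum(n_k)            # 1/3 1/3 1/3
-2 * sum(n_k * log(p_k))         # ~329.58, matching the 329.600 printed for the root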

(9)

Tree Pruning

(10)


Tree Pruning

When a decision tree is built, many of its branches will reflect anomalies in the training data caused by noise or outliers.

Tree pruning methods address this problem of overfitting the data.

The main goal is to use statistical measures to remove the least reliable branches.

Pruned trees tend to be smaller, less complex, and thus easier to comprehend. They are also usually faster and better at correctly classifying independent test data.

(11)

Tree Pruning

(12)

Overfitting: decomposing

generalization error

Bias is a learner's tendency to consistently learn the same

wrong thing.

Variance is the tendency to learn random things irrespective of the real signal.


(13)

Overfitting: decomposing

generalization error

Parametric or linear machine learning algorithms often

have a high bias but a low variance.

Non-parametric or non-linear machine learning

algorithms often have a low bias but a high variance.


(14)

Overfitting: decomposing

generalization error

A linear learner has high bias, because when the frontier between two classes is not a hyperplane the learner is unable to induce it.

Decision trees don't have this problem because they can represent any Boolean function, but on the other hand they can suffer from high variance: decision trees learned on different training sets generated by the same phenomenon are often very different, when in fact they should be the same.

The k-nearest neighbors algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
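To make that k-NN trade-off concrete, here is a minimal sketch, not from the original slides, that fits k-NN on a random split of iris for several values of k (it assumes the class package is available):

# Sketch: bias/variance trade-off in k-NN controlled by k
library(class)                        # provides knn()
set.seed(1)
idx     <- sample(nrow(iris), 100)    # random train/test split
train_x <- iris[idx, 1:4]
test_x  <- iris[-idx, 1:4]
train_y <- iris$Species[idx]
for (k in c(1, 5, 25, 75)) {
  pred <- knn(train_x, test_x, train_y, k = k)
  cat("k =", k, " test error =", mean(pred != iris$Species[-idx]), "\n")
}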

(15)

Prepruning

A tree is pruned by hal:ng its construc5on early. Upon hal5ng,

the node becomes a leaf.

Measures like sta5s5cal significance, informa5on gain, Gini

index, and so on can be used to asses the goodness of a split.

– 

The split should falls bellow a threshold

– 

High threshold could result in oversimplified trees,

whereas low threshold could result in very li@le

simplifica:on

15
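In the tree package this kind of early stopping can be approximated through tree.control; the sketch below illustrates the idea (the specific minsize and mindev values are arbitrary, not from the slides):

# Sketch: prepruning by early stopping with tree.control()
# minsize: smallest node size that may still be split
# mindev:  a node is split only if its deviance is at least
#          mindev times the deviance of the root node
library(tree)
dt_pre <- tree(Species ~ ., data = iris,
               control = tree.control(nobs = nrow(iris),
                                      minsize = 20, mindev = 0.05))
summary(dt_pre)   # typically fewer terminal nodes than the default tree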

(16)

Postpruning

Postpruning removes subtrees from a fully grown tree. A subtree at a given node is pruned by removing its branches and replacing it with a leaf.

The leaf is labelled with the most frequent class in the replaced subtree.

(Figure: the subtree rooted at node A3 is replaced by a leaf labelled with its most frequent class, class B.)

(17)

Postpruning: CART

The approach considers the cost complexity of a tree to be a function of the number of leaves in the tree and the error rate of the tree.

Starting from the bottom of the tree, for each internal node N, it computes the cost complexity of the subtree at N and the cost complexity of the subtree at N if it were to be pruned (replaced by a leaf). If pruning the subtree results in a smaller cost complexity, the subtree is pruned.

A pruning set of class-labeled tuples is used to estimate cost complexity. The smallest decision tree that minimizes the cost complexity is preferred.

(18)

Cost Complexity

CC(T) = Err(T) + α · L(T)

CC(T) = cost complexity of tree T

Err(T) = proportion of misclassified records

L(T) = number of leaves of tree T

α = penalty factor attached to tree size (set by the user)

Among trees of a given size, choose the one with the lowest CC; do this for each tree size.
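As a hedged sketch of the formula above (not part of the original slides), the snippet below evaluates CC(T) = Err(T) + α·L(T) over pruned subtrees of dt2 of several sizes; for simplicity the error is measured on the training data rather than on a separate pruning set, and the value of α is arbitrary:

# Sketch: cost complexity CC(T) = Err(T) + alpha * L(T) for pruned subtrees of dt2
cc <- function(err, n_leaves, alpha) err + alpha * n_leaves

alpha <- 0.01
sizes <- c(6, 4, 3, 2)                                    # candidate numbers of leaves
errs  <- sapply(sizes, function(s) {
  p <- prune.tree(dt2, best = s)                          # subtree with ~s leaves
  mean(predict(p, iris, type = "class") != iris$Species)  # (training) error rate
})
data.frame(size = sizes, err = errs, CC = cc(errs, sizes, alpha))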

(19)

Using Validation Error to Prune

The pruning process yields a set of trees of different sizes and their associated error rates.

Two trees are of particular interest:

—  Minimum error tree: the tree with the lowest error rate on the validation data.

—  Best pruned tree: the smallest tree within one standard error of the minimum error tree; this adds a bonus for simplicity/parsimony.
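A rough sketch of how these two sizes could be estimated for dt2 with the tree package (an illustration, not the slides' procedure; the standard error here is simply computed across repeated cross-validation runs, and cv.tree is only introduced on a later slide):

# Sketch: "minimum error" vs. "best pruned" (one-standard-error rule) tree sizes
set.seed(123)
sizes <- cv.tree(dt2, FUN = prune.misclass)$size               # candidate sizes
devs  <- replicate(20, cv.tree(dt2, FUN = prune.misclass)$dev) # CV misclassification counts
m  <- rowMeans(devs)
se <- apply(devs, 1, sd) / sqrt(ncol(devs))

i_min  <- which.min(m)                              # minimum error tree
i_best <- max(which(m <= m[i_min] + se[i_min]))     # smallest tree within one SE
c(min_error_size = sizes[i_min], best_pruned_size = sizes[i_best])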

(20)

Error rates on pruned trees


(21)

Postpruning

C4.5 uses pessimistic pruning, which is similar to cost complexity pruning but does not use a pruning set: it uses the training set to estimate error rates.

–  Estimating accuracy/error from the training data is overly optimistic and therefore strongly biased.

Other measures could be used, such as the Minimum Description Length (MDL). The simplest solution is preferred.

Postpruning requires more computation than prepruning, but it leads to a more reliable tree.

(22)

Postpruning

Avoid overfitting by exploring k-fold cross-validation and then pruning.

(Figure: a k = 4 cross-validation split of the data into training and test folds.)
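As a minimal sketch of such a k = 4 scheme done by hand (the following slides use cv.tree, which performs the cross-validation internally):

# Sketch: manual k = 4 cross-validation of a classification tree on iris
set.seed(42)
k <- 4
fold <- sample(rep(1:k, length.out = nrow(iris)))     # assign each row to a fold
errs <- sapply(1:k, function(f) {
  fit  <- tree(Species ~ ., data = iris[fold != f, ])
  pred <- predict(fit, iris[fold == f, ], type = "class")
  mean(pred != iris$Species[fold == f])               # held-out error of fold f
})
mean(errs)   # cross-validated misclassification estimate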

(23)

Decision Tree:

tree

R Package

####### Decision trees for classification

####### By Ronnie Alves

library(tree)

attach(iris)

iris[1:4,]

xtabs(~Species,data=iris)

> xtabs(~Species, data = iris)
Species
    setosa versicolor  virginica
        50         50         50

> iris[1:4,]
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
4          4.6         3.1          1.5         0.2  setosa

(24)

Decision Tree:

tree

R Package

dt2 = tree(Species ~ ., iris, split="deviance")

summary(dt2)

plot(dt2)

text(dt2)

Classification tree:
tree(formula = Species ~ ., data = iris, split = "deviance")
Variables actually used in tree construction:
[1] "Petal.Length" "Petal.Width"  "Sepal.Length"
Number of terminal nodes:  6
Residual mean deviance:  0.1253 = 18.05 / 144
Misclassification error rate: 0.02667 = 4 / 150

(25)

Decision Tree:

tree

R Package

ct = cv.tree(dt2, FUN=prune.tree)

plot(ct)

Prune to 4 terminal nodes!

(Deviance is an entropy-like measure.)
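The suggested size can also be read directly off the cv.tree object; this short addition is not on the slide:

# Size with the lowest cross-validated deviance (cross-validation is random,
# so re-running cv.tree can change the result; the slide's run suggests 4)
best_size <- ct$size[which.min(ct$dev)]
best_size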

(26)

Decision Tree:

tree

R Package

pt = prune.tree(dt2, best=4)

pt

dt2

node), split, n, deviance, yval, (yprob)
      * denotes terminal node

pt (4 terminal nodes):

 1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
   2) Petal.Length < 2.45 50 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
   3) Petal.Length > 2.45 100 138.600 versicolor ( 0.00000 0.50000 0.50000 )
     6) Petal.Width < 1.75 54 33.320 versicolor ( 0.00000 0.90741 0.09259 )
      12) Petal.Length < 4.95 48 9.721 versicolor ( 0.00000 0.97917 0.02083 ) *
      13) Petal.Length > 4.95 6 7.638 virginica ( 0.00000 0.33333 0.66667 ) *
     7) Petal.Width > 1.75 46 9.635 virginica ( 0.00000 0.02174 0.97826 ) *

dt2 (6 terminal nodes):

 1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
   2) Petal.Length < 2.45 50 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
   3) Petal.Length > 2.45 100 138.600 versicolor ( 0.00000 0.50000 0.50000 )
     6) Petal.Width < 1.75 54 33.320 versicolor ( 0.00000 0.90741 0.09259 )
      12) Petal.Length < 4.95 48 9.721 versicolor ( 0.00000 0.97917 0.02083 )
        24) Sepal.Length < 5.15 5 5.004 versicolor ( 0.00000 0.80000 0.20000 ) *
        25) Sepal.Length > 5.15 43 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
      13) Petal.Length > 4.95 6 7.638 virginica ( 0.00000 0.33333 0.66667 ) *
     7) Petal.Width > 1.75 46 9.635 virginica ( 0.00000 0.02174 0.97826 )
      14) Petal.Length < 4.95 6 5.407 virginica ( 0.00000 0.16667 0.83333 ) *
      15) Petal.Length > 4.95 40 0.000 virginica ( 0.00000 0.00000 1.00000 ) *

(27)

Decision Tree:

tree

R Package

pt = prune.tree(dt2, best=4)

plot(pt)

(28)

Decision Tree:

tree

R Package

plot(iris$Petal.Width, iris$Petal.Length, col = iris$Species, pch = 20,
     xlab = "Petal_W", ylab = "Petal_L")
partition.tree(pt, ordvars = c("Petal.Width", "Petal.Length"), add = TRUE, cex = 0.9)

(29)

Decision Tree:

tree

R Package

pt = prune.tree(dt2, best=4)

predict(pt, iris[1,-5])

  setosa versicolor virginica
1      1          0         0

iris[1,]

  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa

(30)

Decision tree: Pruning

Decision trees usually overfit the data.

To avoid overfitting, k-fold cross-validation can be employed to prune unreliable branches.

Postpruning is more effective than prepruning, but it requires more computation.

(31)

https://inclass.kaggle.com/c/back-to-the-prediction-of-music-genres

(32)

Join competition

(33)

The material is divided into six categories: classical music, jazz, blues, pop, rock and heavy metal. For each of the performers, 15-20 music pieces have been collected. All music pieces are partitioned into 20 segments and parameterized. The descriptors used in the parametrization, including those formulated within the MPEG-7 standard, are only listed here, since they have already been thoroughly reviewed and explained in many studies.

The feature vector consists of 191 parameters; the first 127 parameters are based on the MPEG-7 standard, and the remaining ones are cepstral coefficient descriptors and dedicated time-related parameters.

(34)

Data Files

File Name      Available Formats
genresTrain    .csv (21.83 MB)
genresTest2    .csv (9.05 MB)

About the data!

> dim(train)
[1] 12495   192
> dim(test)
[1]  5225   191
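A minimal loading sketch, assuming both files have been downloaded from the competition page into the working directory (the object names train and test simply mirror the dim() output above):

# Read the Kaggle files; the training set carries one extra column, the genre label
train <- read.csv("genresTrain.csv")
test  <- read.csv("genresTest2.csv")
dim(train)   # 12495 x 192 (191 audio features + the genre label)
dim(test)    #  5225 x 191 (audio features only)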

(35)

Your submission file should be in the following format: for each of the 5225 genres in the test set, output a single line with the code of the genre you predict. For example, if you predict that the first genre is Blues (coded as 1), the second genre is Rock (coded as 6), and the third genre is Metal (coded as 4), then your submission file would look like:

Id Genres

1 1

2 6

3 4

The complete "decoding" for each genre is:

Blues=1, Classical=2, Jazz=3, Metal=4, Pop=5, Rock=6

The evaluation metric for this contest is the categorization accuracy, or the

proportion of test genres that are correctly classified. For example, a

categorization accuracy of 0.97 indicates that you have correctly classified all

but 3% of the genres.
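As a hedged sketch of producing such a file, assuming pred is a factor of predicted genre names for the 5225 test rows (pred and the output file name are illustrative, not part of the competition kit):

# Map predicted genre names to the competition's numeric codes and write the file
codes <- c(Blues = 1, Classical = 2, Jazz = 3, Metal = 4, Pop = 5, Rock = 6)
submission <- data.frame(Id     = seq_len(nrow(test)),
                         Genres = codes[as.character(pred)])
write.csv(submission, "submission.csv", row.names = FALSE)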

(36)
(37)

There is no free lunch!

submissions

0.21

(38)

A short paper, 6-8 pages (using the SBC template). Teams of 3 students (1 leader, responsible for the submissions).

Abstract

1 - Introduction

2 - Data preparation

3 - Modeling

4 - Model evaluation

5 - Conclusion

(39)

Kaggle Competition

References
