
Analysis of hierarchical metric-tree indexing schemes

for similarity search in high-dimensional datasets

Vladimir Pestov

vpest283@uottawa.ca

http://aix1.uottawa.ca/~vpest283

Department of Mathematics and Statistics

University of Ottawa


General setting

Workload: W = (Ω, X, Q), where:

Ω is the domain,

X ⊂ Ω is a finite subset (the dataset, or instance), and

Q ⊆ 2^Ω is the set of queries.

Answering a query Q ∈ Q means listing all x ∈ X ∩ Q.

A (dis)similarity measure s : Ω × Ω → R, e.g. a metric or a pseudometric.

A range similarity query centred at ω ∈ Ω:

Q = {x ∈ Ω : s(ω, x) < ε}


Similarity workloads

W = (Ω, d, X, {B_ε(x)})

A k-nearest neighbours (k-NN) query centred at x ∈ Ω, where k ∈ N.
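As a baseline, both query types can be answered by a plain linear scan over X. A minimal Python sketch (the function and variable names are illustrative, not from the slides):

    from heapq import nsmallest

    def range_query(X, s, omega, eps):
        """All x in the dataset X with s(omega, x) < eps, i.e. the answer to B_eps(omega)."""
        return [x for x in X if s(omega, x) < eps]

    def knn_query(X, s, omega, k):
        """The k points of X closest to omega under the dissimilarity s."""
        return nsmallest(k, X, key=lambda x: s(omega, x))

    # Toy workload: points on the line with the usual metric.
    X = [0.1, 0.4, 0.45, 0.9]
    d = lambda a, b: abs(a - b)
    print(range_query(X, d, 0.5, 0.2))   # [0.4, 0.45]
    print(knn_query(X, d, 0.5, 1))       # [0.45]

Every indexing scheme discussed below is an attempt to beat this O(|X|) baseline.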


Example

Ω = strings of length m = 10 over the alphabet Σ of 20 standard amino acids: Ω = Σ^10.

X = all peptide fragments of length 10 in the SwissProt database (as of 19-Oct-2002). |X| = 23,817,598.

Similarity measure given by the most common scoring matrix in sequence comparison, BLOSUM62, via s(a, b) = Σ_{i=1}^{m} s(a_i, b_i) (the ungapped score).

Converted into a quasi-metric d(a, b) = s(a, a) − s(a, b), generating the same set of queries (range and k-NN).

(joint work with A. Stojmirović)
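A sketch of the ungapped score and the quasi-metric conversion. The BLOSUM62 lookup below is truncated to a few entries purely for illustration (a real table covers all 20 × 20 scores), so treat B62 as a placeholder:

    # Abbreviated, symmetric BLOSUM62 lookup; a full table covers all 20x20 pairs.
    B62 = {("A", "A"): 4, ("A", "R"): -1, ("R", "R"): 5,
           ("A", "N"): -2, ("R", "N"): 0, ("N", "N"): 6}

    def blosum(a, b):
        return B62.get((a, b), B62.get((b, a)))

    def score(u, v):
        """Ungapped score s(u, v) = sum_i s(u_i, v_i) for strings of equal length."""
        return sum(blosum(x, y) for x, y in zip(u, v))

    def quasi_metric(u, v):
        """d(u, v) = s(u, u) - s(u, v): vanishes at u = v but is not symmetric."""
        return score(u, u) - score(u, v)

    print(score("ARN", "ARN"))         # 15
    print(quasi_metric("ARN", "ANR"))  # 15 - (4 + 0 + 0) = 11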


Inner vs outer

Inner workload if X = Ω.

Outer workload if |X| ≪ |Ω|. The fragment example is outer:

|X|/|Ω| = 23,817,598 / 20^10 ≈ 0.0000023

Most points ω ∈ Ω have a NN x ∈ X within ε = 25 (high biological relevance).

[Figure: the measure µ(N_ε(X)) of the ε-neighbourhood of X, for the quasi-metric and for the metric.]


Hierarchical tree index structures

A sequence of refining partitions of the domain.

Space: O(n).

To process a range query B_ε(ω), we traverse the tree all the way down to the leaf level.

What happens in each node?


Pruning

• If B_ε(ω) ∩ B = ∅, the sub-tree descending from the node B can be pruned;

that is, if it can be certified that ω ∉ B_ε = {x ∈ Ω : d(x, B) < ε}.

• Otherwise the search branches out.

[Figure: a query ball B_ε(ω) against sibling nodes A and B, in the pruning and in the branching configuration.]

How to "certify" that B_ε(ω) ∩ B = ∅?


Decision functions

Let f : Ω → R be a 1-Lipschitz function,

|f(x) − f(y)| ≤ d(x, y) for all x, y ∈ Ω,

such that f ↾ B ≤ 0. Then f ↾ B_ε < ε;

[Figure: a point y ∈ B, where f(y) ≤ 0, and a point x with d(x, B) < ε, so f(x) < ε.]

that is, f(ω) ≥ ε certifies that ω ∉ B_ε, hence that B_ε(ω) ∩ B = ∅, and the node B can be pruned.
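A common concrete instance (an illustration, not the only possible choice) is the ball-based decision function f(x) = ρ(x, p) − r for a node covered by the ball of radius r around a pivot p; it is 1-Lipschitz and ≤ 0 on the ball, so f(ω) ≥ ε certifies pruning:

    import math

    def ball_decision(p, r):
        """f(x) = rho(x, p) - r: 1-Lipschitz, and <= 0 on the ball B(p, r)."""
        return lambda x: math.dist(x, p) - r      # math.dist: Euclidean distance (Python 3.8+)

    f = ball_decision(p=(0.0, 0.0), r=1.0)        # node covered by the unit ball
    omega, eps = (3.0, 0.0), 0.5                  # query ball B_eps(omega)

    if f(omega) >= eps:
        # rho(omega, p) >= r + eps, so by the triangle inequality every x with
        # rho(x, p) <= r satisfies rho(omega, x) >= eps: the query ball misses the node.
        print("prune")                            # printed here: f(omega) = 2.0
    else:
        print("must visit")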

Metric trees

A metric tree for a metric similarity workload (Ω, ρ, X) consists of:

a binary rooted tree T,

a collection of partially defined 1-Lipschitz functions f_t : B_t → R for every inner node t (decision functions),

a collection of bins B_t ⊆ Ω for every leaf node t, containing pointers to the elements of X ∩ B_t,

such that

B_root(T) = Ω,

for every inner node t with child nodes t−, t+: B_t ⊆ B_{t−} ∪ B_{t+}.

When processing a range query B_ε(ω),

t− [t+] is accessed ⟺ f_t(ω) < ε [resp. f_t(ω) > −ε].
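A minimal sketch of the traversal rule just stated. The node layout and the convention that f_t ≤ 0 on the region assigned to t− (and ≥ 0 on that of t+) are my assumptions about a typical implementation, not prescriptions from the slides:

    class Leaf:
        def __init__(self, points):
            self.points = points            # pointers to X ∩ B_t

    class Inner:
        def __init__(self, f, left, right):
            self.f = f                      # 1-Lipschitz decision function f_t
            self.left = left                # child t- (assumed: f_t <= 0 on its region)
            self.right = right              # child t+ (assumed: f_t >= 0 on its region)

    def range_search(node, rho, omega, eps, out):
        """Visit t- iff f_t(omega) < eps, visit t+ iff f_t(omega) > -eps."""
        if isinstance(node, Leaf):
            out.extend(x for x in node.points if rho(omega, x) < eps)
            return
        v = node.f(omega)
        if v < eps:
            range_search(node.left, rho, omega, eps, out)
        if v > -eps:
            range_search(node.right, rho, omega, eps, out)

    # Toy 1-D tree: split at 0.5 with the 1-Lipschitz function f(x) = x - 0.5.
    rho = lambda a, b: abs(a - b)
    tree = Inner(lambda x: x - 0.5, Leaf([0.1, 0.4]), Leaf([0.6, 0.9]))
    out = []
    range_search(tree, rho, 0.55, 0.1, out)
    print(out)                              # [0.6]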


What happens in practice?

The best indexing schemes for exact similarity search in high-dimensional outer datasets are often (not always!) outperformed by linear scan.

∗ ∗ ∗

The emphasis has shifted towards approximate similarity search:

given ε > 0 and ω ∈ Ω, return a point that is [with high probability] at a distance < (1 + ε) d_NN(ω) from ω.


The curse of dimensionality conjecture

Conjecture. Let X ⊆ {0, 1}^d be a dataset with n points, where the Hamming cube is equipped with the Hamming (ℓ_1) distance:

d(x, y) = ♯{i : x_i ≠ y_i}.

Suppose d = n^{o(1)} but d = ω(log n). Any data structure for exact nearest neighbour search in X, with d^{O(1)} query time, must use n^{ω(1)} space.

∗ ∗ ∗

The cell probe model: an Ω(d / log n) lower bound (Barkol–Rabani, 2000).


Concentration of measure

The phenomenon of concentration of measure on high-dimensional structures ("Geometric LLN"):

for a typical "high-dimensional" structure Ω, if A is a subset containing at least half of all points, then the measure of the ε-neighbourhood A_ε of A is overwhelmingly close to 1 already for small ε > 0.

[Figure: a set A ⊆ Ω containing at least half of all points, its ε-neighbourhood A_ε, and the remainder Ω \ A_ε of measure α(Ω, ε).]


Concentration function

Let Ω = (Ω, d, µ) be a metric space with measure.

The concentration function of Ω:

α(ε) = 1/2, if ε = 0;

α(ε) = 1 − min{µ(A_ε) : A ⊆ Ω, µ(A) ≥ 1/2}, if ε > 0.

For Ω = Σ^n, the Hamming cube (normalized distance + uniform measure):

α_{Σ^n}(ε) ≤ e^{−2ε²n}.

Gaussian estimates are typical.
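A quick numeric check of the Gaussian bound for n = 101 (the case plotted on the next slide). The exact value is reduced to a binomial tail under the standard assumption that the extremal sets in the Hamming cube are Hamming balls (Harper's theorem); the rounding of the tail index below is approximate:

    from math import comb, exp

    n = 101                                  # dimension, as in the plot on the next slide

    def hamming_alpha(eps):
        """Binomial tail P[Bin(n, 1/2) >= n/2 + eps*n]: the concentration function of
        Sigma^n when the extremal set is taken to be a Hamming half-cube."""
        k0 = int(n / 2 + eps * n) + 1
        return sum(comb(n, k) for k in range(k0, n + 1)) / 2 ** n

    def gaussian_bound(eps):
        return exp(-2 * eps ** 2 * n)

    for eps in (0.05, 0.10, 0.15, 0.20):
        print(f"eps={eps:.2f}  tail={hamming_alpha(eps):.2e}  bound={gaussian_bound(eps):.2e}")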

Example: the Hamming cube

[Figure: the concentration function α(Σ^101, ε) versus the Chernoff bound, n = 101.]


Effects of concentration on branching

[Figure: a node C with children A and B and a query ball B_ε(ω) centred at a point ω ∈ C; each of the two annotated exceptional regions has measure < α(C, ε).]

For all query points ω ∈ C except a set of measure ≤ 2α(C, ε), the search branches into both child nodes.

Search radius

ε_NN(ω) is a 1-Lipschitz function, so it concentrates near its median value ε_M;

ε_M → E_{µ⊗µ} d(x, y) = O(1).

Example: 1000 points ∼ uniform on [0, 1]^10, the ℓ_2 nearest-neighbour distance ε_NN:

ε_M = 0.69419,  E d(x, y) = 1.2765.
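A Monte Carlo reproduction of this example (results fluctuate a little with the random seed but should land near the quoted values):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((1000, 10))                    # 1000 points, uniform in [0, 1]^10

    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # all pairwise l2 distances
    np.fill_diagonal(D, np.inf)

    eps_nn = D.min(axis=1)                        # nearest-neighbour distance of each point
    print("median eps_NN:", np.median(eps_nn))            # should be close to 0.69419
    print("mean d(x, y): ", D[np.isfinite(D)].mean())     # should be close to 1.2765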


A naive average Ω(n) lower bound

Suppose datapoints are distributed according to µ ∈ P(Ω)...

...as well as query points.

A balanced metric tree of depth O(log n), with O(n) bins of roughly equal size (µ-measure).

In 1/2 of the cases, ε_NN ≥ ε_M = O(1), the median NN distance.

For every element A of the level-t partition,

α(A, ε_M) ≤ 2 µ(A)^{−1} α(Ω, ε_M/2) = O(2^t) e^{−O(1) ε_M² d}.

Branching at every node occurs for all ω except a set of measure

♯(nodes) × 2 sup_A α(A, ε_M) = O(n²) e^{−O(1)d} = o(1),

because d = ω(log n), so e^{−O(1)d} decays faster than any power of n.
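A back-of-the-envelope check of that last step, with an illustrative constant c in the exponent (the constant and the value of n are mine, not from the slides):

    from math import log, exp

    c, n = 1.0, 10 ** 6                     # illustrative constant and dataset size

    for d in (int(2 * log(n)), int(log(n) ** 2)):   # d ~ 2 ln n  vs  d = (ln n)^2 = omega(log n)
        print(f"d = {d:4d}:  n^2 * e^(-c*d) = {n ** 2 * exp(-c * d):.3e}")

    # d ~ 2 ln n   : the product stays around 1, so the argument gives nothing;
    # d = (ln n)^2 : the product is ~1e-71, i.e. o(1), so branching occurs almost everywhere.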


What's wrong?

A dataset X is modeled by a sequence of i.i.d. random variables X_i ∼ µ.

Implicit assumption: the empirical measure µ_n(A) = |X ∩ A| / n ≈ µ(A).

But the scheme is chosen after seeing an instance X!



VC dimension

Let A be a family of subsets of Ω (a concept class).

B ⊆ Ω is shattered by A if for each C ⊆ B there is A ∈ A such that

A ∩ B = C.

[Figure: a set B ⊆ Ω, a subset C ⊆ B, and a concept A ∈ A with A ∩ B = C.]
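Shattering is easy to check by brute force on small examples; a sketch (the concept class of initial segments below is a made-up illustration):

    from itertools import chain, combinations

    def subsets(B):
        return chain.from_iterable(combinations(B, r) for r in range(len(B) + 1))

    def shattered(B, concepts):
        """True iff every C ⊆ B arises as A ∩ B for some concept A in the class."""
        traces = {frozenset(A) & frozenset(B) for A in concepts}
        return all(frozenset(C) in traces for C in subsets(B))

    # Concept class: initial segments {1, ..., k} of {1, ..., 5}.  It shatters every
    # single point but no two-point set, so its VC dimension is 1.
    concepts = [set(range(1, k + 1)) for k in range(0, 6)]
    print(shattered({3}, concepts))       # True
    print(shattered({2, 4}, concepts))    # False: no initial segment picks out {4} alone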


Statistical learning bounds

Let A ⊆ 2^Ω be a concept class of finite VC dimension d. Then for all ε, δ > 0 and every probability measure µ on Ω, if the n datapoints of X are drawn randomly and independently according to µ, then with confidence 1 − δ,

∀A ∈ A,  | µ(A) − |X ∩ A| / n | < ε,

provided n is large enough:

n ≥ (128 / ε²) [ d log( (2e² / ε) log(2e / ε) ) + log(8 / δ) ].
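Plugging example values into the bound exactly as stated above (the d, ε and δ below are arbitrary illustration values):

    from math import log, e, ceil

    def sample_bound(d, eps, delta):
        """n >= (128/eps^2) * ( d*log((2e^2/eps) * log(2e/eps)) + log(8/delta) )."""
        return ceil(128 / eps ** 2 * (d * log((2 * e ** 2 / eps) * log(2 * e / eps))
                                      + log(8 / delta)))

    print(sample_bound(d=10, eps=0.1, delta=0.05))   # about 9 x 10^5 samples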


Bin access lemma

Let δ > 0, and let γ be a collection of subsets A ⊆ Ω, each of measure µ(A) ≤ α(δ) ≤ 1/4, satisfying µ(∪γ) ≥ 1/2.

Then the 2δ-neighbourhood of every point ω ∈ Ω, apart from a set of measure at most (1/2) α(δ)^{1/2}, meets at least ⌈(1/2) α(δ)^{−1/2}⌉ elements of γ.

∗ ∗ ∗

If we can now guarantee that the bins are not too large, we get a lower bound on the number of bin accesses.


Bin complexity estimates

Let F be a class of 1-Lipschitz functions used for constructing a metric tree of a particular type.

Let A be the concept class of all solution sets of the inequalities f ≷ a, f ∈ F, a ∈ R. Suppose

p = VC-dim(A) < ∞

(the pseudodimension of F in the sense of Vapnik).

Denote by B the class of all bins of all possible metric trees of depth ≤ h built using F. Then

VC-dim(B) ≤ 2hp log(hp) = O(hp log(hp)).


Rigorous lower bounds

Theorem. Let F be a class of 1-Lipschitz functions on {0, 1}^d such that the VC dimension of the class of sets given by the inequalities f ≷ a is poly(d).

With probability approaching 1, every metric tree indexing scheme built using F for a random sample X of {0, 1}^d containing n points, where d = n^{o(1)} and d = ω(log n), will have worst-case performance d^{ω(1)}.

⊳ We can suppose every bin contains poly(d) datapoints and the tree depth is poly(d). The VC dimension of the class of all possible bins is then poly(d) = o(n). Taking ε = n^{−(1/2−γ)}, the learning estimates give that the measure of each bin of the scheme is O(n^{−1/2+γ}), so there are, by the bin access lemma, on the order of n^{(1/2−γ)/2} bins that most query points must access — which is d^{ω(1)}, since d = n^{o(1)}. ⊲

Example: vp-tree

The vp-tree (Yianilos) uses decision functions of the form

f_t(ω) = (1/2) (ρ(x_{t+}, ω) − ρ(x_{t−}, ω)),

where

t± are the two children of t and

x_{t±} are the vantage points for the node t.

If Ω = R^d, the VC dimension is d + 1.
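A sketch of this decision function in code; ρ is any metric, the vantage points below are arbitrary, and the factor 1/2 keeps f_t 1-Lipschitz (half a difference of two 1-Lipschitz functions):

    import math

    def vp_decision(rho, x_minus, x_plus):
        """f_t(w) = (rho(x_plus, w) - rho(x_minus, w)) / 2: 1-Lipschitz; its sign says
        on which side of the bisector of the two vantage points w falls."""
        return lambda w: 0.5 * (rho(x_plus, w) - rho(x_minus, w))

    rho = math.dist
    f = vp_decision(rho, x_minus=(0.0, 0.0), x_plus=(2.0, 0.0))
    print(f((0.2, 0.1)))   # > 0: the query is closer to x_minus
    print(f((1.9, 0.0)))   # < 0: the query is closer to x_plus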

Example: M-tree

The M-tree (Ciaccia, Patella, Zezula) employs decision functions

f_t(ω) = ρ(x_t, ω) − sup_{τ ∈ B_t} ρ(x_t, τ),

where

B_t is the block corresponding to the node t,

x_t is a datapoint chosen for each node t, and

the suprema on the r.h.s. are precomputed and stored.

If Ω = R^d, the VC dimension is d + 1; for Ω = {0, 1}^d, it is O(d).
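And the analogous sketch for the M-tree decision function; the supremum over the block (the covering radius) is what the structure precomputes and stores per node, and the data below are illustrative:

    import math

    def m_tree_decision(rho, x_t, block):
        """f_t(w) = rho(x_t, w) - sup_{tau in B_t} rho(x_t, tau).  1-Lipschitz, <= 0
        on the block, so f_t(w) >= eps certifies that B_eps(w) misses the block."""
        covering_radius = max(rho(x_t, tau) for tau in block)   # precomputed once
        return lambda w: rho(x_t, w) - covering_radius

    rho = math.dist
    block = [(0.1, 0.0), (0.0, 0.4), (-0.3, 0.2)]   # B_t: points stored under node t
    f = m_tree_decision(rho, x_t=(0.0, 0.0), block=block)

    omega, eps = (2.0, 0.0), 0.5
    print(f(omega) >= eps)    # True: the query ball B_eps(omega) cannot meet the block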
