

2 Collaborative theory construction

2.1.3 A framework for collaborative theory building

2.1.3.2 Theory discovery: the role of selectionist thinking

“an objective consensus on a fundamental truth that has been agreed upon by a substantial number of people, e.g.: there is no doubting the fact that the Earth orbits the Sun.”

“information about a particular subject, e.g.: the facts about space travel.”

Since the notion of fact can be disentangled neither from the observer nor from the description language, a ‘fact for us’ is not necessarily a ‘fact for them’; more precisely, having the same facts is both a condition and a consequence of sharing the same institutional reality39.

At some level of description, our brain is a complex connected graph of selected reentrant maps chained into evolving neural circuits that we call ‘structures’; we propose to adopt that level of description for the following statements:

- whenever the process of perceptual categorization associates widely overlapping structures with distinct events, those events are interpreted as similar;

- linguistic expressions emerge from the recurring responses of individual maps to similar events; we can therefore consider the equivalence relation between events defined in the following way: events are equivalent if they are associated in the brain’s structures with the same linguistic expression, e.g. “frog eating dragonfly”;

- facts can therefore be understood as the equivalence classes of events for this relation; this allows us to represent an event by a linguistic expression, as illustrated in Figure 9. Facts can be accurate (with only one event in the equivalence class), like “a frog is eating a dragonfly at position (x, y, z, t)”, or generic, like “grass is green”.

It is interesting to note that, although our starting viewpoint is that of a single mind, the definition of Figure 9 allows us to consider a “shared fact” whenever some equivalent events are associated with the same linguistic expression by distinct minds. Besides, if the notion of “truth” stands for the degree of correspondence between a representation and what is being represented, then the facts we consider are true facts with respect to those who observe the associated events and express them.
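The definition above can be illustrated by a minimal sketch (the event identifiers and expressions below are our own illustrative data): a mind’s facts are obtained by grouping observed events into equivalence classes according to the linguistic expression they are associated with, and a fact is “shared” when its expression labels equivalent events in several minds.

```python
from collections import defaultdict

# Illustrative observations: (event, linguistic expression) pairs
# as associated by one individual mind.
events = [
    ("e0", "a frog is eating a dragonfly at position (x, y, z, t)"),
    ("e1", "a frog eating a dragonfly"),
    ("e2", "grass is green"),
    ("e3", "grass is green"),
    ("e4", "a frog eating a dragonfly"),
]

def facts(observations):
    """Group events into equivalence classes modulo their expression."""
    classes = defaultdict(set)
    for event, expression in observations:
        classes[expression].add(event)
    return dict(classes)

mind_a = facts(events)

# A generic fact gathers several events in its class...
print(sorted(mind_a["grass is green"]))        # ['e2', 'e3']
# ...while an accurate fact has exactly one event in its class.
accurate = [f for f, evs in mind_a.items() if len(evs) == 1]

# A "shared fact": the same expression labels equivalent events
# in a second (illustrative) mind.
mind_b = facts([("e5", "grass is green")])
shared = set(mind_a) & set(mind_b)             # {'grass is green'}
```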

39 as explained in our investigation of social patterns and language in 2.1.2.1

Figure 9: "a fact" in the framework

While facts are grounded on events, hypotheses are not! Hypotheses are higher-level constructions of the mind, grounded on the linguistic expressions representing facts.

Except in the closed context of a pre-defined conceptualization, hypotheses cannot be logically deduced by an algorithm: they have to be selectively thought up by human minds.

This mainly happens according to two patterns, called ‘induction’ and ‘abduction’, which we briefly analyze hereafter:

Selective induction of hypotheses

Let “John arrived first two years ago”, “John arrived first last year”, “John arrived first this year” be three different single-event facts; then the expression “John always arrived first” is not an event, but the result of some generalization upon the previous facts. This is a creative process, as expressed early on in [Mill, 1843]: “Induction is an inferring process; it starts from the known towards the unknown, and any operation which does not imply an inference, any process in which what seems to be the conclusion does not go beyond the initial premises, should not be called induction”.

This ability to go beyond the premises was later analyzed and extended to the validation of hypotheses in [Peirce, 1955]: “a common feature of all kinds of induction is the ability to compare individual statements: using induction it is possible to synthesize individual statements in general laws – inductive generalizations – but it is also possible to confirm or to discount hypotheses”.

[Figure 9, placed here in the original layout, depicts events in the real world (“blue”, “green”, “orange”, “a frog eating a dragonfly”) linked through the individual mind to linguistic expressions, upon which induction and abduction operate; its legend reads: “a fact is an equivalence class of events modulo a linguistic expression”.]

Induction has its biological roots in what [Edelman & Tononi, 2000] call ‘concept formation’: “Concept formation is the ability of the brain to combine different perceptual categorizations related to a scene or an object and to construct a ‘universal’ reflecting the abstraction of some common feature across a variety of such percepts”. In order to avoid confusion between true (‘selectionist’) induction and the mere application of formal rules in a closed world, we proposed in [Lemoisson and Cerri, 2005] two patterns for induction:

- the inferring of a new concept (meaning an interesting collection of “features” which can be verified on a set of examples and which deserves a name “as a special aggregate”) WITHOUT giving a closure to the set of all possible features;

- the inferring of a new hypothesis (meaning a well-formed proposition in a given language which can be verified in a set of situations) WITHOUT restricting the “background knowledge” with which the well-formed proposition would have to be consistent.
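A rough sketch of the first pattern (our own illustration, not the authors’ formalism): inducing a generalization such as “John always arrived first” amounts to keeping what the single-event facts have in common, while leaving the set of possible features open.

```python
# Single-event facts encoded as feature sets (illustrative encoding).
single_event_facts = [
    {"subject": "John", "result": "arrived first", "when": "two years ago"},
    {"subject": "John", "result": "arrived first", "when": "last year"},
    {"subject": "John", "result": "arrived first", "when": "this year"},
]

def induce(observed):
    """Keep the (feature, value) pairs shared by all observed facts.

    The feature space is NOT closed: a future fact may carry features
    never seen before, so the generalization always remains revisable;
    this openness is what distinguishes selectionist induction from
    the mere application of formal rules in a closed world.
    """
    common = set(observed[0].items())
    for fact in observed[1:]:
        common &= set(fact.items())
    return dict(common)

hypothesis = induce(single_event_facts)
print(sorted(hypothesis.items()))
# [('result', 'arrived first'), ('subject', 'John')]
```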

Selective abduction of hypotheses

Now let us suppose that the one who arrives first raises a cup. There are all kinds of occasions for raising a cup; under certain circumstances, it will make an experienced observer think that the person doing so is neither a priest celebrating a mass nor a VIP at lunch, but the winner of some race, ALTHOUGH the observer was not able to watch the race. [Magnani, 2001] calls the following generic pattern selective abduction:

- the surprising fact, “A”, is observed;

- but if “B” were true, “A” would be a matter of course;

- hence, there is reason to suspect that “B” is true!
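This schema can be sketched as follows (the hypothesis store and the context-based selection are our own illustrative additions): among the hypotheses under which the surprising fact would be a matter of course, select the one that best fits the rest of the observed context.

```python
# Illustrative background knowledge: hypothesis B -> the facts that B
# would make "a matter of course".
explanations = {
    "John is a priest celebrating a mass": {"John raises a cup"},
    "John is a VIP at lunch": {"John raises a cup"},
    "John won the race": {"John raises a cup", "John is on a podium"},
}

def abduce(surprising_fact, context, hypotheses):
    """Select a B such that, if B were true, the fact would be expected;
    prefer the B most consistent with the other observed facts."""
    candidates = [b for b, expected in hypotheses.items()
                  if surprising_fact in expected]
    return max(candidates,
               key=lambda b: len(hypotheses[b] & context),
               default=None)

best = abduce("John raises a cup", {"John is on a podium"}, explanations)
print(best)  # John won the race
```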

The point here is not to give a theory of the phenomenon of abduction (we shall just emit a likely hypothesis), but to show that the very basic biological mechanisms described by Edelman are potentially sufficient for explaining it, and also to insist upon the associative power and selection mechanisms that it requires.

If we come back to our example, the following pattern, called abduction, can explain the reasoning relation between the fact ‘A’: “John raises a cup” and the emission of the likely hypothesis ‘B’: “John arrived first”:

i. past occurrences of equivalent events have led to overlapping structures in the mind of the observer, associated with the expression of a generic fact ‘A’: “John raises a cup”;

ii. through communication between the observer and some friends, some of these past events have been synchronized with other structures associated with the generic fact ‘B’ expressed by: “John arrived first”;

iii. subsequent reinforced reentry between the maps supporting the linguistic expressions ‘A’ and ‘B’ has occurred as a consequence of this past synchronization;

iv. a new event happens, firstly interpreted as a re-occurrence of fact ‘A’: “John raises a cup!”;

v. this new event indirectly activates the expression ‘B’: “John arrived first”.

Can selectionist thinking be operated by machines?

There have been several attempts at delegating the ‘theory discovery’ process to computers, some of them mimicking ‘selectionist thinking’; but any codification within a formal system carries predictable limitations:

- we have previously evoked [Edelman and Reeke, 1982] and [Reeke and Edelman, 1984] for their work on artificial associative recall, and many artificial neural networks exist and have powerful applications in ‘pattern recognition’, but they remain highly specialized artefacts;

- another ‘connectionist example’ can be found in [Holland et al, 1986], where competing parallel computations are ruled by a selection based on “market laws”; but it is explained in [Alai, 2004] that this has not produced any really new discovery (see [8.1]);

- there is also a whole field in AI relying on the use of “selection-based algorithms” but those are ruled by an artificially settled “utility function” in the context of a closed data space [Gruber, 1992] [Alliott, 1993].
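The limitation evoked here can be made concrete with a toy sketch (entirely our own): a selection-based search driven by an artificially settled utility function can only ever converge inside the closed candidate space it was given; it selects, but it does not discover.

```python
import random

def selection_search(candidates, utility, seed=0):
    """Toy selection-based search: a shuffled tournament in which the
    fitter of each pair survives. Whatever it returns was already a
    member of the closed candidate space, and 'fitness' is entirely
    dictated by the pre-defined utility function."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    best = pool[0]
    for challenger in pool[1:]:
        if utility(challenger) > utility(best):
            best = challenger  # selection step
    return best

# Closed data space and artificially settled utility (illustrative):
space = range(-10, 11)
winner = selection_search(space, utility=lambda x: -(x - 3) ** 2)
print(winner)  # 3
```

Because every pass stays inside `space` and the utility is fixed in advance, the “best” result is predetermined by the designer rather than discovered, which is precisely the objection raised above.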

As stated in [Langley, 2000], these promising approaches are still not powerful enough to address theory building; this is mostly due to their pre-defined and irreversible specialization, either in the perception process or in the ‘utility evaluation’, and we may have to wait some more decades before machines can pass the Turing test40.

40 The Turing test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in the 1950 paper "Computing machinery and intelligence", it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot