
2.1.2.2 About conversations involving non-intentional artefacts

The idea of programs aimed at interacting with people or other programs through speech acts was first proposed by John McCarthy24. In 1989 he wrote an article, published nine years later, giving a first set of specifications for such programs: “Elephant 2000” [McCarthy, 1998]. We reproduce the whole abstract below, as it can be found at:

http://www-formal.stanford.edu/jmc/elephant/elephant.html

I meant what I said, and I said what I meant. An elephant's faithful, one hundred percent! Moreover, an elephant never forgets!

Abstract:

“Elephant 2000 is a proposed programming language good for writing and verifying programs that interact with people (e.g. transaction processing) or interact with programs belonging to other organizations (e.g. electronic data interchange).

24 John McCarthy is among the most famous computer scientists; he coined the term “Artificial Intelligence” in 1955, invented the LISP programming language in 1960, and received the Turing Award in 1971.

1. Communication inputs and outputs are in an I-O language whose sentences are meaningful speech acts identified in the language as questions, answers, offers, acceptances, declinations, requests, permissions and promises.

2. The correctness of programs is partly defined in terms of proper performance of the speech acts. Answers should be truthful and responsive, and promises should be kept. Sentences of logic expressing these forms of correctness can be generated automatically from the form of the program.

3. Elephant source programs may not need data structures, because they can refer directly to the past. Thus a program can say that an airline passenger has a reservation if he has made one and hasn't cancelled it.

4. Elephant programs themselves can be represented as sentences of logic. Their extensional properties follow from this representation without an intervening theory of programming or anything like Hoare axioms.

5. Elephant programs that interact non-trivially with the outside world can have both input-output specifications, relating the program's inputs and outputs, and accomplishment specifications concerning what the program accomplishes in the world. These concepts are respectively generalizations of the philosophers' illocutionary and perlocutionary speech acts.

6. Programs that engage in commercial transactions assume obligations on behalf of their owners in exchange for obligations assumed by other entities. It may be part of the specifications of Elephant 2000 programs that these obligations are exchanged as intended, and this too can be expressed by a logical sentence.

7. Human speech acts involve intelligence. Elephant 2000 is on the borderline of AI, but the article emphasizes the Elephant usages that do not require AI.”
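Point 3 of this abstract is perhaps the most striking: an Elephant program refers directly to the past instead of maintaining data structures. As a minimal sketch of what this could mean in practice, the Python fragment below transcribes the abstract's airline example under our own assumptions; the Event type and the function name are illustrative inventions, not part of any actual Elephant 2000 implementation.

```python
# Minimal sketch of Elephant 2000's point 3: instead of maintaining a
# "reservations" data structure, the program queries the past directly.
# All names here (Event, has_reservation, ...) are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    time: int          # position in the history
    kind: str          # e.g. "make-reservation", "cancel-reservation"
    passenger: str

def has_reservation(history: list[Event], passenger: str) -> bool:
    """A passenger has a reservation if he has made one and hasn't
    cancelled it since (a direct transcription of the abstract's example)."""
    status = False
    for event in sorted(history, key=lambda e: e.time):
        if event.passenger == passenger:
            if event.kind == "make-reservation":
                status = True
            elif event.kind == "cancel-reservation":
                status = False
    return status

history = [
    Event(1, "make-reservation", "Smith"),
    Event(2, "make-reservation", "Jones"),
    Event(3, "cancel-reservation", "Jones"),
]
assert has_reservation(history, "Smith") is True
assert has_reservation(history, "Jones") is False
```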

To our knowledge, this first draft of specifications has not yet given rise to a real programming language fulfilling all of them, although it has probably been a source of inspiration for many. With McCarthy's propositions in mind, let us illustrate a conversation involving non-intentional artefacts with the following imaginary case:

the case: In order to increase human satisfaction, a start-up called “Convivial-Coffee” proposes a new kind of coffee machine: at first glance, they have the usual aspect and truly remain stupid machines … but they communicate with humans through speech acts, and in case one of them runs out of coffee, it automatically transfers the request to another one, through a machine-to-machine conversation aimed at fulfilling the initial request.

Our question becomes: which “speech acts” are needed for this conversation? What about turn taking, the models the machines have of one another, and the topic structure?

- the case starts when a human desiring a cup of coffee meets a commissive written on a coffee machine: “I shall give coffee if you insert one euro”. We consider that such commitments have the status of promises, and therefore “cannot be true or false, but they can be carried out, kept, or broken.” In a computational model where promises are never broken (as pointed out in point 2 of the “Elephant 2000” abstract), commissives become a useful way to “transfer information about future information to be transferred”.

- the case follows up with a choice based on assertives, usually expressed by lit indicators: “I have coffee”, “I have tea”, “I have no more sugar” … In the desired computational model, assertives directly support information transfer (as pointed out in point 3 of the “Elephant 2000” abstract).

- what about expressives? We may imagine the coffee machine emitting: “I am so terribly sorry … I have no more sugar!”, which might look like an expressive, but is not: such a “speech act” coming from a non-intentional artefact would have no better effect than the more abrupt: “no more sugar!”. People wanting coffee usually do not care about the internal states of machines … except if suddenly all lights go off and it smells of burnt electronics … the point of a “program alert given through an expressive” will be addressed in Chapter 4.

- the word “declarative” is quite often used in Computer Science, and the assignment of values to variables exactly “brings about a change in the world by representing it as having been changed”! In fact, any illocutionary act of a formal language may turn out to become some declaration at a programming level … but the world which is changed is the electronic world of the machine, not the world the conversation is about25. Therefore, the only declarations we shall consider will concern the dialog between machine and client. Whereas a declaration of the type: “I declare that you have drunk your coffee!” would look a bit strange, a declaration like “Your coffee is ready!”, associated to the corresponding change in the “state of the world”, should be appreciated … the point of a “program termination expressed through a declaration” will be addressed in Chapter 4.

- to end with, the necessity of directives in a language for interactions has been emphasized in [Cerri et al., 2000] as an echo to “Elephant 2000”: “When some information is not known, a transaction is started aiming at winning the value corresponding to the expression denoting the information. The aphorism is: when in doubt, do not compute, ask (and wait: do something else …). Someone will resume your suspended evaluation, sometimes providing you with what you need in order to go on”. In fact, directives are exactly what coffee machines from “Convivial-Coffee” need to master for “orders, commands, and requests”! We recapitulate the four retained categories in the sketch following this list.

25 Even in languages like SCHEME, the ‘define’ expression, which is a declaration (of a new function), in fact consists of assigning a value to a variable standing for the “first-order object called function”.
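The four performative categories retained above (commissives, assertives, directives and declarations) can be pictured as typed messages. The following minimal sketch gives one possible representation in Python; the Performative enumeration and the field names are our own illustrative choices, not a standard agent-communication API.

```python
# Minimal sketch: the four performative categories retained above,
# represented as typed messages. Names are illustrative only.

from dataclasses import dataclass
from enum import Enum

class Performative(Enum):
    COMMISSIVE = "commissive"    # reliable promise: "I shall give coffee if ..."
    ASSERTIVE = "assertive"      # information transfer: "I have coffee"
    DIRECTIVE = "directive"      # order or request: "give me coffee with sugar"
    DECLARATION = "declaration"  # changes the dialog state: "Your coffee is ready!"

@dataclass(frozen=True)
class SpeechAct:
    performative: Performative
    sender: str                # e.g. "H", "A", "B", "C"
    receivers: tuple[str, ...] # one or several addressees
    content: str               # a high-level expression, meaningful for humans

promise = SpeechAct(Performative.COMMISSIVE, "A", ("H",),
                    "I shall give coffee if you insert one euro")
```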

If we concentrate on machine-to-machine conversations applying the principles of “Elephant 2000”, we expect them to be mostly based on commissives, assertives and directives, respectively corresponding to reliable promises, information transfer, and orders or requests. We are going to exemplify this in more detail, since it will play an important role in our calculus in Chapter 3.

Let us imagine for instance a shortage of sugar and the subsequent conversation between ‘H’, wanting a cup of coffee, and coffee machines ‘A’, ‘B’, and ‘C’ (a code sketch of the underlying delegation pattern follows the transcript):

- machines A, B, C/ commissive: “I shall give coffee if you insert one euro”

- human H/ directive/ to machine A: “give me coffee with sugar”

- machine A/ directive/ to machine A: “have I got coffee?”

- machine A/ directive/ to machine A: “have I got sugar?”

- machine A/ assertive / to machine A: “I have coffee”

- machine A/ assertive / to machine A: “I have no sugar”

- machine A/ directive/ to machines B and C: “have you got coffee?”

- machine A/ directive/ to machines B and C: “have you got sugar?”

- machine B/ assertive/ to machine A: “I have coffee”

- machine B/ assertive/ to machine A: “I have no sugar”

- machine C/ assertive/ to machine A: “I have coffee”

- machine C/ assertive/ to machine A: “I have sugar”

- machine A/ directive/ to machine C: “give H coffee with sugar”

- machine A/ assertive/ to human H: “machine C will give you coffee with sugar”

- …

- machine C/ declaration/ to human H: “Your coffee is ready!”
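The delegation pattern played out in this transcript can be sketched in code. The minimal Python below is our own illustration under simplifying assumptions (synchronous calls, hypothetical machine names and stock representation); it is emphatically not the actual “Convivial-Coffee” algorithm, which Chapter 3 will expose.

```python
# Minimal, self-contained sketch of the delegation pattern above:
# A introspects, queries its peers with directives, collects their
# assertives, and delegates to the first machine able to fulfil the
# request. Machine names and the `stock` representation are hypothetical.

class CoffeeMachine:
    def __init__(self, name: str, stock: set[str]):
        self.name = name
        self.stock = stock
        self.peers: list["CoffeeMachine"] = []

    def has(self, ingredient: str) -> bool:
        # assertive answer to the directive "have you got <ingredient>?"
        return ingredient in self.stock

    def serve(self, client: str, ingredients: set[str]) -> str:
        # directive from the client, e.g. "give me coffee with sugar"
        if all(self.has(i) for i in ingredients):
            return f"{self.name}/ declaration/ to {client}: Your coffee is ready!"
        # introspection failed: query the peers (machine-to-machine directives)
        for peer in self.peers:
            if all(peer.has(i) for i in ingredients):
                # delegate ("give H coffee with sugar"), then inform the client
                peer.serve(client, ingredients)
                return (f"{self.name}/ assertive/ to {client}: "
                        f"machine {peer.name} will give you coffee with sugar")
        return f"{self.name}/ assertive/ to {client}: request cannot be fulfilled"

a = CoffeeMachine("A", {"coffee"})
b = CoffeeMachine("B", {"coffee"})
c = CoffeeMachine("C", {"coffee", "sugar"})
a.peers = [b, c]
print(a.serve("H", {"coffee", "sugar"}))
# -> A/ assertive/ to H: machine C will give you coffee with sugar
```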

Now, can this really work? What are the conversational patterns hard-wired in the non-intentional machines?

We list hereafter the main difficulties to be solved:

1. in order to fulfil the initial requirement, A addresses a question to all other machines. It might happen that other machines are engaged in other conversations; it might even happen that some other machines cannot answer directly but must “introspect”26 through intermediary questions … at which moment can A take the turn again, and “consider”26 that the question has been answered (positively or not)?

2. how does a machine “know” which questions can be understood and answered by the others: does each machine necessarily store models of other machines or is there another way where machines can remain independent of one another?

3. how can machines A, B, C adapt to all possible combinations of human desires crossed with presence/absence of ingredients and dynamically introduce the adequate topics?

4. how can we guarantee that the conversation will end … before H angrily phones the “Convivial-Coffee” engineers?

5. how can we guarantee an outcome of the conversation which does not depend on the order in which answers from B and C reach A? (see the sketch below)
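Regarding difficulty 5, one simple discipline makes the outcome order-independent: wait until every expected answer has arrived, then choose among the positive ones by a fixed deterministic rule. The minimal sketch below illustrates this under our own assumptions; the answer format and the alphabetical tie-breaking rule are illustrative.

```python
# Minimal sketch of difficulty 5: making the outcome independent of the
# order in which answers arrive. A waits for every expected answer, then
# chooses among the positive ones by a fixed, deterministic rule (here:
# alphabetical machine name). The answer format is our own assumption.

def choose_supplier(answers: dict[str, bool]) -> str | None:
    """answers maps a machine name to its assertive ("I have sugar" -> True).
    The choice depends only on the set of answers, not on arrival order."""
    candidates = sorted(name for name, has_it in answers.items() if has_it)
    return candidates[0] if candidates else None

# The same answers, received in two different orders, yield the same choice:
assert choose_supplier({"B": False, "C": True}) == "C"
assert choose_supplier({"C": True, "B": False}) == "C"
```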

Before attempting an answer to McCarthy's requirements, and fully exposing the secret of the “Convivial-Coffee” engineers (this will be done in Chapter 3), we would like to emphasize one point:

In this toy example of conversation between machines, the nature of the algorithms in charge of computing assertives and directives has not yet been evoked; what we emphasize is that they operate on expressions which are meaningful for humans, i.e. they use a high-level language, exactly as the initial commissives or final assertives … this may seem a detail for ‘H’, who is only interested in his cup of coffee, but it is extremely important for the “Convivial-Coffee” engineers who have designed and tested the machines, separately at first, and then as a (convivial) group of artefacts. These engineers have solved the “give-me-coffee-with-sugar” problem in a compositional way, formalizing fragments of some “theory supporting the delivery of coffee” within multi-step algorithms, in such a way that these fragments can self-organize; we emphasize that such compositionality must be reflected in the computation model, as sketched below.
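As a minimal sketch of such compositionality, under our own assumptions (the decomposition into fragments and all names are illustrative, and certainly not the engineers' secret), consider:

```python
# Minimal sketch: the "theory supporting the delivery of coffee" is split
# into small independent fragments, and the overall behaviour is obtained
# by composing them rather than by one monolithic algorithm.

from typing import Callable

# a fragment maps a request (set of ingredients) to an answer, or None
Fragment = Callable[[set[str]], str | None]

def try_local_stock(stock: set[str]) -> Fragment:
    def fragment(request: set[str]) -> str | None:
        return "I serve you" if request <= stock else None
    return fragment

def try_peer(peer_name: str, peer_stock: set[str]) -> Fragment:
    def fragment(request: set[str]) -> str | None:
        if request <= peer_stock:
            return f"machine {peer_name} will serve you"
        return None
    return fragment

def compose(*fragments: Fragment) -> Fragment:
    # the first fragment able to answer wins; adding further fragments
    # extends the set of answerable requests without rewriting the others
    def fragment(request: set[str]) -> str | None:
        for f in fragments:
            answer = f(request)
            if answer is not None:
                return answer
        return None
    return fragment

machine_a = compose(try_local_stock({"coffee"}),
                    try_peer("B", {"coffee"}),
                    try_peer("C", {"coffee", "sugar"}))
print(machine_a({"coffee", "sugar"}))  # -> machine C will serve you
```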

26 We use quotes so as not to forget that A, B and C are non-intentional artefacts!