Augmenting the Affordance of Online Help Content

Milene Selbach Silveira, Simone D J Barbosa & Clarisse Sieckenius de Souza

Informatics Department, PUC-Rio, R. Marquês de São Vicente, 225, Gávea, Rio de Janeiro, 22453–900, Brazil

Tel: +55 21 512 2299

Email: {milene,sim,clarisse}@inf.puc-rio.br

Informatics Faculty, PUCRS, Av. Ipiranga, 6681, Porto Alegre, RS, 90619–900, Brazil

TeCGraf/PUC-Rio, R. Marquês de São Vicente, 225, Gávea, Rio de Janeiro, 22453–900, Brazil

Traditional online help is often function-oriented, unrelated to users’ tasks and not situated at users’ current context of interaction. This is due mainly to the help development strategy, and to the lack of a help model that takes into account typical users’ task flows. In addition to a help model, we also need adequate access structures to the various components of the help system, each providing a different perspective on help content. This paper proposes the integration of a help model with communicability concepts, both built upon the principles of Semiotic Engineering. Our goal is to provide users with better help access and content, which are designed to clarify users’ specific doubts, as expressed by users themselves, during interaction.

Keywords: help systems design, communicability utterances, help systems model and architecture, semiotic engineering.


1 Introduction

Help systems are typically used as a last resort. Users may not see an immediate benefit from accessing online help, because of past frustrating experiences. As designers, we must take great care in providing both clear access and relevant information to users via online help systems.

In previous work (Silveira et al., 2000), we have proposed a model and a corresponding architecture for online help systems. In our view, the designer’s knowledge about the application must be elicited throughout the application development process. This view is based on a theoretical framework called Semiotic Engineering (de Souza, 1993), which aims at conveying, through the user interface, designers’ intentions and design decisions.

But, why should we capture these design decisions and convey them to users? Users frequently work with an application without knowing the underlying objectives and technological concepts. Their only goal is to perform their tasks correctly and efficiently. It is when users cannot seem to do so, i.e. when a breakdown occurs, that the design rationale may play a major role in fostering users' understanding of the application. The design rationale may help users to recover from a breakdown and learn more about the application and the reasons things are the way they are. We were first inspired by an idea presented in Winograd & Flores (1986), that the design should "anticipate the forms of breakdowns and provide a space of possibilities for action when they occur". Additional studies about improvisation, carried out by Dourish (1997), have also pointed to the need for providing users with resources and information for supporting local decision-making processes.

According to Semiotic Engineering, the interface is a message from designers to users. It is a special kind of message, because it is a message about how users exchange messages with the application interface, i.e. it is a meta-message. It represents, implicitly or explicitly, how the designers conceived the application, how they built it, and why. As part of this meta-message, the online help system is an important component, because it is there that designers are best able to explicitly convey how they conceived the application.

In previous research, we felt the need to further investigate the kinds of information users may require while interacting with an application. A sample request for contextual information may be illustrated by the expression What’s this?, available in many existing applications.

In this paper, we propose an approach to meeting users' contextual needs for information, by using communicability utterances (Prates et al., 2000b; 2000a) to access help information at various levels of affordance (Norman, 1988; 1999): operational, tactical, and strategic — for a discussion about different levels of affordance, see de Souza et al. (2000).

2 Existing Help Model

Our first approach for designing online help (Silveira et al., 2000) is based on Semiotic Engineering, which considers help systems as a distinguished meta-message from designers to users (de Souza, 1993). In this case, the designer is explicitly saying what he/she believes are the users' problems or tasks, what he/she thinks is the best solution for them, and how he/she intends to make it available for users' practical use.

We propose that this designer's vision — sent to users through the help system — be captured during the design and development processes. This knowledge elicitation (capturing the designers' knowledge about the application designed and developed by themselves) is based upon questions for the designers, classified into three major topics. From the designers' point-of-view:

1. What are the users’ problems/needs?

2. What is the best solution for these problems? And what are the alternatives?

3. How was this made available for operational use?

These questions summarise our conclusions after relating research in the available technical literature (taxonomies for online help systems (Roesler & McLellan, 1995); context-sensitive help (Marx & Schmandt, 1996; Sleeter, 1996); help for the web (Chamberland, 1999; Priestley, 1998; Rintjema & Warburton, 1998); and user–system dialogues (Chu-Carroll & Carberry, 1998; Hansen et al., 1996; Johnson & Erdem, 1997; Kedar et al., 1993; Mittal & Moore, 1995; Raskutti & Zukerman, 1997), among others) to our practice in the design and development of online help systems for groupware applications on the web.

Each topic can be extended into subtopics, whose answers constitute the semantic dimension of the message from designers to users about the application.

These are:

1. What are the users’ problems and needs?

What is the application domain?

What is the nature of work in this domain?

Who are the actors?

What role do they carry out?

What tasks do they do?

2. What are the best solutions for these problems and needs?

What is the application?

How will this technology affect the domain?

What is it possible to do with it (goals)?

What is the application useful for?

What are the advantages of the application?

Technology

What computational environment is presumed for the full operation of the application?

What does the user need to know in order to use this application?


Activities

What activities (tasks) can be carried out in the application environment?

What are the available options in the current version?

3. How can all of this be put to operational use?

Computer–Human Interaction Analogy

What is the basic computer–human interaction analogy used?

Tasks

What does each task mean?

How can/must users do that? When?

Where in the application can users do this task?

How can users do and undo (parts of) tasks?

Why is it necessary to do this or that task?

Examples of performing the task (scenarios)

Who is or isn't affected by a task or part of a task?

What do we do after finishing a task? Until when can we do that?

How do we know if we have (successfully) finished the task?

Given an actual context of interaction, the user must be able to answer:

What can I do now?

Where am I?

Where can I go?

Where did I come from?

What happened?

We can analyse these questions from four different perspectives: Domain, Tasks, Agent (inspired by van der Veer & van Welie (n.d.)), and Application. These perspectives are used to define a preliminary help model (Figure 1). The expressions in boldface represent the corresponding help information. In parentheses, we present the questions for the designers, whose answers will be used to build the actual content of the help topics.

In this model, we find the answers to almost all of the described questions, except the contextualised ones (What can I do now? Where am I? Where can I go? Where did I come from? What happened?). The entities’ attributes become the answers to the preceding questions, which are listed under each corresponding entity.

The questions of a contextual nature are generated at execution time, according to the task and the actual application state.
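To make the structure of this preliminary model more tangible, the sketch below encodes the four perspectives as simple data holders whose fields correspond to the designers' answers. It is a minimal illustration only, under assumptions of our own: all class and field names are hypothetical and not part of the model itself.

from dataclasses import dataclass, field
from typing import List

# Minimal, hypothetical encoding of the preliminary help model: each entity
# stores the designers' answers to the questions listed under the
# corresponding perspective. Contextual questions are deliberately absent,
# since they are answered at execution time.

@dataclass
class Domain:
    description: str            # What is the application domain?
    nature_of_work: str         # What is the nature of work in this domain?

@dataclass
class Agent:
    name: str                   # Who are the actors?
    roles: List[str]            # What role do they carry out?

@dataclass
class Task:
    name: str
    description: str            # What does each task mean?
    how_and_when: str           # How can/must users do that? When?
    rationale: str              # Why is it necessary to do this or that task?
    scenarios: List[str] = field(default_factory=list)   # examples (scenarios)

@dataclass
class Application:
    purpose: str                # What is the application useful for?
    advantages: List[str]       # What are the advantages of the application?
    environment: str            # Which computational environment is presumed?
    interaction_analogy: str    # What is the basic computer-human interaction analogy?
    tasks: List[Task] = field(default_factory=list)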

3 Communicability Concepts for Help Systems

In the communicability evaluation method (Prates et al., 2000b), utterances are used as an attempt to characterise users' reactions when a communicative breakdown occurs during interaction. It is argued that these breakdowns occur when the user cannot perceive the designers' intended affordances. They indicate that the designer has failed in conveying his/her message through the application's interface.


Figure 1: A model for representing information in help systems.

The utterances used for communicability evaluation are: Where is? What now? What's this? Oops! I can't do it this way. Where am I? What happened? Why doesn't it? Looks fine to me. I can't do it. I can do otherwise. Thanks, but no, thanks. Help!

The symptoms that allow us to identify each utterance in the context in which it occurs are presented in Prates et al. (2000b).

In de Souza et al. (2000), we see that the designer has fostered a successful communication with users when they can perceive the intended application affordances. These affordances may be classified at three levels: operational, tactical, and strategic.

Affordances at the operational level are related to the immediate, individual actions users need to perform. They are closely related to the interactive codes employed in the application. We may consider questions such as What's this? as being answered at this level.

Tactical-level affordances are related to a plan, or sequence of actions, for executing a certain task. In general terms, information at this level answers questions such as How?

Finally, there are strategic-level affordances, which are related to conceptualisations and decisions involved in certain problem-solving processes and in the embedded technology.

Communicability utterances, besides being used in communicability evaluation of human–computer interaction, may allow us to provide a novel technique for accessing online help. Users would be able to express themselves using these utterances whenever they experienced a communicative breakdown during interaction.


Question → Communicability Utterance

What is each task? → What's this?
How can it be performed? When? → I can't do it. / What now?
How do I do and undo (parts of) tasks? (*) → Oops!
Why do I have to do this or that task? → (*)
Sample usage (scenarios). → (*)
Whom does this task affect? → (*)
What do I do after the end of this task? Until when can this be done? → What now?

Contextual questions:
Where am I? → Where am I?
Where can I go to? → What now?
Where did I come from? → (*)
What happened? → What happened?

Table 1: Users’ questions and the corresponding communicability utterances.

It may be argued that many applications already provide access to specific help information by means of expressions such as What's this?. This is typically afforded by specific interface elements, such as popup menus. The ideas presented here extend this approach to all levels of help content, providing relevant, context-sensitive information at varying granularity.

We first tried to relate users' questions from the model (representing their doubts) to the existing communicability utterances. Due to the nature of these utterances, we have considered mostly task-related questions. Strategic questions, related to the domain and to the application as a whole, will not be considered in this analysis. Table 1 summarises the utterances that cover the remaining questions dealt with in our model.

The left column presents the questions whose answers will make up the help content that may be accessed via the corresponding communicability utterance in the right column. Since not all the questions presented in the model were covered by the existing communicability utterances (those marked with an asterisk [*]), we then carried out the analysis in the opposite direction: from the utterances to the corresponding breakdowns and levels of affordance.
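Purely as an illustration, the mapping summarised in Table 1 can be written down as a small dictionary; None marks the questions not covered by the existing utterances (the asterisked rows), which is precisely what motivates the reverse analysis. All names below are our own, hypothetical choices.

# Hypothetical encoding of Table 1: model questions mapped to the existing
# communicability utterances that give access to the corresponding help
# content. None marks questions left uncovered (the asterisked rows).
QUESTION_TO_UTTERANCE = {
    "What is each task?": ["What's this?"],
    "How can it be performed? When?": ["I can't do it.", "What now?"],
    "How do I do and undo (parts of) tasks?": ["Oops!"],   # undoing only
    "Why do I have to do this or that task?": None,
    "Sample usage (scenarios)": None,
    "Whom does this task affect?": None,
    "What do I do after the end of this task? Until when?": ["What now?"],
    # contextual questions
    "Where am I?": ["Where am I?"],
    "Where can I go to?": ["What now?"],
    "Where did I come from?": None,
    "What happened?": ["What happened?"],
}

uncovered = [q for q, u in QUESTION_TO_UTTERANCE.items() if u is None]
print("Questions still lacking an access utterance:", uncovered)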

4 Communicability and Help Content

In this section, we analyse the relation between each communicability utterance, the corresponding communicative breakdown, and the affordance level at which it occurs. Our goal is to gain insights for designing coherent help systems that provide content at various levels, and for designing consistent access for each piece of content.


Figure 2: Response for a Where is? question at operational and tactical levels.

4.1 Communicability Utterances

4.1.1 Oops!

Oops! occurs when the user realises he/she has just done something wrong, and wants to reverse the previous action(s). The help response is either operational or tactical, depending on the complexity of the steps to be performed. For instance, a single Undo would be considered an operational response, whereas a sequence of interaction steps that lead to the desired state would be tactical.

4.1.2 Where Is?

The problem here is that the user has an idea of what he/she needs, but cannot find the corresponding interface element. The help response is, at first, operational: it tells where the element is. It may be necessary to actually show where the element is, depending on the interaction steps required for reaching it. In this case, the response is considered tactical, showing how the user may reach the element (Figure 2).

It is important to note that, in the case of this utterance, the user must specifically describe or identify the element he/she is querying about, since its location is unknown. It is possible that the user does not know the specific name or expression the designer chose to refer to the element, so we should be aware of this problem and provide a variety of synonyms for the terms employed throughout the application.

Where is? utterances can also occur from within help content that was first accessed via other utterances. In this case, the user has been told what to do, but does not know where the necessary interface element is, for carrying out the instructions.

For instance, let us consider some help content related to an Oops! utterance, which tells the user what to do to reverse the previous actions. If there is an element the user cannot find in the interface, he/she would further utter Where is? in order to find out the location of (operational) and/or interaction steps required for accessing (tactical) the element (Figure 3).

4.1.3 What Now?

This utterance occurs in two different situations:


Figure 3: Sequence of Oops! and Where is? utterances.

1. when the user has carried out a few interaction steps but does not know how to proceed with the task at hand; or

2. when the user needs to perform a task that he/she cannot even formulate in terms of the available interface elements.

In the first case, the response is operational, showing the user what to do (what’s the next step). If the user utters What now? again, in this context, the response becomes tactical, showing the user how to do what is needed.

In the second case, the response would be strategic, presenting to the user the tasks the application was designed to support, from a user’s perspective, i.e. using the terms he/she should be familiarised with, according to the corresponding domain.

4.1.4 What’s This?

The user utters What’s this? when he/she needs a description of an interface element or its usage. The response is, at first, operational, describing the element. If the user wants further information, such as how, where, and when the element is used, we have another What’s this?, this time at a tactical level, describing the element’s usage. In many cases, both levels can be presented at once, saving users an additional interaction step (Figure 4).

Another usage of What’s this? may occur when the user has heard about some application object or functionality, but could not identify or locate it. In this case, the access to the utterance would be like the one for Where is?, in which the user is prompted for additional input.

4.1.5 What Happened?

This utterance occurs when the user performs an action expecting a certain response, but gets a different response, or no response at all. The corresponding help content is both operational and tactical. It reveals what happened, and how it resulted from the previous interaction steps.


Figure 4: What’s this? utterance at various levels of affordance.

4.1.6 Why Doesn’t It?

Why doesn’t it? occurs when a user retries an operation more than once, because he/she is convinced that he/she is doing the right thing. The response is both tactical and strategic. It shows the consequences of the interaction steps taken, and why they provide those results.

4.1.7 I Can’t Do It

I can’t do it. could be accessed from a piece of help content, whenever the user fails in following procedural instructions. The response is presented at a tactical level, guiding the user through the preconditions necessary for that task to be performed.

It may also present operational instructions, at a finer level of interaction detail.

4.1.8 Help!

This utterance gives access to a traditional facet of the help system, when uttered within the application interface. Moreover, it can also be uttered within the help system itself, and in this case, provides information about all kinds of help that are available to users. In this context, it can be used again to ask when and how to access the various portions of the help system.
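The analysis in Sections 4.1.1 to 4.1.8 can be condensed into a small table relating each existing utterance to the affordance level(s) of its help response. The sketch below is only an illustrative summary under hypothetical names; Help! is omitted because it opens the traditional facet of the help system rather than a level-specific response.

from enum import Enum, auto

class Level(Enum):
    OPERATIONAL = auto()
    TACTICAL = auto()
    STRATEGIC = auto()

# Affordance level(s) of the help response for each existing utterance,
# as discussed above.
RESPONSE_LEVELS = {
    "Oops!":           {Level.OPERATIONAL, Level.TACTICAL},
    "Where is?":       {Level.OPERATIONAL, Level.TACTICAL},
    "What now?":       {Level.OPERATIONAL, Level.TACTICAL, Level.STRATEGIC},
    "What's this?":    {Level.OPERATIONAL, Level.TACTICAL},
    "What happened?":  {Level.OPERATIONAL, Level.TACTICAL},
    "Why doesn't it?": {Level.TACTICAL, Level.STRATEGIC},
    "I can't do it.":  {Level.TACTICAL, Level.OPERATIONAL},
}

def levels_for(utterance: str) -> set:
    """Affordance levels a help response for this utterance should cover."""
    return RESPONSE_LEVELS.get(utterance, set())

print(sorted(level.name for level in levels_for("Why doesn't it?")))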

4.2 Additional Help Utterances

Up to this point, the existing communicability utterances have covered only part of the questions presented in the help model, as shown in Table 1. We will now discuss the questions that were not covered by the original utterances, and the utterances that did not find a particular place in the model.

The utterances I can’t do it this way.; Where am I?; Looks fine to me.; I can’t do it.; I can do otherwise.; Thanks, but no, thanks. are not used as direct access to help, but they may be used in the design of help content. In order to access this content, a new utterance is proposed: How do I do this? In addition, we propose a few other utterances: Where was I? Why should I do this? Who is affected by this? On whom does this depend? Following is a description of the breakdowns they intend to solve, and the levels at which they function.


Figure 5: The occurrence of a Why should I do this? within a tactical What’s this?

4.2.1 Where was I?

This breakdown occurs when the user needs to retrace his/her previous steps in order to understand the state in which he/she currently is. The response is at both the operational level, with an identification and a description of the previous steps, and at the tactical level, with an identification of larger tasks that may comprise these steps.

4.2.2 Why should I do this?

Why should I do this? could be accessed from a piece of help content, whenever the user doesn’t understand the reasons underlying certain instructions. The response is presented at a strategic level, revealing the designer’s perspective on that topic.

This utterance is particularly important to Semiotic Engineering, because the designer can use it to explicitly state his/her rationale for the application. The response may also describe the importance he/she assigns to a certain task or operation within a larger context.

For instance, Why should I do this? could be used from within a piece of help content previously accessed via a tactical What’s this?. In this situation, the user would want to know not only what a task is and how to perform it, but also its importance within the application as a whole (Figure 5).

4.2.3 Who is Affected By This? On Whom Does This Depend?

These utterances may occur when work processes and roles are modelled, and roles are responsible for interdependent tasks. The response may be considered operational, listing the roles affected by the selected task.

4.2.4 How Do I Do This?

When the user does not know how to perform a certain task in an application, he/she may utter How do I do this? and provide additional input in order to obtain the corresponding help information. The response is at a tactical level, describing how he/she should proceed. Typically, it consists of step-by-step instructions.


Existing communicability utterances: Where is?; What now?; What’s this?; Oops!; I can’t do it!; What happened?; Why doesn’t it?; Help!

Help utterances: How do I do this?; Why should I do this?; Whom does this affect?; On whom does this depend?; Where was I?; Is there another way to do this?

Table 2: Final set of utterances for designing help systems.


Within this help context, users may utter How do I do this? again, in case an instruction isn’t clear. If, on the other hand, he/she tries to perform the operation and doesn’t succeed, it is a case of I can’t do it.

4.2.5 Is There Another Way To Do This?

This utterance comprises both I can do otherwise and Thanks, but no, thanks. In this case, the response is both tactical and strategic. For each alternative path of interaction, it should present the steps required to perform the task (tactical), and the motivation for following that path (strategic).

This utterance is also characteristic of our Semiotic Engineering approach, since it allows the designer to explicitly convey his/her design decisions and intentions.

4.3 Summary of Findings

As presented earlier, some utterances can be accessed immediately from the user interface, such as What’s this? and What happened?. Others, however, require additional input, such as Where is? and How do I do this?. Moreover, some utterances may occur from within the help system itself, such as Why should I do this?, for example.

Some of the existing communicability utterances are inadequate for accessing help information. For instance, the user would never utter I can’t do it., but rather ask How do I do this?. The utterances I can do otherwise (missing of affordance) and Thanks, but no, thanks. (declination of affordance) probably wouldn’t be uttered either, because the user would have successfully completed his/her task. However, we present some information in the direction of varying affordances as a response to Is there another way to do this?.

Table 2 illustrates the set of utterances we will use for designing our help system.

5 An Example

In this section, we illustrate two different approaches to accessing help content. We have chosen a small portion of the help system found in MS-Word97. We will show how we may follow the required steps for accessing a given help topic according to the application’s help system, and then compare it to how it would work out when following our approach. Let us consider the following usage scenario:

Jack and John are writing a report together, due tomorrow. After joining their writings, John was to review the whole report, and discuss his ideas and doubts with Jack. Since they cannot meet in person, John has used a tool within his word processor to highlight the changes he has made. The problem is that Jack is not familiar with this tool. He tries to erase a portion of the revised text, but the only thing that happens is that the text appears in another colour and format. He tries to call John, but he can’t reach him. What now? He accesses the help system . . .

1. Upon asking the assistant for the term Review, he gets a help topic called Review a document (Figure 6 a), which provides a description of the reviewing task, and the following options: Insert a Comment, Modify a Comment, and Track changes while you edit.

2. Upon selecting the last option, he gets a help topic explaining how to start making changes (Figure 6 b), but not how to deal with the existing revision marks (in Word’s terms, accept or reject).

Moreover, there are some elements in the help topic that he doesn’t understand. He clicks on each of them, in order to get more information, and a popup window is shown for each: toolbar (Figure 6 c) and revision marks (Figure 6 d).

3. Since he still couldn’t find what he wanted, he decides to browse and search the help index, looking for terms that would seem related to his doubts. He clicks on the Tracking changes item, and selects the reviewing comments subitem. He is presented with a small popup window with two options: Incorporate or reject changes made with revision marks, and Review the comments in a document.

4. He selects the first option, and finally gets to the help content he needed (Figure 7).

It is noteworthy that the user’s first attempt did not get him to the necessary help content. In our approach, we try to increase the benefits provided by the help system by providing contextualised help. We do that by coupling terms that characterise the application and the supported tasks with the user’s perceptions about his/her doubts, as expressed via a communicability utterance. This way, we can provide less information, following a minimalist approach (Carroll, 1998), but more focussed on the user’s current doubt. He/she can access more detailed levels of information as needed. In our approach, the following steps could have been carried out:

1. User right-clicks on a piece of revised text, and a popup appears, showing the utterances he may use for accessing help about the clicked item. He chooses How do I do this?, hoping to find information about how to deal with revised text. A help message is shown, such as “The reviewing functions may be accessed through the Track changes option under the Tools menu.”1


Figure 6: A user’s first attempt at accessing help about “Review”.

Figure 7: Help content window that contains the information required for our user’s task.



2. If the user still doesn’t understand how to do it, he may ask again How do I do this? about the Track changes expression. The corresponding help message could be like the following: “From the Tools menu, select the Track Changes option. If you want to turn on/off the reviewing mode, select the Highlight changes . . . subitem. If you would like to accept or reject each or all of the current revisions, select Accept or Reject Changes . . . . Finally, if you would like to compare two documents, select Compare Documents . . . .”

3. The user now understands that he can deal with the existing revision marks by selecting Accept or Reject Changes . . . . However, he would like to know if there is another way to accomplish this task. He then asks Is there another way to do this? on the Accept or Reject Changes . . . subitem. Another message appears, informing the user that he can also show a Reviewing toolbar and use its buttons.
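The interaction just described could rest on a very small dispatch table from (interface element, utterance) pairs to layered help messages. The sketch below is merely an illustration under assumed names, and hard-codes the messages used in the scenario; it is not how MS-Word’s help is implemented.

# Illustrative sketch of utterance-based, contextual help access for the
# scenario above. All names and messages are hypothetical. Terms wrapped in
# <...> stand for the underlined expressions over which further utterances
# may be issued (layered help).
HELP_CONTENT = {
    ("revised text", "How do I do this?"):
        "The reviewing functions may be accessed through the "
        "<Track changes> option under the Tools menu.",
    ("Track changes", "How do I do this?"):
        "From the Tools menu, select Track Changes. Use Highlight changes... "
        "to turn reviewing on or off, or <Accept or Reject Changes...> to "
        "deal with the existing revision marks.",
    ("Accept or Reject Changes...", "Is there another way to do this?"):
        "You can also show the Reviewing toolbar and use its buttons.",
}

def available_utterances(element: str) -> list:
    """Utterances offered in the popup for a right-clicked element."""
    return [u for (e, u) in HELP_CONTENT if e == element]

def help_for(element: str, utterance: str) -> str:
    return HELP_CONTENT.get((element, utterance),
                            "No specific help registered for this element.")

# The user right-clicks a piece of revised text and picks an utterance:
print(available_utterances("revised text"))
print(help_for("revised text", "How do I do this?"))
# A further utterance over an underlined expression inside the answer:
print(help_for("Track changes", "How do I do this?"))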

6 Proposed Help Model and Architecture

From the aforementioned analysis, we found it necessary to include further kinds of information in our help model. In the following, we present the enhanced help model and the proposed architecture for designing help systems.

6.1 Enhanced Help Model

The previously proposed help model (Figure 1) did not cover all the information required for supporting the results obtained from our analysis. In particular, we have included a new entity, called Flow, from which we derive all the contextual information, and the alternative paths of interaction (along with the underlying rationale). The resulting model is presented in Figure 8.

6.2 Proposed Help Architecture

Taking the enhanced help model as a starting point, we now describe the proposed help architecture that supports users’ most frequent doubts, under a Semiotic Engineering perspective. Our architecture provides two means for accessing help: via a Main Help Module, and via Users’ Utterances.

The Main Help Module is what might be called a traditional help system. It provides answers to all of the questions in the model, and it is typically disconnected from the application (for instance, an independent window). It describes the whole application, from the general domain conceptualisations, through the tasks supported by the application, to the user’s manual, with scenarios exemplifying the usage of the application.

1 If a message contains an underlined expression, one or more help topics are also available for it, via communicability utterances.


Figure 8: Enhanced help model.


Users’ Utterances provide a great shift in perspective. Instead of just asking for local information by means of a hint over an interaction element, which only answers What’s this? questions, in the proposed architecture the user may ask for help by means of a predefined set of utterances (Table 2). In addition to providing varying access to help information in accordance with users’ doubts, our approach guides the designer in composing the help content, taking into account the different levels of affordance implied by utterances. This content must be organised in a way that allows layering (Farkas, 1998), i.e. it should provide clearly marked opportunities for accessing alternative or complementary help content, via utterances within the help system. Thus, users will be able to navigate throughout the help content, at different levels of affordance, and according to their immediate need for information, as expressed via the utterances.

In addition to offering help upon users’ requests, the application can also volunteer help information, by means of Direct Instructions and Error Messages.

Direct Instructions are explicit warnings the designers send to users, to inform them how to proceed at a given moment, or the preferred path of execution for a task.

Error Messages are generated in case users perform an action incorrectly, or when a system failure occurs. These two components were already present in our first model proposal, and have been implemented in a case study (Silveira et al., 2000).
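As a rough illustration of how these components could coexist, the sketch below gathers the Main Help Module, the utterance-based access, and the volunteered help (Direct Instructions and Error Messages) in one structure. Class and method names are our own assumptions, not a specification of the architecture.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class HelpSystem:
    # Main Help Module: traditional, application-wide help topics.
    main_topics: Dict[str, str] = field(default_factory=dict)
    # Utterance-based access: (context element, utterance) -> layered content.
    utterance_topics: Dict[Tuple[str, str], str] = field(default_factory=dict)
    # Help volunteered by the application.
    direct_instructions: List[str] = field(default_factory=list)
    error_messages: Dict[str, str] = field(default_factory=dict)

    def browse(self, topic: str) -> str:
        """Access via the Main Help Module (e.g. an independent window)."""
        return self.main_topics.get(topic, "Topic not found.")

    def utter(self, element: str, utterance: str) -> str:
        """Access via a communicability utterance issued in context."""
        return self.utterance_topics.get(
            (element, utterance), "No help registered for this utterance.")

help_system = HelpSystem(
    main_topics={"Reviewing": "Describes the reviewing tasks, with scenarios."},
    utterance_topics={("revision mark", "What's this?"):
                      "A mark showing text inserted or deleted during review."},
    direct_instructions=["Turn on Track Changes before editing a shared report."],
)
print(help_system.utter("revision mark", "What's this?"))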


Beyond these components, we also identified additional features that may be desirable for enhancing help systems:

FAQ generator: the application may monitor users’ utterances, and build a list of their most frequent doubts for faster access. It seems to be especially useful within a multiuser environment, where a user may benefit from other users’ previous doubts. It can also be a valuable resource for future application redesigns.

Search engine: since the users’ terminology may differ from that used in the application, we must provide synonyms for the expressions used for accessing help, both through utterances and the main module.

Annotation module: this module allows users to bookmark or annotate help topics, for future reference (their own, or other users’). This might also be considered a user-defined FAQ list.
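The first two of these features lend themselves to a very small prototype; the sketch below, with entirely hypothetical names, counts utterances for the FAQ list and consults a synonym table before searching help content.

from collections import Counter

class FAQGenerator:
    """Monitors users' utterances and lists their most frequent doubts."""
    def __init__(self):
        self.log = Counter()

    def record(self, element: str, utterance: str) -> None:
        self.log[(element, utterance)] += 1

    def most_frequent(self, n: int = 5):
        return self.log.most_common(n)

# Synonym table: users' terminology -> terms employed in the application,
# consulted before searching help content (via utterances or the main module).
SYNONYMS = {"revision": "track changes", "reviewing": "track changes"}

def normalise_query(term: str) -> str:
    return SYNONYMS.get(term.lower(), term.lower())

faq = FAQGenerator()
faq.record("revised text", "How do I do this?")
faq.record("revised text", "How do I do this?")
print(faq.most_frequent())          # most common doubts so far
print(normalise_query("Revision"))  # -> "track changes"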

7 Discussion

Help systems shouldn’t be seen either as useless or as a remedy for all intrinsic design problems (Sellen & Nicol, 1990). In order to meet users’ needs and promote their understanding of applications, we use a model based on Semiotic Engineering, exploring the direct and indirect messages from designers to users.

Users access help when there is a breakdown in their understanding of an application. Since the designers have only an indirect participation in the interaction scenario, we try to maximise users’ comprehension of the application from the designers’ point-of-view.

Our approach differs radically from traditional help systems, with explanations that are disconnected from the situation at hand and from users’ needs. Instead, we try to bring some of the designers’ world, their concepts and perceptions, to users. Typical help systems are very impersonal and sometimes abstract, providing definitions that are separate from the current (intended) context of use. Sometimes a little of the designers’ rationale appears in tutorials, but usually not in the help content itself. By following our approach, we bring to help systems more insight about the software, a more accurate perception about the underlying reasons behind every implemented element and operation. We presume that a better informed user will be able to perform better.

The study of communicability utterances has urged us to extend our original model in order to cope with additional communication breakdowns that occur during interaction, and that can be detected by the communicability evaluation method. These communicative utterances allow users to investigate, from different perspectives, what is going on during interaction. This new help access, taking into account what the designer predicts are possible communicative breakdowns that may occur during interaction, opens up new possibilities for users, in an attempt to minimise their doubts and help them deal with their needs when using the system.

Apart from impacting the help system itself, our approach can also be beneficial to the design process, because it prompts designers to explicitly give answers to many questions that remain unanswered throughout the development process. This typically happens because too many things are taken for granted which, from a user’s point-of-view, perhaps shouldn’t be. As a result, many design decisions are not recorded and cannot be retrieved later, if a problem occurs or another, perhaps conflicting, decision must be made.

Acknowledgments

We would like to thank the Semiotic Engineering Research Group at PUC-Rio for invaluable discussions about some of the ideas contained in this paper. The authors would also like to thank PUCRS, PUC-Rio, TeCGraf, and CNPq for supporting their research.

References

Carroll, J. M. (ed.) (1998), Minimalism Beyond the Nurnberg Funnel, MIT Press.

Chamberland, L. (1999), Componentization of HTML-based Online Help, in Proceedings of the Seventeenth Annual International Conference on Computer Documentation (SIGDOC’99), ACM Press, pp.165–8.

Chu-Carroll, J. & Carberry, S. (1998), “Collaborative Response Generation in Planning Dialogues”, Computational Linguistics 24(3), 355–400.

de Souza, C., Prates, R. & Carey, T. (2000), “Missing and Declining Affordances: Are these appropriate concepts?”, Journal of the Brazilian Computer Society 7(1), 26–34.

de Souza, C. S. (1993), “The Semiotic Engineering of User Interface Languages”, International Journal of Man–Machine Studies 39(5), 753–73.

Dourish, P. (1997), Accounting for System Behavior: Representation, Reflection, and Resourceful Action, in M. Kyng & L. Mathiassen (eds.), Computers and Design in Context, MIT Press, pp.145–70.

Farkas, D. (1998), Layering as a Safety Net for Minimalist Documentation, in Carroll (1998), pp.247–74.

Hansen, B., Novick, D. & Sutton, S. (1996), Systematic Design of Spoken Prompts, in G. van der Veer & B. Nardi (eds.), Proceedings of CHI’96: Human Factors in Computing Systems, ACM Press, pp.157–64.

Johnson, W. & Erdem, A. (1997), “Interactive Explanation of Software Systems”, Automated Software Engineering 4(1), 53–75.

Kedar, S., Baudin, C., Birnbaum, L., Osgood, R. & Bareiss, R. (1993), Ask How it Works: An Interactive Intelligent Manual for Devices, in S. Ashlund, K. Mullet, A. Henderson, E. Hollnagel & T. White (eds.), Proceedings of INTERCHI’93, ACM Press/IOS Press, pp.171–2.

Marx, M. & Schmandt, C. (1996), MailCall: Message Presentation and Navigation in a Nonvisual Environment, in G. van der Veer & B. Nardi (eds.), Proceedings of CHI’96: Human Factors in Computing Systems, ACM Press, pp.165–72.

Mittal, V. & Moore, J. (1995), Dynamic Generation of Follow-up Question Menus: Facilitating Interactive Natural Language Dialogues, in I. Katz, R. Mack, L. Marks, M. B. Rosson & J. Nielsen (eds.), Proceedings of CHI’95: Human Factors in Computing Systems, ACM Press, pp.90–7.

Norman, D. (1999), “Affordance, Convention and Design”, Interactions 6(3), 38–42.

Norman, D. A. (1988), The Psychology of Everyday Things, Basic Books.

Prates, R., Barbosa, S. & de Souza, C. (2000a), A Case Study for Evaluating Interface Design through Communicability, in D. Boyarski & W. A. Kellogg (eds.), Proceedings of the Symposium on Designing Interactive Systems: Processes, Practices, Methods and Techniques (DIS2000), ACM Press, pp.308–16.

Prates, R., de Souza, C. & Barbosa, S. (2000b), “A Method for Evaluating the Communicability of User Interfaces”, Interactions 7(1), 31–8.

Priestley, M. (1998), Task Oriented or Task Disoriented: Designing a Usable Help Web, in Proceedings of the Sixteenth Annual International Conference on Computer Documentation (SIGDOC’98), ACM Press, pp.194–9.

Raskutti, B. & Zukerman, I. (1997), “Generating Queries and Replies during Information-seeking Interactions”, International Journal of Human–Computer Studies 47(6), 689–734.

Rintjema, L. & Warburton, K. (1998), Creating an HTML Help System for Web-based Products, in Proceedings of the Sixteenth Annual International Conference on Computer Documentation (SIGDOC’98), ACM Press, pp.227–33.

Roesler, A. & McLellan, S. (1995), What Help Do Users Need? Taxonomies for On-line Information Needs and Access Methods, in I. Katz, R. Mack, L. Marks, M. B. Rosson & J. Nielsen (eds.), Proceedings of CHI’95: Human Factors in Computing Systems, ACM Press, pp.437–41.

Sellen, A. & Nicol, A. (1990), Building User-centered Online Help, in B. Laurel (ed.), The Art of Human–Computer Interface Design, Addison–Wesley, pp.143–53.

Silveira, M., Barbosa, S. & de Souza, C. (2000), Modelo e Arquitetura de Help Online, in M. Pimenta & R. Vieira (eds.), Proceedings of III Workshop on Human Factors in Computer Systems, IHC’2000, Instituto de Informatica, UFRGS, pp.122–31.

Sleeter, M. (1996), OpenDoc — Building Online Help for a Component-oriented Architecture, in Proceedings of the Fourteenth Annual International Conference on Computer Documentation (SIGDOC’96), ACM Press, pp.87–94.

van der Veer, G. & van Welie, M. (n.d.), “Groupware Task Analysis”, Notes from a Tutorial at CHI’99. http://www.cs.vu.nl/~martijn/gta/.

Winograd, T. & Flores, F. (1986), Understanding Computers and Cognition: A New Foundation for Design, Ablex. From 1988, an Addison–Wesley publication.

