Seamus Speaks
by Seamus for TMIS


Anthropomorphic Illusions in the Human-Computer Interface


"Intelligence, considered in what seems to be its original feature, is the faculty of manufacturing artificial objects, especially tools for making tools."

Henri Bergson (1911)

1. Introduction

This article proposes to take a bit of a ramble, to take the long way around the problem of anthropomorphic illusions in human-computer interfaces. In fact, it is my hope to 'encompass' the problem.

This approach to the topic mirrors the research that led to the line of thought that is explored here. The ideas also developed a bit like a stroll through a garden. The path was not aimless, but rather built of consecutive layers of thought and reflection on topics that became progressively more focused.

My first motivation was to discover something about the theoretical and practical underpinnings of the thinking on graphical user interfaces in the most general sense. The amount of material available is overwhelming, though much of it consists of instructions, recommendations, and opinions rather than research directed at the phenomena underlying the topic. Along the way, I perused in some depth three complete volumes on interface design (including Jef Raskin's recent and widely acclaimed The Humane Interface), and some interesting insights into the state of the art in interface design emerged.

A major narrowing of the scope of this research focused on affordances and metaphors, topics that are most agreeable to the psychologist and the linguist and which have generated a great deal of heated discussion and enlightened commentary.

The third phase of the research was drawn toward anthropomorphic illusions and their use, indeed their ubiquity, in the design of interfaces for 'intelligent' agent software. I have selected two articles to serve as the focal point for a discussion of the merits of this design strategy.

The first paper is titled "Can Computer Personalities Be Human Personalities?" (Nass et al.). It discusses the remarkable ease with which designers can invest an interface with a perceived human-like 'personality' and provides empirical evidence connecting this phenomenon to received human personality theory in psychology.

The second paper is "'It's the Computer's Fault' -- Reasoning About Computers as Moral Agents" (Friedman and Millett). As the title implies, the authors have gathered empirical data about the tendency of people to impute moral agency to computers. The striking point about this paper is the extremely high percentage of even computer-sophisticated subjects who ascribe such agency to their machines.

In a way, these two articles are little gems of their kind. They represent a kind of psychologically oriented research into human tendencies with regard to interaction with this particular kind of machine. They provide reproducible, systematic data that may serve as a dependable reference point for design decisions that, in turn, have an enormous potential for impact on humans as they interact with computing machinery. Regrettably, this is somewhat rare. Most discussions of such issues are nothing more than an assertion of the authors' preconceptions and prejudices.

The structure of my argument will proceed from some comments about the history of interface design, especially with regard to the issue of consistency, and build upon those comments with a discussion of the elusive nature of affordance and metaphor as applied in interface design. Some personal observations and other evidence on the tendency to anthropomorphize computers will then be presented, followed by summaries of the two papers that have been chosen for presentation.

The concluding section will constitute something of a caveat about the wisdom of pursuing anthropomorphic interfaces for 'intelligent' agents. I hope to give some reasons for pause to those who make such design decisions, but with a distinct lack of optimism about the future directions that will be taken by those who are most influential in the human-computer design world.

2. Consistency and the History of Interface Design

"A foolish consistency is the hobgoblin of small minds."

Ralph Waldo Emerson

In 1968, Douglas Engelbart of the Stanford Research Institute demonstrated the first graphical user interface at the Fall Joint Computer Conference in San Francisco. Engelbart, a disciple of Vannevar Bush and inventor of the computer mouse, laid the foundation for almost all future implementations of the GUI at that demonstration, and many of his researchers later carried the ideas to the Xerox PARC facility in Palo Alto, California. The rest is simply a history of theft. Steve Jobs went to PARC and was enchanted by what he saw. So he stole it. Granted, the Apple people did an enormous amount of research and development on the concepts they had shoplifted, but the only thing that they were eventually able to copyright was the icon for the 'trash can.' They were able to force Bill Gates and Microsoft to turn that icon into the 'recycle bin' when Gates and company, in turn, stole the Apple GUI. What Microsoft has added to the GUI is a 'look and feel' that serves as their fountain of customer loyalty. The original ideas all came from Engelbart and the Xerox team. There is nothing new under the sun.

The problem here is that 'look and feel.' All texts and discussions of human-computer interfaces emphasize consistency as the single most critical component of interface design. This is no exaggeration, as the following typical example clearly shows:

Consistency is a fundamental principle of good UI design, but it's really just a corollary of the axiom "make the program model match the user model", because the user model is likely to reflect the way that users see other programs behaving. If the user has learned that double-clicking text means select word, you can show them a program they've never seen before and they will guess that the way to select a word is to double-click it. And now, that program [had] better select words when they double click (as opposed to, say, looking the word up in the dictionary), or else you have a usability problem.

If consistency is so obviously beneficial, why am I wasting your time and mine evangelizing it? Unhappily, there is a dark force out there that fights against consistency, and that is the natural tendency of designers and programmers to be creative.

Now, I hate to be the one to tell you "don't be creative," but unfortunately, to make a user interface easy to use, you are going to have to channel your creativity into some other area. In most UI decisions, before you design anything from scratch, you absolutely have to look at what other popular programs are doing and emulate that as closely as possible. If you're creating a document editing program of some sort, it better look an awful lot like Microsoft Word, down to the accelerators on the menu items that you have in common…

Even if it's not right, if Microsoft is doing it in a popular program like Word, Excel, Windows, or Internet Explorer, then millions of people are going to think that it's right, or at least, fairly standard, and they are going to assume that your program works the same way. Even if you think (as the Netscape 6.0 engineers clearly do) that Alt+Left is not a good shortcut key for "Back", there are literally millions of people out there who will try to use Alt+Left to go back, and if you refuse to do it on some general religious principle that Bill Gates is the evil smurf arch-nemesis Gargamel, then you are just gratuitously ruining your program so that you can feel smug and self-satisfied, and your users will not thank you for it.

And don't be so sure it's not right. Microsoft spends more money on usability testing than you do, they keep detailed statistics based on millions of tech support phone calls, and there's a darn good chance that they did it that way because more people can figure out how to use it that way.

To create a good program with a usable user interface, you're going to have to leave your religion at the door, thank you. (Spolsky, 2001)
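In practical terms, Spolsky's advice is mundane: bind the accelerators your users already know before inventing anything cleverer. The following is a minimal, purely hypothetical sketch in Python (the table and handler are invented for illustration, not taken from any of the sources discussed here):

    # A hypothetical dispatch table illustrating Spolsky's advice: honor the
    # accelerator conventions users bring from dominant applications rather
    # than inventing novel bindings they will never guess.

    STANDARD_ACCELERATORS = {
        "Ctrl+S": "save",      # near-universal in editors and office suites
        "Ctrl+Z": "undo",
        "Alt+Left": "back",    # the binding Netscape 6.0 famously dropped
    }

    def handle_key(chord, actions):
        """Dispatch a key chord to the conventional action, if one is bound."""
        name = STANDARD_ACCELERATORS.get(chord)
        if name in actions:
            actions[name]()

    # Users will try Alt+Left whether or not the designer approves of it.
    handle_key("Alt+Left", {"back": lambda: print("navigating back")})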

The fact that people develop mental models of what to expect from a user interface places an enormous constraint on interface designers. This constraint is continually invoked as a justification by those whose dominance of the desktop has smothered design innovation for the last decade or more. Both Apple and Microsoft are guilty. Which is the guiltier depends perhaps on your preference in the number of buttons on your mouse and other topics of religious significance.

These received, consistent designs are not necessarily good or bad per se, but insofar as they are less than optimal, we have institutionalized bad interface design in the name of consistency. A de facto standard has been created. An excellent example is the delete confirmation box. Despite the fact that users immediately become habituated to clicking through such boxes without thinking, these confirmations have become gospel, a universal standard of good GUI design. Better that the interface should actively protect the user against unwanted loss of work than train her to ignore warnings.
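To make the alternative concrete, here is a minimal, purely hypothetical sketch (the class and names are invented for illustration): deletion is made trivially reversible, so no confirmation dialog is needed and no habituation to warnings can occur.

    # A hypothetical sketch of the 'undo' approach to protecting work:
    # instead of interrupting the user with an 'Are you sure?' dialog,
    # deletion is made reversible, so the question never needs asking.

    class Workspace:
        def __init__(self):
            self.documents = {}   # name -> contents
            self.trash = []       # stack of (name, contents) pairs

        def delete(self, name):
            # No confirmation: the action is safe because it is reversible.
            self.trash.append((name, self.documents.pop(name)))

        def undo_delete(self):
            # Restore the most recently deleted document.
            name, contents = self.trash.pop()
            self.documents[name] = contents

    ws = Workspace()
    ws.documents["essay.txt"] = "Anthropomorphic illusions..."
    ws.delete("essay.txt")    # no dialog, nothing to click through
    ws.undo_delete()          # and the work is never unrecoverable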

Such phenomena are disturbingly reminiscent of our recent class discussion of the QWERTY keyboard. Advocates of the Dvorak layout claim increases in typing speed and efficiency of up to 80%, and it is sublimely easy to switch layouts in software, yet network effects make it counterproductive for most individuals to learn the new system. An archaic, inefficient interface has been frozen by time and usage into a consistent, immutable de facto standard. Such, I fear, will be the fate of the graphical user interface.

Now a great many designers and theorists will argue that the current standards are more than justified by the psychological efficaciousness of affordances and metaphors, but I hope to cast a little healthy skepticism on that claim in the next section.

3. Affordances and Metaphors

"Oh what a tangled web we weave

When first we practice to deceive!"

Sir Walter Scott


The concept of affordances was developed by the perceptual psychologist J.J. Gibson in the late 1970s. Affordances "refer to the actionable properties between the world and an actor (a person or animal). To Gibson, affordances are relationships. They exist naturally: they do not have to be visible, known, or desirable." (Norman) Informally, the term has come to mean the ability of an object to convey its purpose to the user.

Donald Norman, author of The Psychology of Everyday Things, broadened the definition to include two categories, real affordances and perceived affordances. Real (or classical Gibson) affordances are properties of objects that exist in the real world. The only real affordances that we have in a GUI tend to be objects like buttons, which afford clicking. Most other user interface objects have only perceived affordances, which is not at all the same thing. Perceived affordances are learned cultural constraints and conventions, and "play very different roles in physical products than they do in the world of screen based products." (Norman) Thus, this concept borrowed from the world of industrial design seems to have been extended beyond its domain of applicability in the world of interface design. The manipulation of objects such as icons or hypertext links must be learned, however much the designer wishes to justify them as 'intuitive.' Icon functionality in particular is obviously overextended, as can be ascertained by glancing at the desktop of any 'naïve' user, where her entire screen surface is covered by identical folder icons that live on the desktop because the user has not been trained to think in hierarchical filing system terms.

Likewise, metaphor is often uncritically brought to bear as a justification for many of today's standard GUI practices. "Metaphors are linguistic devices in which the similarities between two things are highlighted by referring to them as equivalent… Metaphors typically compare things that have pre-existing identities, both conceptually and in appearance." (Gaver) However, there is a problem with extending the linguistic concept of metaphor to graphical symbols. The conceptual mapping of two ideas differs from the perceptual mapping that must relate the concept to a graphic image.

That the functionality of the software must somehow be reflected in the image that represents it is no small problem for icon designers, but the larger problem is that there is often no 'intuitive' metaphorical relationship possible between the icon and the concept in the user's mental model. Once again, we must rely on learned cultural constraints and conventions to arrive at the necessary coupling of concept and function. The much-touted 'intuitive,' metaphorical nature of the interface is most often neither intuitive nor metaphorical. Still, interface designers find the concepts of affordance and metaphor handy in guiding their work. It simply is not true, however, that these concepts justify current GUI practices such as the 'desktop metaphor' or the proliferation of icons.

If we still find ourselves working at a desktop represented graphically on a computer screen in 20 years or so, the designers of today will have done us a severe disservice. Having sown the seeds of doubt about the benign effects of interface consistency and about the validity of affordances and metaphors as applied to human-computer interfaces, let us examine a more specialized manifestation of these concepts, the anthropomorphic agent.

4. Anthropomorphism and Agents in Computer Interfaces

"Something seems to have happened to the life support system, Dave."

HAL, the computer in Stanley Kubrick's 2001: A Space Odyssey

I am just old enough to remember the first real impact of the computer on the wider public consciousness. It was a product of television in its early days, when the CBS television network enlisted an early UNIVAC computer, as a publicity stunt, to predict the outcome of the Eisenhower-Stevenson presidential election of 1952. The computer statistically analyzed early returns and tipped Eisenhower to win even before the polls had closed. CBS refused to release the prediction until late that night, when it was obvious that the UNIVAC was correct in every detail. This provoked a huge sensation, and the American nation began its long acquaintance with the 'Electronic Brain,' as it was universally called. Thus from the very beginning, these most inhuman (and often inhumane) machines were cast as having human attributes.

Whatever drives us to ascribe humanity to these machines, it has been a constant theme in the popular imagination: from science fiction's Rollo the Robot to the infamous HAL 9000 in the movie 2001: A Space Odyssey, the computer and the robot are endlessly anthropomorphized and endlessly both threatening and fascinating. My first computer programming project (in 1969, and therefore after HAL) was to get an IBM 1130 to format and print Lewis Carroll's famous Jabberwocky using Fortran and EBCDIC. Apparently my instructors found this both ingenious and appealing (I got an A). Here again, my motivation (and perhaps their approval, in those distant days when computers seldom processed any text at all) was the charm of anthropomorphizing the machine to do something so uncharacteristic as to spew out poetry.

People have long invested themselves in the objects around them in a unique and curious way. These objects become 'extensions' of their owner and are psychologically imbued with human characteristics. Boats and airplanes are always 'she,' and even computers have names. Furthermore, we humans invest ourselves emotionally in these objects into which we have extended ourselves. When another driver carelessly cuts in too close to an automobile that we have made into an extension of ourselves, most of us react with the same discomfort, even anger, as when our own personal boundaries are violated. It is unremarkable, then, that we tend to introject ourselves into such a responsive and seductive machine as the personal computer.

This extends into the human-computer interface as well:

Why is this tendency to personify interfaces so natural as to be virtually universal in our collective vision of the future?

Computers behave. Computational tools and applications can be said to have predispositions to behave in certain ways on both functional and stylistic levels. Interfaces are designed to communicate those predispositions to users, thereby enabling them to understand, predict the results of, and successfully deploy the associated behaviors. (Laurel, 355)

This tendency has been adopted wholeheartedly by those who design intelligent agents, perhaps because it seems to be a thoroughly natural 'metaphor' or affordance for the function that such agents provide. We should not be confused when an anthropomorphic agent is represented as an animal. For the most part, it is not animal behaviors that are exhibited, but rather typically human ones. Mickey Mouse is not a representation of a mouse. Perhaps an exception to this might be the dog agent that fetches the email in some GUI implementations, but that seems to be merely an effort to be 'cute.'

Agents themselves are an extremely interesting construct. The idea, in terms of computer functionality, was first formulated in the 1950s by John McCarthy, and the term was coined by Oliver Selfridge a few years later. The simplest sense of the word merely indicates one who takes action, which can be extended to imply one who is empowered to act on behalf of another person. The tasks that a computer agent might be asked to perform would include those that "require expertise, skill, resources, or labor that we need to accomplish some goal and that we are unwilling or unable to perform ourselves." (Laurel, 359-60) A further implication is that such an agent would ask the user for advice and guidance in the performance of such tasks, and that the tasks would be customized to the user's needs. That such intelligent, computerized agents are sorely needed is simply indisputable. That it seems utterly appropriate, for psychological and functional reasons, to represent these agents in the interface as having human or humanoid characteristics is the problem that we have set out to explore. It is certainly easy to implement an agent interface with a human-like personality, as we shall see in the research presented in the next section. Whether it is the best strategy in the long run is the question.
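As a rough sketch of the functional idea only (hypothetical, and not a description of any published agent architecture), such an agent acts within a mandate granted by its user, defers to the user when a task exceeds that mandate, and is customized by the answers it receives:

    # A hypothetical sketch of an agent in the sense described above: it acts
    # on the user's behalf within granted permissions, and asks for guidance
    # rather than guessing when a task falls outside them.

    class Agent:
        def __init__(self, owner, permissions):
            self.owner = owner
            self.permissions = set(permissions)   # tasks pre-approved by the user

        def perform(self, task, ask_user):
            if task not in self.permissions:
                if not ask_user(f"May I {task} for you?"):
                    return f"{task} declined"
                self.permissions.add(task)        # customized by the user's answer
            return f"{task} done on behalf of {self.owner}"

    agent = Agent("the user", permissions={"fetch the mail"})
    print(agent.perform("fetch the mail", ask_user=lambda q: True))
    print(agent.perform("file the taxes", ask_user=lambda q: False))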

5. The Research Papers

"If we knew what it was we were doing, it would not be called research,

would it?"

Albert Einstein

"I don't know who discovered water, but it wasn't a fish."

Marshall McLuhan

The research papers were selected from the CHI (Computer-Human Interaction) '95 Proceedings of the Association for Computing Machinery. They are in the freely distributed section of the ACM archives and so may be accessed by non-members.

5.1 Bibliographic Reference

Friedman, Batya and Lynette Millett. 1995. "It's the Computer's Fault" -- Reasoning About Computers as Moral Agents. CHI '95 Proceedings. ACM. Accessed 27/02/02.

5.1.1 Authors' Abstract

Typically tool use poses few confusions about who we understand to be the moral agent for a given act. But when the "tool" becomes a computer, do people attribute moral agency and responsibility to the technology ("it's the computer's fault")? Twenty-nine male undergraduate computer science majors were interviewed. Results showed that most students (83%) attributed aspects of agency -- either decision-making and/or intentions -- to computers. In addition, some students (21%) consistently held computers morally responsible for error. Discussion includes implications for computer system design.

5.1.2 Summary of Friedman and Millett

This study conducted an interview survey of 29 male undergraduate computer science students in California in order to assess their views about computer agency and moral responsibility for computer error. Three specific areas were probed: 1) the computer's capability to make decisions and its capability to have intentions, 2) the students' assessments of computer system characteristics and limitations, and 3) the students' judgements of moral responsibility for complex computer decision making in two scenarios.

The first scenario presented a computer error that gave too much radiation to a patient during medical treatment. The second concerned a computer mistakenly rejecting a qualified job candidate. For each scenario, three levels of human involvement were presented: a completely automated computer system, a token intervention by a person of low status and authority in the system, and a non-token intervention by a person with authority and status.

The scoring was based on the amount of blame each student assigned to the computer in each scenario and condition. Careful coding protocols and appropriate nonparametric statistics were used to tabulate and analyze the results.

It was found that 79% of the students judged that the computers had decision-making ability, and 45% judged the computers to have intentions. 83% of the students ascribed at least one of these abilities to computers, while 41% ascribed both. The authors discuss the reasons the students gave in support of these judgements.
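(With N = 29, these percentages correspond to roughly 23, 13, 24, and 12 students, respectively.)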

About 21% of the students consistently blamed the computer for the computer error, rather than the system designers, the computer operators, or the administrators. The authors explore in detail the justifications given for not blaming the computer and for not blaming the human actors in the scenarios.

The authors conclude that their work supports "a growing body of research that suggests people, even computer literate individuals, may at times attribute social attributes to and at times engage in social interaction with computer technology." They feel that the fact that such subjects hold the computers themselves responsible for computer error "should give us pause," and that designers should consider taking measures to ensure that users realize that it is the human actors who are, in reality, responsible for computer errors.

5.2 Bibliographic Reference

Nass, Clifford, Youngme Moon, BJ Fogg, Byron Reeves and Chris Dyer. 1995. "Can Computer Personalities Be Human Personalities?" CHI '95 Proceedings. ACM. Accessed 27/02/02.

5.2.1 Authors' Abstract

The present study demonstrates that (1) computer personalities can be easily created using a minimal set of cues, and (2) that people will respond to these personalities in the same way they would respond to similar human personalities. The present study focuses on the similarity-attraction hypothesis, which predicts that people will prefer to interact with others who are similar in personality. In an experiment (N = 48), dominant and submissive subjects were randomly matched with either a dominant or submissive computer. When a computer was endowed with the properties associated with dominance or submissiveness, subjects recognized the computer's personality type along only that dimension. In addition, subjects not only preferred the similar computer, but they were more satisfied with the interaction. The findings demonstrate that personality does not require richly defined agents, sophisticated pictorial representations, natural language processing, or artificial intelligence. Rather, even the most superficial manipulations are sufficient to produce personality, with powerful effects.

5.2.2 Summary of Nass et al.

These authors explore their subjects' reactions to computer 'personalities' using the similarity-attraction hypothesis from personality psychology. They note that although it is commonplace to attribute personality to computer agents, no one had yet examined the phenomenon using the empirical and theoretical tools of psychology.

They conducted a lab experiment using computers whose interfaces had been specifically endowed with personality markers. They predicted that the subjects would be able to identify that personality and respond to it in a manner predicted by the similarity-attraction hypothesis, i.e. that people will prefer to interact with personalities similar to their own.

Their method was to design computer interfaces that would be perceived as either dominant or submissive. The subjects were rated on a dominance/submissiveness scale using a standard personality test, worked with the computers on various problem-solving tasks, and subsequently recorded their reactions to the experience.
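Nass et al. do not reproduce their exact wording here, but the flavor of such a manipulation can be suggested with a hypothetical sketch (the task and phrasings below are invented for illustration): the same advice is delivered with either assertive or tentative cues, and nothing else about the interaction changes.

    # A hypothetical illustration of the minimal textual cues that can create
    # a perceived 'personality'. The actual materials used by Nass et al. are
    # not reproduced here; only the general idea is.

    def suggest(move, personality):
        if personality == "dominant":
            # Assertive, confident, commanding language; strong claims.
            return f"You should definitely {move}. This is clearly the best option."
        # Tentative, deferential language; hedges and questions.
        return f"Perhaps you might consider whether to {move}?"

    print(suggest("rank the flashlight first", "dominant"))
    print(suggest("rank the flashlight first", "submissive"))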

The results supported their hypotheses:

· the dominant computer was indeed perceived to be significantly more dominant than the submissive computer,

· the subjects did not rate the computers differently in terms of affiliation or competence, and

· the dominant subjects preferred working with dominant computers, and the submissive ones preferred submissive machines.

They conclude that "personality is powerful and easy to manipulate, even in its simplest form." In terms of computer interfaces, they feel that the simulation of personality can be implemented very simply and does not require sophisticated algorithms, complex graphic environments, or "richly defined agents."

5.3 Comparison of the Studies

While these studies differ substantially in their research methods, both are of special interest as empirical explorations into what had hitherto been assumed, common-sense explanations of their respective phenomena. That is to say, in both cases 'everybody' already 'knew' the truth without needing to test it scientifically. From this point of view, neither result is really surprising, yet the strength of the results is such that we should indeed be "given pause" and consider the implications.

Even computer science majors (who certainly ought to know better) impute decision-making, intention, and moral responsibility to the poor dumb machines that they are learning to understand in the deepest possible way. And it is a profoundly simple matter to imbue a computer with a personality that humans will automatically perceive and respond to socially, exactly as if it were human.

Both of these studies point to something about the nature of human-computer interaction that we ignore at our peril. We are designing interfaces to machines under the blithe assumption that it is the best and most natural thing to present the machine as an anthropomorphic entity, as a virtual human. It is almost automatic. Yet human social interaction certainly has its darker underside, and these studies suggest that perhaps we are foolish to ignore that possibility.


6. Discussion and a Tentative Conclusion

"Genius in truth means little more than the faculty of perceiving in an

unhabitual way."

William James

"Reality is that which, when you stop believing in it, doesn't go away."

Philip K. Dick

"The most erroneous stories are those we think we know best -- and

therefore never scrutinize or question."

Stephen Jay Gould

Our discussion has set up a series of assumptions about the nature of a good human-computer interface and then taken pot shots at them in an effort to stimulate more critical thinking. First, we examined the need for consistency in the interface. While consistency seems to be absolutely critical within an application or suite of applications, I have argued that it has been twisted to support the view that all applications must be consistent with one de facto standard, and that standard has become Microsoft Office. May the good Lord please spare us.

Next we took a look at the use of affordances and metaphors to justify many features of contemporary GUIs. That some of the original purveyors of these notions are backing away and casting doubt on them should give rise to a certain amount of healthy skepticism. These concepts have been extended beyond their original domain of applicability and are bandied about quite carelessly. They are certainly useful, but loose and inaccurate usage does not engender confidence in those who maintain that they support and justify contemporary interface design practices.

That agents should be anthropomorphic as a natural way to exploit human interactive tendencies has also been discussed. It is hoped that by now the reader will have second thoughts about simply accepting that assertion as obvious. Certainly the two research papers presented show that it is very easy to imbue interfaces with a personality, and that humans will readily ascribe intention and responsibility to computing machinery.

It is the thesis of this article that we should proceed very cautiously when we intentionally set out to design computer interfaces that give the illusion of social interaction with another human. It is so tempting and 'natural' to do so that we may be creating a monster, a Frankenstein creation that fails to fulfill the expectations we are purposely led to invest in it. One of my professors recently made the interesting point, quite contrary to the received wisdom of systems analysis and design, that the analyst should not consult the user about which features are desired in a system: budgetary and management constraints often prohibit the implementation of many such features, and it is much better not to raise expectations that you cannot fulfill. Today's technology cannot provide the level of insight, pattern analysis, and common sense that we expect from human beings. I strongly suspect that such machine capabilities are generations away, if they ever arrive at all.

One might attribute this lust for anthropomorphic illusion to the obvious fondness that many computer professionals have for science fiction literature. Robots and futuristic computers in this genre seem always to display startling verisimilitude to human behavior. Spielberg's recent movie AI is a prime example, exploring the moral implications of truly intelligent machines. It has been a recurring theme over the last 70 or 80 years. It is primal, interesting, and often downright scary. It is also the basis of much of the Luddite reaction to technology in recent years. Computers promise to be humanlike and easy to use. When it turns out that neither is true, people react as if they had been betrayed.

Wouldn't it be better to encourage users to think of their machines as machines? Should not programmers and system designers stand up and take frank responsibility for the performance of the systems that they create? It actually seems rather cowardly to invest these machines with tricks that make the user believe that they are human. What an adroit way of shifting the blame when things go wrong! How incredibly dishonest this is!

Will anyone heed this call to skepticism? I sincerely doubt it. The steamroller of fashion and vested interest can't be stopped that easily. People truly are in a state of 'Gee whiz!' over the illusions that technology can create. Microsoft pays the piper, and Microsoft calls the tune. After the disaster of Microsoft Bob and of Clippit, the much-despised, irritating paperclip that serves as the help agent in MS Office, one might expect the software giant to be cautious about introducing new anthropomorphic agents. Nope. Microsoft will never say die. This research turned up a project they are developing (the Persona Project) that employs a talking parrot as an agent. Walt Disney apparently had a distorting influence on Bill Gates's emotional development.

For better or worse, there is also a recurring theme in science fiction that focuses on the outcasts from the dominant society. This is being acted out very creatively in the open source and free software community. I personally enjoy working almost exclusively in the Enlightenment window manager environment under Linux and FreeBSD. This GUI offers little in the way of traditional icons and affordances. All applications are accessible only through menus that are brought up by various mouse clicks on the background. I leave a pager on the screen (to visualize and move to various other screens) and an icon box that contains only the icons of minimized applications. It is elegant. It is completely unlike anything else I've ever seen. Was it difficult to learn? It took me 10 minutes to master. It is true that when I come directly from working with Windows, I tend to double click things instead of the Unix standard single click, but I only do it once or twice. So much for consistency.

Perhaps this arena will prove to be the test bed for any real development in human-computer interfaces. The industry is marching in lock-step with the mass marketing movement. They are operating under the assumption that users are stupid and need to be coddled constantly. They might be right, but their attitude will not lead to innovation and the development of more powerful and efficient user interfaces.

And as a closing note, here is one last epigram with a pithy warning.

"A witty saying proves nothing."

Voltaire


7. Bibliography

The Papers Presented

Friedman, Batya and Lynette Millett. 1995. "It's the Computer's Fault" -- Reasoning About Computers as Moral Agents. CHI '95 Proceedings. ACM. Accessed 27/02/02.

Nass, Clifford, Youngme Moon, BJ Fogg, Byron Reeves and Chris Dyer. 1995. "Can Computer Personalities Be Human Personalities?" CHI '95 Proceedings. ACM. Accessed 27/02/02.


Other Source Materials


Ball, Gene, Dan Ling, David Kurlander, John Miller, David Pugh, Tim Skelly, Andy Stankosky, David Thiel, Maarten Van Dantzich and Trace Wax. 2002. Lifelike Computer Characters: The Persona Project at Microsoft Research. Microsoft Research. Redmond, Washington. Accessed 04/03/02.

Gaver, William W. 1995. Oh What a Tangled Web We Weave: Metaphor and Mapping in Graphical Interfaces. CHI '95 Proceedings. ACM. Accessed 27/02/02.

Laurel, Brenda. 1999. "Interface Agents: Metaphors with Character." In The Art of Human-Computer Interface Design. Brenda Laurel and S. Joy Mountford (Eds.), pp. 355-365. Addison-Wesley. Reading, Massachusetts.

Norman, D. A. 1998. Affordances and Design. Don Norman's home page. Accessed 19/02/02.

Norman, D. A. 1999. "Affordance, conventions, and design." Interactions 6(3), May 1999, pp. 38-43.

Raskin, Jef. 2000. The Humane Interface: New Directions for Designing Interactive Systems. Addison Wesley. Boston.

Rehder, Bob, Clayton Lewis, Bob Terwilliger, Peter Polson, and John Rieman. 1995. A Model of Optimal Exploration and Decision Making in Novel Interfaces. CHI '95 Proceedings. ACM. Accessed 27/02/02.

Spolsky, Joel. 2001. User Interface Design for Programmers. Accessed 29/01/02.

Takeuchi, Akikazu, and Taketo Naito. 1995. Situated Facial Displays: Towards Social Interaction. CHI '95 Proceedings. ACM. Accessed 27/02/02.
