The detection and elimination of spurious complexity

Harold Thimbleby
Computing Science
Middlesex University, London, N11 2NQ, GB

DRAFT

Abstract

Computer science develops complex systems that demand all our attention merely to begin to understand them. Critical thinking that might otherwise be directed at blocking rhetoric and detecting hubris is overwhelmed. This paper shows that there is much unchecked hyperbole in computing, which affects our own standards and our ability to design well. The paper explains why such bullshit comes about and how people collude in its propagation, and it proposes ways of reducing the problem. Furthermore, we show that detecting and eliminating it is a high calling: it must be seen as engaging in justice and fighting hypocrisy (even in ourselves), and it is an extremely worthwhile, if daunting, task.

"Learning how to not fool ourselves is, I'm sorry to say, something that we haven't specifically included in any particular course that I know of. We just hope you've caught it by osmosis." Richard Feynman

Introduction

When computers work well, they work very well. Handheld calculators would have been miracles a few years ago; fly-by-wire aircraft are very impressive -- there are many other examples. But when things go wrong, as they do from time to time, they can go brain dead in ways we would rather forget than think about.

I was told recently of a frustrated user who jumped up and down on his electronic personal organiser, until there was broken plastic and glass around on the floor. I am sure it was a satisfying experience! But can you imagine someone jumping up and down on their paper diary? You'd have to be mad to get much satisfaction from destroying one.

There is something special about computer systems, which personal organisers in the story represent. They are complex, unreliable -- and yet we depend on them, and buy upgrades to go even faster.

So what uniquely identifies computing? We could start with an approach like Turing's, but this defines an object of study, not what characterises the field. What is unique is the impact of spurious, man-made complexity. Most computing is not based on elegant programs, or even ones that work, but consists of hugely complex systems like Windows 98, the World Wide Web (and all its browser software), aerospace systems, financial systems, nuclear control systems, video recorders, and a host of consumer gadgets, from toasters to tamagotchis. Indeed, tamagotchis represent computing rather well:

I know a 12-year-old who has written a Visual Basic program to behave like her (now defunct) tamagotchi. The complexity of a tamagotchi is reasonable for a 12-year-old to construct, yet its behaviour is complex enough to challenge the skills of someone like myself, with postgraduate qualifications in computing!

We could make many more observations -- e.g., computing systems can fail but not stop working (a broken bridge doesn't bridge a river, but a financial program that fails is still a financial program). Without over-philosophising, computing concerns objects that

and, in consequence, they:

The conspicuous feature is the difference between construction complexity and comprehension complexity. We can view this difference from inside, by examining the programming process, or from outside, by examining the assessment process.

From inside, the programming effort is effectively linear. If a program is a string of bits, programs grow at most linearly with the typing the programmer does (some typing may be deletions), and this is true regardless of where in the program the programmer is editing. The number of things a program can do, however, grows exponentially with the number of interactions it performs while executing -- the user does not know what the outcome of an interaction will be (e.g., because they do not know what state the program is in). So programs typically have a complexity of behaviour that grows faster than their complexity of construction. This argument makes few assumptions, for the sake of being realistic. Of course, in the best case the programmer could write a trivial program (e.g., the instruction that freezes the computer): clearly trivial to program and to use -- but few programs are so simple. In the worst case, the programmer constructs some trapdoor function (of which there are many simple examples: the program could require the user to solve a Diophantine equation, or it could ask questions about context-free grammars, etc.): easy to program, yet arbitrarily hard to use, if not undecidable.
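A back-of-the-envelope illustration of this gap: consider a hypothetical device with n independent on/off settings. Implementing it takes code roughly linear in n, but a user faces 2^n possible states and n^k possible sequences of k button presses. The following sketch (Java; my own illustration, not from any real product) merely tabulates these counts:

  // Tabulate how behaviour grows for a hypothetical device with n on/off toggles:
  // code to implement such a device is roughly linear in n, but users face
  // 2^n states and n^k distinct sequences of k presses.
  public class StateExplosion {
      public static void main(String[] args) {
          final int presses = 10; // length of an interaction sequence
          for (int n = 1; n <= 20; n++) {
              long states = 1L << n;                   // 2^n reachable settings
              double sequences = Math.pow(n, presses); // n^10 press sequences
              System.out.printf("%2d toggles: %,d states, about %.0f ten-press sequences%n",
                                n, states, sequences);
          }
      }
  }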

It is popular to say humans have bounded rationality; for our purposes it is sufficient that humans have a finite speed of computation and a finite lifetime -- in other words, that there is a hard upper bound on their cognitive resources. It follows that there are programs people can write that cannot be understood. Indeed, people who are sufficiently skilled to build such objects routinely achieve behaviour that is incomprehensible even to themselves, though they may have techniques to deny it -- some of which we discuss below. (As an aside, this is why formal methods are necessary: to compress the behaviour into something manageable.)

There is also an adversarial aspect to the problem. In the natural sciences, it is assumed that however complex the Universe is, given long enough, science will catch up with it. Occam's Razor encourages us to be somewhat more optimistic: we believe in the parsimony of nature. In understanding computer systems, however, we would be very lucky indeed if a system stayed fixed long enough to understand it before it was 'maintained' or completely upgraded!

Because understanding a system overloads our limited cognitive resources, we have no spare capacity to evaluate it and come to a sound opinion of it. Since our cognitive resources are typically consumed completely by just trying to use the program, we may not even be aware that we have no critical faculties left over to form an informed opinion of it! As we argue next, there are many confounding psychosocial issues that make things even harder.

From outside, interesting things happen. A person watches the execution of a program mediated through its peripherals, such as a window on a screen. Any observation records a trace, which the person generalises into a model of what the program should be able to do in principle. Unfortunately, there are no guarantees for this generalisation, yet evolution has endowed us with over-powerful mechanisms to generalise. The so-called "media equation" (Reeves & Nass, 1996) says we take media as reality: evolutionarily speaking, media are so recent that we tend to treat everything our senses perceive as real. A real program behaving like the one demonstrated would work everywhere else in its domain; yet the demonstration has shown us only a single trace, and in a demonstration one cannot distinguish between a simulation (which need be no more than a "film") and the real thing.

We regularly exploit the media equation for enjoyment, for the willing suspension of our critical faculties. Theatre is the projection of a story through the window of a stage, and typically the audience becomes immersed in the story as if it were real. This is deliberate. We willingly suspend asking questions about the parts of the story that are not projected: we do not worry, for instance, about unrepresented details of King Lear's life. However, if the theatre represented a real model, such questions would have answers. In computing, the power and technique of the theatre is recruited to demonstrations; there is a literature urging the exploitation of dramatic technique to enhance interactive systems (Laurel, 1991). It is very hard to watch a demonstration and enquire about the off-stage issues: it is as if one were breaking the cultural taboos of interacting with actors. It is therefore tempting to come away from a demonstration believing (or not knowing otherwise) that the trace was typical of the general behaviour of the program. (Theory and theatre have similar Greek roots: theory is about objects of study, and theatre presents objects to view or study (Knuth, 1996).)

There would be no problem except that we require systems to meet certain prior requirements, and for most systems (apart from games) these requirements are hard to meet. The people who design and build computing systems need certain skills.

The issue is how to eliminate spurious complexity (that is, the complexity that is the consequence of inadequate skill applied to the task of constructing objects of particular behaviour) when we are not disposed to see it, whether we are users or designers.

Brief examples of problems

Casio calculators

Calculators are an example of a mature technology. Basic calculators have well-defined requirements of accuracy, performance and so on. There have been many generations of calculator designs, and the manufacturers have had many opportunities, at each production step, to fix known problems. The only limitations on calculators are the manufacturers' imagination and skill. I want to devote some space to this example because so few people see any problem at all.

Casio is the leading manufacturer of handheld calculators. Two of their basic models are the SL-300LC and the MC-100.

Thus a market leader, Casio, makes two similar calculators that work in subtly different ways, and both proclaim features that are ironic. Memory should save paper and help the users do sums more reliably. Yet most users (especially those that need calculators) would need a scrap of paper to work out how to avoid using paper to write down the number!

Casio has been making calculators for a long time, and the two calculators are not "new" in any way. It is not obvious how Casio can justify either the differences or the curious features shared by both calculators. Neither comes with a user manual or other information that reveals any problems.

Any calculator, and the Casio ones in particular, can be demonstrated. They are impressive, especially if a salesman shows you them going through some typical (but unsophisticated) calculations. It is possible to demonstrate the memory in action, and only with some critical thought would one determine that it is a very weak feature.

Canon cameras

The Canon EOS500 is one of the most popular automatic SLR (single lens reflex) cameras, and is a more complex device, with more complex requirements, than a calculator.

In the Casio calculator examples, despite Casio's undisputed ability to make calculators, we might query their ability to design them. In the Canon camera example, we have more evidence. The EOS500 camera manual warns users that leaving the camera switched on is a problem. Canon evidently know that the lack of an automatic switch-off is a problem! There is an explicit warning in the manual on page 10:

"When the camera is not in use, please set the command dial to 'L'. When the camera is placed in a bag, this prevents the possibility of objects hitting the shutter button, continually activating the shutter and draining the battery."

So Canon knows about the problem, and they ask the user to switch the camera off -- rather than designing it so that it switches itself off. A cynic might suppose that Canon make money selling batteries or film; the next example is another case of Canon apparently trying to sell more film:

"If you remove a film part-way, the next film loaded will continue to rewind. To prevent this press the shutter button before loading a new film."

There are many other admissions of flaws. Thus Canon is aware of design problems, but somehow fails to improve.

The user manual for the EOS500N, which is a new version of the EOS500, puts the same problem thus:

"If the film is removed from the camera in midroll without being rewound and then a new roll of film is loaded, the new roll (film leader) will only be rewound into the cartridge. To prevent this, close the camera back and press the shutter button completely before loading a new roll of film."

It seems the manual writers have now discovered that, as well as pressing the shutter button, the camera back must be shut too (it would probably be open if you were changing a film). But it does not seem that the camera designers read the EOS500's manual themselves.

Java

Java is promoted as a programming language with a buzzword list of virtues. We will look at one problem: it is very easy to confuse the different behaviour of fields and methods. This is a point made in the book The Java Programming Language (Arnold & Gosling, 1998), written by some of Java's designers:

"You've already seen that method overriding enables you to extend existing code by reusing it with objects of expanded, specialized functionality not forseen by the inventor of the original code. But where fields are concerned, one is hard pressed to think of cases where hiding them is a useful feature."

"Hiding fields is allowed in Java because implementors of existing super-classes must be free to add new public or protected fields without breaking subclasses."

"Purists might well argue that classes should only have private data, but Java lets you decide on your style."

Purists may define all fields to be private, and provide accessor functions if the field values are needed outside a class body. Unfortunately, this safer programming style has efficiency implications, which is probably the reason Java is designed the way it is.
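The confusion is easy to demonstrate. In the sketch below (my own illustration; the class names are invented), a field access is resolved by the variable's declared type, whereas a method call is dispatched on the object's actual class -- so two superficially similar expressions behave quite differently. A purist version would make the field private and expose it only through an accessor method, which would then behave consistently.

  // Field hiding versus method overriding: fields are resolved by the static
  // (declared) type, methods by the dynamic (actual) type of the object.
  class Base {
      String label = "base";                    // field: hidden, not overridden
      String describe() { return "base"; }      // method: can be overridden
  }

  class Derived extends Base {
      String label = "derived";                 // hides Base.label
      @Override
      String describe() { return "derived"; }   // overrides Base.describe()
  }

  public class HidingDemo {
      public static void main(String[] args) {
          Base b = new Derived();
          System.out.println(b.label);          // prints "base"    (static type Base)
          System.out.println(b.describe());     // prints "derived" (dynamic type Derived)
      }
  }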

Like the Canon camera, we see the English description of a system admitting avoidable problems with the system.

Collusion

We have shown that commonplace systems are badly designed, and we have argued that bad design is a consequence of unmanageable complexity. Ideally, systems should be better engineered, but they aren't.

There are many reasons why we collude with bad system design, whether as consumers of attractive gadgets that promise to do wonderful things, as programmers who make a living from developing systems, or as academics who can make a living solving the problems. The reasons are deep and varied psycho-social mechanisms (e.g., Baudrillard, 1998; Postman, 1992).

Successful dreams

Thinking through the use of systems is almost impossible because of their intrinsic complexity, yet we do manage to dream about the use of complex systems ("scenarios" are a standard design technique: Carroll, 1995), and the dreams are mostly positive -- because it is hard to imagine a realistic scenario and, at the same time, the infinite ways in which it will fail! Designers tend to think of the ways in which their systems will be used; clearly they cannot possibly anticipate the many ways their design will fail, and they probably won't think of any of them, because it is hard enough just thinking about how the design should succeed.

Lottery effect: computers seem to be more successful than they are

Lottery winners are reported in the media, and we become familiar with success. But success is infrequent -- it is just sampled with bias by the media! In technologies that depend on media (e.g., the Web) it isn't possible to sample failures anyway. Companies that experience computer failures and hence go out of business no longer exist, and aren't going to tell anyone.

The review culture: feature ticking is sufficient

Systems are reviewed by journalists who don't have time to test systems thoroughly. Reviews tend to list "new features" rather than discuss issues of complexity or reliability (there isn't time to do that part of a review). The rest of the world, as readers of reviews, gets the regularly reinforced impression that this is how systems should be assessed, and we end up with feature-oriented discussions. The complexity of systems, their usability, and other systemic features get under-valued, and, of course, feature-ticking as a metric plays into the hands of programmers -- it is always easy to add a new feature if no regard is taken of its integration into the theory the system represents. Thus feature-ticking suits everyone, except when it comes down to actually using the explosively-featured systems!

Successful children -- dumb adults

Some companies use children to help them design their products, on the basis that children are more capable with complex technologies, and anyway they will be their future customers. Our culture assumes children are capable, and adults are "too old" to program their video recorders. Adults blame themselves for problems, and therefore buy upgrades and new products, rather than pressurising manufacturers to improve.

In my view, the reason children are successful when adults are not is not because adults are "too old" but because adults are mature and know how things (such as VCR clocks) should work. Most systems, however, are not designed sensibly and are therefore impossible to use if you think you know how they work. Children don't know how to use anything, so they play. Random play will find successful ways of using a system faster than mature expectations that are wrong!

This culture benefits manufacturers, and it benefits the prestige of technology adopters who acquire a youthful charisma.

Realism-reality gap: designers are under pressure to deliver because it is "so easy"

Realism is easy: a look at any arcade game will show the sophisticated realism that is possible. The media equation implies we tend to treat realism as reality; good design is easy to fake, especially when you can't assess the mechanism.

Most people therefore think programming is trivial. (Even if it is hard, the scale of production means the marginal cost of design is trivial.) So designers are put under pressure from marketing, management, and everyone else to deliver complex products faster than is consistent with doing a good design.

Oracle effect: experts under-estimate complexity

Experts (particularly programmers) know how complex systems should be used ("press the twiddle key when you do that!"), and often the reason why a user cannot operate a system is because they do not know some apparently trivial fact. The expert tells the user, and the user is impressed with the skill. The expert thinks the user is stupid, because the fact is trivial.

One way to use computers

Because oracles are so successful, there "must be" a right way to use computers. It is useful to have a word for designs that deliberately avoid this narrow-mindedness. A system is permissive if it permits itself to be successfully used in more than one way; one that is not permissive is restrictive. For example, to get my VCR from record-pause mode to record mode, I must press Play: yet both Pause and Record do nothing -- this is both odd and restrictive. (It probably comes about because programmers write straight-line imperative programs, rather than declarative programs.)
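The distinction can be made concrete by writing the interface as an explicit transition table rather than straight-line code. The sketch below is my own illustration (the states, buttons and behaviour are assumptions modelled loosely on the VCR anecdote, not a real product, and the tables cover only the record-pause state): the permissive table accepts any plausible button, the restrictive one accepts only Play.

  // Permissive versus restrictive behaviour from the record-pause state,
  // modelled as declarative transition tables (hypothetical device).
  import java.util.Map;

  public class VcrSketch {
      enum State { RECORD_PAUSE, RECORD }
      enum Button { PLAY, PAUSE, RECORD }

      // Restrictive design: only Play resumes recording.
      static final Map<Button, State> RESTRICTIVE =
          Map.of(Button.PLAY, State.RECORD);

      // Permissive design: any plausible button resumes recording.
      static final Map<Button, State> PERMISSIVE =
          Map.of(Button.PLAY, State.RECORD,
                 Button.PAUSE, State.RECORD,
                 Button.RECORD, State.RECORD);

      // Buttons with no entry do nothing -- the "odd and restrictive" case.
      static State press(Map<Button, State> design, State current, Button b) {
          return design.getOrDefault(b, current);
      }

      public static void main(String[] args) {
          System.out.println(press(RESTRICTIVE, State.RECORD_PAUSE, Button.RECORD)); // RECORD_PAUSE
          System.out.println(press(PERMISSIVE,  State.RECORD_PAUSE, Button.RECORD)); // RECORD
      }
  }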

Even human factors experts may assume there is one right design, and that users must know it. Nielsen (1993) describes a permissive system, yet users were classified as "erroneous" if they knew only one of the alternatives!

Confusing automation for computation: mindless efficiency

Computers can automate bureaucracy, and they can do it faster than by hand. This results in a mindless application of computers to solve problems by making inefficient activities faster, rather than more efficient. A specific example is the way calculators merely do what mechanical calculators do, rather than something new (Thimbleby, 1996, which provides both a critique and a solution); and, worse, on-screen calculators mimic handheld ones!

Inertia

Lawyers have ensured that there is no liability attached to shoddy design. One has only to look at the warranty that comes with any piece of software. The following warranty is interesting because it is made by a manufacturer of safety-critical systems that is "committed to meeting the highest applicable safety standards." They feel able to provide a good warranty for their hardware (some of their products have lifetime warranties), but they provide a very feeble warranty for their software:

"Each Fluke product is warranted to be free from defects ... Fluke warrants that software will operate substantially in accordance with its functional specifications for 90 days and that it has been properly recorded on non-defective media. Fluke does not warrant that the software will be error free or operate without interruption." Fluke (1997)

The Fluke warranty is extreme only in juxtaposing a conventional warranty with a typical software so-called "warranty."

There is no reason to improve (this is the tragedy of the commons: we all benefit by not improving), which leads to the "state of the art" defence in law.

Superficial usability

Because there is one way to use computers, and because programmers "don't need" to improve, and because of the media equation ..., a huge emphasis has come to be placed on appearances and post-design methods. New computers look attractive, but they still run the same old software. The disciplines involved in assessing the usability of systems have developed various non-technical approaches, and because of their effectiveness (in the face of apathy) they have gained ascendancy (cf. Landauer, 1995). User interface designers have seen their job as understanding the psychologically interesting human responses to bad technology, rather than avoiding the problems in the first place. To be positive, the value of "usability engineering" is that it is usually done by people who do not understand system design and therefore, like users, do not have access to oracles. See Rettig's (1992) paper "Interface design when you don't know how," which summarises the wisdom of conventional HCI and makes usability a purely non-technical activity. There is a more recent summary of a debate: "You need a psychologist to teach HCI correctly to a computer scientist" (Chesson, 1998).

It is arguable, of course, that HCI has defined itself to be a soft discipline; but, if so, it leaves a serious gap, and will find itself unable to influence system designers. If we want to improve computing beyond placebos and palliatives, we cannot look to usability experts for serious help -- see (Thimbleby, 1990 & 1998) for constructive suggestions on ways to help designers.

Usability is the user's problem!

Ralph Nader's classic book Unsafe at Any Speed (1965) shocked the 1960s car industry. He strongly criticised the industry for making intrinsically unsafe cars and for blaming drivers for accidents. "The driver has the accident and the driver is responsible," the manufacturers argued. Pedestrians "gently knocked" were killed, cut open by sharp body styling. Car manufacturers responded that in any collection of accident statistics one would be bound to get some gruesome cases: they denied that the inherent dangers of sharp fins were their fault, and anyway drivers wanted such grotesque styling!

In the sixties people were told to drive more safely (to be car literate, just as today users read X for Dummies to become computer literate), but the manufacturers said this to deny their responsibility for designing safer cars. Today people have problems with computers, and they are told to read the computer manual and make themselves computer literate. If a user does that, the problem for the manufacturer goes away. This approach feeds an industry in training and consultancy.

Nader showed that many designs were intrinsically unsound and could not be driven well even by highly skilled drivers. The onus -- not admitted in the 60s -- was on the designers to make cars that were easy and safe to drive. It required force to make this change in perception, led by consumer pressure, as well as legislative and professional standards.

Good design as engaged explanation

Somehow there is a gap, and it needs bridging. Good user manuals seem to be conscious of usability problems, but the manuals are somehow not engaged in the design process -- rather, they are commentaries on it, written by powerless authors too late in the product design cycle. How can effective consciousness be brought about? One way would be to treat the warnings in manuals as indicators of needed improvements.

User manuals are often scapegoats for bad things. They are indeed often unintelligible, and thereby contribute to the confusion and difficulties users have. But it is not possible to write good manuals for bad systems. However, manual writers do "stand back" from mere manual-writing and provide users with useful advice about how to cope with problems in a thing's design. A user manual is a partial program for the user to execute to run their side of the user interface; we ought to use all the tools of computing to make user manuals better (e.g., declarative, if we think declarative programming is good).

It is self-evident, and borne out by experiment (Carroll, 1990), that short manuals are better than long manuals. Combining this idea with the previous one gives a design approach to making better things:

  1. Construct the initial user manual. This step should be automated.
  2. Find problems. Clearly, good technical authors are able to do this. It is likely that the act of explaining clearly how to use a system helps uncover problems with it. Some aspect of a design that cannot be explained briefly and clearly is likely to be hard to understand.
  3. Fix the design: the user manual, along with its warnings, lengthy explanations and invocations of oracles, is a direct indicator of the design areas that need attention.
  4. Fix the manual, having fixed the specification. (This step should be automatic if step 1 is automatic.)
  5. And repeat, while each step improves the design and the product. Many manufacturers have the luxury of producing a range of products, and of updating them regularly. In such cases, one might manufacture a design before the improvement cycle is complete, leaving further improvements for future products. Thus, the method not only improves design, but gives marketing a method for continually enticing consumers. It ought to be easy to justify!

It is important that this process is semi-automated; it assumes that repetition is rapid. In the conventional approach, a system is specified, then fabricated, then its manual is written, then it is used (and possibly tested). Only at the end of the cycle is there usage data to feed back into a revised specification -- but by then it is too late: the product is built.

Since a good user manual can be written (or partly written: see Thimbleby & Ladkin, 1995) by automatic tools, there is little delay in this cycle; it could be fully concurrent. If manuals have to be written by people without help from the formal specifications (eek!), then at least in manufacturing, last year's manuals can be fed into this year's products.
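To show roughly what "automatic" means here, the sketch below is my own illustration: the device, its states and the manual wording are invented, and real tools (such as those in Thimbleby & Ladkin, 1995) are far richer. It generates a crude manual directly from a transition-table specification; transitions that do nothing surface as explicit warnings, which is exactly the kind of indicator step 3 of the method feeds back into the design.

  // Generate a rudimentary user manual from a finite-state specification.
  // The device (states, buttons, transitions) is hypothetical.
  public class ManualGenerator {
      public static void main(String[] args) {
          String[] states  = { "Off", "Standby", "Recording" };
          String[] buttons = { "Power", "Record" };
          // next[s][b] = state reached from states[s] when buttons[b] is pressed.
          String[][] next = {
              { "Standby", "Off" },        // from Off, Record has no effect
              { "Off", "Recording" },
              { "Off", "Standby" },
          };

          StringBuilder manual = new StringBuilder("USER MANUAL (generated)\n");
          for (int s = 0; s < states.length; s++)
              for (int b = 0; b < buttons.length; b++) {
                  if (next[s][b].equals(states[s]))
                      // Null transitions become explicit warnings: candidates for redesign.
                      manual.append("WARNING: in ").append(states[s]).append(", pressing ")
                            .append(buttons[b]).append(" has no effect.\n");
                  else
                      manual.append("In ").append(states[s]).append(", press ")
                            .append(buttons[b]).append(" to reach ").append(next[s][b]).append(".\n");
              }
          System.out.print(manual);
          // The number of WARNING lines, and the manual's overall length, are crude
          // measures of the spurious complexity left in the design.
      }
  }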

To the extent that this is a good method, then systems should be designed so that user manuals can more easily be generated from them or their specifications (cf. literate programming: Thimbleby, 1990). While at it, we can also generate other sorts of "manual" (paper, interactive, diagnostic, and so on) with little additional effort.

It is easy to write manuals that are vague, inexact and misleading. To be effective, manuals need to be complete and sound. Perhaps there could be internal documents that are used in the design process, and actual user manuals that are derived from the internal manuals, made more readable for users.

More generally, for "manual" substitute any view. The formal specification of a design (whether as a logical formula or a circuit diagram or computer program) is "just" another way of explaining the design -- but to a different sort of user (a mathematician, an electronic engineer, a programmer). These "manuals" can give the "technical author" opportunities to explain and help the "user." Different sorts of design problems will be brought to consciousness, and fixes will be suggested. Thimbleby and Ladkin (1997) use a logic specification of an Airbus subsystem to show that quite complex system manuals can be improved (and that minimisation algorithms can be used to reduce their size).
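As a toy version of that size reduction (my own sketch, much simpler than proper minimisation algorithms, and using an invented device): states whose transition rows are identical can share a single manual entry, and finding them both shortens the generated manual and hints that the corresponding modes could be unified in the design.

  // Group states with identical transition rows so one manual entry covers them all.
  import java.util.ArrayList;
  import java.util.LinkedHashMap;
  import java.util.List;
  import java.util.Map;

  public class RedundantStates {
      public static void main(String[] args) {
          // Specification: state name -> (button -> next state). Hypothetical device.
          Map<String, Map<String, String>> spec = new LinkedHashMap<>();
          spec.put("Idle",     Map.of("Menu", "ClockSet", "Timer", "TimerSet"));
          spec.put("ClockSet", Map.of("Ok", "Idle", "Cancel", "Idle"));
          spec.put("TimerSet", Map.of("Ok", "Idle", "Cancel", "Idle")); // same row as ClockSet

          // States that behave identically (same transition row) share a manual entry.
          Map<Map<String, String>, List<String>> groups = new LinkedHashMap<>();
          spec.forEach((state, row) ->
              groups.computeIfAbsent(row, r -> new ArrayList<>()).add(state));

          groups.forEach((row, sharingStates) ->
              System.out.println("One entry covers " + sharingStates + ": " + row));
          // Two entries instead of three: the manual shrinks, and the duplication
          // itself suggests the two set-up modes could be unified.
      }
  }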

Justice

What do we mean by designing better things? What is good anyway, what is this goal of getting better? These are questions of ethics (or moral philosophy), the study of what is right. Ethics has a long history, going back to Aristotle (384-322BC) and earlier.

Aristotle defines justice as the act of giving a person good. This is what designers who strive to design better things do. They design "good" which is embedded in the things they design. This good is then passed on to the users of the things. To do good design, then, is to be engaged in acts of justice.

There are different sorts of justice. A user of a gadget is typically unable to negotiate over details of the design: in a sense, the designer has authority over the user, at least in so far as the product constrains the user. Justice as an act of authority is the maintenance of rights: the users of gadgets have rights, and just design is to maintain those rights. And there is contributive justice, which is the obligation to enable individuals to achieve good. In contributive justice, the designer contributes to the users' ability to make good use of the gadgets. Clearly, good manual writers contribute to a just world.

Designing systems, as a matter of clear thinking, and certainly as a matter of formal mathematical activity, involves truth. But we have seen that 'truth' is insufficient to design well; devices, and how we understand them, are part of social institutions. Justice is to social institutions as truth is to rational thinking.

If design is justice, can we make use of this fact? A few thousand years of philosophising on justice has had little effect on the world. John Rawls wrote the classic book A Theory of Justice (1972), where he promoted the idea of justice as fairness. Rawls defined justice as a system of rules that would be designed by people under a "veil of ignorance" of whether and to what extent those rules applied to themselves. By this he meant the designers do not know how they might be affected, so they will build a world that treats them fairly. For example, one might imagine that the planners of a just system are as-yet unborn. They might be brought into the world at any age; they do not know whether they will be rich or poor, black or white, handicapped or athletic, male or female, blue-eyed or green. Under this veil of ignorance they would be foolish to behave other than as fairly as they possibly could. They might be brought into the world too old to operate a video recorder! The scope of the fairness applies to the designers themselves as well as to the users. (Rawls makes it clear that the idealised [just system] designers have no information of their future state, but they do have the knowledge and skills necessary to design well.)

(There are difficulties with taking Rawls too seriously. There are duties of just action to non-contracting parties, such as to the environment. How we design things to take their responsible place in a larger ecosystem beyond other users, say to be recyclable, is beyond the scope of this paper -- but that is not to imply such issues are optional; see Borenstein (1998).)

Do designers of things act justly by the Rawls definition? Mostly not. They design things they know they will not use, and even if they did use them, they would have oracular knowledge. Designers are never behind a veil of ignorance. Many programmers build systems that they have no intention of using. If, instead, they worked under the Rawls veil of ignorance, they might try harder -- in case they ended up being a user of their system. If they were programming a tax program, they might end up "born as" accountants, tax-payers, civil servants designing tax law, tax evaders, auditors, managers, their own colleagues having to maintain the system at a later date, or even the manual writers -- they would have to design their tax program carefully and well from all points of view, including making manual writing easy (which gains the advantages described above). They might prefer to contain complexity rather than risk it becoming unmanageable.

This idea is anyway enshrined in conventional good practice: "know the user" (cf. Thimbleby, 1990; Landauer, 1995). Rather than merely "know" one's way into all the other possible roles, one might more easily, and more reliably, do some experiments and surveys with other people (though doing this requires the product, or perhaps an earlier version of it, to exist). It is pleasing that accepted design practice is also just (who wants to be called unjust?).

To summarise: good design is engagement with justice, and we have seen two ways to do this. First, to stand back and be conscious of the ways in which others (users) will operate the product -- use concurrent engineering with user manuals; secondly, to put oneself into the many different roles of usage. A consequence is that designing systems to support easier manual generation becomes a higher priority, and this in turn helps improve the systems themselves.

Design by accident?

Aristotle claims justice is the only virtue that can be achieved by accident. You can't have integrity (another virtue) by accident: integrity has to be intentional. Someone who claims to have integrity but does not is faking, and has no integrity. But acts of justice do not depend on the judge: they are outcomes, and they are just or unjust to the extent that they affect others fairly. The point for designing better things is that some designs may be good by accident.

If we are optimistic, the market helps ensure (but unfortunately does not guarantee) that good design thrives, and conversely, poor design will get less market share in the face of better competition. The market is a force of "natural selection." Designers are the evolutionary equivalent of mutagens -- they create mutations: they produce new designs and new variations. By Aristotle's argument, in design we can have a successful blind watch maker: some even random designs will succeed. That some blind watch makers may be successful by chance is no reason to copy them. If we want to design deliberately, we need a commitment to justice in design. This cannot be done by accident -- even if we are optimistic about market forces.

If we are not so optimistic, the market does not help at all! There are many examples of "superior" technical systems failing because of arbitrary economic and social forces (consider VHS and Betamax). A recent example is the growth of incompatible proprietary versions of HTML in Web browsers, despite a standard for HTML itself. By being incompatible, companies can attempt to make their competitors' software less useful to their own clients.

Conclusions

The argument of this paper is that computing systems are so complex that they are really a different kind of thing that requires a different kind of thinking. In particular, they are so complex that we are no longer able to assess them for quality, and so we take them as objects for uncritical consumption. Our entire culture is taken up in this game: it suits almost everyone in different ways -- manufacturers make lots of money (selling systems to fix problems that should not have been there), book publishers sell "dummies" books, marketing people have lots to advertise, and we all seem to swallow it whole. Indeed, it is fun to have a fancy device!

If we are designing systems, we are caught up in the culture, and design over-complex systems that we are over-proud of. This paper suggested an approach to help design better (concurrent and iterative manual design): a method that can be used to help direct the design so that automatic user manual generation is easier. To try to escape from the cultural forces is not easy, but it may help to see that the effort is engagement in justice, and therefore a noble cause.

Acknowledgements

Peter Ladkin made very useful comments.

References

Aristotle, Nicomachean Ethics, in Great Books of the Western World, 8, Encyclopedia Britannica, 2nd ed., 1990.

K. Arnold & J. Gosling, The Java Programming Language, Addison-Wesley, 2nd. ed., 1998.

J. Baudrillard, The Consumer Society, SAGE Publications, 1998.

N. S. Borenstein, "Whose Net is it Anyway?" Communications of the ACM, 41(4), p19, 1998.

Canon Inc., EOS500/500QD Instructions, part no. CT1-1102-006, 1993.

J. M. Carroll, The Nurnberg Funnel, MIT Press, 1990.

J. M. Carroll, ed., Scenario-Based Design, John Wiley, 1995.

P. Chesson, "You Need a Psychologist to Teach HCI Correctly to a Computer Scientist," ACM SIGCHI Bulletin, 30(1), p36, 1998.

Fluke Corporation, Fluke 1997 Test Tools Catalog, H0002EEN Rev. V 97/03, 1997 (see also http://www.fluke.com).

D. E. Knuth, Selected Papers on Computer Science, p143, Cambridge University Press, 1996.

T. Landauer, The Trouble with Computers, MIT Press, 1995.

B. Laurel, Computers as Theatre, Addison-Wesley, 1991.

R. Nader, Unsafe at Any Speed, Pocket Books, 1965.

J. Nielsen, Usability Engineering, Academic Press, 1993, p61.

N. Postman, Technopoly, Vintage, 1992.

J. Rawls, A Theory of Justice, Oxford University Press, 1972.

B. Reeves & C. Nass, The Media Equation, Cambridge University Press, 1996.

M. Rettig, "Interface Design When You Don't Know How," Communications of the ACM, 35(1), pp29-34, 1992.

H. W. Thimbleby, User Interface Design, Addison-Wesley, 1990.

H. W. Thimbleby, "A New Calculator and Why it is Necessary," Computer Journal, 38(6), pp418-433, 1996.

H. W. Thimbleby, "Design Aloud: A Designer-Centred Design (DCD) Method," HCI Letters, 1(1), pp45-50, 1998.

H. W. Thimbleby & P. B. Ladkin, "A Proper Explanation When You Need One," in M. A. R. Kirby, A. J. Dix & J. E. Finlay (eds), BCS Conference HCI95, People and Computers, X, pp107-118, Cambridge University Press, 1995.

H. W. Thimbleby & P. B. Ladkin, "From Logic to Manuals Again," IEE Proceedings Software Engineering, 144(3), pp185-192, 1997.

"Engineering is the art of moulding materials we do not wholly understand ... in such a way that the community at large has no reason to suspect the extent of our ignorance." A. R. Dykes.