In Class Group Brainstorming

From CS 160 User Interfaces Sp10

  • While this is a dense reading, it is a seminal paper in HCI. Figure 6 is especially important. It is worth thinking about semantic and articulatory distances and how they differ for the gulf of execution versus the gulf of evaluation.

Matt Vaznaian - 1/28/2010 20:59:09

I don't think I've ever read such a detailed paper on the interaction between the user and the interface before. One idea I found interesting was that interfaces that seem natural to use sometimes feel that way only because users have trained themselves to understand the input language. I do agree that what a user thinks in their head should be easy to convey to the interface to produce the desired output. I also think the idea of responsive output is important for the user; they want to see immediate results that they can easily interpret.

Charlie Hsu - 1/29/2010 22:41:49

I found the four concepts defined in the reading (semantic/articulatory distance and the gulfs of execution/evaluation) very interesting. The gulfs of execution and evaluation seem to me like measures of intuition for input and output. I had a little more trouble discerning the difference between semantic and articulatory distance, but as I understand it, semantic distance deals with the ease and ability with which a user can express himself, and articulatory distance deals with more physical things and the relationship of expressions to their physical form (more I/O device-driven). I'm still having a little trouble defining articulatory distance.

Furthermore, the paper highlighted the continuous presence of tradeoffs in user interface design. Direct manipulation is certainly a goal for some interfaces, such as maps, but is detrimental for a program that needs to do something repeatedly. Direct manipulation interfaces often cost more engineering time than simpler interfaces. High level languages, while shrinking the gulf of execution and lessening semantic distance, are often less optimized for performance than low level languages. It is interesting to think about UI design tradeoffs with these new definitions.

Jonathan Hirschberg - 1/30/2010 15:18:06

So if people get used to unusual systems, they'll be able to think in those systems' terms, and the systems will seem direct to them. Now I know that easiness is not necessarily the same as directness. A system can be difficult to use but direct, because it may just reflect the difficulty of the thing being manipulated. Or a system can be easy but indirect, offering easy tools that don't allow you to do much. Sure, the former category may be harder to learn, but once you know how to use it, it's direct. Even if it's more convenient for the system to express things in its own way, and even if that's harder to learn, at least the user gets to deal directly with the elements being manipulated in the system. The task becomes studying the complexities of the thing being manipulated rather than learning how to use the program.

Long Do - 1/30/2010 18:06:53

The ability to have a touch interface has greatly decreased the articulatory distance: being able to touch something on the screen, move it, and see the result instantaneously shows the user the effects of his action and how he would undo it (presumably by just reversing his actions). The downside is that the semantic distance has increased, and with it the gulf of execution. Touch and multi-touch capabilities open the user up to many other ways to move and interact with the objects he is presented with, but he does not know whether an action is valid, or even what the result might be, until he tries it out. When I first used an iPod Touch, I did not know how to rearrange the icons or delete them until I stumbled upon it, nor did I know that I was creating a new page when I moved an app toward the edge. This hurdle is small, though, and once taught, it quickly becomes very intuitive.

Daniel Ritchie - 1/30/2010 18:30:14

Near the conclusion of "Direct Manipulation Interfaces," the authors bring up the most fundamental problem with developing only direct manipulation interfaces: missing out on the creation of "new ways to think of and to interact with a domain." If early interface designers had restricted their attention to (the admittedly more intuitive) pen-like technologies when considering the text input problem, we wouldn't have keyboards (the arguably more efficient interface) today. In this case--and many like it, I think--the "new way" of interacting with the problem domain was born of necessity: personal computers at their inception simply couldn't process handwritten text input, but they had no problem dealing with discrete key presses. Thus a highly effective interface was born essentially by accident.

This leads me to wonder: is there a more principled way to experiment with new, unfamiliar modes of interaction in order to find useful ones? We've probably all played the odd Flash game that introduces a bizarre new control mechanism "to be different" or "because it looks cool," and these experiments often fail spectacularly. Is it possible--as Hutchins, Hollan, and Norman have done for direct manipulation interfaces--to develop a more universal theory of good user interfaces that applies even to those in the IN-direct category? Can we establish a framework for "experimental interface design" that permits a more enlightened exploration of the vast space of human/computer interaction modes? Might there exist "optimal interfaces" for certain domains, and if so, how can we find them?

Jason Wu - 1/30/2010 23:23:25

The authors' example of a direct manipulation interface in Figure 1 reminds me of a programming environment called LabView that I used for EE20N. LabView makes use of a graphical programming language with icons that look very much like the ones in Figure 1, so a programmer can use data stored in tables as input to function icons that manipulate the data. The semantic distance was fairly low in both the gulf of execution and the gulf of evaluation since the graphical commands matched my thoughts and goals, and the graphical output was intuitive to interpret, so I found it quite easy to learn the language. However, I did notice a loss of generality from text-based programming languages, since I found myself resorting to inserting blocks of written code to accomplish some more complex tasks in LabView.

Spencer Fang - 1/31/2010 14:52:33

An IDE such as Eclipse would be a good example of a direct manipulation UI because it allows the programmer to think on a high level, interacting with the code as if it were composed of abstract objects. If the function "foo" is called, the programmer can do things such as hover the cursor over "foo" to see its return type, or click on "foo" to go to its function definition. This is a much more direct interface compared to editors such as vi or emacs, which force the programmer to think of code as plain text. If a programmer sees an invocation of "foo", he must search through the code for the string "foo" and manually look through the hits to separate the function calls from the function definition. But at the same time, a programmer who is familiar with thinking of code as plain text will have an easy time writing shell scripts to accomplish complex and unusual tasks. A programmer who is only familiar with the IDE's interface language of menus and buttons might not know how to express such a task in a way that the IDE can understand.

An IDE would have a small gulf of execution because it frequently understands the programmer's code and can give warnings about type errors, syntax errors, and so on; the programmer's intentions are understood by the IDE. The gulf of evaluation in an IDE is also smaller than that of a simple text editor because the programmer can be alerted to errors as he is programming, instead of finding them at compile time. In some situations, such as building graphical user interfaces, an IDE can even provide a drag-and-drop WYSIWYG interface. This is a much more direct interface than laying out GUI elements via code.
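The "code as plain text" view described above can be sketched in a couple of shell commands (the file and function names here are invented for illustration): grep returns the definition and the call sites of "foo" mixed together, and separating them is exactly the manual work that an IDE's jump-to-definition performs automatically.

```shell
# Create a tiny C file so the sketch is self-contained (contents invented):
cat > demo.c <<'EOF'
int foo(int x) { return x + 1; }
int bar(void) { return foo(2) + foo(3); }
EOF

# grep sees only strings: the definition line and the call-site line
# both come back, undifferentiated.
grep -n "foo" demo.c

# A crude filter for the definition (matching a return type at line start),
# i.e. the manual separation an IDE does for you:
grep -n "^int foo(" demo.c
```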

Tomomasa Terazaki - 1/31/2010 16:04:14

This reading was about direct manipulation interfaces, in which a person operates on a graphical representation of the objects of interest rather than typing commands about them. In the beginning this idea was not used by many people. I believe it was an idea so revolutionary that it took a while for people to get used to it. One of the most important concepts of this chapter was the two distances, semantic and articulatory. This quote explains the two very well: “Semantic distance reflects the relationship between the user intentions and the meaning of expressions in the interface languages both for input and output. Articulatory distance reflects the relationship between the physical form of an expression in the interaction language and its meaning, again, both for input and output.” One of the advantages of direct manipulation is that “the immediacy of feedback and the natural translation of intentions to actions make some tasks easy.” However, there are some problems with it too. For example, it has problems with accuracy, because “the notion of mimetic action puts the responsibility on the user to control actions with precision,” and some actions are easier to specify symbolically.

Calvin Lin - 1/31/2010 17:05:13

As the paper discussed the various benefits and tradeoffs of minimizing the semantic and articulatory distances, I got a sense of two different kinds of interfaces. Interfaces that companies design for the average consumer are the ones where designers strive for minimal distance and direct engagement. There are so many different products out there, and often what makes or breaks a product is whether consumers find it appealing and easy to use. It's common for users to take a quick glance at an interface, and if it isn't obvious how to use it, people will dismiss it and move on to another product. But with interfaces for designers and engineers themselves (such as programming languages), you find low-level, close-to-the-bottom interfaces. In EE20 we used LabVIEW, which was a graphical/iconic approach to programming. It was fun and more user-friendly, but I agree with the paper that low-level, non-task/goal-oriented designs help engineers avoid being narrowed in scope of thinking when brainstorming new ideas. Consumer interfaces intentionally direct a user to perform certain tasks. But with design and productivity tools we need flexibility, and thus it could be counterproductive to push an interface in a certain direction.

Mohsen Rezaei - 1/31/2010 18:32:36

So the gulf of execution is the gap between the user's goals and the actions the interface provides: it is small when the system offers the things that match users' thoughts and goals, the things a user would want to see in an interface design. The gulf of evaluation, in turn, is the gap on the output side: it is small when the output makes it easy for the user to interpret the state of the model/system. Bridging these two gulfs gives us a system that satisfies users' needs. Both the gulf of execution and the gulf of evaluation relate to the "goals" component of a system.

Jungmin Yun - 1/31/2010 18:36:41

I think the most important thing we have to take from this reading is interface languages. The paper describes two properties of interface languages, semantic distance and articulatory distance, because meaning and form are relatively independent. Semantic distance is the relationship between the user's intentions and the meaning of expressions in the interface languages, both for input and output. Articulatory distance is the relationship between the physical form of an expression in the interaction language and its meaning, both for input and output. At this point, we need to know how semantic and articulatory distances differ for the gulf of execution versus the gulf of evaluation. The semantic distance in the gulf of execution reflects how much of the required structure must be supplied by the user: the more the user must supply, the greater the distance to be bridged. On the other hand, the semantic distance in the gulf of evaluation reflects the amount of processing structure needed for the user to check whether or not the goal has been achieved. Articulatory distance in the gulf of execution concerns action specification: forming an input expression that has the desired meaning, which the user then executes on the machine interface. Articulatory distance in the gulf of evaluation concerns interpretation: determining the meaning of the output expression from its form as perceived by the user. After I read this once, I was a little confused about the differences between these concepts. The reading is pretty helpful, though, because we can understand the relationship between users' intentions, the meaning of expressions, and their physical form, and we can think about how these concepts differ in execution and in evaluation.

Alexander Sydell - 1/31/2010 21:50:15

It's clear that the principles discussed in this article on direct manipulation interfaces are very important, as they've been adopted on a much greater scale today than at the time the article was written. We now have operating systems and programs that use concepts such as windows and folders to bridge the articulatory distance between users and computers. There are also new types of input devices, such as touch screens and microphones with speech recognition, that help bridge the semantic distance. From today's point of view, some of the concepts discussed even seem obvious. Programs today usually bring the computer closer to the user across both the gulf of execution and the gulf of evaluation. Designers strive for this so that all users can use their applications, instead of only those with enough technical knowledge to express their problems to a computer.

Victoria Chiu - 1/31/2010 22:12:18

There are two aspects of how direct an interface feels to its users. The first is the distance the user feels, and the second is engagement, which represents how much users feel they are manipulating the system itself rather than an interface to it. Distance can be divided into semantic and articulatory distance using the conversation model. The gulf of execution represents the gap between the real system/commands and the user's intention, or in other words, how well the commands represent what the users might want to do. And the gulf of evaluation is the gap between the output displayed to the users and the users' conceptual model of the system.

bobbylee - 1/31/2010 22:25:17

This reading is all about direct engagement and how the semantic distance and the articulatory distance span the gulfs of evaluation and execution between the user and the interface language. To me, all of this is a bit abstract. But the violin and piano example at least explains the semantic distance between users' intentions and the interface really well. And I learned one important concept from the piano and violin example: the semantic directness of an interface depends on the nature of the task.

Eric Fung - 1/31/2010 22:29:32

This week's reading tries to capture why direct manipulation succeeds as an interface style. It makes sense that direct manipulation works as the interface for many systems, including the given example of the spreadsheet program, or perhaps an operating system. Visualizing the objects you change gives the immediate feedback needed to close both the semantic and articulatory distances in evaluation and execution. That is why the touch-based operating systems seen in Minority Report and Iron Man are so exciting: they mimic real-life actions, while computers seamlessly duplicate reality.

Richard Lan - 1/31/2010 22:36:52

Based on this paper, the main point of developing direct manipulation interfaces is to reduce the amount of cognitive effort that the user must invest in using the interface. In fact, the authors claim that the best direct manipulation interfaces will not seem like interfaces at all because the users are able to directly engage with the model-world in intuitive ways. Such interfaces represent real-world objects in ways that are familiar to users. This is an idea that relates to the practice of including interface elements that users expect, even if they do not directly relate to the system's main functionality. Semantic distances have to do with the gap between a user's intentions and the meanings of the expressions they use to interact with the computer. Articulatory distance is related to the meaning of an expression and the physical form it assumes, such as a table or an icon. Both concepts are important when considering the design because they represent the amount of difficulty users will have in figuring out how to use the interface to accomplish their goals and how easily they will be able to interpret the output displays.

Wei Wu - 1/31/2010 23:29:24

The article's discussion of "semantic distance" reminded me a lot of the 61C class, in which students deal with both extremes of semantic distance through the levels of computer architecture that they explore. At the "farthest" distance, we had to translate the instructions we wanted a processor to execute into actual machine code represented in binary, and vice versa. This was what all computer scientists did decades ago with punch cards in order to get computers to execute processes. At the "closest" distance, we used Logisim in which we could drag and drop gates and wire them together to create logic sequences. This allowed us to visually experience the workings of a processor at a very high level.

Computers, of course, have come a long way from the days of "far" semantic distance and punch cards, and the majority of computers today involve direct-manipulation interfaces that imitate our activities as we would do them in real life. However, it is interesting to note that there are users who still prefer the command-line way of doing things, perhaps because, as the article points out, some things are more easily done when human-computer interaction is modeled on a conversation metaphor. Although GUIs are supposedly making things easier and more accessible to a wider range of users, I wonder if they will ever experience a backlash.

Annette Trujillo - 1/31/2010 23:29:51

This reading relates to our previous reading, "The Psychology of Everyday Things": for example, when Norman talks about the washing machine being too difficult for its owners to use, there is a huge gulf between that interface and its users. The interface should have been made more direct with respect to the users' needs. Because it wasn't, the gulf of execution between the user and the interface is that much wider.

Raymond Lee - 1/31/2010 23:41:41

The tradeoff between specialized and generalized systems discussed in this paper is quite evident in comparing the complexity of C vs. Objective-C.

Nearly all of C was effectively summarized in the hundred or so pages of K&R, but Objective-C, in contrast, has tons of documentation as required reading in order to be a decent iPhone programmer.

On the flip side, it's very hard for me to imagine creating something like the Hello World App in vanilla C in just a few days.

Jessica Cen - 1/31/2010 23:55:26

At one point the paper tells us that direct manipulation interfaces seem remarkably powerful, and now some of our computers can interact with the user through a touch screen. Touch screens are powerful because a single device serves as both the input and the output, and it's very intuitive for the user. Moreover, I believe that the next step in user interface technology will be the development of something more three-dimensional. The output would be similar to a hologram, and the input would be manipulated not only by the fingers, as with a touch screen, but by the whole hand. This kind of interface is certainly direct manipulation and should be straightforward and intuitive for anyone. The need for holding and molding, and not only touching, will certainly speed the development of three-dimensional input/output.

Richard Mar - 2/1/2010 0:08:38

The concepts of semantic and articulatory distance allow for a different take on the never-ending argument over the best input devices for playing first-person shooters. Mouse and keyboard supporters trumpet the mouse's superior accuracy and the keyboard's immense flexibility, whereas game controller proponents praise its compactness and easy-to-reach controls. A joystick is fairly direct: move the joystick and the camera pans accordingly. A mouse, however, decreases the articulatory distance; moving the mouse affects the player's camera motion in proportion to the distance moved. Although the use of a joystick for accurate targeting can be practiced over time, using a mouse to aim is much easier to pick up, as it mimics the point-and-click operations of normal computer use.

Daniel Nguyen - 2/1/2010 0:12:36

A good point the reading makes is the difference between the information provided and the means by which it is provided (the difference between the semantic and articulatory distances in the gulf of evaluation). I feel it is very easy to differentiate the two in terms of the gulf of evaluation, but that the two distances go hand in hand in the gulf of execution. For example, dragging and dropping a file from one folder to another accomplishes the same task as using the command prompt to copy a file over, but is much more intuitive and visually rewarding. Visual representations of actions and data are important to a good interface, and recent technologies are allowing these representations to bridge the articulatory distance better without sacrificing the bridging of the semantic distance.
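As a minimal sketch of the command-prompt side of that comparison (directory and file names are invented for the example), the same "move this file there" intention must be spelled out in command syntax rather than performed as a gesture:

```shell
# Set up two folders and a file (names invented for the sketch):
mkdir -p src_dir dest_dir
echo "quarterly numbers" > src_dir/report.txt

# The conversational counterpart of dragging the file's icon between folders:
cp src_dir/report.txt dest_dir/

# There is no visual confirmation; the user must ask for one explicitly:
ls dest_dir
```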

Aneesh Goel - 2/1/2010 1:03:29

While the reading is interesting, it doesn't seem to give enough credit to the use of the command line and text-based programming, outside of a single reference to there being advantages and disadvantages to any form of interaction. At the time, Kay's article on visual programming methods making traditional coding obsolete may have been persuasive, but 25 years later hackers who weren't born at the time still favor text editors and text-based IDEs, and the command line is nowhere near dying out; the flexibility and degree of control it offers shouldn't be underestimated. Having used LabVIEW and the Lego Mindstorms graphical programming environment, I agree that there are some tasks that drag-and-drop wiring programming is much more convenient for; that said, for most complex data manipulation in LabVIEW, EE20 labs all recommended adding a node that allowed for text-input of mathscript because actually setting up and wiring all the terminals was both slower and more confusing in the end. Also, having worked with CAD software like Autodesk Inventor, I can't imagine trying to do the same thing with a traditional programming style; however, there were plenty of times when I wished I could just edit a data file in plaintext rather than spend the time trying to rotate and zoom to just the right position to be able to select the specific point on a working plane that I could barely see around the other components near it. Obviously, for the bulk of users and uses, a clean GUI with direct manipulation might be better, but suggesting that it would make text-based interactions obsolete seems very overzealous.

Linsey Hansen - 2/1/2010 1:20:18

So, I really liked the part that discussed finding a balance between making an interface too general and making it too specialized, because while it sounds like common sense, I have definitely experienced quite a few interfaces that considered one extreme but not the other. I also thought it was interesting the way it described the feeling of directness in an interface that is actually intuitive to use versus one that is not initially intuitive but still feels direct because the person has gotten so used to it. Perhaps a way of combining these two things would be to do something weird, like have different modes in a single interface (i.e. beginner, intermediate, expert), where it could start out as a really simple, intuitive layout (since some UIs are intuitive initially, but all of the options clutter them), perhaps lacking some bells and whistles, and eventually be built up as the user becomes more familiar with the system. Though I suppose most people want all of the functionality from their program as soon as it comes out of the box...

David Zeng - 2/1/2010 1:54:59

The reading made me think a lot about different operating systems and how their interfaces work. As a TA for CS3, I often see new students who have only used Windows/Macs, but never Unix. The article points out that directness is important for people learning a system; I think this is very important, as people often struggle to learn the commands for Unix. At the same time, familiarity with the system allows me to feel that it is more direct than others. The evolution of text-based video games into the games of today, with their crazy 3D graphics, is another really good example of constant improvements to an interface.

Richard Heng - 2/1/2010 2:37:51

The idea of designing user interfaces to minimize mental exertion and bridge the gulfs really clarifies the general goals of this class for me. I also found it quite interesting that the authors decry relying on automated behavior. It is possible that a system could be used more efficiently with automated behavior. For instance, an experienced vi user can probably manipulate text much more efficiently than an experienced user of a word processor and mouse. While the word processor and mouse might offer a better interface for dealing with the text more directly, they do not provide a fine enough granularity of control to achieve the same speed.

Yu Li - 2/1/2010 2:37:57

Direct manipulation is a really useful form of interface design. It helps users of programs feel as if they are working directly with the objects of interest, and it makes accomplishing their goals much easier and more intuitive. I think it's interesting that there is a very apparent tradeoff between general and specific-purpose programs. Specific programs are very narrow in their functions, while general programs have so many functions that the program becomes clumsy and hard to use.

Furthermore, there are two aspects of directness: distance and direct engagement. The gulf of execution and the gulf of evaluation are aspects of distance. The gulf of execution relates to how well the user's goals can be accomplished by the physical system, while the gulf of evaluation relates to how well the physical system produces an understandable output for the user.

Owen Lin - 2/1/2010 3:01:07

The thing I found most interesting is that this article's idea of "direct manipulation" seems to exemplify Apple's approach to user interfaces. Apple definitely stresses the ability to directly modify the representations of objects, as if they were the objects themselves, with its touch screen technology. The iPad, especially, is the prime example of this. As discussed in lecture, the touch screens used in the iPhone and the iPad are designed for fingers. This makes it more natural to interact with these devices, but the tradeoff is a lack of precision. Fingers are pretty fat compared to a pen or a stylus, and it shows when you compare the resolution of a tablet PC with that of the iPad (which is only 1024 x 768). There does seem to be a tradeoff with direct manipulation, and perhaps the reason programming languages are still mainly text rather than drawn on graphical interfaces is that the power and flexibility of a programming language cannot be captured in a GUI.

Hugh Oh - 2/1/2010 3:53:51

The basic summary I got from this article is how the user and the designer are connected through the relationship of semantic and articulatory distance. The article goes over how the user can accomplish his or her tasks and describes what would happen if the burden lay more on the user or on the designer.

If the user bears the burden, then a certain degree of competence is required to achieve even simple tasks. On the other hand, putting the responsibility on the designer can make production inefficient on many different levels. Finding a good medium is the goal of a good interface.

Boaz Avital - 2/1/2010 4:49:38

This reading was quite interesting. A few things struck me while reading it. The first thing I was wondering about from the very beginning of the text is what kind of interface would have, for instance, far "distance" for the user but a good level of direct engagement. What I settled on while reading was that it would be a program that gave you a feeling of direct control, perhaps through a low articulatory distance, but involved a lot of micromanagement so it took a long time and effort to complete your task. Lo and behold on pg. 335 in the chart of distance vs engagement, my theory was validated.

I think one of the best ways to reduce distance is to anticipate what kinds of goals the user has and make those as easily accessible as possible. The reading seems wary of this, saying that if your language is too specialized, you basically can't do stuff "outside the box." Here I think the authors take too idealistic an approach, as if the software we are interfacing with is capable of great and amazing things the developer never dreamed of; this is especially apparent in the section on "virtuosity." I doubt this is very often the case.

One last thing I was wondering about is articulatory distance and direct engagement when the interface you're manipulating deals with abstract information, data that has no analog in the physical world. What kind of interaction would make the user feel connected to this interface?

Brandon Liu - 2/1/2010 10:16:13

The article was challenging, since we take direct manipulation for granted; I had to put myself in the 1980s to understand the authors' perspective. I was surprised to see the authors suggest that in the future all programming would be done through direct manipulation. One of the things I immediately thought of was programming environments for kids, such as Scratch, which bridge the articulatory distance for computer programming to the level where non-programmers are supposed to be able to use it. The main issue here (as mentioned in the conclusion of the paper) is the loss of expressiveness with direct manipulation, compared to "conversational interfaces." I thought the best example of this was the user of the vi text editor who describes deleting a word as "dw". In a direct manipulation interface, if I wanted to delete 100 words, I would perhaps move words off the screen 100 times, while in vi I could just type "100dw". This demonstrates the disadvantage of direct manipulation: it is less expressive, so there is a greater semantic and articulatory distance for this repetitive task.
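The same expressiveness gap can be sketched outside vi with ordinary shell tools (the word list here is invented): the "direct" style performs one small deletion per action, a hundred times over, while the conversational style names the repetition in a single counted command, just as "100dw" does.

```shell
# Build a 300-word file (the numbers 1..300) so the sketch is self-contained:
seq 300 | tr '\n' ' ' > words.txt

# "Direct" style: delete the first word, one action at a time, 100 times.
for i in $(seq 100); do
  sed 's/^[^ ]* //' words.txt > tmp.txt && mv tmp.txt words.txt
done

# Conversational style, analogous to vi's "100dw": one parameterized
# command that states the count instead of performing each step, e.g.
#   cut -d' ' -f101- original.txt
# After the loop above, the file now starts at word 101:
awk '{print $1}' words.txt
```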

Kathryn Skorpil - 2/1/2010 11:36:07

In general, humans naturally tend to understand things we can touch and see better than things that exist only in our minds. In HCI, everything begins in the mind and is then put out into the real world as something that people can see and touch. The best application in the world is useless if the user cannot understand or use it. The paper "Direct Manipulation Interfaces" explores these important points and discusses the psychological reasons why one would have users try out one's products. This will be relevant in the coming weeks once we have our projects selected and we want outsiders to try using our app.

Jeffrey Doker - 2/1/2010 11:39:31

I am a mathematician, and so I couldn't help but notice how the concepts of semantic and articulatory distance and the gulfs of execution and evaluation apply to the way mathematics is represented. The foundation of any field of math is like a low level programming language; logical and set-theoretical constructs (and indeed Turing machines) are capable of doing anything, but are very difficult to use intuitively. Both types of distance are high.

However, math is often structured so that once the user (mathematician) has automatized the foundational concepts, she is given access to higher-level interface tools that are much more direct and intuitive. A great example is the invention of the commutative diagram for representing simultaneous maps between different sets. Treating maps as arrows between a spatial array of sets (represented by letters) is a simple trick that is semantically intuitive and clear in articulation.

Many tools like this exist in math and are duly celebrated, but there is not much emphasis placed on fostering their development. I think that more people might want to be mathematicians (and that the ones we already have might be able to prove more things) if more effort were devoted to bridging the gulfs with direct manipulation interfaces, both in computer simulations of math and in the symbols and constructs used to represent and manipulate math on paper.

Vinson Chuong - 2/1/2010 11:51:42

After having read the paper, the idea of a "Direct Manipulation Interface" is still very abstract to me. Although the paper has defined many characteristics of such an interface, the concept of "directness" still eludes me. Suppose that there exists a widely used system whose interface is not direct in every way, and users have learned to use it as part of their daily lives. Suppose that an interface for a different system used the very same "language". Would such an interface constitute a "Direct Manipulation Interface"?

It seems that the trade-offs of a "Direct Manipulation Interface" are significant and that such an interface would only be suited for systems with a very limited set of intended tasks. How then would someone implement such an interface for a more complex system?

Stepping back a bit, there are many systems that are simple and perform a limited number of tasks--video games for example. For such systems, "Direct Manipulation Interfaces" may be the most natural type of interface to use. Why then are we taking the time to classify such an interface? It seems that for systems predisposed to such an interface, designers would naturally arrive at it.

Dan Lynch - 2/1/2010 12:06:21

The article on Direct Manipulation Interfaces poses an intellectual and philosophical view of Human Computer Interaction. It dives into details in regards to intentionality, semantics, and execution. The article discusses the concept of directness in terms of distance (both semantic and articulatory) and engagement.

Many esoteric terms are used in this article, some of which I would like to bring up. The term inter-referential is used to describe what I would interpret as a dynamic relationship between events. When a user is interacting with a system, if certain actions depend on certain past actions, then the actions are said to be inter-referential.

The gulf of evaluation is the distance that spans both semantic and articulatory distances, and can be used to analyze the usability and functionality of the overall system.

Semantic distance is a term the author uses to describe "the relationship between user's intentions and meaning of expressions," while "articulatory distance has to do with the relationship between the meanings of expressions and their physical form." I interpret semantic distance as being how far away the semantics of the implementation is from what the user wants to do. For example, you have code that performs an operation, but whatever that operation means is the semantics of that operation. If the user's intention matches the semantics of the operation, the semantic distance is zero.

The articulatory distance can be viewed as how far the interface's physical form is from the meaning or semantics of the operation. In this case I believe this distance is always greater than the semantic distance, because the physical form is often abstracted. However, a system can contain elements that shout their function's meaning.

Chris Wood - 2/1/2010 12:43:28

The idea of a direct manipulation interface is intriguing to me because it requires the designer to predict what users will find most intuitive. A direct manipulation interface should provide the user with an interface that seems direct and easy to learn. Though it is true that every user is unique, we can increase the usability of our interface by keeping it engaging and ensuring that the user feels that he is directly manipulating the objects of interest.

Kyle Conroy - 2/1/2010 12:55:14

Even after 25 years, this paper remains remarkably current and relevant. The advances in multi-touch user interface design (present in the iPhone OS, Microsoft Surface, and Android) represent a dramatic shift in the methods used to span the gulf of evaluation. The Maps application on the iPhone is a great example of shortening the semantic distance on the evaluation side. This application allows for direct manipulation of the interface objects (maps), letting users zoom, pan, and rotate using gestures taken straight from the real physical world. I think these devices (iPhone, iPad, Android) represent a turning point in interface design, returning the focus to direct manipulation and shying away from the file system metaphor.

Long Chen - 2/1/2010 13:50:25

The chapter presents an interesting notion of directness in programming design that would simplify user interactions and understanding. The writers could perhaps even apply their own idea to their writing: Norman, the author of the last reading, presented the same idea of the design cycle, but his language was more direct and easier to comprehend. Thus he was able to effectively convey his ideas to his audience (users) and made the interaction with his book more direct.

The material in this reading was exactly what I was thinking about while working on project 1. Why would Apple choose Objective-C as its language of choice when so many people are already familiar with Java or C++? Although there are only limited differences, the inherent learning curve distracts from the main purpose of creating a popular and useful application. Interface Builder is definitely a move in the right direction: direct manipulation of objects to convey the same information as textual programming.

Just as PC interfaces have evolved from terminals to mouse and windows, where will the new "Web 2.0" movement take computer interaction? I strongly believe that the new Google Chrome OS will only further the drive for even more advanced Web interactiveness and eventually greater user freedom in actions and movement. Just as dimensions in space move from points to lines to planes and to further dimensions, computer interaction could possibly evolve exponentially from textual to graphical or maybe something even better.

Vidya Ramesh - 2/1/2010 14:12:12

This chapter was enlightening on a few points, but very confusing on a couple of others. For example, I was confused about the difference between directness and distance. At first it seemed that they were used interchangeably, but then it seemed like they are two similar but distinct concepts. When explaining semantic distance, the authors describe it as the relation of the meaning of an expression in the interface language to what the user wants to say. On the other hand, they explain semantic directness as the matching of the level of description required to the level at which the user thinks about the task. These almost seem like the same thing, because both directness and distance seem to be a mapping from the interface language's syntax to the user's mental processes. I think that semantic distance might be related to evaluation and understanding more than semantic directness is.

Brian Chin - 2/1/2010 14:26:41

Direct manipulation interfaces seem to be a great way to create software that is easy for users to learn while at the same time being powerful enough for experts. One part of the reading talks about natural languages and how different languages have different semantic distances for the same expressions. I was wondering if this would also relate to creating interfaces for different cultures. If an interface was developed for one culture, and matched the semantic distances that were expected for that culture and its language, would you have to change the interface if you wanted to sell a product with that interface to another culture? Or would it be easier to develop interfaces that had semantic distances with no culture in mind?

Michael Cao - 2/1/2010 14:47:04

As I was doing the reading on Direct Manipulation, I kept thinking about our first individual project and relating that interface to the reading. I feel that the interface used to build the iPhone application has both features related to Direct Manipulation and ones that aren't. For example, actually building the look and interface of the app was very intuitive and simple to understand. It was just dragging, moving, and resizing certain objects in a window. However, actually implementing what each object does by coding wasn't as intuitive. That took much more specialized knowledge of how Objective-C works. Overall I feel that Direct Manipulation makes a system much simpler to use and understand, and that more systems should follow this design method.

Alexis He - 2/1/2010 14:50:12

on Direct Manipulation Interfaces: I am surprised by how relevant the paper is given its age. Many of the points presented for better interface design are well illustrated by the evolution of website design over the last two decades. For one, the paper encourages "semantic and articulatory directness," which can be seen in websites' extensive use of icons and symbols to represent command menus (ex: YouTube's new nav-bar). A second piece of advice, being "responsive", is shown through the development of AJAX and how websites nowadays do not require reloading a page to update a single element (remember how painful that used to be). Lastly, for the interface to be "unobtrusive" or, essentially, invisible, the best example is current search engines utilizing user data to tailor results (ex: MSN's Bing search finds users' location based on IP to figure out weather data). These are all points that illustrate how relevant this article still is in describing user interfaces.

Swapnil Ralhan - 2/1/2010 15:19:53

Firstly, I find it highly impressive how this piece of reading, written in 1985, predating most GUIs, highlights aspects of "direct manipulation interfaces" that are still relevant 25 years later. Secondly, I feel that the reading makes aspects of intuitive interface design more rigorous in its description, sometimes a bit unnecessarily. Lastly, a point that I was curious about is whether inter-referential I/O helps to bridge the gulfs of execution and evaluation. I felt that inter-referential I/O is an aspect that is present not only at the physical system level, but also at the goals level.

Angela Juang - 2/1/2010 15:35:51

Every user has an easier time using a system that mimics things they are familiar with or interact with on a daily basis, so I think for the general population, systems that allow people to feel more like they are actually touching or manually manipulating what is on their computer (i.e., with a tablet, touchscreen, etc.) will be much more intuitive than others (i.e., terminals). Similarly, higher-level languages read more like regular spoken English, and because of that they are easier to understand at first glance. This implies it's the system's responsibility to adapt to the user and present an interface that the user can easily understand. Of course, it's completely possible to go the other way, where the user adapts to the system - but I think systems using this method will have a harder time becoming generally accepted.

Wilson Chau - 2/1/2010 15:38:30

When talking about direct manipulation interfaces we must think about the two forms of distance, semantic and articulatory, and the gulfs of execution and evaluation. Semantic distance is the difference between what a user really wants and how the goals are represented in the interface. Articulatory distance has more to do with how things are represented in the interface and their physical form. The gulfs differ in that the gulf of execution is the difference between what a user is trying to do and what happens, while the gulf of evaluation is the difference between what the interface is trying to say and what the user interprets from the interface.

Peter So - 2/1/2010 15:42:57

The task of creating an intuitive user interface is attractive not only for the target users but even for the programmer, since it ultimately combines the functionality of computers with the agility of human thought. The article introduces one way to achieve this: creating higher-level programming languages that make the implementation of the components of a task more accessible to the user. However, this presents its own challenges to the person responsible for designing the program. As mentioned in the paper, higher-level programming languages may provide a more intuitive dialogue between the user and the program, but they introduce more complexity into the programming to make higher-level operations talk to one another.

One comment on the ideal interface: I agree that it should be easy to learn, but it cannot be immediately obvious to all people in general without becoming cumbersome with vocabulary and step-by-step walkthroughs. It is important to understand your target user group and take into account their skill sets to best design an interface that balances their needs with their level of understanding. One example is traditional line coding versus visual coding with block operators. As an engineer with limited line-coding experience, I found block coding by connecting the inlets and outlets of block operators (as in LabVIEW or Max) more accessible than line coding because I could better follow the program execution visually.

Divya Banesh - 2/1/2010 15:45:43

There are several programs that are used widely today, especially in the computer engineering world, that employ the concepts of direct manipulation. Programs like LabVIEW, PSpice, and MultiSim allow electrical engineers to build their circuits on the computer by drawing lines (to represent wires) and using the I/O pins of icons to connect different components together. These programs are good examples of direct manipulation because the user can take a circuit that they have designed on paper or with actual components and replicate it exactly on the computer. There is no hidden syntax or extra operations; the user can build a circuit on the computer and hit "run" to get the results. The interface is fully pictorial, and the user can "program" using graphical representations of the components.

Saba Khalilnaji - 2/1/2010 16:15:04

In the real world, actions have reactions. If I push a cart, the cart will move. Direct manipulation brings an essence of this action-reaction type of interaction to computers. Users will be able to interact more naturally with an interface if it is closer to the interactions of their daily lives. Rather than writing a few lines of code and then running them to see the reaction, users can directly engage objects such as windows in GUIs to do their work.

Looking at this from a different angle: user's intentions can be expressed in different ways amongst different users. This large range must comply with the metaphor that the interface is trying to use for interaction. Therefore interface systems need proper feedback which takes us back to direct manipulation!

Jeffrey Bair - 2/1/2010 16:24:10

The semantic distance is the relationship between the user's intent and how it is conveyed in a language. For example, all languages have some sort of semantic distance, since languages with rich vocabularies arise from different cultures which may have different ways of saying things. This is similar to the gulf of execution, since it is also what bridges the commands of the system with the thoughts and goals of the user. If the user's goals cannot be carried out because the system's commands are not adequate for the goal in mind, that is the gulf of execution, which is much like semantic distance.

The articulatory distance is the relationship between the user's meaning of expression and how it forms physically through sound, sight, or any other sensory image. This is similar to the gulf of evaluation, because bridging the gulf of evaluation requires an output display that presents a good conceptual model of the system that can be perceived, interpreted, and evaluated, much like how words are formed and articulated through the mouth with sound. The form of the word and the actual perceived meaning of the word may differ depending on the articulatory distance, which can be attributed to differences in languages and how the forms of expressions can be interpreted differently.

Thomas Evans-Pratt - 2/1/2010 16:45:29

This reading is really interesting when you think about the year it was written, 1985. While there have not been very many widely accepted or used forms of user output devices (a simple visual monitor is still king), there are many new and upcoming technologies for user input. This means that the gulf of execution has the potential to shrink as new ways to input and interact with information allow the user to work with data in ways that feel more natural and truer to the process they are working on.

Arpad Kovacs - 2/1/2010 16:54:03

After reading this 25-year-old article, what strikes me most is that despite astonishing technological advances, the ideals of directness (distance and engagement) remain so far away. Some of the recommendations of the paper, such as higher-level languages (eg Python, LabVIEW), have made expressing users' intentions somewhat easier in the "interface as conversation" paradigm, and the repetition of specific user interface conventions has made certain automated behaviors feel more direct (even if they might not be closer in terms of semantic distance). However, even today there still exist vast gulfs of execution and evaluation, because the user's intentions must be modified to fit the limitations of the machine, and the machine's output must provide a balance between generality and specialization in the language of the interface.

Esther Cho - 2/1/2010 17:00:21

The paper breaks down the interaction between user and interface into four things: the semantic and articulatory distances for the gulf of execution, and the semantic and articulatory distances for the gulf of evaluation. What I thought was interesting was that it took into account not only what the user has to do to interact with the interface (and how that would appear to the user) but also what they meant. By considering these "distances," a designer can critically analyze the effectiveness of their system for a particular user. It is not enough to output all the information to the screen; the way it is output also matters. The paper also mentions that these distances change depending on the particular task the user needs to accomplish, which brings back a recurring theme: designers should be specific about the tasks (or problems) their system is solving.

Andrew Finch - 2/1/2010 17:25:29

This was a very eye-opening article that shed light on UI issues and psychological aspects that are hardly ever discussed. So many interfaces we use today have enormous semantic and articulatory distances. They require the user to learn new expressions, languages, and frameworks; they are misleading and don't produce the results the user expects without extensive tweaking. The concept of direct manipulation and interaction with the interface is extremely valuable, and yet is not seen very often. I can really only think of a few interfaces that allow the user to manipulate items so directly - the desktop in modern operating systems, and a few interfaces in digital content creation software, such as Maya and Adobe After Effects. Far too many interfaces resort to cryptic buttons, pull-downs, lists, and typed commands. This is something that needs to change.

Wei Yeh - 2/1/2010 17:26:49

The main reason the recently announced iPad is an extremely user-friendly piece of technology (from all that I have read and seen) is that it relies heavily on direct manipulation, a concept discussed in our reading. You interact with the software with your fingers, operating on the objects of interest themselves, and obtain immediate feedback - it does not get more natural than that. Using vocabulary from the reading, the iPad uses a "model world metaphor" for most of its major applications.

The book reading app is a perfect example of direct manipulation. There is a very small semantic distance when using the app, since the way you interact with it is almost exactly like you would in the real world, with a real book. To turn the page, you actually "flip" the page with your finger!

This impressive level of direct engagement is made possible by the iPad's blazing fast A4 processor. As discussed in the reading, instant feedback "removes the computer as an intermediary by providing continual representation of system state." It is no wonder Apple decided to design its own processor in-house -- they knew they needed to achieve this speed in order to make this direct engagement believable.

Conor McLaughlin - 2/1/2010 17:28:44

This lecture's paper was a little dense and vaguely repetitive, but I really enjoyed the discourse on the importance of minimizing semantic and articulatory distance. As a result of taking this course I've been using Macs much more often, and I've been noticing subtle visual cues Apple developers have implemented to shorten articulatory distance. File folders represent a pseudo-manifestation of direct manipulation, but when the file properties are changed, such as read-write privileges, a little icon appears over a corner of the folder expressing the state of the folder. For someone without full permissions to a folder, there is a little stop sign. As the paper notes, it's unobtrusive but immediately signifies the current status of the folder with minimal cognitive effort. The belief in the model-world metaphor and the analogous situation of an audience's suspension of disbelief during a play also really struck a chord with me. If something is done with enough attention to detail, and as unobtrusively as possible, people are perfectly willing to accept the abstraction as a real space. Just as the actors upon the stage can depict real emotion, so too can a successful interface allow one to manipulate the object itself. Control is given over to the user, and they are given further freedoms as a result, even if the actual physical implementation is just 1's and 0's.

jonathanbeard - 2/1/2010 17:31:25

First off, the .pdf wasn't loading so I found an alternate source, which I hope is the same:

Direct manipulation reminds me of computers that are used in movies and TV shows, the ones that can do very advanced tasks and only require a few easy inputs from the user. I think this would be great, but these programs also need to be customizable and offer advanced features for more knowledgeable users. I just don't see computers getting that much simpler. Every year, Microsoft tries to make Office easier to use with more advanced features, but the easy-to-use features tend to make advanced features harder to find, use, and customize. I agree that an interface should be easy for a new user to figure out, but if those features linger, they can get in the way of more experienced users. I simplify my Office and Windows toolbars so that they have no huge bubbly buttons on them, which only take up space once one knows where they are. Then I have more screen space to focus on whatever I'm working on, not a bubblicious row of 'easy to use' buttons.

Geoffrey Wing - 2/1/2010 17:34:08

Before reading this article, I believed that making the user interface as easy to use as possible was the most important thing to do. If a program is too difficult to use, people won't use it. The article definitely stresses the importance of ease of use, but it also shows the dangers of making an interface too specific. Other approaches and views on tasks are important, or else innovation will not occur. Today's society definitely values customization (i.e., Burger King: "Have it your way"), but there is a delicate balance between ease of use and customization. This is something I'll keep in mind throughout the course.

Mikhail Shashkov - 2/1/2010 17:35:03

I'd have to agree with the keen analysis (in section 6) of the limitations of direct manipulation, not only in functionality but consequently in problem space. What I mean is that such interaction will never, as Hutchins et al. also claim, be used as an IDE for any serious programming language. In fact, I think it is important, as an additional trade-off, to consider the functionality you lose as the scope of the problem gets bigger and more serious.

Likewise, I completely agree that the amount of knowledge and the unique perspectives on problem-solving that will be lost with any major transition in interfaces are certainly worth more consideration than a note on the last page of the paper.

Andrey Lukatsky - 2/1/2010 17:37:37

With the growing number of touch-screen devices available today, I believe direct manipulation will be used more and more - particularly to target non-tech-savvy users. By increasing levels of direct engagement, new users will not be afraid of computers - in fact, they may no longer perceive them as such, but instead as handy tools. They will see such devices as they see a wrench: intuitive and useful.

Nathaniel Baldwin - 2/1/2010 17:37:53

This week's reading struck a chord in me; it made me think of something that happened to me just this weekend, and my thoughts on how things could have been handled better. I was at a large ski resort, and had taken a bus from the parking lot to the base area. Upon returning to the bus area at the end of the day, I realized that there were 7 different bus lines, and I'd paid no attention to the name of the area I'd parked at in the morning. Looking around for useful information in order to get on the correct bus, I found only an employee stationed to give out information. I described to her where I'd parked in the morning, and she directed me to a bus line. I got on it, and rode the entire way, realizing towards the end that I was nowhere near where I wanted to go. I also realized what had happened - I'd left out the fact that I had gone a particular direction through a roundabout, and the employee had assumed a different one. Had there been a map of the area, I would have easily found the name of the area I'd parked at. In other words, had I been interacting with the resort's information system through a "model world" interface instead of a "conversation" interface, I wouldn't have endured a long, pointless bus ride.

I also was impressed by the discussion at the end of the article about the trade-offs of the direct manipulation model. Specifically, I was nodding along to the sentence "The articulatory directness involved in pointing at objects might need to be traded off against the difficulties of moving the hands between input devices …" as I have long eschewed the mouse as an input device much of the time for this very reason, using keyboard commands whenever possible. More recently, I've found that I micromanage my choice of input devices to some extent, based on what I'm doing. For example, some simple file manipulations are easiest and fastest by manipulating the files in the GUI with a mouse, but other things - renaming a bunch of files, for example - are most quickly accomplished by pulling up a terminal window and running some commands or writing a small script.

Jordan Klink - 2/1/2010 17:38:15

I must admit that I found the reading incredibly difficult and struggled to fully understand it. After finishing it, though, I believe to have obtained a deeper understanding of what makes an exceptional user interface. It must act as a layer of abstraction, hiding all of the internal details and code while at the same time allowing someone to fully use the program. It must bridge the gulfs of execution and evaluation at the highest level possible without limiting what a user can do. There are of course limitations, as with any layer of abstraction, since when you abstract some of the details away you lose some of the functionality that a lower level may provide. A good interface though will target the needs of the user and simply cater to those needs as best as possible.

I don't necessarily agree that adding a higher layer of abstraction will always add limitations, though, since some high-level languages can give you so much flexibility that you can write a function in 3 lines where in a lower-level language it would take you over 100. Sure, a lower-level language could do it, but it might not let you do so in the way that you desire. I don't believe that using a higher layer of abstraction will always be limiting; instead it can offer alternative, more concise solutions.
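The "3 lines versus 100" claim above can be made concrete with a hedged sketch. As a hypothetical example (not from the reading), counting word frequencies takes a few lines in a high-level language like Python, whereas a low-level implementation in C would need a hand-rolled hash table, string handling, and manual memory management:

```python
# A few lines in a high-level language: count word frequencies.
# The equivalent C program would need a hash table, tokenizer, and
# manual memory management -- easily an order of magnitude more code.
from collections import Counter

def word_counts(text):
    """Return a mapping from each lowercase word to its frequency."""
    return Counter(text.lower().split())

counts = word_counts("the quick fox and the lazy dog and the cat")
```

Here `word_counts` is a hypothetical helper invented for illustration; the point is that the high-level language shrinks the gulf of execution for this task, exactly as Jordan argues.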

Weizhi Li - 2/1/2010 17:42:18

The authors' reasons for using direct manipulation interfaces are backed up by research. They give specific recommendations for standards such as the aspects of directness, as well as the two forms of distance. They explain and use a good shorthand throughout the article for the semantic form of distance versus the articulatory form of distance. Finally, they point out that forming an intention is the activity that spans semantic distance in the gulf of execution.

Mila Schultz - 2/1/2010 17:42:21

Norman's paper is so dense and thorough that it is difficult to respond in only a few sentences. The underlying theme of the interface's necessary "disappearance" in many situations was particularly insightful. Many interface failures stem from arrogant interfaces and a fundamental misunderstanding of users. It is interesting to look at how Fishkin may have derived his taxonomy for tangible user interfaces from Norman's work. It seems that TUIs build on the articulatory directness that had not been explored when the article was written.

Bryan Trinh - 2/1/2010 17:46:27

As the general public continues to further integrate computing into their lives, the importance of representing digital objects in their natural domain increases. This does create a situation where the specificity of computing tools increases, but I think necessarily so. Making computing tools that speak the user's native language is the only way to effectively decrease the cognitive load on the user - to bridge the gap. We have apps for everything, not an app that does everything - I think this is important. Computing interfaces for a general audience should be highly contextual and domain-specific.

Although many are saying that the iPad is essentially a larger iPod Touch, and I don't disagree, the ways in which we interact with the two devices are very different. The iPod Touch is carried in the pocket, the iPad in a bag. The iPod Touch lends itself to thumb typing, the iPad to full-hand typing. These subtle differences should necessarily spawn a distinct set of applications that better fit each context.

Darren Kwong - 2/1/2010 17:51:58

When I did this reading, I thought of LabVIEW. The semantic distance in the programming adds to the size of the gulf of execution, as many actions (e.g. wiring blocks) may be required. Finding what you want isn't completely intuitive, although there is a search function. The graphical approach lowers the articulatory distance in execution. The semantic and articulatory distances for the gulf of evaluation seem relatively small, as outputs are usually shown through a direct link between front panel items and the block diagram.

Kevin Tham - 2/1/2010 17:56:49

This paper on direct manipulation discusses aspects of the level of interaction between the user and the interface. One key theme was: how "high" a level should our design target? Should more work be done on the user's end, or on the designer's end? The tradeoff involves freedom of manipulation: at a higher level you lose some ability to specialize tasks, but work gets done faster (e.g. the LISP language vs. 0's and 1's). And if it's too high level, everything becomes a big, clunky, very complex mess. There is also the cognitive aspect of automation: any design, even a poor one, can be adapted to by users. After repeated use, intuition no longer matters and the task becomes automatic. Another idea was semantic meaning: if the interface is treated as a medium, will "translating" our intentions into that medium be easy, or difficult and indirect? These are among the criteria to consider when designing an interface. The paper states that while direct manipulation can be good, it is not useful for every task; some tasks are better and more efficiently handled with, e.g., scripts.

Anthony Chen - 2/1/2010 18:05:51

I find it interesting that, since most UI concepts at the time of writing were hardware limited, he focused more on translating text or basic image manipulation into function (distance vs. engagement). Today's hardware, on the other hand, is much more powerful, and devices like multi-touch input allow practically direct manipulation of the computer.

Sally Ahn

The authors suggest that modeling the interface after the world, such that "Instead of describing the actions of interest, the user performs those actions," strengthens the feeling of directness. They mention that Sutherland's work took 20 years to have widespread impact due to hardware limitations for graphical interfaces. As the examples of 3D desktops and tactile input devices (like the iPad) we saw in lecture reveal, the domain of our interaction with computers has rapidly expanded--and is continuing to expand. This made me wonder if one day technology will enable users to literally "perform [the] actions" as input (the Wii remote is an example of such a device, although with limitations). Of course, although this may bridge the gulf of execution completely, there are drawbacks. For example, moving a document from one folder to another via graphical icons may be easier than physically performing such an action.

Aneesh Goel - Feb 01, 2010 07:58:57 pm

In case other folks from group M haven't found it, our page is at Group M.
