Human Information Processing (KLM, GOMS, Fitts' Law)

From CS 160 User Interfaces Sp10


Daniel Ritchie - 2/27/2010 21:22:29

Raskin mentions in this chapter that speed is not the be-all and end-all of quantitative measures: he acknowledges that error rate, training time, and other factors are also important. He doesn't give any insight into how these other factors might be modeled, though. Obviously, one could gather data on them by building the interfaces in question and testing them with many users, but that's not as desirable as an analytical model such as GOMS, which allows for comparison of interfaces without constructing them. Seeing as we already estimate quantities such as information-per-keystroke and relative probability of choices for GOMS analysis, it seems like we should be able to use this data to estimate how frequently a user might make wrong choices (and how much information each wrong decision would waste).

Raskin also points out, toward the end of the chapter, that following Hick's law generally leads to collecting user options in fewer menus/screens. While I generally agree, I think this approach--particularly on mobile devices--could come into conflict with Fitts' law as the options become so small as to unacceptably slow down the user's task completion time. Indeed, I do notice that many iPhone apps use comparatively more screens for options and settings than their desktop counterparts. I would be interested in seeing quantitative analysis of the tradeoff between Hick's and Fitts' laws on small-screen devices.


Charlie Hsu - 2/28/2010 10:38:43

I thought the GOMS keystroke-level model was right to point out the expensive costs of pointing and mental preparation. From personal experience, I know I hate to waste time homing and pointing, so my initial solution to the temperature conversion model was entirely keystroke based: type the 4 characters, hit an arrow button to switch the radio button temperature conversion type if necessary, and hit enter. This results in MKKKKMK (3.7 sec) for situations where the correct conversion is already selected, and MKKKKMKMK (5.25 sec.) for situations where a switch is needed, a marked improvement over homing to the mouse, pointing, and clicking on the correct radio button.
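
(A quick way to sanity-check operator totals like the two above is to add up the standard KLM times from the reading. Below is a minimal Python sketch using the operator values the reading gives; the two sequences are the ones described in the previous paragraph.)

    # Keystroke-Level Model operator times in seconds, as given in the reading.
    KLM_TIMES = {
        "K": 0.2,   # keying: press a key or button
        "P": 1.1,   # pointing at a target with the mouse
        "H": 0.4,   # homing: moving the hands between keyboard and mouse
        "M": 1.35,  # mentally preparing for the next step
    }

    def klm_time(sequence: str) -> float:
        """Total time for a string of KLM operators, e.g. 'MKKKKMK'."""
        return sum(KLM_TIMES[op] for op in sequence)

    print(f"{klm_time('MKKKKMK'):.2f} s")    # 3.70 s: type four characters, press Enter
    print(f"{klm_time('MKKKKMKMK'):.2f} s")  # 5.25 s: same, plus switching the conversion type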

However, the solutions later in the reading (especially the elegant bifurcated interface), are much more efficient. Like the reading said, the quantitative analysis provides us a concrete measure of interface efficiency, where we can only hypothesize with qualitative analysis.


Vinson Chuong - 3/1/2010 10:18:40

Unlike the previous reading, in which the author attempted to impose a computer metaphor onto how humans process information, the GOMS models emphasize instead the approximation of an upper bound on human performance via discrete operators used to complete tasks. I believe it is important to avoid applying an end-all metaphor that seems to make sense for a structure that we really know little about. The GOMS model seems to be more useful in practice because it simply uses empirical data to quantify the differences between interfaces without making any guesses as to how that data fits into a grand scheme--and thus prevents misunderstandings about the human brain.

I believe that an expansion of the GOMS model described in this reading, using some of the concepts from the previous reading (the Human Information-Processor Model), may prove useful. For example, the "Mentally Preparing" operator could be split into multiple operators that quantify some of the ways that humans supposedly process incoming information and produce outgoing information.


Alexander Sydell - 3/1/2010 10:58:21

In the beginning of this chapter, Raskin stresses that it is important for an interface to respond to a user action quickly for the user to think he or she caused the response. However, whereas the Model Human Processor reading uses a 100 msec maximum time for this, Raskin states that a response must occur within about 250 msec. He gives the number without any proof, and the prior reading had calculations behind it, so I would be more inclined to go with the 100 msec figure; still, I wonder where Raskin is getting his measurement from. Aside from that, Raskin's explanation of quantitative GOMS analysis shows that it is a great tool to use in designing interfaces because it provides fairly accurate performance measurements (at least in relative terms) without user testing, which would definitely speed up the process of designing a better interface.


Matt Vaznaian - 3/1/2010 13:52:42

If we can use all of these calculations (time it takes to process keys, time it takes to use an interface, Fitts' law, Hick's law) to understand how the brain's processes work, how is this any different than how we test computers? Of course computers are orders of magnitude faster than humans, but in a way it seems as if interfaces are designed for humans in the same way software is designed for hardware. Speeds are tested, memory management is taken into account, etc. So how is the human mind any different from a computer? I know the point of the article was more about using known formulas for human brain processes to create more efficient interfaces, but after the chapter I couldn't help noticing that the way interfaces were analyzed against the calculated limits of the brain was very similar to the way software is analyzed against the limitations of computer hardware.


Long Do - 3/1/2010 16:00:37

If a user can be thought of as a machine with three different processors, then the user interface should take into account the limitations of that machine. The user can keep track of multiple things going on by switching his locus of attention, but can only perform one action at a time. This means that functions in the user interface should halt one another when an event that requires the user's input occurs. Multiple pages can be used for multi-tasking, but the keyboard should drive only one text entry at a time. The fact that memory decays should also be taken into account. Search functions should rely on symbolic rather than semantic cues, since symbolic information is easier and faster to retrieve and does not decay as quickly. Chunks of important data should not have similar meanings; they should instead have distinct names so that they are easily differentiated. This means our user interfaces should not have functions named "error" and "fault", since those two are very close semantically and could lead to confusion.


Jessica Cen - 3/1/2010 22:54:56

I agree with Raskin on page 75 when he mentions that interfaces should provide feedback if there are delays. I believe that as the user gives input to the computer, the computer should at least indicate that it is receiving the user's input or that it is busy at the moment. The most frustrating thing I encounter when using a computer is when it freezes, and so I try to make it respond by trying any reset combination on the keyboard. Frozen computers are worse than sudden blue screens of death because I don't know if the computer is ever going to respond again, and when I force it to turn off it usually doesn't say what caused it to freeze. I also agree with Raskin when he says on page 76 that a progress bar should represent progress linearly, because most of the progress bars I have encountered are not linear. They start off fast, which gives me the impression that the process will be quick, but when the progress bar indicates 90% completion, it usually slows down and misleads me. Therefore, I believe it is essential that user interfaces are honest with their users and responsive to their commands.


Annette Trujillo - 3/2/2010 18:20:06

To make an efficient interface, I had never thought of considering how much time it takes a user to input information, and how much information must be input. I hadn't thought of interface efficiency as the amount of information the task actually needs, divided by the amount of information the interface requires the user to supply. Instead, I considered interface efficiency to be how fast a user can go from doing one task to doing another. This idea will definitely help our group develop an efficient, useful app. Our target users, skaters, will most likely be out and about skating when they use our app, and if it requires them to input too much information, they may not use it much. If we design our app to not require too much input, it will be easier and faster to use, and it will appeal more to our users.


Wei Wu - 3/2/2010 19:29:33

Raskin's analysis of the three different temperature conversion user interfaces is interesting because the interface with the actual thermometer sliders was quantitatively the worst performer under the GOMS model. Out of the three versions, this one provides the strongest metaphor to real life, a practice that, as we learned earlier in the class, generally leads to good, easy-to-use interfaces due to its immediate relatability to the user. Yet, based on heuristic calculations, Raskin shows that this design principle should not always be followed. Depicting actual thermometers complicates the task of converting temperatures by adding unnecessary mouse movements that require more precision.

This example shows that qualitative measures of UI quality may not coincide with quantitative measures, so neither provides a definitive way to determine whether an interface is good or bad. Ultimately, such judgment calls are made by the actual users of the product, which is why the design process places so much emphasis on hands-on testing with people from the target user group.


Jason Wu - 3/2/2010 20:21:41

Even after reading the chapter, I'm still not completely sold on using a quantitative approach in analyzing user interfaces, especially since the popular models, such as the GOMS keystroke-level model, require so many simplifications and assumptions about "typical" human thought processes and behavior. Nevertheless, I can still see why designers might choose to use quantitative techniques during the design cycle, since calculating GOMS values for performing specific tasks in a user interface is quick and easy compared to timing actual users while they interact with a prototype. By going through the GOMS heuristics or measuring the information efficiency of numerous interfaces, designers can likely narrow down the list of interface candidates before actually putting their effort into creating lo-fi or even hi-fi prototypes.

The thought exercise of designing an interface for Hal was a real eye-opener for me. I came up with two solutions: one much like the dialog box with radio buttons and another very similar to the command line interface where the user must press Enter to proceed. On the other hand, Raskin gave 5 solutions as examples, including one which was very inefficient (the scales GUI) and one that was essentially optimal (bifurcated interface). The fact that one simple problem can have so many different interface solutions with such a wide range of efficiencies is something that I will have to keep in mind when designing interfaces in the future.


Vidya Ramesh - 3/2/2010 23:22:14

The "Double Dysclicksia" issue discussed by Raskin in Chapter 4 raised a couple of questions for me. Even though most of the readings encourage designers to design for the lowest common denominator of users, the designs suggested don't actually correspond to this philosophy. For example, Raskin states "We must design for the dysclicksic user and remain aware of the problems inherent in using double clicks in an interface", yet he himself acknowledges that double-clicking is widely used and extremely useful. Other than this, I thought it was interesting that Raskin pointed out that while it is impossible for computers to react perfectly and instantaneously every single time, it is extremely important to give feedback showing that the user's input has been received and is being processed. I also think that while the information efficiency of an interface is an important characteristic to consider, it is a very simplified view of an interface and does not say much about the user experience associated with it.


Arpad Kovacs - 3/2/2010 23:42:36

The sense I got from this chapter is that according to the GOMS metrics, keyboard interfaces are far more efficient than GIDs, as shown in the slider vs. bifurcated temperature conversion example. I think that for this reason, forms that allow the user to tab between fields are an extremely efficient paradigm for data input, while those that only allow mouse navigation are very inhumane from a usability standpoint. The tab serves as a delimiter between the fields, and thus effectively allows entry of the data as a series of concatenated strings, maximizing the character efficiency ratio. This can be shown by a GOMS calculation: pressing Tab is a keying action that takes only 0.2 sec, while the alternative of homing to the mouse, pointing to the next form input, and then homing back to the keyboard takes vastly longer: 0.4 + 1.1 + 0.4 = 1.9 seconds.
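
(To put a number on how that difference accumulates over a whole form, here is a small, purely illustrative sketch using the same operator values; the ten-field form is a made-up example, and mental-preparation operators are left out for simplicity.)

    # KLM operator times in seconds, from the reading.
    K, P, H = 0.2, 1.1, 0.4

    def field_switching_time(n_fields: int, use_tab: bool) -> float:
        """Time spent only on moving between fields in an n-field form
        (M operators omitted for simplicity)."""
        switches = n_fields - 1
        per_switch = K if use_tab else (H + P + H)  # Tab key vs. home, point, home back
        return switches * per_switch

    print(f"{field_switching_time(10, use_tab=True):.1f} s with Tab")        # 1.8 s
    print(f"{field_switching_time(10, use_tab=False):.1f} s with the mouse")  # 17.1 s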

However, I think that Raskin's quest to find the absolutely most efficient input interface should be taken with a grain of salt. If we were to seek efficiency of input at all costs, then we would probably revert back to the text-only command-line interfaces of the past, where all input consists of very terse, but arcane command incantations. I think that current interfaces offer the best of both worlds: Novice users can use the less efficient, but more intuitive mouse-centric graphical user interface, while advanced users can use quicker keyboard shortcuts.


Victoria Chiu - 3/3/2010 0:02:35

Double-clicking is described as a problematic technique for users. Users cannot simply tell what action will be taken if they double-click, and usually there is no indication that a double-click is even supported, so they have to remember where double-clicks can be applied. It is hard for users to discover where and when they can double-click and what will happen when they do. A related action is to click and then wait longer than the interval between the two clicks of a double-click; this functionality is not easy to discover unless users stumble on it by accident.


Richard Lan - 3/3/2010 1:48:10

The application of GOMS analysis to a user interface design is akin to conducting a run-time analysis on an algorithm: you can determine the algorithm's order of growth relative to other algorithms, but comparing exact values produces results of limited value. The results of quantitative user interface experiments then let the researcher formulate relations such as Fitts' and Hick's laws. One way to measure a user interface's performance is by calculating information efficiency, which is the ratio of the amount of information necessary for the task to the amount of information requested from the user. Less efficient systems make the user input more information, such as keystrokes, than is minimally necessary to accomplish a certain task; hence, some of the information is unnecessary or redundant. Overall, assigning quantitative measures to actions such as keystrokes and target homing seems very difficult, due to the variability of parameters such as the level of user experience. In addition, what constitutes good interface design can be a subjective question open to debate, and the amount of time a certain action takes might not be the most important factor. Other factors one might consider are the ease with which the user understands the display and the ease with which the user can navigate through the interface. At some level, these factors can also be attributed to the time a certain gesture takes, but they also include elements such as how quickly the mind can process the information.
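
(As a rough illustration of that ratio, the sketch below computes information content under the usual simplifying assumption that all alternatives are equally likely, so one choice among n options carries log2(n) bits. The function names and the numbers in the example are placeholders for illustration, not figures from the reading.)

    from math import log2

    def bits_for_choice(n_alternatives: int) -> float:
        """Information carried by one choice among n equally likely alternatives."""
        return log2(n_alternatives)

    def information_efficiency(bits_needed: float, bits_supplied: float) -> float:
        """Minimum information the task requires, divided by the information the
        interface makes the user supply (1.0 means nothing the user enters is wasted)."""
        return bits_needed / bits_supplied

    # Placeholder example: the task needs one choice among 8 options (3 bits),
    # but the interface also demands a redundant confirmation worth another 3 bits.
    needed = bits_for_choice(8)
    supplied = needed + bits_for_choice(8)
    print(information_efficiency(needed, supplied))  # 0.5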


Kathryn Skorpil - 3/3/2010 2:29:39

The math shown in the reading demonstrates why HCI is such a difficult field in which to create quantitative data. Though we are given some interesting math to help establish guidelines for calculating whether a user interface is good, it is hard to trust these numbers completely. The numbers assume a perfect test subject, but there is really no such thing as a perfect test subject. People also tend to get better with a program after multiple uses, so it is hard to gauge long-term effectiveness from these numbers.


Boaz Avital - 3/3/2010 3:03:12

You can combine the GOMS approach with other quantitative-style rules like Fitts' law to start building a robust representation of interfaces. Overall, the GOMS approach seems a little too fine-grained for me to consider it immensely useful. I can see, though, why it is considered a simple starting point for quantitative analysis of user interfaces. I did like the rules that basically outline when a person needs to think about what they're doing next and when they don't.


David Zeng - 3/3/2010 3:21:33

I'm glad that there's finally a quantitative element to user interface design. For me, it makes much more sense when we can quantify and justify something with concrete evidence. The techniques presented in the reading will work well for applications that are very specific, but may run into issues when looking at the overall user interface, especially in areas that involve response times or decision making. I feel this problem can be alleviated by targeting a narrower user group, which lets the designers get a better sense of who they are designing for and a narrower range of values to work with. However, this can't be done in every case. This leads to the question: do we design for the extremes or for the average, and which range of values do we use? While we want our user interface to work in all cases, it should work reasonably well in the average case. I believe it is difficult but doable to use quantitative analysis to find a bridge between the two.


Tomomasa Terazaki - 3/3/2010 8:43:12

This article was very interesting. It mainly talked about how the interface of a program affects how fast the user can use it, and how a well-designed interface can save the user a great deal of time. The article says there are mainly five actions that take time when the user is using a program: K (0.2 sec), P (1.1 sec), H (0.4 sec), M (1.35 sec), and R. K is the time for the user to press one key. P is the time for the user to move the mouse and point at or click something. H is the time to switch between the keyboard and the mouse. M is the time for the user to think about what he/she is supposed to do next. R is the time for the computer to respond, which the reading mostly disregards because the point is to make the best interface for the user, so it is easy and fast. The times given are the average time for each action. One of my favorite examples in the reading was the time difference between the standard Celsius/Fahrenheit converter and the better-looking converter with an image of a thermometer. Just by looking, users will be attracted to the converter with the image because it looks more fun, but it is far less usable: when the user wants to enter a really low or high temperature, the user has to drag the thermometer up or down, which took 16 seconds in the example. The plain type-the-temperature-in-yourself version was much faster. I can definitely use this when I am coding, because on projects I tend to spend my time making the application interface more “pretty” rather than making it easier and faster for the user.


Sally Ahn - 3/3/2010 10:28:45

The GOMS model was an interesting concept; it seems to address the need to quantify how efficiently users can perform a task with a given interface, which is crucial to evaluating the interface design. Separating the gestures into K, P, H, M, and R seems like a good approach to simplifying the complexity of a user's interaction, but as the reading mentions, these values can vary. I think the M factor ("mentally preparing") can be especially variable depending on the user's experience and familiarity with the device. Although the "heuristics for placing mental operators" seemed to make sense, it also seemed a little inconsistent to base a model for quantification on heuristics derived more from common sense than from numerical data.


Divya Banesh - 3/3/2010 11:44:59

In this reading, Raskin talks about double-clicking and "dysclicksia". We are trained from the first time we use a computer to always double-click, even if there is no purpose in double-clicking or we don't mean to double-click. But do the benefits of double-clicking outweigh the pitfalls? I know several people who prefer to turn off double-clicking on their computers and use single clicks and special keys instead. It would be interesting to test whether a user group who have always used single clicks do things faster than those who double-click.


Linsey Hansen - 3/3/2010 11:59:00

So, my favorite part was the nice little Ten Usability Heuristics section, mostly since the tips were straight and to the point. One problem I did have with it, though, was "Match between system and the real world," since I feel like that almost implies making something like those "It's just like a real desk!" desktops, and while some are cool, I personally feel that most of them are messy. Plus, computers are at the point where their computery (though this does address computers in general) way of representing the real world is just as good, so conforming too much to the actual real world could just make the interface confusing. Perhaps I misread this and it mostly means to stay away from silly technical computer terms; it does tell you to refrain from unnecessary dialog and information and to be consistent with platform conventions.


Chris Wood - 3/3/2010 12:14:15

I never gave much thought to quantitatively assessing a user interface; I always assumed that the only way to rate usability was to get user feedback. The user may not be aware of his own productivity, and a lot of information can be gained from analyzing quantitative measures such as cursor movement speed and decision-making speed. However, I believe that users need to be given the right incentives to complete the tasks used to assess interfaces under the GOMS model; comparative statistics won't make sense if the users did not know it was a competition. Fitts' law is a completely quantitative measure of a user interface, and covers a pretty trivial topic if you ask me. Fitts' law does not take into account the amount of time taken to choose the target, only the time taken to click the target once it's chosen.


Dan Lynch - 3/3/2010 12:36:14

The article discusses quantitative analyses of interfaces, and in particular some methods for quantifying human-computer interactions. One model discussed is the GOMS model, which stands for goals, operators, methods, and selection rules. An example is given about a person named Hal, who has to convert temperatures, and a proposed interface is critiqued. Before this, I would like to point out some of the assumptions that I disagree with as far as the quantifications are concerned. The GOMS model was described in terms of Keying, Pointing, Homing, Mentally Preparing, and Responding, and the values assigned to these should in reality vary considerably from user to user. The example value they gave for mentally preparing was M = 1.35 sec, but for certain tasks mental preparation can take more than 10 times that. Also, a huge issue that is completely left out of the computation is that the human-computer interaction time plus the computer processing time is not the whole story. Consider a case where the data entry for a particular database-driven application takes 3 months of hard work. Now consider what happens if this is only required one time, and the application has a lifetime of 1,000 years. The data entry, or human component of the analysis, should then be negligible and not considered for the application. Admittedly, this is an extreme case.

Other concepts included information-theoretic efficiency, and a derivation was done for an equation that describes it. These equations are applied to Hal's interface, and then other proposed interfaces are displayed and analyzed. The temperature converter where the number is converted to both C and F is particularly interesting, because at least one value is always garbage and could potentially cause problems if Hal gets confused.


Long Chen - 3/3/2010 12:51:09

This reading, although quite in-depth, gave a great overview of quantitative analysis of user interfaces. I especially liked the detailed breakdown of the times for each elementary action. Although the exact times could be debated, the relative length of each operation seems reasonable (with mental preparation taking up the most time). Whenever the machine is holding up human operation, a status page or progress bar should be shown to alleviate any confusion. Hopefully our group application will not come to this, but it is definitely a good practice to keep in mind for the future.

Efficiency of data is definitely another thing to keep in mind when designing a well-used interface. I have always believed the user should be expected to input only as much information as necessary to generate the desired results within a reasonable amount of time, but the computer should also not be expected to do much guessing to make up for one or two missing pieces. There should be a nice balance, and the efficiency quotient is a great indicator of a well-tuned system.

Fitts' law is something we have discussed extensively in class, and I really appreciate that the author provided more details to give a nice refresher in preparation for the midterm. I am still dubious that something such as motion and activity can be captured in an equation based on a log function, but the derivation is well articulated and the numbers do make logical sense. An overall feeling I had after the reading is that there is so much room for discovery and development within user interface design. Similar to economics, where new and more complicated models are developed to explain complex interactions, the field of interface design could have the same potential.
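
(For reference, the log relationship in question is Fitts' law in its common Shannon form, T = a + b * log2(D/S + 1), where D is the distance to the target and S is its size along the axis of motion. The sketch below uses illustrative constants, roughly the a = 50 ms and b = 150 ms/bit rule-of-thumb values; they are not measured figures from the reading.)

    from math import log2

    def fitts_time_ms(distance: float, target_size: float,
                      a: float = 50.0, b: float = 150.0) -> float:
        """Fitts' law, Shannon form: T = a + b * log2(D/S + 1).
        a and b are device- and user-dependent constants (illustrative here);
        distance and target_size must be in the same units."""
        return a + b * log2(distance / target_size + 1)

    # Doubling a target's size shaves time off every single acquisition of it.
    print(round(fitts_time_ms(distance=200, target_size=10)))  # ~709 ms
    print(round(fitts_time_ms(distance=200, target_size=20)))  # ~569 ms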


Michael Cao - 3/3/2010 14:40:59

While quantitative analysis seems useful, it also seems somewhat unnecessary for designing good user interfaces. Most of the time, designers should be pretty good at estimating how fast people can perform a given movement. As a result, it should be intuitive for designers to do things such as putting related buttons close to each other so the user doesn't have to move his hand around too much. Also, letting users test and critique your user interface is probably even better than doing all this quantitative analysis and calculation, because then the people who use it tell you exactly what problems they have with it.


Wei Yeh - 3/3/2010 14:59:08

I found it interesting that there are so many ways to quantitatively measure the efficiency of a UI. However, I'm not sure I agree with using these methods to judge the quality of a UI, since there are still so many human factors that numbers can't capture. Is a fast UI necessarily a good UI? Are fewer mouse clicks necessarily more productive?


Brandon Liu - 3/3/2010 15:42:49

Raskin discusses how it's nearly impossible for user interfaces to always react instantly to human input. Therefore it's necessary to provide some kind of "feedback" to the user about a delay. One of the most common forms of this is the progress bar. One area where this applies in a subtle way is web browsers: when we switch between web pages, we expect a delay of a few tens or hundreds of milliseconds (analogous to flipping a book page). If you've ever tried to design an "endless scrolling" page, however, where the additional content is inserted via AJAX, the interface suddenly feels much more sluggish, since we don't have the expectation of a page refresh. Web browsers are thus different from many other GUI applications in that they don't necessarily have to be immediately responsive to have acceptable performance.


Calvin Lin - 3/3/2010 15:42:54

I had never seen this kind of technical approach to evaluating interfaces before, and although I see the great advantages of finding quantitative data for comparison, I also see potential downsides. In the engineering world, it's always about speed and optimization. However, designing user interfaces is a different kind of challenge.

I can imagine an engineer being so focused on making an interface as optimal for speed as possible that design decisions get made on that basis alone. The reading suggests this with the temperature conversion example, where the thermometer design is dismissed because of how slow it is. However, I would argue that there are cases where slower is OK, or actually better, as a trade-off for better design (qualitatively). If speed is an extremely important priority, designers could be led to minimize and simplify their designs for optimal use. As a user, I would not mind trading speed for better aesthetics or a more engaging/fun interaction model. The thermometer is a trivial example: it's more engaging and fun to drag a slider up and down a thermometer than to just type in some numbers. I can see this being important in education. Some people learn better visually, with diagrams and the ability to manipulate objects as if they were real. This would certainly slow down performance times in many cases, but here the benefits outweigh the negatives. Even though we may be doing work, counting in the "fun" and engagement factor matters too in HCI.


Richard Mar - 3/3/2010 15:47:53

The examples of interfaces examined with the GOMS model show that keyboard interaction tends to be faster for the temperature conversion than using a mouse to move the slider around. Whenever I try out new software, I always look for keyboard shortcuts specifically for this reason. It irks me when I see "experienced" users using the mouse to click through an interface when a keystroke or two would achieve the same goal with far more speed and precision. The problem tends to be that the keyboard shortcut is non-obvious (especially since there are no truly standardized keyboard shortcuts), and requires memorization. For instance the apostrophe key (') in Firefox triggers a quick search of just the current page's link text.


Geoffrey Wing - 3/3/2010 16:13:56

Raskin's chapter on quantification brings up a side of user interface design that we have not read much about. Qualitative analysis is important, of course, but numbers can more precisely describe and differentiate interface designs. Users may disagree about which interface looks better, but the interfaces can be compared concretely with quantitative analysis.

I really liked the section about progress bars - it's something I, like every other computer user, deal with. Progress bars are very important in keeping the user at ease, and they have definitely evolved to give users more information. In Windows 95, I remember progress bars that used segmented blocks. These weren't as accurate as they could have been, and often I could not tell if the application was frozen or not. In Windows 98 and XP, continuous progress bars became more common; these could show progress more accurately, and users could better tell whether their applications had failed. Today, in OS X, Vista and beyond, we often see progress bars with estimated time remaining, which gives users an idea of how long their task will take. In addition, progress bars now have some sort of animation, so the user can tell whether the program is still running properly even when the bar has stopped advancing.

Raskin makes a great point about the importance of quantitative testing - without it, we are only guessing at how well we are doing. As we test our lo-fi models, I will keep quantitative measurements in mind to ensure a successful interface design.


Hugh Oh - 3/3/2010 16:38:49

Hick's law describes the time it takes to choose a single action out of a multitude of choices. I feel that there are many flaws with this equation, in the sense that what it is trying to quantify varies so much that the equation itself becomes useless. For example, what really counts as a choice, and what does it mean for something to be habitual? So many things are left up to interpretation that it seems less a law than a way to explain a concept in a specific example, with little relevance otherwise.
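
(For concreteness, Hick's law is usually written T = a + b * log2(n + 1) for a choice among n equally likely alternatives, with the +1 often explained as the option of not responding at all. The sketch below uses illustrative constants, and it only applies when the choices really are unpracticed and roughly equally likely, which is exactly the part left open to interpretation.)

    from math import log2

    def hick_time_ms(n_choices: int, a: float = 50.0, b: float = 150.0) -> float:
        """Hick's law for n equally likely alternatives: T = a + b * log2(n + 1).
        Models only the decision time, and only for unpracticed, equally likely
        choices; habitual or skewed choices fall outside the model."""
        return a + b * log2(n_choices + 1)

    print(round(hick_time_ms(4)))   # ~398 ms to pick among 4 unfamiliar options
    print(round(hick_time_ms(16)))  # ~663 ms to pick among 16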


Jonathan Hirschberg - 3/3/2010 16:39:49

The timing of user events is important. User interfaces go to great lengths to make sure events follow each other closely enough to perpetuate the illusion of causality. That's why progress bars are used: to make sure that users don't think the system has failed when it really is just busy. Double-clicks must be done rapidly, within a window of time, to be counted as double-clicks, or else they'll only be counted as single clicks. Laws such as Fitts' law and Hick's law are part of a science intended to maximize the efficiency of human interactions with interfaces by taking advantage of the way human cognitive processes work. It helps make people more productive in the workplace when they can focus on the tasks they are doing and the interfaces are designed in ways that work with them rather than against them.


Richard Heng - 3/3/2010 16:48:28

Since pointing is about an order of magnitude slower than keying, I suspect many user interfaces could benefit from more keyboard shortcuts. Better yet, interfaces could have keyboard-centered designs. Modern systems already have many keyboard shortcuts, but they are mostly power-user features. Making the shortcuts more prominent and centering the interface around the keyboard could potentially result in increased efficiency. This seems to be one of the reasons the later temperature-converter examples, like the bifurcated one, are so much better.


Daniel Nguyen - 3/3/2010 16:51:27

While the methods in this reading seem both accurate and effective, they don't seem very realistic. The scale of the examples given in the reading is very small compared to real applications, and a technique that is realistic for 4-6 actions is not realistic for a program requiring dozens of actions. Also, the example involves inputs that are unambiguous: numbers must always be typed, and the choice of temperature must either be clicked or entered as a single character. But for applications with several ambiguous inputs, estimating the average time can be complicated and time-consuming; even calculating the worst-case time may be unreasonable. Nonetheless, for our specific purposes, this method will probably be useful and effective, since iPhone applications involve a limited number of actions per screen/task.


Eric Fung - 3/3/2010 16:55:24

The benefit of GOMS comes in the last sentence of the relevant reading: "Without a quantitative guide, we are only guessing at how much room there is for improvement." It struck me in the analysis of the Celsius-Fahrenheit converter that the drop from 5.4 seconds (interface with radio buttons) to 3.9, then 3.7 seconds (text-based entry) did seem like a significant decrease that was approaching a lower bound asymptotically. But there is no sense of scale without the mathematical analysis to establish the bounds.

With all this calculation, I also found it important that the reading included a caveat on trying as hard as possible to reduce the average speed of using the interface. "Parameters other than speed are of importance - error rate, user learning time, long-term user retention". I've seen an input device into which a user sticks his hands so that each finger can be moved up, down, left, right, or inward to register a different key. Though this alternative keyboard may be shown to increase typing speed significantly, its widespread adoption may be prevented by factors such as size and learning time.


Mohsen Rezaei - 3/3/2010 16:59:53

There are a lot of important aspects of user interface design. One property of a good UI is consistency with the world. More specifically, three important aspects of a good UI are consistency, stability across versions, and user-friendliness.

1. Controls are consistent with what a user is used to. This consistency helps users find controls more easily in a window or view. Stuffing menus with commands and actions often frustrates users. In general, a user should not need to dig around for a long time to find something he/she is looking for in a window.

2. Research has shown that people hate sudden changes in new versions of software. An extreme example would be the change from Microsoft Office 2000 to Microsoft Office 2007. These changes usually push users away from the application.

3. Last but not least is the user-friendliness of a UI. For example, the user should not feel forced or rushed when there is no need to be. Reasonable delays between slideshow pictures, and keeping alerts and warnings visible long enough for the user to be aware of what is happening, are examples of a user-friendly UI. As Fitts' law shows, we don't want controls and action buttons that are related to each other to be far from one another. Decreasing the amount of time spent traversing a UI makes the user happier.

The following link was used for further study of UI: http://en.wikipedia.org/wiki/Talk:User_interface#source_for_consistency



Saba Khalilnaji - 3/3/2010 17:02:24

The speed of a task is only as fast as the total speed of its individual sub-tasks; this is similar to Fourier's principle with waves. If the designer can identify all of the sub-tasks of the primary tasks of a user interface, then they can possibly reduce the times for these tasks to make the UI more efficient overall. For example, if too much time is spent switching from mouse to keyboard, the need to switch can be reduced to relieve the delays. Also, making targets bigger can reduce the time it takes the user to properly position the mouse over an input. Furthermore, I think Hick's law's prediction of the time it takes to make a decision among n possible choices is a little over-generalized. Some users go through each option when they are trying to find one that suits their needs, so each option may be weighed equally and thus fit the law. But if the user has preconceived notions about the options, or knows what he wants to do, the decision may be made faster than the law predicts.


Angela Juang - 3/3/2010 17:05:36

This study focuses on a mathematical view of interfaces, rating interfaces on how good they are based on the number of operations required to complete tasks, along with practice and other such factors. However, I don't think it's possible to determine that a particular interface is better than another based solely on these factors. While they are good indicators for the most part, I think it's completely possible that an interface requiring more operations and a little more time to complete tasks may end up being either more "intuitive" to the user or more enjoyable. People naturally take extra steps in performing everyday tasks that may not be the most efficient, but most likely they do this for a personal reason (for example, taking a detour on the way to work because it's more scenic). In such cases, optimizations like those suggested by this article may not be desirable.


Mikhail Shashkov - 3/3/2010 17:07:03

I'd like to know more about Hick's Law. It makes sense in the creation of menus and "big important" functions that are there. However, it seems like it has to break down in certain scenarios, and there, hierarchical structures will prevail.

This is particularly relevant to the iPhone, where the traditional menu is hardly ever present, and it makes little sense for it to be. Likewise, most structures are hierarchical (besides the tab-based app). Perhaps there is some creative but less-common alternative principle for situations such as the iPhone?


Nathaniel Baldwin - 3/3/2010 17:10:11

I was pleasantly surprised by this further reading from Raskin, as it didn't irritate me nearly as much as the last one; in fact, I felt he had some useful stuff to say. The ability to calculate accurate estimates of the time it will take users to accomplish certain tasks (using different methods), and to calculate "information efficiency", strikes me as potentially useful. I can see referring back to this information and (perhaps more likely) considering some of the concepts that applying these formulas gives rise to - like "single-button dialog boxes are probably a waste of the user's time" and, more importantly (something we've already discussed, but I think it bears repeating), "always give immediate feedback."


Spencer Fang - 3/3/2010 17:15:40

I agree that the most "efficient" UI in terms of interface efficiency is not necessarily the most effective UI. The reading's temperature conversion example reached its theoretical maximum efficiency when presented as a bifurcated interface, with a time of 2.15 s for the user to accomplish his task. I think we can take it one step further and present the user with a table of temperature conversions. In that case the time spent with the interface is zero, and obviously in this case, as in the previous one, the interface is not very friendly to humans.
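
(If the 2.15 s figure corresponds to a single mental preparation followed by typing the four characters of the temperature, the arithmetic works out exactly with the standard operator values; that decomposition is an assumption here, not something stated above.)

    M, K = 1.35, 0.2                 # mental preparation and keystroke times, in seconds
    print(f"{M + 4 * K:.2f} s")      # 2.15 s: one M plus four keystrokes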


It is true that progress bars should not mislead the user when the program is actually unable to predict how long an operation will take. I often encounter progress bars that alternate between moving forward slowly and pausing, only to proceed quickly to 100% after some point. This misleads the user into thinking the operation will take longer than it really will. Some "progress" bars purposely show no indication of absolute progress, and instead show something similar to the horizontal "barber pole" indicator on OS X. This too can be misleading, because when the progress bar is animated, the user will believe the program is making forward progress; in reality the program might have gotten stuck in a loop and been in the same state for 30 minutes.


Raymond Lee - 3/3/2010 17:20:20

I think these methods of measurement are well thought out and definitely relevant, even to our iPhone apps. However, I wonder if these calculations are always indicative of a "better" UI. I feel that these methods are a good complement to, but not a replacement for, testing UIs on actual humans. I would imagine it's very beneficial to have quantitative measurements to rely on rather than only gauging user response from their comments.


Brian Chin - 3/3/2010 17:23:44

I think the reading was quite interesting. I believe it is true that there is lots of qualitative data on humans using interfaces and, perhaps, a dearth of quantitative data. I question, though, the importance of such data. For one thing, humans are incredibly varied, making generalizations from quantitative data dangerous. I also question how accurate their data is. In one part of the reading, the authors say that almost all of the models they have developed using quantitative data make predictions that are accurate to within one standard deviation, and the best make predictions that are accurate to within 5%. This is impressive but may also be misleading. When using numbers like these, one generally assumes that everything is normally distributed. However, it would seem that some of these tasks, such as tapping a key on a keyboard, take such a short amount of time that a Poisson distribution would be a more accurate approximation of the data, which would make the results much less impressive. Also, people get better at things as they do them more and more. How do these models take that into account, and how can they remain as accurate when they include this extra variability?


Owen Lin - 3/3/2010 17:33:42

I really like the idea of being able to quantify how efficient your interface is using the GOMS model. The ability to go through your interface and estimate how much time it takes to achieve a certain goal is great for preliminary testing of your interface. This will allow the interface designers to see where their interface is being overly complex or layered and focus on minimizing the time it takes to do various tasks, so that they can figure out the best interface layout to go forward with. I think that this approach is only good for early testing of your interface because this model is based on calculated averages, and real human experience could vary widely and thus testing the interface on real subjects will give the most accurate results. And ideally, the designers would fine-tune their interface based on actual human testing.


Jeffrey Bair - 3/3/2010 17:39:26

With the GOMS calculation, it was interesting to see how Hal's productivity varied with different user interfaces. Though the second example, with the picture of a thermometer, was much more advanced and seemed a lot more difficult to build, it was much more difficult for the user to use, taking up almost three times as much time. We can see that just making a user interface fancier does not necessarily improve the user's experience; oftentimes less is more. With Fitts' law, it is interesting to see that typical functions in Windows are often not that efficient, and we don't think anything of it because we have gotten used to the interface - such as the fact that the toolbar is just a bit lower than the very top of the screen, so you must accurately point at it to choose any function. I feel we have to keep this in mind when we develop our iPhone applications, except that it goes the other way: hitting a button on the edge is much more difficult than hitting one near the middle, but scrolling to the edges is an easy gesture. Given the way humans tend to interact with objects, we have to keep these actions in mind so that we can make our applications that much more accessible to new users.


Andrew Finch - 3/3/2010 17:45:00

Raskin presents a lot of interesting ideas in this chapter, and I felt that I was in strong agreement with most of his arguments. One small blurb that I found particularly insightful was the section called "Double Dysclicksia," where he describes how double-clicking is a bad thing. He makes a good point, and if one simply listens to the concept of double-clicking explained in words, it sounds like a terrible idea. It requires more refined motor skills, it requires the user to remember which actions call for double-clicking, and it is error-prone, possibly being interpreted as a drag-and-drop or two independent clicks. While this is all true to some degree, in the real world double-clicking has survived to this day and is still used very commonly in all major operating systems. Most people I know don't complain about it or have much trouble with it. Is this just because we have gotten used to it, or is Raskin overlooking its benefits? It seems to me that double-clicking is actually a much better HCI technique than Raskin gives it credit for, since it provides added functionality without requiring extra buttons or extra icons on the screen. Also, it seems as though most of us can keep our hands fairly steady while holding the mouse, and we can tap our fingers fairly rapidly, which prevents the double-click from being misinterpreted in most cases. This would be an interesting topic to study in more detail.


Jordan Klink - 3/3/2010 17:49:21

I found the reading very interesting, as it portrayed an attempt to quantify something I originally considered unquantifiable. I thought the algebra-like system was very confusing and borderline worthless, though it is, of course, quite a feat to be able to quantify such abstract things with such precision. However, I really don't see myself calculating out every single sequence of events that can be taken in one of my designs. That is not to say that I will completely disregard the logic behind the system; I just don't consider obtaining specific numbers to be of any real use. What should be taken into consideration is how a sequence plays out on the large scale. Clearly I should not design a system where the most common task requires 5 or so different sub-tasks, each taking a substantial amount of time. Hence, in that regard, the reading opened my eyes to looking at what I design in terms of sequences, and it encourages me to optimize my design to accomplish the most common sequences as fast as possible.


Jeffrey Doker - 3/3/2010 17:54:31

This reading mentioned that information is at a maximum when all symbols are equally likely, and that certain keys such as j and \ are rarely used. Does this contribute to the development of coding syntax? For instance in C++ every line ends with a semicolon, a key which is otherwise rarely used. In LaTeX the backslash is used to begin any new control sequence. Square brackets are used as default indexing delimiters in many coding languages. These are all keys which are rarely used in normal contexts and which do not require use of the shift key. These seem like very deliberate choices; by allocating common commands to these previously unused keys, all symbols are engineered to be more equally likely.
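
(That observation is the fact that the entropy of the symbol distribution, i.e. the average information per symbol, is maximized by the uniform distribution. A quick numerical check, with made-up probabilities, is below.)

    from math import log2

    def entropy_bits(probabilities: list[float]) -> float:
        """Shannon entropy: average information per symbol, in bits."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]   # four symbols used equally often
    skewed  = [0.70, 0.10, 0.10, 0.10]   # same alphabet, uneven usage

    print(entropy_bits(uniform))  # 2.0 bits per symbol, the maximum for 4 symbols
    print(entropy_bits(skewed))   # ~1.36 bits per symbol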


bobbylee - 3/3/2010 17:55:07

Although the GOMS model doesn't give you an exact time for a series of gestures, it gives you an estimate of which interface will take users less time to learn and use. I really like this feature, because different users using the same interface in different environments will take different amounts of time to finish a specific task, and it is impractical to measure the time to finish a particular task in every situation. I think the GOMS model can tell the designer which interface will work better for users, rather than just handing the designer numbers that might not mean much; designers mostly want to know which interface is more convenient for users.


Peter So - 3/3/2010 17:55:30

Jef Raskin's analysis of human performance resonates with Frederick Winslow Taylor's scientific management and its temporal optimization of actions. Both are concerned with how quickly a person can realistically complete a task, and both aim to help the user find the most efficient means of accomplishing it. Taylor collected timing data with a log book and stopwatch on industrial shop floors to identify shortcuts in the way people worked and to cut out excess actions, which is similar in spirit to the power law of practice and the Keystroke-Level Model. Taylor's ideas have had a significant influence on manufacturing processes and have shaped the way people work beyond the factory floor, converging on a single way of optimally performing a given task. This has, however, produced a rather uniform effect across industry. I wonder if Raskin's concepts will also push interface design toward a single optimum design.


Andrey Lukatsky - 3/3/2010 17:57:25

According to the author, one can begin to measure an interface's efficiency by finding a lower bound on the "amount of information a user has to provide to complete [a particular] task" in question. I was hoping the methods for determining such a bound could be further explained in class. I also feel it would help if we covered the reason for choosing a lower bound rather than an upper bound.


Jungmin Yun - 3/3/2010 17:59:32

I have been complaining that this kind of analysis has been lacking in many of the other readings, but this reading was interesting because it showed a much more quantified view of user interfaces. I still have several problems with the way Raskin makes his arguments, though, and this chapter has given me many more fresh examples. Example 1 is his discussion of double-clicking. He says that double-clicking is a problem because it is difficult to keep track of exactly what is going to happen when one double-clicks on something. This seems like an arbitrary problem to accuse double-clicking of; we could just as well accuse clicking, or right-clicking, of it. It is all part of affordances, right? Why is double-clicking so different from clicking? There were other arguments that Raskin was not really clear about as well.

I also found the breakdown of actions using H, P, and K to be quite a useful idea. All the harping on timing, on the other hand, seemed somewhat over the top; I can see situations where knowing how long an action takes would be useful, but when the action varies by user, it seems a little presumptuous.


Wilson Chau - 3/3/2010 18:00:33

As the title suggests, this reading was more about quantifying humans' ability to interact with interfaces. It was a lot different from some of the other readings we did, in that it was less theoretical and approached interface design from a more scientific, methodical point of view. It went over far more formulas, like Fitts' law, Raskin's efficiency measure, and Hick's law, than the other readings did.


Yu Li - 3/3/2010 18:55:08

I think it's interesting how many laws and formulas people use to judge how a person makes a decision, depending on the possible choices and the response time (Hick's and Fitts' laws). However, I don't think anyone can determine the underlying thoughts of a user just by looking at mathematical equations. There are certain things that formulas cannot solve or figure out.


Darren Kwong - 3/3/2010 19:26:28

It is interesting that Windows places its taskbar at the bottom edge of the display by default. This seems less efficient when considering that many windowed applications have their menus and buttons at the top edge of the window. I suppose the separation does reduce the precision required to select a widget in a cluster of widgets. It also separates different types of tasks. What would a quantification of this show compared to empirical data?


