Human Information Processing (KLM, GOMS, Fitts' Law)
Lecture on Feb 28, 2008
- Quantification. The Humane Interface. Chap 4. Raskin.
- GOMS. University of Maryland Course Notes. 2002. Hochstein.
- Fitts' Law: Modeling Movement Time in HCI. University of Maryland Course Notes. 2002. Zhao.
Ilya Landa - Feb 26, 2008 10:08:45 pm
Well, at least this reading wasn’t 50 pages long, again. Still, the main idea is perfectly summarized in the third paper, “Principles” paragraph. With all the formulas and the DNA sequences from the first paper, what it is trying to convey seems just too intuitive: don’t force users to jump from a keyboard to mouse and back too often. Avoid confusing actions like double clicks (mode), make buttons readable and big, and group everything near edges – “infinitely targetable” areas since pointers can’t fly past them. The idea of calculating average time to execute a certain task (mostly a repetition of previous readings) is still intriguing. I just wish the authors would provide more real world examples of programmers actually using these formulas in designing interfaces (instead of just using common sense to accomplish the same task). Actually, more real-world examples would make any reading more interesting. Please, don’t turn this class into Calculus. I know these calculations and theories are important, but at least show us how people actually use them.
Jonathan Chow - Feb 26, 2008 11:25:41 pm
I have to admit that this reading was more interesting than the previous one, if only because it was shorter and used at least one good example. I found the comment about why Macs have the menu buttons on the top bar to be very interesting and much easier to relate to than any example in the last paper. I think it was only after talking to a friend about the material that I finally understood the worth of these papers. At first I really didn't understand all the use of numbers and why you don't just test real users. But I suppose that with any interface, you have to start with something, and that first design really does anchor where the project will go. So numbers and formulas like this do need to be taken into account at the very beginning. After using this information to make an initial interface, we can then do iterative testing to see if our selections are truly practical with real-life users.
Gary Miguel - Feb 27, 2008 05:40:15 pm
Well, more tools to quantitatively compare interface designs. I find the idea of information efficiency very interesting. I found his analysis of the example C<->F converter very useful, as it showed that even interfaces that don't have glaring problems can actually be made much more efficient. Another example of this was the Windows menu bar placement. Most of the examples we've seen so far (at the beginning of lecture, in The Design of Everyday Things, etc.) have some pretty obvious problems. I wonder how much these quantitative methods get used in the CS or other industries when designing interfaces.
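Gary mentions Raskin's notion of information efficiency: the information the task inherently requires divided by the information the user actually has to supply. A rough sketch of the idea, with made-up bit counts purely for illustration (the helper name and the specific numbers are not from the reading):

```python
import math

def information_efficiency(minimal_bits, supplied_bits):
    """Raskin-style information efficiency E: bits the task requires
    divided by bits the user actually supplies (1.0 = no wasted input)."""
    return minimal_bits / supplied_bits

# Entering a temperature like "37.5" plus a one-letter scale (C or F):
digits_bits = 4 * math.log2(11)   # four keys drawn from 0-9 and '.'
scale_bits = 1.0                  # one bit: C vs. F
minimal = digits_bits + scale_bits

# An interface that also demands a confirmation click and an OK button
# forces the user to supply extra, information-free actions.
supplied = minimal + 2.0          # two extra roughly one-bit actions
efficiency = information_efficiency(minimal, supplied)
```

Even this toy version shows why the "don't have glaring problems" interfaces can still score below 1.0: every action that adds no information drags the ratio down.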
Michelle Au - Feb 27, 2008 05:59:50 pm
What was most surprising to me was the huge difference in time between the dialog box temperature converter and the GUI temperature converter. Both have very straightforward interfaces, and at first I thought the GUI interface would be the faster one since it only involves the mouse, while the dialog box sometimes requires the user to switch from mouse to keyboard. However, I can see the advantage of the dialog box: entering input doesn't require much movement in terms of pointing the mouse at the target location, while the GUI interface requires the user to move the mouse all over the screen. It's interesting to note that while the all-keyboard and bifurcated solutions to Hal's interface produced similar or better average times than the dialog box interface, those interfaces, at least to me, seem less intuitive to use. The all-keyboard interface requires more instruction reading than I have the patience for, and the bifurcated interface has the potential to confuse users with its two outputs. I think it would be interesting to see some more reading addressing the tension between time to accomplish the task and good UI.
Khoa Phung - Feb 27, 2008 07:57:47 pm
This reading gives great insight on how to quantitatively evaluate designs. This tutorial will facilitate deciding which design is most efficient and fastest for the client to use, which is a goal most interfaces should reach. The only problem I have with this is that it is hard to determine when you have reached the best result; the Celsius/Fahrenheit conversion example demonstrated that. Also, the most efficient design may not always be the most appealing one. In addition, I like the fact that they focus on the user when it comes to waiting times and how fast the user expects a response from the system before going crazy. This will help in understanding and building interfaces accordingly.
Eric Cheung - Feb 27, 2008 07:21:40 pm
I definitely got a better sense from this chapter of why you go through all those calculations, as opposed to the last reading. Most importantly, they serve as a means of comparison between various interfaces rather than as an absolute measure of performance. I found Raskin's distaste for dialog boxes with only one possible input amusing, as we've seen many examples of those in lecture. I'd be kind of interested to know how many companies actually employ the techniques described in the chapter when they design interfaces; I would suppose that many companies are too lazy to spend the time. I'm a little curious as to how Fitts' law and Hick's law came about, as those formulas don't seem like intuitively obvious ways of measurement.
Chris Myers - Feb 27, 2008 10:57:03 pm
He is right about double-click dyslexia; I've seen it. New computer users, or those with no experience with video games, can't do it. I've seen people fiddle with the damn icon for several minutes, unable to launch an app. That is definitely the reason Microsoft switched the default interface settings to single-click icons in ME and 2000. The problem with double/single-click actions is that you must experiment, use the interface successfully, and learn what works. I grew up on double click and find it easier for manipulating files without activating them, but recently I've switched to single-click mode since it is actually faster. Now when I want to select an individual file, I must press Control. It turns out this is better, since activating an icon is more common than selecting it.
I liked how he showed the different approaches to designing the temperature conversion interface. The badass, fancy graphical one was very difficult and slow to use in spite of its more accurate metaphor.
Glen Wong - Feb 28, 2008 12:44:00 am
Hsiu-Fan Wang - Feb 28, 2008 01:55:54 am
Again, Raskin is my hero. I liked the double-click mention; I've noticed that my mother is conditioned to double-click everything... including links on websites. I think the one click = select, double click = action convention has largely faded in present interfaces. It seems limited to Windows Explorer, and I wonder why double clicks are still used at all, since they are completely non-obvious (there is never an indication that clicking a second time quickly does anything).
One disagreement I have with the use of Fitts' law to justify the Mac menu placement is that I practically never use menus. I realize I use keyboard shortcuts far more than would be considered "normal", but I wonder if burying items in menus is even the right solution to these problems: acquiring an item within a menu is the most time-consuming part of its use, not the initial menu selection. (This becomes far more problematic when watching my mother try to figure out where text encoding options are hidden. Is it in Tools? View? Edit? File? That constant mouse movement pretty quickly begins to outweigh "you can access menus easily".) While I'm on this subject, I've found that hot corners (for activating Expose and so forth) are actually a remarkably bad implementation because of their relatively low discoverability and their annoying ability to be triggered when you're trying to do something else.
(While I'm on my Mac-bashing spree, the Vista minimize/restore/close buttons are significantly better because they are actually clickable at the edges, unlike before, and I find that window management functions are accessed much more often than menu items. Since menus on Macs take up that slot, you can't throw your mouse into the corner to close a window.)
PS: I own a tablet, and the "infinite width" at screen corners totally breaks down due to the way the digitizer works. So this is my "not all pointing devices are created equal, why does no one love the tablet users" moment of sadness.
David Jacobs - Feb 28, 2008 01:49:55 am
What a difference a good author makes. Gripes about last lecture's readings aside, I have to say I buy in to the quantified human model now. What bothered me most about the model human processor is that it seemed like the authors were validating their mathematical model using similar if not the same data that was used to generate its parameters. So, of course, the model predicts the data perfectly! Raskin makes no claims about how the human mind works; instead he simply presents some simple empirical task measures and constructs a perfectly reasonable grammar for combining them. Ultimately, the model human processor and Raskin's empirical technique attempt to solve the same problem, I just think that Raskin does it in a more honest (and practical) way.
Brian Taylor - Feb 28, 2008 02:36:08 am
This was much easier to read and understand than that last reading describing a model of the human as a computer. This reading actually produced much more commonly understandable metrics by which a person could compare different designs. I thought the example was relatively nice and easy to understand, and although I thought the GUI would clearly outperform the other temperature editing models, it was interesting to see how the author never gave up, but continued to iterate and test out new design possibilities to enhance the efficiency of what already seemed to be a relatively decent interface. Overall, I really liked the simple model they had for measuring efficiency. Although it does not really incorporate the distance between two buttons in cases where efficiency is really important, I think it will be a good, useful metric for measuring the efficiency of our project interfaces.
Megan Marquardt - Feb 28, 2008 02:52:45 am
Whenever I think of what makes a good design, I always think "simplicity", and this chapter helped me explain my affinity for this goal. The analysis of efficiency in interface design gave a pretty good indication: if a system requires of its users only the information absolutely needed, efficiency is at its best. Inefficiency frustrates users, creating poor designs. I read this chapter as a way of putting numbers to something that is very intuitive to users. The most important point I got out of all the quantitative analysis discussion is that these numbers do not stand alone; they are relative, and only have meaning when compared across different user interfaces. There is a lot of variation in the variables, depending on the user and the approximation methods, but this way of determining whether a design is good is quite useful when comparing designs.
Alex Choy - Feb 28, 2008 04:44:36 am
Raskin's use of symbols in this article was a bit confusing, but easier to understand than in last time's reading. I agree with Raskin that an interface should provide some sort of feedback if delays are unavoidable. It is important that the user knows what the system is doing and that the system does not misinform users; I find that Windows often fails at this. When I am copying a file from one location to another, such as from a CD to my hard drive, Windows sometimes gives me an estimate of 30 minutes. This is annoying because I know that the operation actually takes at most 6 or 7 minutes. In addition, the mention of the Mac menu versus the Windows menu was interesting. It is faster to move the cursor to a Mac-style menu because it is at the edge of the screen, and the cursor can't move past the edge. I find this to be true of some programs in Windows: when I want to click on the "File" tab, I end up clicking on the icon in the top-left corner, opening a small menu for minimizing or closing the application.
Gerard Sunga - Feb 28, 2008 08:05:27 am
This reading seems much better than the previous one: it has more structure and makes more appropriate use of the mathematical side of HCI by presenting a couple of scenarios and quantifying the differences between them. For me, the most interesting part of the chapter was the in-depth look at the various approaches to the user interface and their effects on the user, namely his/her experience as well as the quantified differences between the approaches. One of the more interesting findings was that the slider-controlled version of setting the temperature was the least efficient. I personally hate this kind of interface (it's slow, clunky, and imprecise) and was pleasantly surprised to find information backing up my views.
Katy Tsai - Feb 28, 2008 09:28:59 am
I think it’s interesting how Raskin quantifies efficiency and the time it takes a user to use an interface. He emphasizes how important these details are in designing an interface, and I definitely agree that if it takes a skilled worker longer to complete a task with some kind of interface or machine, then it is completely useless to the individual. He brings up an important point that will be necessary in designing our mobile applications. However, I feel that by quantifying these tasks, Raskin reduces the time spent completing things to a mathematical formula that is often not representative of most people. All users learn and function at different rates, and it seems like he is arbitrarily placing a formula on one usage type rather than analyzing the range of learning curves a user must surpass to use a certain interface. When he calculated that "given n equally likely alternatives, the amount of information communicated by all of them taken together is log2(n)", I felt it wasn't reflective of the usual human thought process. Most people can scan a list of options quickly, with some options more likely than others. By reducing our actions to mere calculations, he overlooks human cognition and our ability to prioritize and think logically.
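The log2 n figure Katy quotes is Shannon's measure for equally likely choices; when some options are more likely than others (exactly the objection she raises), the same framework uses entropy, which gives a smaller number. A quick sketch of both, assuming nothing beyond the formulas themselves (the helper names are mine):

```python
import math

def information_bits(n):
    """Bits of information in a choice among n equally likely options
    (the log2 n that Raskin quotes)."""
    return math.log2(n)

def entropy_bits(probs):
    """Shannon entropy for options with unequal probabilities:
    -sum(p * log2 p). Always <= log2(n) for n options."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 8 equally likely menu items carry log2(8) = 3 bits of information.
uniform = information_bits(8)

# If one item is chosen 80% of the time, the effective information drops,
# which captures the "some options are more likely" case.
skewed = entropy_bits([0.8] + [0.2 / 7] * 7)
```

So the framework isn't actually blind to uneven choices; the uniform case is just the simplest (and worst-case) version of the formula.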
Kai Man Jim - Feb 28, 2008 10:34:47 am
I feel this article is shorter and easier to read. It is about calculating the time users spend on keystrokes to help design the UI. It brought up a question I had when I was first using a Windows 95 PC: how fast should I press for a double click so that the computer knows I am trying to activate something and not just clicking a thing twice slowly? I like the example they gave us about Hal, and I think it is pretty interesting and true that people will act differently over time when the UI behaves differently.
Jonathan Wu Liu - Feb 28, 2008 10:02:51 am
I appreciate the fact that the results of the different designs are quantifiable. I remember many interfaces that do not have an enter button or have only one field; I am left guessing what the UI will do. I enter a certain value but then wait to notice any change, if it happens at all. I liked the temperature converter that added a very simple statement that the temperature will appear as I type. Any simple notification is extremely helpful, as long as it doesn't take over the focus of the application (i.e. the "Word has finished searching the document" dialog box). In our Android applications, I wonder how we can time how long it takes to do a task. It definitely will help us create a better interface.
Bo Niu - Feb 28, 2008 10:59:25 am
In the reading, Raskin mentioned that instead of adding expert shortcuts one might as well redesign the interface. I personally strongly disagree with this point; I think that expert shortcuts are great on top of an easy-to-use basic interface. We expect to do things more effectively as we get more experience with the tasks we perform. For example, we use Ctrl+C as the shortcut for right-clicking a selected section and clicking Copy; this is a great shortcut that we use every day. I wouldn't want to right-click and select the copy option hundreds of times as I edit my document or code.
Yang Wang - Feb 28, 2008 11:03:41 am
Talking about the efficiency of interface design: why should an interface be as efficient as possible? Yes, we want to perform certain tasks on a PC as fast as possible, but that's not always the case. When you are facing a common user group, the main goal is indeed to make things easy to use and easy to read. Everyone has a different learning curve, and a more efficient, faster interface is always harder to use. As an easy example, would you use a pure GUI or key combinations for a video game? Of course common users will prefer a pure GUI for its shorter learning curve and easy control, but more advanced players will always pick the more efficient but harder method. The formulas in this reading were pretty interesting, but I disagree with using such a simple function to represent interface efficiency. When a formula obviously lacks factors, there is not much point in talking about how useful or how correct it is.
Robert Glickman - Feb 28, 2008 11:22:59 am
I began this reading pessimistic (after last time's reading), but found myself pleasantly surprised. At first I thought, "oh no, not another abstract explanation of some methodology with no concrete examples"; initially there did seem to be talk of quantitative analysis without anything concrete. However, this pessimism soon turned into optimism as equations and a formulaic analysis of a process appeared. Where I had not envisioned a complex and precise analysis technique, I was enlightened by this tried-and-true one.
Andrew Wan - Feb 28, 2008 11:57:03 am
Raskin's description of GOMS calculations is surprisingly reasonable. Dividing user interactions into movement and mental preparation makes a lot of sense; I hadn't expected any empirical data, but the approach is good. I found the temperature converter example effective: using the simple computer interface (checkboxes) or a more direct metaphor (graphical thermometers) isn't necessarily the best call. Other designs, like the last one shown, are more time- and keystroke-efficient. The discussion of "double-click dyslexia" seemed pertinent, especially given most people's tendency either to double-click all the time (i.e. in web browsers) or to mis-click and merely select the target icon.
As for the Mac menu bar, I recognize that it's far easier to select objects on the edge of a view area (assuming the mouse is bounded within the region). That said, controlling application windows is easier in Windows (i.e. minimize/close), if only because those areas are in the upper-right when the program is maximized. Given common usage (web browsing, text processing, etc.), I find this more useful than having quick(er) access to menu toolbars, which I rarely access.
Henry Su - Feb 28, 2008 12:24:20 pm
I think Raskin's article does a decent job quantifying the speed of human information processing. I particularly like the example of the temperature conversion interface. Although the GUI interface may look nicer than the dialog box interface, it takes much longer to convert a temperature. This slowness doesn't even take into account the case of the imperfect user: whereas most people would spend considerable time jiggling the scroll knob back and forth to home in, most people probably won't make many mistakes typing a number in the text box example. This is a case where the thermometer metaphor may be a bad fit. I also liked the discussion of Fitts' law. In my opinion, it is even more important on laptops (assuming the touch pad or track-point is used). My experience has been that these devices are considerably harder to maneuver than mice. It is usually more difficult to estimate the distance correctly, and correcting for overshooting or undershooting takes more rounds and thus more time. In light of this, menu options located at the edge of the display become a very big advantage.
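Henry's observation about touchpads maps directly onto Fitts' law's device-dependent constants, and his point about screen edges corresponds to a larger effective target width. A small sketch with made-up constants a and b (real values are fit empirically per pointing device; a slower touchpad shows up as a larger b):

```python
import math

# Illustrative constants only; not measured values.
A_MS, B_MS = 50.0, 150.0

def fitts_time_ms(distance, width, a=A_MS, b=B_MS):
    """Fitts' law movement time: MT = a + b * log2(distance / width + 1)."""
    return a + b * math.log2(distance / width + 1)

# A 20-pixel menu item 800 pixels away, floating mid-screen...
mid_screen = fitts_time_ms(800, 20)

# ...versus the same item at the screen edge, where the cursor can't
# overshoot, so the effective target depth is much larger (say 200 px).
edge = fitts_time_ms(800, 200)
assert edge < mid_screen
```

The edge target wins at any distance, which is the quantitative version of "menu options located at the edge of the display become a very big advantage".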
Richard Lo - Feb 28, 2008 12:28:01 pm
So, I guess I'm surprised at how quantitative and technical the past readings have been. When I came into the class, I was not expecting to be dealing with any formulas regarding HCI. Equations like Fitts' law provide concrete numbers to go along with the intuitive concepts that UI designers must think about. Still, probably because of my inexperience with UI design, I'm not completely convinced of the necessity of knowing every detail regarding split-second reactions when designing the typical interface.
Cole Lodge - Feb 28, 2008 12:23:54 pm
This has to be one of my favorite readings so far; the information was clearly laid out and made simple progressions. Simply because of the way the author presented human information processing, I have come to agree with it. The previous reading tried too hard to cast every user as a computer, which I inherently had issues with. This author instead quantified human reaction speed; at no point did he attempt to compare the user to a computer. I can definitely see myself taking this method and applying it to some of my interfaces to reduce user time; simply thinking about all the actions the user will need to take in order to navigate the interface will help.
I also found the author's side note on double clicking interesting, even though it did not seem to fit with what was being discussed. My comment on the side bar would be that we are too late. Double-clicking has become the standard for almost every interface and has become expected; most users would be confused if the feature were phased out. No interface is going to want to be the first to remove it; as long as we have mice, we will have the double-click. Although, with the expanding use of touchscreens, that could change soon.
Johnny Tran - Feb 28, 2008 12:39:00 pm
The quantification tools for measuring the efficiency of a UI certainly seem very useful, and I will definitely have to use them in the future. However, I am concerned about quantification becoming the main method of evaluating UIs; that is, without actually getting non-quantifiable feedback from users. I can envision a completely unnatural and unusable UI that nevertheless quantifies well.
The quote about numerical precision being at the heart of science exemplifies this attitude well. At some point, you lose sight of the big picture while being buried in formulas and numbers. I hope that UI designers do not fall into this trap, especially considering that UI design is one of the more abstract and less quantifiable fields.
Roseanne Wincek - Feb 28, 2008 12:59:09 pm
While I generally like Jef Raskin's readings, this one was not my favorite. I thought his explanation of Shannon entropy was convoluted and confusing. I think a big part of this stems from his desire to use expressions where equations would really have been clearer. He later explains his distaste for mathematical formalism, which is evident in his own unclear formalism. I also disagree with him that his most efficient UI was best. I think spending an extra 1.55 seconds to be clearer is fine, and users could be confused by calculations that are done on the fly. Why should a program give you an answer when you aren't finished giving it the necessary input? Also, I don't completely agree with his aversion to one-option dialog boxes. While they are annoying, sometimes they are the only available form of feedback, which I think (and he earlier agrees) is very important. I don't agree with the idea that the best UI is the one that, above all, maximizes informational entropy.
Pavel Borokhov - Feb 28, 2008 12:52:03 pm
The reading provided some interesting insights and an actual basis and reason for performing quantitative analyses of user interfaces. It is certainly a very useful metric for setting the lower bound on how "fast" your interface can be. It is also worth noting, however, as Raskin mentions, that pure speed isn't the only thing that determines a good interface. For example, I personally thought (and Raskin raises the point) that the dual-output interface for the temperature calculator (i.e. one that showed both Celsius and Fahrenheit conversions at the same time) would be confusing and prone to error. One alternative I would suggest to "type the numbers followed by C or F" would be to have enter/return as the "convert to C" key and shift+enter or ctrl+enter as the "convert to F" key. This would eliminate the need to shift your hand from the number pad to the lettered area of the keyboard for half of all inputs, and the other half would just require positioning the second hand on the modifier key without moving the dominant hand. Such a paradigm of modifier keys is used extensively in Mac OS X applications and works quite well (it does, however, present a discoverability problem: unless the user loves pressing all the different modifier keys to see if they do anything, how would they find out about the modifier key's function?).
It was interesting to see the calculation on Mac OS vs. Windows menu bars. While the main point of that example was to illustrate Fitts' law, and the Mac was faster for users due to the menu bar's presence at the top of the screen, I think there are some other factors at play. One is that the menu bar for every single application is located in the exact same place. This is different from Windows, which has menu bars attached to individual windows, with some applications also using custom widgets to draw their menu bars, resulting in different locations and sizes for each window the user is presented with; as a result, it becomes nearly impossible to become habituated to the location of the menu bar. Also worth noting is that having the menu bar in a single location reduces clutter on the screen and the number of potentially activatable regions (10 windows with 6 menu items each = 60 items competing for your attention, as opposed to a single menu bar of 6 items), and consequently the number of choices a user needs to make to accomplish some action, freeing up screen real estate for more meaningful and useful purposes.
Hannah Hu - Feb 28, 2008 01:28:44 pm
In regards to the level of readability of these readings, I have to agree with the majority that these two were very easy to digest; the previous reading was far too dense and technical.
I am, however, concerned about how relevant human reaction times are to designing user interfaces. Reaction times vary over time, usually decreasing the more a user interacts with an interface. Persistence and practice determine overall reaction times; if a certain message pops up after I delete a file, for example, I can become accustomed to it after a while.
That doesn't mean we shouldn't be concerned about human reaction times, though. For example, if an application is being installed and it shows a progress bar and state of progress, it might be beneficial to allow the user to react to the changing states of progress in order to see what is really happening during installation.
Zhihui Zhang - Feb 28, 2008 01:06:24 pm
I think an important thing to keep in mind is that these quantitative measurements are not meant to replace qualitative methods of analyzing interface design. I also know from experience that many people get caught up on the concept of a double-click; most friends and family members I know are confused about which is appropriate in which situation.
And does anyone else find it a bit weird that we refer to some of these things as "laws"?
William tseng - Feb 28, 2008 01:36:15 pm
I liked the example in the Raskin reading for the temperature conversion. The more "efficient" solution proposed by the reading was to bifurcate the results and display both the Celsius and Fahrenheit conversions at the same time. This idea struck me because in our "low fidelity" prototyping session for the airport application in class we had come across the same concept. We wanted a function in the application to remind us when our next connecting flight was; instead of limiting the user to inputting a specific time, we also chose to give the user the option of entering a flight number. I think presenting two types of data, or allowing a user to give different kinds of input for the same result, is useful in making an application more efficient. However, we must still be careful, as the additional choices/results will inevitably make the interface more cluttered and potentially more confusing (i.e. if the user thinks they have to supply information to both fields in order to get one result).
Edward Chen - Feb 28, 2008 01:30:10 pm
The most interesting part of the reading was the discussion of how to properly design the interface for the C<->F converter. Being a person who is used to using only the keyboard for long periods and not having to reach for the mouse, the first interface of directly entering the text seemed best to me. Reading on about the GUI interface for the converter, I thought it would look cool and be very visually appealing, since you can see the direct response between the two scales as you click and drag one side. On the other hand, I also thought about how annoying it would be to slowly drag to the measurement you wanted, having to let go of the bar if you need to change the scaling. I had never even thought of the final solution they came up with, the bifurcated one. You really save keystrokes in that interface, and it seems they arrived at it by applying the GOMS timing analysis, showing that this method of interface analysis really works and helps in developing a better interface.
Randy Pang - Feb 28, 2008 01:29:42 pm
I thought this article moved in the right direction compared with last lecture's reading on quantification, but I still feel it could be more relevant. This article was far better than the last one primarily because it actually applied the quantification to real examples (something like the temperature converter example was what I really wanted from the last article: a step-by-step iteration using quantification as a helpful tool to gain insight into designs). What I would have really liked would have been more novel UI insights backed with quantification, similar to the Mac vs. Windows menu bar placement (which was my favorite part of the article because I had never really thought about it that way). Because in the end, all the quantification in the world is useless if all your designs and interfaces suck.
Michael So - Feb 28, 2008 02:17:44 pm
I thought the section on the dangers of the double-click was interesting. I myself think double-clicking is an easy action to perform. I thought it was funny when the author said that people who can double-click with no problem don't suffer from its side effects and can shoot a bird with some type of gun; he makes it sound like people who can double-click are amazing super-beings or something. I do understand the problems users deal with when trying to double-click, though: a double-click in a word processor will select the whole word when you just want to select text within a word.
The whole quantification analysis seems useful to me. The analysis of the user interfaces for temperature conversion was a pretty good illustration. The time it takes a user to perform an operation is important because users want to work efficiently with a UI. I think users expect a UI to help them accomplish things faster than they would without it.
Bruno Mehech - Feb 28, 2008 02:18:47 pm
It seems to me that GOMS is a way too complicated way to measure the efficiency of a user interface, especially the more elaborate versions. KLM might be worth the effort if you are trying to decide between two different options, but going down to the level of detail of full GOMS seems unnecessary, especially since the times used in GOMS are pretty arbitrary anyway. The few cases where a model like GOMS might be useful are very simple actions, like converting back and forth between C and F as described in the reading.
Nir Ackner - Feb 28, 2008 01:39:05 pm
I am curious how Fitts' law extends to handle user learning, when people become used to rapidly moving their mice to set positions on screen. Does the model still apply when taking muscle memory into account? Also interesting is how the law would apply with mouse acceleration, where long distances might not increase in difficulty as quickly.
Yunfei Zong - Feb 28, 2008 01:59:30 pm
I find the author's comparison between the Mac and Windows menu bars to be highly inaccurate; he merely gauges the time it takes to click on the file menu, then finds in favor of Macs. The author makes two assumptions that are false: that everyone uses the mouse to click on menu bars, and that using the menu bar is the most important task. First and foremost, clicking on the menu bar with your mouse is inefficient. Hotkeys are much faster, for obvious reasons. If those aren't available, using the Alt key and arrow keys to reach the relevant menu item, then using the mouse to click on the submenu item, is much faster than clicking on the menu, then moving down and clicking on the submenu [which is what the author tests]. Also, clicking on the menu bar should be among the least-used tasks. A well-designed application should have intuitive hotkeys and buttons to quickly complete tasks without constantly clicking on the top menu bar and its submenus.
The author also fails to mention why Windows chose to place the title bar above the menu bar. The Windows designers evidently decided that relocating and resizing the window [by dragging the title bar or double-clicking on it] is a more important task for users who multitask and run multiple applications. This is something the Mac designers arguably failed to account for, since they overlay the application menu bar on the system menu bar, reducing the ability to multitask.
Daniel Markovich - Feb 28, 2008 02:13:36 pm
Finally!! Compared to most of the other articles that try to teach us how to visually analyze a UI, this article gives a mathematical approach to testing a user interface. Although I realize the distinction and importance of the qualitative methods of UI design, the quantitative methods described in the article, such as GOMS, are just as important. GOMS and Fitts' law seem very helpful in the later stages of UI testing, when you believe you have designed a solid, easy-to-use interface but would like to "fine-tune" it.
Brandon Lewis - Feb 28, 2008 02:26:43 pm
This reading is a welcome reality check for those who like to build flashy interfaces over practical, functional ones. Sometimes a clever graphical idea is just too cumbersome for the user. If the user's goals are speed and informational efficiency, having to use approximate tools like the GID is a bad idea, as is the case with the two conversion interfaces. Count your keystrokes, to put it simply. Minimize not merely the number of steps but the amount of time required. So many interfaces have needless confirmation steps and needless warning dialogs. On the other hand, using GOMS seems like a lot of tedious work =(. I wish there were a flashy interface for building GOMS models......
Max Preston - Feb 28, 2008 02:16:10 pm
I found this article to offer some pretty good insight into benchmarking human performance and designing interfaces to minimize the amount of time for a human to perform a given task. The stuff about character efficiency was pretty interesting too, although his analysis of different interfaces from a character efficiency perspective was much more interesting than his discussion of probability and how to calculate it. I think these methods would actually be useful in analyzing and designing a fast and user-friendly interface. Oh, and this article was MUCH better than that article that tried to measure human performance with arbitrarily complex nervous processing equations and figures.
Paul Mans - Feb 28, 2008 02:24:53 pm
I appreciated this article simply because it gave me a better idea of how to make good interface design a repeatable process. A lot of the techniques we are learning in this class are common sense, techniques you might use naturally in design, but being able to identify the separate tasks in the design process and repeat a similar process as a whole is, I think, really valuable. In the same line of thinking as Maxwell's definition of the aim of exact science, a designer who uses GOMS analysis reduces the problems of interface design to the determination of quantities. Describing the time to use an interface in numerical quantities gives a designer concise data that is easy to assimilate, without the interference of writing style or the quality of artistic depiction that arises when interfaces are described in writing or graphically. Overall, I think it is a pretty daunting idea to put each of our user-interface tasks through this sort of rigorous analysis, but I imagine if you break the tasks into small enough pieces it is not so bad.
Harendra Guturu - Feb 28, 2008 01:21:58 pm
I found the concept of using GOMS calculations to compute interface efficiency interesting. This idea of assigning point values to quantify an interface seems like a great way of analyzing something very subjective in an objective way. Referring to Hsui-Fan's comment, I think that burying commands in menus may not be the best way to approach an interface, but it is much better than requiring the memorization of keyboard shortcuts. Keyboard shortcuts are very efficient, but the learning curve is huge, and if the interface is not used for a few days, the lack of prompts makes it difficult to remember the commands. Menus, on the other hand, always list the commands and let users see exactly what they are able to do in a particular situation.
JessicaFitzgerald - Feb 28, 2008 02:44:32 pm
This article seemed to take a more mathematical approach to interface design, in order to determine the effectiveness of an interface. I thought this was an interesting way to go about it. Usability is hard to measure: it is hard to say whether one interface is more user-friendly than another. It makes sense, though, to determine which is better by measuring the time it takes a user to complete a task. Of two interfaces that accomplish the same task, the one that lets the user execute the task in a shorter amount of time is most likely the better interface. The formulas used to compute the times are mostly estimates, though, which could lead to an incorrect reading of which interface is better.
Also, I thought Hick's law was an interesting take on the topic. When I think of a bunch of different tasks, I group them into categories in my head. But when designing an interface, other users may group them into different categories and organize them in different ways, which can cause a lot of confusion. The menu in Microsoft Word 2007 is divided into submenus. When looking for a particular button, it can be unclear which submenu to check, so you end up looking through all of them, which is not efficient. It seems better, then, for an interface to be designed with one menu holding a bunch of items instead of several submenus, which is exactly what Hick's law suggests.
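Hick's law (T = b · log2(n + 1)) makes this flat-menu-versus-submenus comparison easy to check numerically. A minimal sketch, where the coefficient b is an assumed illustrative value, since it is an empirical per-user constant:

```python
import math

# Hick's law: decision time grows with the log of the number of choices.
def hick_time(n_choices: int, b: float = 0.2) -> float:
    """Time (s) to choose among n equally likely options; b is assumed."""
    return b * math.log2(n_choices + 1)

# One flat menu of 8 items vs. two sequential menus of 4 items each.
one_menu_of_8 = hick_time(8)        # one decision among 8
two_menus_of_4 = 2 * hick_time(4)   # two decisions among 4

print(one_menu_of_8 < two_menus_of_4)  # True: the flat menu wins
```

The exact value of b does not matter for the comparison: because the cost is logarithmic, one broad decision beats a sequence of narrower ones for any positive b.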
Adam Singer - Feb 28, 2008 02:43:58 pm
I really enjoyed this reading. So far this semester, we have mainly focused on the qualitative aspects of HCI design. We have talked about visibility and mental models, and have even delved into the workings of the human mind as a series of processors. This chapter provides some useful metrics for actually measuring the efficiency of our designs. While many of the constant values (e.g., the time it takes a user to enter a keystroke vs. the time it takes to navigate to a certain part of the interface) will be different in the context of a mobile device, we can use these metrics to gauge the overall efficiency of one design vs. another. While a lot of the results obtained with these metrics are fairly common sense - who wouldn't have guessed that the simple dual textbox/radio-button layout was far more efficient than sliding an arrow across a thermometer with multiple scales - they provide a means of testing the efficiency of more experimental designs that don't have clear advantages or disadvantages. This will help us in designing for Android, since Android employs various cutting-edge interaction methods that have not yet been 'proven', so to speak, in the real world.
Benjamin Sussman - Feb 28, 2008 02:35:16 pm
Unlike the previous article (as seems to be the consensus), Raskin is able to explain simple facts about human physical and (at least observable) mental ability in ways completely appropriate to UI. I appreciated how he used the models presented in the previous reading to analyze the complexity and usage patterns that arise from specific UIs. Things like Fitts' law bring about interesting discussions because we can talk simultaneously about efficiency and understandability when we talk about the user achieving "success" instead of just performing a task or figuring out what the UI is telling them.
188.8.131.52 - Feb 28, 2008 02:47:12 pm
Finally an article that brings some science to the very subjective study of user interfaces. I really enjoyed this article because it is the first to mention quantitative metrics for analysis. It's great that you could take what we read about in this chapter and apply the same formulas to Amazon, Google, and Yahoo, and see how they stack up against each other on some common, simple, similar tasks. The article basically creates standardized notions of different subtasks and how long they take, then builds tasks as collections of these subtasks, adds up the times of the subtasks, and estimates a time for finishing the task. The chapter really places an emphasis on "efficiency", that is, the interface that will allow you to finish your tasks as quickly as possible. There are some simplifications and oversights here. For example, users might be willing to trade off an interface that takes slightly longer for one that looks and feels a lot nicer. Which is the better interface, and which is the one more users will choose? Probably the one that feels and looks nicer, even though it may take slightly longer to accomplish a given task. This was illustrated in the example Raskin gave, where his most "efficient" interface was slightly confusing and unclear.
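That add-up-the-subtasks procedure is essentially the Keystroke-Level Model. A minimal sketch, assuming the commonly cited Card, Moran & Newell operator averages (the example operator sequence is illustrative, not taken from the chapter):

```python
# Keystroke-Level Model (KLM) sketch: a task is a string of operators,
# and its estimated time is the sum of per-operator average times.
# The values below are commonly cited averages in seconds (assumed here);
# real users vary widely.
KLM_TIMES = {
    "K": 0.2,   # press a key (skilled typist)
    "P": 1.1,   # point with the mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(ops: str) -> float:
    """Estimate task time for an operator sequence like 'MKKKK'."""
    return sum(KLM_TIMES[op] for op in ops)

# Think, then type three digits and Enter: M K K K K
print(round(klm_estimate("MKKKK"), 2))  # 2.15 seconds
```

Comparing two candidate interfaces then reduces to writing out each one's operator string and comparing the sums, which is exactly the kind of relative ranking (rather than absolute timing) the chapter advocates.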
Reid Hironaga - Feb 28, 2008 02:46:36 pm
The article on quantitative analysis of user interfaces provided an interesting view of the success of designs as a more numerical science than some of the other readings. The evidence seems purely empirical, but it is useful, as with the models we have seen previously for predicting user behavior. By going through a tangible example that we can relate to, Raskin provides a solid set of ideals and shows how they apply to the temperature conversion example. At the same time, his method of computing times for estimating relative response times seems extremely tedious. I was amazed by Hick's law, whose logarithmic form reminds me of the n log n lower bound on comparison sorting; I wouldn't have imagined that humans process information regularly enough to be modeled by any sort of equation. Fitts' law is also empirically based, and together with Hick's law it makes me feel like they are putting too much effort into formalizing a method that may be more efficient without such formalities, which attempt to force regularity and idealized models onto chaotic systems such as people.
Jeremy Syn - Feb 28, 2008 02:47:33 pm
I thought it was interesting how he gave us information about the time it takes for a human to process information. As for typing speeds, it didn't even occur to me that we type at different rates when typing different sorts of data, such as email addresses. When I hear how fast someone types in words per minute, I never realized that the rate varies depending on the situation. I also thought it was interesting that double-clicking poses a problem. For someone like me, who has been used to mice since a young age, double-clicking comes naturally. Some might not know when a double click is applicable, but there's usually a standard in the system, and one must get used to that standard to be comfortable with the clicking. I do, however, occasionally have the problem of double-clicking too slowly, and that can pose a problem as well.
Joe Cancilla - Feb 28, 2008 02:33:41 pm
Having a theoretical minimum time for an operation is a great way to determine the efficacy of your program. In the example temperature conversion program, the concept of a theoretical minimum led me to invent the bifurcated interface before it was introduced in the text. It seems like the best (only?) way to get rid of unnecessary keystrokes. I agree that the error rate would be a concern with this interface, but I love the efficiency.
Raskin's example of the Mac's universal placement of the menu bar at the top of the screen to illustrate Fitts' law makes a lot of sense. I always wondered why the Mac's interface was designed that way. It always seemed unintuitive that an application's control menu wasn't actually in the application's window, but that seems like a good enough reason.
Brian Trong Tran - Feb 28, 2008 02:37:45 pm
We always think about examining our algorithms to make the most efficient program, but we never think about the UI. The reading is a very good reminder that a well-designed UI means the user gets through the desired functions sooner. I liked how user actions were quantified in terms of time. It gives us an idea of how efficient our UI is and whether or not a user will find the interface cumbersome and a chore.
Jeffrey Wang - Feb 28, 2008 02:34:40 pm
I agree with most people that Raskin's quantification method is a good way of determining an interface's effectiveness. In terms of Android, one can measure how long it takes a person to figure out what each of the buttons does. Specifically, one could even measure whether it's better to use a touch-screen keyboard or a button-based keyboard. Moreover, navigating through mobile applications is usually harder than navigating computer applications. Quantification can help determine the best layout for a mobile application. One problem, however, is setting up the equipment to get the experiment going.
Both GOMS and Fitts' law seem to fit in at the later stages of user interface design. At the beginning, designers still need to rely on qualitative methods and decisions.
Raymond Planthold - Feb 28, 2008 02:23:40 pm
Much better than the last one. Fewer formulas, and many more concrete examples of the concepts.
I knew about Fitts' law, but I hadn't heard of Hick's law. I thought it was rather interesting, since it seems to clash with the current meme that fewer choices are better. I suppose that idea has more to do with the total number of choices than with the organization of choices, but I think there's some overlap.
In the Fitts' law discussion, however, I wish he had at least mentioned that the Mac approach to the menu bar makes it completely modal -- the very subject of the previous chapter in his book. I had a similar complaint about that chapter, that it did not point out an obvious connection to the chapter before it. It detracts from the sense that each chapter builds on the previous ones.
Timothy Edgar - Feb 28, 2008 02:20:24 pm
I found it interesting that there is so much math and data analysis in trying to define a metric for UI. It's natural for engineers to try to model things, but I really question how useful detailed models are. I think general trends work out, and I'm interested in how they justify the last example of two button clicks among 4 choices versus one click among 8 choices. Perhaps it is easier to think that way when discretizing trends, but I'm not sure such analysis is truly necessary. The C<->F converter case was interesting, but the trade-off seemed quite obvious: text is faster, while a GUI is a bit more intuitive. It reminds me of LabVIEW and MATLAB, where LabVIEW is seemingly more intuitive, yet takes a lot more time to set up.
Andry Jong - Feb 28, 2008 02:57:55 pm
I think this reading is really useful for designing a new user interface. It is really important to think about how fast a user can perform a task using an interface that we design. Since we're designing an interface for a mobile phone application, which is mostly used in quick bursts, it's even more important for our design to help users perform tasks efficiently.
Although Raskin discusses the temperature converter interface design at length, the part that struck me more as I read this chapter was his discussion of Fitts' law using the Macintosh-style menu versus the Microsoft Windows-style menu. It surprised me, since I had always thought the Mac menu was way too far away. But after reading the explanation of how cursor movement doesn't go beyond the edge of the display, it became sort of intuitive.
Zhou Li - Feb 28, 2008 03:08:13 pm
Compared to the last reading by Card, Moran & Newell, this one is easier to read and interpret; at least Raskin gives some real-life examples of how to analyze interface designs using quantitative methods. Many interesting interface design ideas are presented, and the importance of interface timing is again emphasized. Although we cannot build interfaces that complete every operation within human reaction time, successful interfaces should give feedback showing users that their inputs have been received, because long delays cause confusion and frustration. As shown through Hal's interface example, GOMS analysis (one of the best-known quantitative analysis methods) can provide a clear efficiency calculation for each alternative to a design problem, giving the designer a concrete tool to evaluate and choose the best interface design.
Scott Crawford - Feb 28, 2008 02:58:30 pm
In general this article seems a bit presumptuous. I guess that's sort of the point: it treats just the quantitative aspects of UI analysis and disregards the qualitative. This is an important piece of the picture, but again not exhaustive (users' opinion of a UI is not governed exclusively by speed of use). An important modification to bring the quantitative aspect closer to exhaustive would be to change the 'weight' of an operation to include more than just the time it takes to perform the action, including things like the rate at which the action tires the user (operations that involve thinking may gradually get harder as the user has to perform them more often). Also, simplifying operation costs to their strict averages over all users (to create the 'typical' user) may be an over-simplification. For instance, a particular user may be about average in one operation type (e.g., a proficient typist) but below average in another (e.g., thinking operations). Depending on where a user falls within these ranges, different UIs could emerge as optimal when compared under different user parameters. To evaluate between multiple user groups, it might be more effective to construct a k-dimensional convex hull over all UIs (where the action space is broken into k distinct operations; in the chapter, k = 4), and then quickly compute the optimal UI for a particular user from the k-dimensional vector of that user's operational parameters. This would be practical for deciding which UI to deploy to particular user groups, each of which can be studied to determine its own average user.
Benjamin Lau - Feb 28, 2008 02:57:28 pm
Like everyone else, I thought this reading was much better than the previous one. It was higher-level in topic, gave specific user interface examples we were familiar with (e.g. the temperature converter and keyboard stuff), and didn't get bogged down in unnecessary and possibly contentious metaphors of the human mind as a processor. I liked how Raskin explained clearly that the timing numbers used in quantification, despite their appearances, are not meant to be absolute and are really only supposed to be used for ranking interfaces -- in other words, relative comparison. I don't think this was emphasized enough in the last reading. Another thing I found useful was the snippets of information theory, which let us determine a theoretical lower bound for a 'perfect' interface. A lot of this reminded me of work in algorithms. In algorithms, we don't try to find the exact time (e.g. in milliseconds) of an algorithm on an input; that would be machine-dependent (and in HCI quantification, user-dependent). Rather, we determine its general asymptotic running time (linear or exponential, etc., in N), and then use this abstraction for ranking (merge sort beats bubble sort). Similarly, we use theory to determine lower bounds for perfect algorithms. Sorting, for instance, was proved to have an (n log n) lower bound if comparisons are used, so we knew that any faster sort could not make use of > and < directly, and if we wanted to know whether our sort was efficient, we would compare it to this lower bound. In short, I found this reading very insightful and much more relevant. I liked the concrete interface examples, e.g. why Mac pull-down menus are superior to the ones in Windows. These examples showed why something like Fitts' law might actually be useful.
Gordon Mei - Feb 28, 2008 03:11:02 pm
In response to Yun, I have to say I respectfully disagree. In the comparison of Windows' and Mac's choices for positioning the menu bar and title bar, I believe part of the reasoning behind placing the Mac menu bar consistently at the top is that it can be reached about five times faster, due to Fitts's law, the concept referenced in the article by Zhao. By placing the menu bar at the top edge of the screen, the "clickable" area is effectively infinitely high, so users can simply "throw" their cursors to the top edge to access the bar. From outside readings, I've read that the fastest areas to reach with a mouse are, first, the point where your cursor already sits, followed by the four corners of the screen, and then the four edges. Considering this, it is more sensible to place a menu bar on an edge (the closest being the top) than at a position offset from the top (attached to a window). Furthermore, people generally do not access the file menu (menu bar) of an inactive window directly; they only access the one in focus, typically bringing an inactive window back into focus before doing anything with its file menu.
As for the title bars, these bars do follow any given window, so there is no loss of usability in the multitasking aspect. I do agree that keyboard shortcuts, when mastered, are potentially faster than navigating to a UI element with the mouse, but this time gap is often dictated by how well the UI was designed.
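The edge advantage described above falls straight out of Fitts' law in its Shannon formulation, MT = a + b · log2(D/W + 1). In this sketch the constants a and b, and the effective "stopping room" the screen edge grants the target, are assumptions for illustration only:

```python
import math

# Fitts' law (Shannon formulation): movement time grows with the log of
# distance/width. a and b are device- and user-specific constants
# (assumed values here).
def fitts_time(distance: float, width: float,
               a: float = 0.1, b: float = 0.15) -> float:
    return a + b * math.log2(distance / width + 1)

# A 20 px tall menu bar 400 px away, floating inside a window...
in_window = fitts_time(400, 20)
# ...vs. the same bar at the screen edge, where the cursor cannot
# overshoot, so the effective target depth is much larger (say 200 px).
at_edge = fitts_time(400, 200)

print(at_edge < in_window)  # True: the edge target is faster to acquire
```

The edge doesn't literally make the target infinite, but since the cursor stops there, the user can move ballistically without a precise deceleration phase, which the model captures as a much larger effective width.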
Lita Cho - Feb 28, 2008 03:22:53 pm
I really agree with giving feedback to users when there will be a delay, and letting the user know if the system doesn't know when the job will be completed. Lying to and misinforming users about delays is something I have always hated in UI systems. Whenever I copied a huge file from my computer to another device, Windows would show a progress bar giving a percentage of completion and an expected time for the job to finish. The window started off at 5 minutes, and after I came back, the expected time had turned into 10 minutes. In the end, it took 20 minutes to copy everything. I would have felt a lot less frustrated if the estimated time of completion weren't there at all, rather than lying to me.
Also, I found the KLM operator codes a little confusing. However, I did enjoy the analysis of how long it would take a user to perform an action. That is something very important in UI design that a lot of software doesn't take advantage of.
Diane Ko - Feb 28, 2008 03:07:42 pm
Dialog boxes, I think, can be really effective if done well, but can be more of a nuisance than anything else if done improperly or poorly. Just yesterday, when working with Eclipse, I came across two seemingly identical dialog boxes, each with an OK button and a long error message. Naturally, I ended up not reading the message because it was too long and I was trying to code, and I just pressed the button twice because I knew the message would come up twice. It's quite possible the second message was different, but because it came up with the same amount of text, I assumed it was the same and ignored it.
Siyu Song - Feb 28, 2008 03:16:20 pm
I thought it was odd how strict and mathematical the analysis of interfaces was. What surprised me most was that they had such specific numbers, down to the millisecond, for how long it takes an average user to do specific actions. I suppose this information is easy to get for the keyboard, which has been around so long that a lot of empirical evidence exists and is easy to collect. Something I would like to know is whether those numbers (K, P, H, M) are stable quantities, or whether they change as users become more comfortable with a given input device, because the rest of the analysis rests on the accuracy and stability of those quantities.
Jiahan Jiang - Feb 28, 2008 03:18:26 pm
I enjoyed this article a lot; the discussion of the GOMS method is really interesting. Though I don't completely understand or care for all the claims, I definitely see the significance of quantification, and the discussion of things like double "dysclicksia" is very real. It was definitely better than the previous reading assignment.
184.108.40.206 - Feb 28, 2008 03:24:20 pm
This reading was definitely more down-to-earth. It was a more realistic approach to quantifying what is most often considered a qualitative assessment. I found an interesting balance between speed of use and memorization: while certain tasks can be made very quick by shortcuts, the sheer amount of work needed to memorize them damages users' desire to learn them.