Qualitative Evaluation



Lecture on Mar 4, 2008

Slides

Lecture Video: Windows Media Stream / Downloadable .zip


Readings

Jonathan Chow - Mar 02, 2008 12:01:27 am

The Lewis and Rieman article was exactly what I was looking for last week! It did a good job of explaining exactly where GOMS analysis (action analysis) fits into the scheme of things. Before, I found it hard to understand why someone would be so detailed in an examination, since I assumed that type of analysis was the only kind performed. Now it makes sense to me that you would only perform a detailed action analysis on frequently used actions. The mention of the cognitive walkthrough was very interesting. It seems very intuitive, and a step that we can all take while designing and adding in more features: just take a moment to look through the user's eyes and see what problems may arise. The heuristics mentioned are mostly common sense, but stated in a concrete and measurable way.

Gary Miguel - Mar 02, 2008 02:27:29 pm

Well, I'm looking forward much more to doing a "back of the envelope" action analysis than a "formal" one. Both of the readings were a really interesting mix of theory and application. I was amused by the fact that both included warnings about how to do usability testing successfully in a company setting where corporate politics, design budgets, and individual egos are all factors that need to be taken into account. Thankfully I don't have to deal with any of those for this class.

Chris Myers - Mar 02, 2008 08:59:37 pm

Good stuff to keep in mind. Having a printout of the ten usability heuristics available during development might be a good idea. That includes the prototyping and programming phases.

From Task-Centered User Interface Design, I particularly liked the part about telling a story about each task that makes sense. If the task doesn't make sense, then why are we doing it?

Ravi Dharawat - Mar 02, 2008 10:04:30 pm

I really enjoyed these readings. They were straight to the point, and provided thorough, detailed steps for using all three techniques (well, really two techniques and one set of guidelines). The nine heuristics discussed in the second reading seem like good things to keep in mind throughout the design process, a little list to check one's work against. Action analysis seems like a good sanity check. I'm not a big fan of over-detailed action analysis, since it seems like a lot of work for not much reward, though I think it would be wise to give it a few chances. Cognitive walkthroughs, I should think, should definitely be done quite rigorously, by more than a few people, more than a few times before any users see any sort of test product.

Gordon Mei - Mar 03, 2008 03:04:34 am

Out of all these techniques, I feel the back-of-the-envelope analysis approach is the most valuable in terms of the usefulness of the feedback for the effort and time spent studying the user's tasks. The type of hierarchical breakdown of the task seen in formal action analysis is useful when you're trying to fine-tune what's left to fix after the back-of-the-envelope method. I also imagine it would be even more useful for interfaces in applications with large-scale usage, although back-of-the-envelope analysis would retain a crucial role as developers' intuition about interfaces improves.

In terms of back-of-the-envelope action analysis, it's useful to list the action sequence and decide whether, say, "removing the lens cap" is a step that takes long enough to justify changing a camera from a lens cap to an automatically retracting lens. We could probably have measured what fraction of a second that step contributed, but what we already knew qualitatively provided sufficient information to decisively change that part of the interface.
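
A minimal sketch of such a tally, assuming the reading's rule of thumb that each step at this level of detail averages a few seconds; the step names and the three-second figure below are illustrative assumptions, not numbers from the reading:

 # Back-of-the-envelope action analysis: list a task's steps at a natural
 # level of detail and charge each a rough average cost. The 3-second
 # figure and the step lists are invented for illustration.
 SECONDS_PER_STEP = 3
 def estimate_seconds(steps):
     """Crude estimate: number of steps times an average per-step cost."""
     return len(steps) * SECONDS_PER_STEP
 with_lens_cap = ["locate camera", "remove lens cap", "power on",
                  "frame subject", "press shutter"]
 retracting_lens = ["locate camera", "power on",  # lens uncovers itself
                    "frame subject", "press shutter"]
 print(estimate_seconds(with_lens_cap))    # -> 15 (seconds, roughly)
 print(estimate_seconds(retracting_lens))  # -> 12
 # The absolute numbers are crude; the useful output is the comparison:
 # a whole step eliminated from the task.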

Also useful are the heuristics: a short, ten-commandments-style list of do's and don'ts provides simple guidelines, based on the collective experience of those before us, that prove useful in a wide range of areas. Nielsen and Molich's Nine Heuristics include pointers such as "be consistent" - for instance, I often see families of applications confusingly place 'Options' under 'Tools' in one application's menu bar, and 'Preferences' under 'Edit' in another's. You would, of course, use these alongside the other general tips, such as "natural language", to avoid the article's example of 'Create Table'/'Format Cells' versus 'Create Table'/'Edit Table' confusion, which comes from being too strictly consistent between a word processor and a spreadsheet application.

Nir Ackner - Mar 03, 2008 11:22:29 am

Having worked at a large web company doing design work, I find that the heuristics presented in these readings are far more valuable than most of the other approaches we've looked at. While getting real users to evaluate your product is the right thing to do if you want to really get an idea of how well your proposed design would work, the truth of the matter is that the vast majority of the time companies aren't willing to commit the resources necessary to get test subjects. Instead, relying on expert designers using heuristics is often the dominant approach, especially when adding to or modifying an interface. Knowing the guidelines yourself allows you to beat them to the punch on the majority of issues, saving yourself a lot of time.

Jessica Fitzgerald - Mar 03, 2008 01:05:47 pm

I liked the contrast between the two readings: one talked about usability evaluation done by users, and the other about how we can find an interface's main problems without users. User analysis is a great tool for evaluating an interface, but it is often hard to have those users on hand while developing the interface, and thus we must think of other ways to catch issues potential users may face. I was particularly interested in the heuristic analysis and its nine heuristics developed by Nielsen and Molich. I thought of these nine things as a checklist to follow when looking at your interface: if it is able to pass each of these requirements by all the evaluators, then a lot of the big problems have been dealt with. While this method seems very straightforward, it seems to only find problems with the interface that aren't too relevant to the big picture. In general, a back-of-the-envelope approach seems more fitting to find the most general problems with the interface quite easily, and also without having to use many evaluators.

Hannah Hu - Mar 03, 2008 03:29:05 pm

It is great to have resources on how to evaluate prototypes without users. Finding target users to test out working interfaces is too much of a hassle in reality. For relatively small projects like the ones we are doing for this class it is not too big a deal, but my group and I have already had trouble figuring out how to get people to test products. Imagine doing that on a large scale.

The ten heuristics (I don't know why you guys say nine; the reading specifically stated ten) are truly invaluable. They're great to have around when designing without users.

David Jacobs - Mar 03, 2008 03:45:18 pm

I'm not quite sure I understand the difference between user testing and heuristic evaluation. Nielsen implies that the difference between the two techniques lies in where the analysis takes place. During a user test, the observer witnesses a user's actions, inferring interface problems from those actions. During a heuristic evaluation, the observer plays a more active role, explaining difficult parts of the system and asking direct questions about interface problems. I feel like the low-fidelity prototyping we discussed in class last week does not fall into either of these categories. It seems like the thinking-out-loud approach sort of settles the issue of misinterpreting user actions, while the purely observational role prevents leading the user too much. Perhaps I'm missing some detail of heuristic evaluation, but I don't really see the advantage over user testing (at least the model described in last week's readings).

Ilya Landa - Mar 03, 2008 04:47:57 pm

Great readings for this week, especially the first article. The guidelines for analyzing user evaluation results will definitely come in handy in designing the project. Although, except for the really interesting and important points, I kept having a feeling of "Duh" (sorry): make your interface readable, make sure that the important buttons are large and that the menu items allow the user to easily understand their purpose, etc. Just some common sense written out. Still, the most interesting fact in the reading, for me, was that the usefulness of adding additional users drops off dramatically at a very low number. For example, four users can spot approximately 75% of the problems in an average interface; getting more users before reworking the interface is just not cost-efficient.
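
That 75% figure matches Nielsen and Landauer's published problem-discovery model, where i evaluators find a share 1 - (1 - L)^i of the problems, L being the chance that one evaluator finds any given problem (about 0.31 on average in their case studies). A small sketch of the curve; only the 0.31 average comes from their paper, the rest is arithmetic:

 # Nielsen & Landauer's problem-discovery model: i independent evaluators
 # find a share 1 - (1 - lam)**i of the usability problems, where lam is
 # the chance a single evaluator spots any given problem. Their reported
 # average was about 0.31; a rule of thumb, not a constant of nature.
 LAM = 0.31
 def share_found(i, lam=LAM):
     return 1 - (1 - lam) ** i
 for i in range(1, 8):
     print(f"{i} evaluators: {share_found(i):.0%} of problems found")
 # 1 -> 31%, 3 -> 67%, 4 -> 77% (the ~75% above), 7 -> 93%. The curve
 # flattens fast, which is why adding evaluators beyond a handful stops
 # being cost-efficient until after a redesign.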

Eric Cheung - Mar 03, 2008 05:27:39 pm

I'd think that when doing heuristic evaluation without users as described in the two readings, the evaluators would have to be careful to stick to the heuristics and not introduce any of their own biases. I thought the Lewis and Rieman reading provided a much more pragmatic view of the calculations described in the earlier readings. It does seem to make more sense to do back-of-the-envelope calculations when you have a more complicated interface. I think it's important to assign different severity ratings to problems (otherwise some more important problems can get lost in the shuffle), as Nielsen suggests, and that those ratings are pretty clearly delineated so that there's not a whole lot of overlap between them.

Zhou Li - Mar 03, 2008 06:43:41 pm

In the Heuristic Evaluation reading, Nielsen argues that the results of heuristic evaluation are immediate and the observer does not have to interpret the evaluator's actions, because the evaluator analyzes the usability problems of the interface during a heuristic evaluation session, allowing the observer to simply record the evaluator's comments about the interface. In contrast, the experimenter in a user test has to interpret the user's actions and then relate them to possible usability problems. As described in the reading, minor problems, such as a non-uniform font in the same dialogue, which might only result in a brief pause or slowdown on the user's part, are impossible for the experimenter to detect. These are some of the reasons Nielsen thinks heuristic evaluation should be performed in the early stages of iterative interface design. Although those reasons are sound, I think the availability of qualified evaluators might be a problem for designers trying to test their interface's usability, especially when three to five evaluators are needed for a meaningful heuristic evaluation. It is therefore hard to perform heuristic evaluation on class projects, due to the lack of professional evaluators.

The summary section of Chapter 4: Evaluating the Design Without Users suggests when to use each of the evaluation methods described. I think the suggested combination of the three different methods takes advantage of the strengths of each. Interface designers have to think about user stories when implementing the interface anyway, so the cognitive walkthrough forces them to think through all the details and aspects of the actions needed to accomplish the task. Heuristic evaluation allows evaluators to ask questions and get help while testing the usability of the interface, so it can be used to catch major and minor problems before the system is tested by users. Finally, back-of-the-envelope action analysis reveals the user effort a new feature demands, allowing designers to decide whether to add it or not.

Khoa Phung - Mar 03, 2008 09:13:26 pm

This reading has great information for our lo-fi prototyping and evaluation. A few items were common knowledge about how to perform heuristic evaluation, while others, such as going through the interface twice (so the evaluator gets a general impression first and can focus on the interface elements on the second pass) and splitting interfaces into category-specific heuristics (to reveal task-specific problems), are excellent practices for the evaluator. Also, the severity scale, which weighs frequency, impact, persistence, and market impact, differentiates the problems that come up and helps determine whether a product is ready or not. From my experience, programmers just rate the priority of a bug without actually having identified which severity category it belongs in. In addition, I thought it was a great point that severity ratings are better collected after the evaluation session, by listing the problems in a questionnaire, so as not to distract the evaluator during testing. Also, it was nice to see how a minimum of three evaluators was chosen to get a satisfactory mean for practical purposes, since a single evaluator is too unreliable and should not be trusted.
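
A tiny sketch of how that post-session severity tally might look, assuming Nielsen's 0-4 severity scale and his advice (echoed above) to average at least three raters; the problem names and numbers are invented for illustration:

 # Severity ratings collected by questionnaire after the session
 # (Nielsen's scale: 0 = not a problem ... 4 = usability catastrophe).
 # A single rater is unreliable, so average several. All names and
 # numbers below are hypothetical.
 from statistics import mean
 ratings = {  # problem -> one 0-4 rating per evaluator
     "inconsistent fonts in dialog": [1, 0, 1],
     "no undo after delete":         [4, 3, 4],
     "hidden zoom control":          [2, 3, 2],
 }
 for problem, rs in sorted(ratings.items(), key=lambda kv: mean(kv[1]),
                           reverse=True):
     print(f"{mean(rs):.1f}  {problem}")
 # Sorting by mean severity yields a fix-first list and keeps one noisy
 # rating from deciding whether the product ships.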

The second text had a great example of what to watch out for. In usability studies we present the person with the problem, but in real life the person might not even know he has a problem (such as the slow/fast processor mode problem). In addition, users might not notice the controls, and that only gets worse with the number of options software has these days. Last, the user has to identify the right controls; I often see people browse around icons for a few minutes because they think something should be possible, but cannot find it. This text also shows that the user has little time (unless you provide some compensation) and that users get more experienced over time and find more bugs as time progresses, so some problems may not be found initially.

Scott Crawford - Mar 03, 2008 09:45:16 pm

In general, heuristic evaluation is a derivative of comprehensive testing that pays attention to certain UI attributes to identify problems. The description of the heuristics seems to me to presuppose what the solutions to problems found using them will look like, and may in and of itself create a gray area where the presupposed answer is not the ideal solution. On Nielsen's website, I think this distinction is somewhat resolved by the 'severity' rating section, which allows evaluators to say that certain issues the heuristics identify might not be issues at all (though it doesn't help in the case where there is a problem but the obvious solution the heuristic implies is still sub-optimal). In Chapter 4, I liked the back-of-the-envelope action analysis, but think it may 'overestimate' the time for certain mental operations (once a mental operation is 'learned', it doesn't require a mental 'narrative' to perform). This probably isn't a big deal (in a back-of-the-envelope way), but if you find yourself listing a lot of mental operations in the high-level overview of the task, it might be wise to account for well-learned operations.

Harendra Guturu - Mar 03, 2008 10:34:56 pm

I found the Heuristic Evaluation link very helpful. I think it's crucial to have a way of testing an interface without always running to a user. Having to consult a user is time-consuming when it's done with volunteers, due to having to coordinate schedules, and expensive if "users" are hired for their feedback. A good way to utilize users effectively would be to employ the general heuristics of interface design to remove bad designs first. Users can then be consulted at major milestones, where they can give feedback on a large array of topics rather than on how much they like a small button's shadow effect. Also, having the heuristics allows blatantly bad interfaces to be caught, since the designers will most likely be on the lookout for such problems.

I also find the idea of cognitive walkthrough very useful. I realized I use an informal version of it when designing small projects such as the recent Android application. It is good to see that intuitive ideas such as thinking through the interface are acknowledged as legitimate ways of analyzing interfaces.

Alex Choy - Mar 03, 2008 11:19:32 pm

I felt that Heuristic Evaluation is a good way to evaluate a user interface. However, I believe that it is important to have a variety of evaluators (not all from the same background) to find usability problems. In addition, those evaluators should not hold any biases towards one method over another method. Nielsen's ten usability heuristics are a good reminder of some of the basic things to look out for when creating a user interface. In Lewis and Rieman's chapter 4, I could see how action analysis can be useful for certain cases and not others because a complex task could require over a thousand actions to perform. Therefore, it would take a considerable amount of time to analyze each task. However, if the actions performed were combined with a list of directions/steps (as in back-of-the-envelope action analysis, which gives a bigger picture view), it may reduce the amount of computation and make it feasible to do.

Michael So - Mar 03, 2008 11:37:45 pm

I like the point about evaluating the design before testing it on users. The second article points out that catching bugs in your design before testing it on the user is an act of courtesy to the user and will also help the user take the "testing out the design" more seriously. The cognitive walkthrough evaluation was something I already had in mind: trying to mentally walk through the steps of completing a task using my designed interface. Action analysis was something I didn't really think about, especially the formal version, because I never thought about the steps of completing an action in terms of the seconds each takes. I do see the use of action analysis, because I would want to know how many steps it takes to perform a task and how long it will actually take the user.

The heuristic evaluation is nice because there is a list of guidelines to watch out for when trying to find usability problems. I think it would be useful to evaluate the design before testing it on users, as pointed out in the first reading. A heuristic evaluation would remove a significant number of usability problems without the need to "waste users". Even though heuristic evaluation does find a number of usability problems, I believe it would be useful to do user testing afterwards. I think heuristic evaluation is good for refining the interface, and user testing for actually seeing how the interface performs with our target group.

Michelle Au - Mar 04, 2008 12:22:42 am

The Lewis and Rieman reading really clarified last week's readings by describing action analysis in a larger context. Comparing action analysis to cognitive walkthroughs and heuristic analysis was useful in seeing the kinds of problems that each method can expose. In addition, I found the underlying example of the Chooser application to be a good illustration of how each method works. Lewis and Rieman's description of action analysis was much more illustrative by describing each step of the analysis instead of jumping right into the mathematical calculations. I also like how they describe different situations where each method would be useful and that formal action analysis is useful in only special cases, which was the feeling that I also got from last week's readings.

Henry Su - Mar 04, 2008 01:07:28 am

Although the Heuristic Evaluation article was somewhat dry, I did find a couple of surprising statistics. For example, the cost-benefit study illustrated how good an investment obtaining just a few evaluators for a usability study can be. The list of heuristics is certainly useful, but I found the one about error prevention somewhat questionable: it suggested a confirmation dialog before the user does something potentially bad, which goes a bit against what we learned about the relative futility of the "Are you sure you want to delete..." type of dialog. The discussion about interleaving heuristic evaluation with user studies was also interesting. It seems like each evaluation method catches different types of errors, but heuristic evaluations are much easier and relatively cheaper to perform. Also, for products that are "domain-dependent", user studies would probably be more fruitful, since users are the ones who know best how to use the product in context, thereby producing the most relevant evaluation results. I did find the Technology Transfer article boring, long, and tedious to read, however; the lack of examples makes it even worse. It does serve to illustrate the point that the "usability" of user-interface evaluation methods is itself important.

The "Evaluating the Design Without Users" article was somewhat interesting. I think their point about hidden controls was well-said. Too many of our products today have very non-obvious controls. While it may make the interface look less cluttered, it's not the most usable solution in many cases. Lastly, formal action analysis seems quite painful to do, and seems unnecessary for most consumer-related products.

Brian Taylor - Mar 04, 2008 12:33:37 am

For the most part, I felt like the guidelines provided were relatively intuitive and that we had already learned most of them from lecture. Then again, I guess I DID have to learn them from the lecture. Still, I imagine it is useful to have them all laid out as good heuristics to follow. Although these are relatively intuitive, having them in front of you as you evaluate a system would be rather useful and would help ensure you check all the points mentioned. I feel these points are particularly important when we are doing our own personal walkthroughs or using trained evaluators; users will not want to think about the points we've outlined as they try to accomplish tasks in a system. Doing this analysis on our end before we meet users seems pretty useful, and I look forward to not only designing but breaking my own system.

Lita Cho - Mar 04, 2008 01:25:06 am

I really enjoyed this week's readings and found them very applicable to our project. Finding users and asking them to take the time to test our user interface is very difficult; I got a first-hand look at this, since asking strangers to test something is almost impossible without some incentive. I would like to make the most out of the little time we have with the users, especially if I were giving some sort of compensation. The heuristics that Nielsen and Molich created are general enough to apply to most UIs. I completely understand how making guidelines for creating a design is nearly impossible; thus, most design guidelines consist of what a designer should generally not do while making a product. The same goes for any design situation, whether it is graphic design or architecture.

Reid Hironaga - Mar 04, 2008 12:23:25 am

I thought the list of heuristics for user interface design was a good breakdown of information, with adequate descriptions of each heuristic to allow for common-sense completion of the concepts. I wonder why so many people avoid using the methods of designing around users; Nielsen states that he found the majority of developers have no use for many of the methods so strongly encouraged in user interface writings. The most likely perceived drawback of the iterative process is the budget boundary, which is more easily coped with in a less rigid work cycle. Lewis and Rieman present a strong argument for the use of cognitive walkthroughs, and I thought they encouraged the process well. I'll definitely walk through the usage of my app many times as I design it.

Diane Ko - Mar 04, 2008 09:37:23 am

In the list of usability heuristics for user interface design, I found that one of the most neglected is aesthetic and minimalist design. Oftentimes I see a design that is either very aesthetic or very minimalist, but not the two together. For me this is one of the most important aspects of user interface: it is what makes the design of the interface, and it can determine whether or not someone uses a product. It also seems to be one of the hardest parts of creating a user interface. It's usually pretty hard to say that something succeeds at design per se; it's easy to tell when something is wrong or has failed in the design, but determining that the design as a whole has nothing wrong with it is very difficult.

Jonathan Wu Liu - Mar 04, 2008 10:22:51 am

I really liked the list of heuristics to follow. I think it really encompasses the elements of a successful user interface. One heuristic that is hard to design for is "visibility of system status". Determining which statuses should be shown to the user and which should not is often a judgment call, and the profuseness of status messages really determines the appropriate placement of system statuses within an application. Some statuses need to be shown at all times, while others should be plain dialog boxes. Depending on the application, the method designers take to display statuses is crucial to the user interface; finding that balance will be the challenge to solve.

Katy Tsai - Mar 04, 2008 11:13:17 am

I think the two readings really put into perspective what is necessary to fully test the usability of our products. The reading about heuristic evaluation was very informative about what to look for when testing usability. I thought the list of 10 recommended heuristics was especially helpful in pointing out the key categories to check when conducting testing, and I think it will help significantly with the lo-fi testing we are doing right now. In the reading, there's also a constant reference to complementing heuristic evaluation with usability testing. While I see that they are two different processes, I agree with David's comment that the differences between the two are still a bit vague.

As for the second reading, "Evaluating the Design Without Users", I think it pointed out a lot of very basic points that were seemingly obvious but could easily be missed when evaluating an interface or product on our own. Oftentimes, several issues can be discovered on our own, which prevents wasting the time of both evaluators and developers and lets us focus on the larger issues at hand, such as the actual usability of the product and the flow of usage. Things like hidden controls and spelling errors should be the least of our worries. However, I think the article shows how important it is to get third-party opinions: you really can't evaluate a design without actually trying to use the interface and going through the steps, and what you think is intuitive isn't necessarily intuitive for everyone else.

Daniel Gallagher - Mar 04, 2008 11:45:56 am

What I found really interesting while reading Nielsen cropped up in his first writing on the page, How to Conduct a Heuristic Evaluation. Near the end he talks about building a cost-benefit model for usability testing, and I was really curious how the "benefit" was calculated. It's described in more detail in the paper cited (also written by Nielsen), but honestly it seemed like a lot of guesswork. I can see how difficult it would be to get solid information on whether more money was made because of using one interface over another, and it seems that would make it hard for a usability engineer to justify expenses without papers like Nielsen's to draw upon. All the stuff about company politics in both articles was pretty funny and somewhat sad... they paint a picture of engineers needing to have an arsenal of arguments ready for their boss at all times. :p
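
For the curious, the benefit side of such a model is indeed mostly projection: one plausible shape (not Nielsen's exact figures) is to value the time the fixes save across the whole user population and compare it to what the evaluation cost. Every number below is invented to illustrate that shape:

 # A toy usability cost-benefit estimate: value the time saved across
 # all users against the cost of the evaluation. Every input here is a
 # guess for illustration; that guesswork is exactly what makes the
 # "benefit" number feel soft.
 evaluators, hours_per_eval, loaded_rate = 4, 6, 80.0     # $/hour
 cost = evaluators * hours_per_eval * loaded_rate
 users, seconds_saved_per_day, work_days = 2000, 30, 230
 user_hourly_value = 40.0                                 # $/hour
 benefit = (users * (seconds_saved_per_day / 3600)
            * work_days * user_hourly_value)
 print(f"cost ~${cost:,.0f}, benefit ~${benefit:,.0f}, "
       f"ratio ~{benefit / cost:.0f}:1")
 # cost ~$1,920, benefit ~$153,333, ratio ~80:1 -- and halving or
 # doubling any one guessed input moves the ratio by the same factor,
 # hence the guesswork complaint.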

Timothy Edgar - Mar 04, 2008 11:25:38 am

I found the 15-user breakpoint for diminishing returns in usability studies quite interesting, and I was a bit curious why the usability study I shadowed last summer used only 10 people. It's also interesting that having only three evaluators for our testing still provides reasonably confident results, assuming we make a fair selection of evaluators. They did try to create a cross-section of the target user group, and the article does mention using surveys to statistically confirm a user group's characteristics. Overall, the articles presented some different approaches to usability testing that complement each other, such as the non-task-driven and task-driven styles. I much prefer the more informal formats that don't require fractions of seconds, as I believe interface decisions rarely get down to that level of detail in early prototyping.

Hsiu-Fan Wang - Mar 04, 2008 11:59:03 am

I think the heuristics that Nielsen recommends serve as a very good baseline; they make intuitive sense and are relatively memorable. (And people who do not memorize how GOMS works can actually comment meaningfully on heuristic evaluation!)

One thing that I've noticed is that you can usually get amazingly good results from just showing interfaces to one or two people, and as Nielsen mentions, there is a dropoff in value as the number of reviewers increases. In my experience it's best to run things by a few people multiple times, instead of one big evaluation with many people, so I suppose his reality matches my reality. :)

The Rieman reading mentions the importance of naming controls correctly... and I think that this is one of the most difficult parts of designing an interface. It is amazingly hard to make features discoverable! Copywriting is something that a friend and I struggled with when creating a web application: we have a set of stock "templates" for form layout as well as a number of different "styles" (colors, images, etc.), and had great difficulty getting users to understand which was which. (The issue is still unresolved.)

Glen Wong - Mar 04, 2008 11:01:45 am

I thought the first reading was a bit long-winded and dense; both readings covered similar material on performing heuristic evaluations, and I'm not sure I was able to fully appreciate the in-depth discussion of the first article. What I thought was very interesting about the first reading was the lack of concrete instruction on how to perform a heuristic evaluation. There is discussion of what is gained from heuristic evaluation, how to combine results, and even a list of guidelines, but there is no practical example of how one is actually performed. Still, the statistics presented on the effectiveness of heuristic evaluation with varying numbers of evaluators were interesting.

I felt the second reading was a lot more informative. I liked how each evaluation method presented had an accompanying example, and how practical advice was given on how and when to apply the presented techniques. Also, after all the readings presenting formulas for detailed action analysis, it is refreshing to see an author mention that this approach isn't always feasible.

Gerard Sunga - Mar 04, 2008 11:51:27 am

The first reading is helpful for testing program usability, especially the description of the heuristic evaluation process and the ten principles of general design. I found the ten principles to be one of the most important sections in the reading, providing clear guidelines for a good interface and a good program in general. Lots of programs I use don't follow these conventions (for me, the newest iterations of Microsoft Word, whose deviation from the original design results in what I consider a non-intuitive interface), making them difficult to use.

The Lewis and Rieman article provides some great guidelines on how to test the user interface without any users. As the first poster commented, it was nice to see the hodgepodge of numbers found in the previous week's readings put to a logical use.

Robert Glickman - Mar 04, 2008 12:07:12 pm

These two readings provided some great additional insight into the testing and improvement of usability. The heuristic evaluation reading was particularly specific in describing the structure of such an experiment (number of testers vs. bugs found, how to set up the tests, etc.). It really covers all that's needed in such a test, including the heuristics to look for when designing an interface and how to evaluate the data procured from the experiments. This flows nicely into the next reading, which describes the process of interface design without user testing. It goes into many issues, experiments, and questions which should be asked to evaluate an interface in this way, and provides many anecdotes for better understanding common issues faced in tests like these. I especially liked the cognitive walkthrough description and the subsequent section on common mistakes in such a walkthrough. Also, the questions to ask when evaluating a walkthrough, and the examples of why each is useful, were particularly enlightening.

Megan Marquardt - Mar 04, 2008 12:38:01 pm

The most interesting part of this reading was definitely the paper on Technology Transfer of Usability Inspection Methods. As a former chemistry major, the application of theory to experimental conditions has always been very interesting to me. We've been reading a lot of theory about usability inspection methods, and the evidence in those articles is mostly case studies, which I find much less convincing. The survey statistics were very interesting: through this one course at a conference, the surveyed participants actually applied what they learned to their real-world projects. There are a lot of methods for judging the usability of a product, but I feel like the most important ones (such as low-fidelity prototyping) don't get used as often as they should, judging from the obviously flawed user interfaces I come into contact with on a daily basis. Yet this paper suggests that the heuristic evaluation method has penetrated industry usability practice. I think the reason for this, as discussed in the paper, is that it is very cheap to do and yields a lot of information about usability through a sampling of a bigger audience. I also think it correlates with how easy and intuitive the method itself is: heuristic evaluation seems like a very obvious way to test how good an interface is, so it makes sense that it would itself be very usable as a way of testing interfaces.

Kai Man Jim - Mar 04, 2008 12:46:09 pm

Wonderful reading assignment, and it is good to learn all the heuristics. However, I personally think heuristics are easy to understand but hard to use. In CS188 we use a lot of algorithms based on heuristics; we all know we need the algorithm to produce the best solution we want, but it is very hard to decide what the best heuristic is. So it would be good if we could spend more time on this topic during lecture.

Bo Niu - Mar 04, 2008 12:50:35 pm

I personally found the walkthrough-without-users article much more useful. Doing heuristic evaluation with different users is often time-consuming, and it's hard to quantify users' feedback unless there's a really obvious problem with the design. But then again, such an obvious problem would be caught easily by walking through the use cases using common sense as a developer anyway. So I think the best way to "debug" the interface is to let the developers act as the user; this requires the developers to have some common sense outside the computer science field, which hopefully we all have.

Eric Chung - Mar 04, 2008 01:01:40 pm

The first set of readings seems to help with getting information from users not in the target group (as some of the other readings seem to suggest, since they will somehow feel threatened). Heuristic evaluation also seems more useful for hi-fi or later mockups than for paper mockups, since with paper you tend to ignore little things, something that heuristic evaluation excels at catching. That's not to say heuristic evaluation is worthless with paper; it's just probably a lot more useful in hi-fi. However, after doing the reading, I still don't quite understand the difference between "user testing" and "heuristic evaluation"; I'm going to rely on lecture for that one. The last article in the set seems to be a roundabout way of saying, "Usability is important and companies know it."

It is interesting to note that the mantra so far this semester has been all about the user: getting information from the user and putting your own biases aside. However, due to social and (somewhat) practical pressures, doing your own evaluations is also important. From this reading, I also now understand what heuristic evaluation is, which the first reading didn't really explain at a fundamental level. Just from the cognitive walkthrough, it's clear that one can challenge assumptions without users if the designers just think a little harder and in small segments (a few actions at a time). Perhaps this kind of thing should be done before the first interviews with users so you go in with something (although that might be undesirable, since you want the interviews to be as unbiased as possible, and cognitive walkthroughs work best with people who already know the users). It seems the walkthrough is useful for finding the things we've been taught to find from users; since it is another method, using both will be more thorough. Action analysis seems to be much like GOMS; looking at the example, that seems to be the case, except that the method is important (how the user accomplishes something). Finally, heuristic analysis is done not with users but with analyzers, which helps me understand it much better.

Benjamin Lau - Mar 04, 2008 12:41:08 pm

I don't know, the action analysis stuff seems awfully similar to KLM from the previous readings, except not as well articulated. The cognitive walkthrough is nice, but I have a feeling most people do this implicitly already; still, it's nice to formalize the notion so that everyone can gain some awareness of whether or not they're doing it. I found the 10 principles of heuristic evaluation very helpful, and I'll be sure to keep them on hand for my prototyping. For the second reading, the part I found most interesting was the section on the "usability of usability methods": for practical reasons, certain strategies aren't viable in the real world. Heuristic evaluation appears not to be one of them, however, and had a mean usefulness rating on the questionnaire that was surprisingly comparable to regular user testing.

Cole Lodge - Mar 04, 2008 12:56:44 pm

At the start of the Lewis and Rieman reading, I was bored and the idea sounded straightforward: of course you would want to step through the interface looking for downfalls. But as I read on, I found it far more interesting. I enjoyed the step-by-step instructions for walking through an interface. The article brought up several good points I had to think about, such as the idea of bringing your development group together to do the walkthrough; this allows the group to discuss issues, giving it a higher chance of finding a solution to the problems found. Keeping the group on the same power level also sounded like a good idea: making sure no one's opinion is stifled by a power play.

As for the Nielsen article, I found the most informative part to be the ten usability heuristics. Although each of these is straightforward and well known, it was nice to see them all in one place. This page will definitely be bookmarked by me and referred to as a reminder of good design techniques.

Maxwell Pretzlav - Mar 04, 2008 10:18:10 am

While there was considerable overlap between the Lewis & Rieman reading and the Nielsen readings (L&R essentially summarize one of Rieman's main subjects), I liked how L&R presented several techniques for analyzing an interface and demonstrated them on a common subject. I really like the idea of heuristic evaluation as a technique (as it appears industry does as well), but I see the merits of cognitive walkthrough too. As these are both fairly easy and low-cost approaches to evaluating an interface, I can see how they could be very useful during the prototyping stage for getting quick and helpful feedback on an in-progress interface.

Jeffrey Wang - Mar 04, 2008 01:14:40 pm

I found both articles very informative; the content should be especially useful when we do our lo-fi prototype assignment. The severity ratings for usability problems (frequency, impact, persistence, market impact) really make things practical: frequency, impact, and persistence will help us decide whether there should be design changes. I also realized that market impact can be analyzed for business purposes, in addition to design. It is possible to create an excellent design that no one wants.

One thing I realized while reading the second article is that trying to follow a user in a walkthrough is harder than I thought. Every user comes in with different habits, and as the storyteller of the program, we really have to be open-minded.

Joe Cancilla - Mar 04, 2008 01:30:02 pm

Heuristic evaluation seems like it could be addictive; I find myself wanting to conduct a heuristic analysis on every application I use in daily life. This seems like a much more enjoyable way of doing analysis than the GOMS methodology. I did not quite understand Nielsen's emphasis on having a certain number of evaluators; wouldn't more always be better? I guess he is thinking in terms of the cost of hiring professional heuristic evaluators, whereas I am thinking in terms of amateur users.

Andrew Wan - Mar 04, 2008 01:31:15 pm

While somewhat redundant, the readings did a good job of outlining various evaluation approaches. Knowing how to use heuristic and back-of-the-envelope analysis (I'd probably avoid full-on action analysis, for the most part) seems useful, so it's good to learn the actual methodologies. I agree with Nielsen that these approaches are most effective in the earlier stages of prototyping, before more expensive user testing. From what I've gathered, it makes sense to work out simple "bugs" and obvious usability issues (with more experienced evaluators, not users) before worrying about specific problems (which can be worked out after watching users later).

Max Preston - Mar 04, 2008 01:21:47 pm

The first article seemed to be pretty straightforward, though I suppose it was good how they quantitatively emphasized the importance of testing user interfaces with multiple users so as to find all the usability problems.

The second article also had a few interesting points, but from these two articles it seems like designing and testing a good interface just boils down to thinking about things from the perspective of an uninformed user rather than from your own. It seems fairly obvious to me, but I guess some of the more subtle details are important, and it would definitely be useful to people who might not have considered such things before.

Also, I thought that the Nine Heuristics seemed pretty useful and intuitive. Sometimes it's good to have a list like that to refer to if you get into a situation where you're not exactly sure what to do.

Richard Lo - Mar 04, 2008 01:44:02 pm

The articles were helpful, albeit a little long. However, they did clear up some questions I had regarding the evaluation process, since the interviews we conducted previously did not help very much on a detailed level, but rather on an overall level. Knowing that there should be two types of interviews, one more general where we let the user struggle, and one more in-depth where an experienced user can critique and analyze our UI in front of us, seems much more comprehensive to me. Also, the 10 points seemed extremely practical; though somewhat intuitive for UI designers, it's probably really easy to forget any of them while implementing.

Raymond Planthold - Mar 04, 2008 01:44:42 pm

I'm trying to see the difference between the walkthrough and the action analysis other than the level of detail. Both of them seem focused on recording the correct actions (rather than spotting possible missteps), though the example action analysis does include remarks like "there are too many actions." Still, I don't see where "Option X can be easily confused with Option Y" fits in.

Randy Pang - Mar 04, 2008 01:59:37 pm

What is this? Real-world examples? Questions that make you think? Impossible! Anyways, I found the ideas in these readings fairly useful, although I definitely thought Lewis and Rieman were overly verbose in their descriptions (in particular, I thought they failed the "Minimize user memory load" heuristic: their ideas within each concept weren't separated clearly enough, and a lot of fluff and tangents were tossed in that added no value whatsoever). I was also really pleased that after a long stint on quantitative analysis (which I didn't particularly enjoy), Lewis and Rieman basically say, "Yeah... sometimes it does provide a lot of value for the resources invested, but most of the time it doesn't." However, I did find the quantitative analysis in the Nielsen articles compelling, as the graphs were easy to understand, the points were brief but full of information, and the costs were fairly clear (and, most importantly, significant!).

Tam La - Mar 04, 2008 02:22:10 pm

Heuristic evaluation is a critical component of user interface design, and the "more is better" cliché does apply: the more evaluators examining the design, the more problems can be found. It is interesting to note that there is no one person who is the best heuristic evaluator. One person may find only one problem, but that problem may be hidden so well that no one else found it; another person may find the most problems in a design, but those problems may all be easy ones found by many other evaluators. There is no set definition of a good evaluator, so every single one of them is important.

I found the cognitive walkthrough a little bit redundant. I think every designer is constantly walking through their design while making it in the first place; it is rare that a designer just acts like a robot while designing an interface, since human beings all learn and modify while they are doing things. Also, as mentioned in previous readings, designers sometimes overlook the problems in their own design because they have spent so much time with it. That is the reason designers need evaluators who were not part of the design process to evaluate the design.

Yang Wang - Mar 04, 2008 02:16:35 pm

Both readings are much better than the past few weeks'. Although the readings are a bit old, the ideas are still fresh and useful. It should be great to keep those ten recommended heuristics from Nielsen in mind; I think if any interface can fit all ten heuristics, it will be a successful one. The Lewis and Rieman reading had some helpful examples and thoughtful questions, but I think it overlapped some ideas: some methods seem not much different from each other and rather added to the length of the paper. However, the back-of-the-envelope action analysis was rather interesting; the example was great, and the suggestions in that section are very helpful.

Johnny Tran - Mar 04, 2008 02:22:57 pm

The Ten Usability Heuristics mentioned in the first reading were very thought-provoking. They all seem reasonable, and are certainly valuable heuristics to look for in a UI. It is almost frightening how frequently they are violated, such as consistency or minimalist design. The point about help and documentation rings true all too often; I cannot even recall how many times I've used a program (or a programming library) that lacked documentation for even basic tasks.

That does bring up an interesting point about systems that can be used without documentation. I wish the reading would go into detail about how to make interfaces intuitive: not just efficient or easy to use, but also easy to pick up on the first try, without having to read the manual. Some software is basic enough that this is easy to design for, but I'd like to see how to make even complicated systems intuitive to use.

Zhihui Zhang - Mar 04, 2008 02:09:38 pm

The heuristics that have been outlined seem like a really good idea. You want to fix as many design issues as you can before you present your interface to the user, if not for the sake of not wasting the user's time, then for the sake of maximizing the time spent with the user. By getting rid of these "minor" issues to begin with, the user will be able to explore more of your product, and you can get a better sense of what the "major" issues with your product are.

The heuristics I think are particularly important are "User control and freedom" and "Consistency and standards". Many times I've encountered installation dialogs with "Next" and "Back" buttons, only to discover that as soon as I press "Next", the installation starts with no option to go back other than to kill the install.

Bruno Mehech - Mar 04, 2008 02:30:21 pm

It is interesting to compare the two readings, since one focuses on testing with users and the other on testing without users. This is especially interesting since both readings talk about heuristic evaluation: though both say that several independent testers are necessary for a good heuristic evaluation result, one reading has those testers be users while the other has them be developers. Also interesting are the graphs in the Nielsen article in the heuristic evaluation reading: even though heuristic evaluation is supposedly more affordable and easier to do, user testing is still done more often and is thought to be more useful. So it seems that either the industry hadn't fully accepted the heuristic evaluation model, or it's not viewed as being as wonderful as Nielsen describes it.

William Tseng - Mar 04, 2008 02:43:02 pm

I liked the "back of the envelope" example from today's reading, where you just make a higher-level list of the actions and assume each action takes several seconds on average. I really do not think measuring tenths of a second, as suggested in previous readings, is applicable to the scope of the tasks we are designing for our class project. I also found the list of "Nielsen and Molich's Nine Heuristics" particularly insightful. It is interesting, however, that this set of guidelines was provided after all the descriptions of the "mental walkthrough"; I would think that walking through an interface, either by yourself or with an actual user, should come after you have already followed the nine heuristic guidelines.

Yunfei Zong - Mar 04, 2008 02:55:29 pm

I still don't understand why they have to make such a big deal about this. Heuristic evaluation can be summed up as "a bunch of smart people come in and review your system with goals in mind". Why does the author need to explain it like it's some sort of innovative new tool that nobody has ever thought of? I really don't think there needs to be a 2000-word paper telling us how much time and money it saves. I mean, if the title is going to be "Evaluating the Design Without Users", who else besides experts would you have to evaluate your design? Hobos?

Also, the technology transfer article was a totally pointless reading. The graphs displayed the amazing fact that the more respondents used an evaluation type, the more they liked it! What a spectacular piece of knowledge that nobody would have known about without 20 pages of text to explain it.

Edward Chen - Mar 04, 2008 02:52:03 pm

These readings seemed incredibly useful, as they used real-life examples to illustrate exactly how to use the different types of design testing. Of course, the reading would have been a lot more useful a bit earlier, before my group did our design testing, as we would have been able to apply a lot of the material to testing the design more by ourselves. The one that stood out the most to me, however, was heuristic testing, as it gives clear points for one to focus on. If you have clear points to look for in the design and can search through the design interface for each of the points individually, I think you'd end up with a much better design than the other methods would have offered, since the heuristics really act like a checklist for your design to pass. There may still be some small usability issues, but all the obvious and serious ones would be eliminated via heuristic design testing, and the smaller ones can be evaluated via user testing.

Brian Trong Tran - Mar 04, 2008 02:58:21 pm

I find that I am not such a big fan of heuristic evaluation. I do think it is important for the designer to think about whether or not the user can work with an interface, but I don't think heuristic evaluation is the way to go. I think heuristic evaluation restricts creativity in development of a user interface. Although many people use it for the sake of evaluating a UI, heuristic evaluation tells the designer what they should and should not do. When designers stick by these standards, they don't come up with new user interfaces that could potentially be better than those abiding by the heuristics. While I have no idea what possible user interfaces could come from not abiding by the heuristics, I am certain that designers with heuristics in mind will develop user interfaces that have already been seen before rather than developing new ideas for producing better interfaces.

Brandon Lewis - Mar 04, 2008 03:05:18 pm

I think the main point of this reading is that Usability Methods need to be usable themselves. Really, any behavior you view as beneficial is better encouraged through an efficient system rather than preached with prescriptive advice. With this outlook, a lot of seemingly mundane tasks become exercises in usability. A couple examples:

  • We use tupperware containers to store food in my cooperative. This task used to be one of the most hated jobs, because you could never find the lid you were looking for. This led to all kinds of problems, especially food left uncovered, which is a health code violation. People would scream about this, but it did no good, because the task of putting away food was too difficult. Eventually, I reorganized the tupperware: I pared down our collection to just a few types of bins with interchangeable lids, and we have not had these problems since. The system could be further improved by making the task of properly storing the tupperware itself easier.
  • A recycling manager at a different cooperative found that by placing the cardboard box recycling bin outside, he could get recyclers to flatten the boxes on their way to the bin.

It's one thing to preach about the benefits of a particular approach. Anyone can do that. It takes thoughtfulness and creativity to develop a system that is actually usable. This goes for the task of improving usability itself. We want our software designers to create usable interfaces, so usability experts should develop approaches to improving usability that fit into current development practices. This runs in stark contrast to Ideo's completely vertical design philosophy, which could be likened more to a lifestyle or culture. Their methods don't necessarily translate to traditional corporate environments.

Mike Ross - Mar 04, 2008 03:13:10 pm

The formal analysis reminds me of instruction timing used in compiler design; I'm actually surprised to see timings for human activities come up again in practice. The most widespread use of this I can think of would be avoiding RSI (the section mentioned a few others, which veered toward heavily repeated tasks found in specific jobs). I'd be very curious to see an action analysis of, say, emacs vs. vi vs. Eclipse, both keystroke-level and back-of-the-envelope. I'm also intrigued to hear that "time savers" can actually eat up users' time by making them analyze which to use. I've noticed that among my engineer friends and myself, we'll discuss different ways to approach a problem for a while, but the first person to finish is generally the one who just picks a way and goes for it. The lo-fi interface reading hammered on this point too, essentially saying to set a deadline to avoid wasting cycles thinking before diving in. I know it's not the point of the article, but it's a point I keep coming back to.

Daniel Markovich - Mar 04, 2008 03:05:03 pm

Although I agree with Yunfei that the methodology of heuristic evaluation is more common sense than innovation, I still feel the articles gave some insight into how to get helpful results from the process. I really believe heuristic evaluation is a much more efficient and practical form of UI evaluation than GOMS and Fitts's law. Although the methods contrast vastly, it seems unreasonable to critique a UI based on the expected time it takes a user to perform actions, mainly because you cannot reliably predict such a thing. Heuristic evaluation, on the contrary, does not try to do anything of this sort, and derives its information from comparison against historically sound UI design principles.

Jiahan Jiang - Mar 04, 2008 03:05:52 pm

I found the back-of-the-envelope action analysis very powerful; personally, I have a feeling it might be more effective than the formal action analysis, since it addresses key issues and forces the designer to think critically about the list of actions. I thought the heuristic analysis was interesting, but I wonder how exactly it can be user-free. The heuristic guidelines seem very personal and experience-focused; how do you judge criteria such as "simple" and "clearly marked" without users? Wouldn't that just be the designer's subjective opinion? Aside from that, these readings are quite enlightening.
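
Nielsen's partial answer to that subjectivity is aggregation: several evaluators inspect the interface independently, rate each problem on his 0-4 severity scale, and the ratings are averaged, so no single opinion dominates. A tiny sketch of that bookkeeping (the problems and ratings below are invented):

    from statistics import mean

    # Hypothetical severity ratings (Nielsen's 0-4 scale) from three
    # independent evaluators; the problems are made up for illustration.
    ratings = {
        "No undo after bulk delete": [4, 3, 4],
        "'OK' label on a destructive dialog": [3, 2, 3],
        "Inconsistent menu capitalization": [1, 1, 0],
    }

    # Average across evaluators and triage the worst problems first.
    for problem, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
        print(f"{mean(scores):.1f}  {problem}")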

Jeremy Syn - Mar 04, 2008 03:07:31 pm

I thought that the back-of-the-envelope action analysis technique was pretty interesting. I know that most developers want to make a really great product and want to add as many features as they possibly can, but having more features isn't always helpful. The product gets so much more complex and harder to use that users will often become confused. It is better to keep the interface simple, and the developer must always make sure that users can easily use the product.

Jason Wu - Mar 04, 2008 03:14:58 pm

Although in class we have been doing interviews and mock interfaces to test with potential users, in many situations, such as most web development, that time and effort is a luxury. For a website, you really aren't going to hunt down the target demographic, wizard-of-oz them, and then build the interface around that. The common sense and guidelines in the article will get you a pretty decent interface at a time-efficient cost.

Siyu Song - Mar 04, 2008 03:21:32 pm

I think the most important thing to note is that designing with these heuristics can only produce educated guesses about what the user needs. I feel it is difficult and potentially inaccurate to try to model usage of an interface this way. It always feels more natural to me to use empirical data about a specific interface. But in most cases getting accurate empirical data about test interfaces would be infeasible, so these models have to be used because of resource constraints.

Jeff Bowman - Mar 04, 2008 03:17:20 pm

The cognitive walkthrough that Lewis and Rieman propose definitely reminds me of the books I've read on web design and navigation. Creating hypothetical users and following them through tasks is an excellent way to do this. However, Lewis and Rieman acknowledge that to use this method you have to avoid a simplistic single-task mentality, which doesn't sound very realistic. One thing I think would be interesting to propose is bringing in other usability experts unfamiliar with the project [e.g. other people in this class], or engaging in a friendly competition within the group to see who can propose reasonable examples where the code breaks. Arguably, this is what designers would collaborate to do within the cognitive walkthrough; however, if we idealize designer behavior, it isn't a big leap to propose that a designer wouldn't need testing at all and would instead produce an ideal interface right off the bat. An additional step or game might be a reasonable way to force yourself as a designer both to (a) practice thinking like a troublesome user and (b) accommodate those users quickly.
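
That competition could even be scored against the walkthrough's own checklist: at every step of the prepared action list, each contestant answers the standard walkthrough questions and records a failure story whenever an answer is "no". A minimal scaffold, with the questions paraphrased from the usual formulation and the task and actions invented:

    # Minimal cognitive-walkthrough scaffold. The four questions
    # paraphrase the standard ones; the task and actions are made up.
    QUESTIONS = [
        "Will the user be trying to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user connect this action with the effect they want?",
        "After the action, will the user see that progress was made?",
    ]

    def walkthrough(task, actions):
        """Ask every question at every step; collect potential failures."""
        failures = []
        for action in actions:
            for question in QUESTIONS:
                answer = input(f"[{task} / {action}] {question} (y/n) ")
                if answer.strip().lower() != "y":
                    failures.append((action, question))
        return failures

    for action, question in walkthrough(
        "Save a draft", ["open the File menu", "choose Save As...", "confirm the filename"]
    ):
        print(f"Potential failure at '{action}': {question}")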

At the bottom of one of the Nielsen pages, I found this gem: Usability Applied to Life. It seems to cap off the readings very nicely.

Adam Singer - Mar 04, 2008 03:22:40 pm

While most of the ten usability heuristics are founded on common sense, I think they are something I would like to have nearby any time I am designing a user interface myself. Even though these heuristics are fairly obvious when you read them, they can be easy tenets to forget. Even designers of something as simple as a door can completely screw up the interface, which should be reason enough to keep these tenets in mind. Nielsen's readings about heuristic evaluation provide a novel approach to testing a user interface without needing end users. Though we didn't do heuristic evaluation with our current user group, it is a method that could prove quite useful in the usability tests to come.

Pavel Borokhov - Mar 04, 2008 01:43:37 pm

The first (and part of the second) reading provided good methods for evaluating the interface without requiring users to be present. This, I think, is a very crucial step in user interface design, as it minimizes the amount of "wasted" time during actual user testing: the more obvious user interface flaws have already been fixed thanks to the heuristic evaluation. I found some of the numbers in the readings interesting, especially in terms of the payback of heuristic evaluation. Personally, I tend to tune out fairly quickly when anything related to business is brought up, and I wonder if the reason we see so many bad interfaces in today's world is that the payback is actually not that great, or that no one has fully and properly evaluated the benefit that arises from proper interface design and evaluation. The age of these readings also makes me wonder why we still have so many examples of the very things these guys tell us to avoid. Especially poignant are the directives to let the user easily undo/redo any action (Microsoft Excel is my favorite example: time and time again I have found the "undo" function disabled simply because the action I performed was not direct data entry) and to use meaningful, noncryptic error dialogs (even Apple, who claim to put a lot of thought into UI design, have pages like this and this listing the possible Mac OS "error codes", something which goes inherently against the concept of "humane" error dialogs).

Paul Mans - Mar 04, 2008 03:15:16 pm

Lewis and Rieman make a convincing argument for the value of performing design evaluation before involving users. In their description of how to go about "user-less" evaluation, I found the walkthrough technique the most interesting, since we had already read about action analysis in depth and heuristic evaluation is described in more detail by Nielsen. One thing I particularly appreciated about Lewis and Rieman's description of the walkthrough technique was their warning not to merge the evaluator's job of writing the list of actions needed to complete a task with the actual walkthrough evaluation. I can definitely see how trying to develop this list on the fly during the evaluation would compromise the results.

Nielsen's articles on his usability heuristics were refreshing in that they present a genuinely different testing paradigm than the one we have mostly discussed up to this point. Instead of test subjects taking the role of lab mice, Nielsen's evaluators take on the role of a contracted design review board. While even Nielsen admits that there can be a lot of value in running tests on actual users (particularly in specialized applications), I really like the idea of putting evaluators' analytical capabilities to work instead of simply recording users' actions. Probably the nicest quality of Nielsen's heuristics is that they compose a succinct rubric that anyone can understand and use to evaluate interface design.
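
The "small review board" advice falls out of a simple model: if a single evaluator finds some fraction L of the problems and evaluators worked independently, then i evaluators would find 1 - (1 - L)^i of them. Nielsen reports that one heuristic evaluator finds roughly a third of the problems, so taking L = 0.35 as a rough figure (and remembering that independence is optimistic, since evaluators tend to find the same easy problems):

    # Expected fraction of problems found by i independent evaluators,
    # assuming each finds fraction L on their own. L = 0.35 is a rough
    # figure from Nielsen's reports, not a universal constant.
    L = 0.35
    for i in range(1, 8):
        print(i, round(1 - (1 - L) ** i, 2))
    # The curve flattens quickly, which is why Nielsen recommends
    # panels of roughly three to five evaluators.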


