Conceptual Models I

From CS 160 User Interfaces Sp10


Slides


Readings


Daniel Ritchie - 2/5/2010 20:10:20

While dismantling the usage of "confirmation dialogues" for file deletion, Raskin proposes that any file whose deletion could truly pose problems for the user (or the system) should not be deletable by that user. I don't think the solution is so straightforward. Take the task of designing an operating system, for example. For many users, deleting some important library would have disastrous consequences; according to Raskin's theory, such an action should be forbidden. However, there will certainly be some "expert" users of the system who fully intend to delete that library and replace it with a modified version. Forbidding them to do so would prove frustrating.

Raskin's rules are quite sound, I think, for well-targeted applications. However, when designing an application with as diverse a user base as an operating system, one may need to bend them a little. After all, there must be a reason why confirmation dialogues and error messages (another much-maligned staple of interfaces) have persisted for so long, right?


Jason Wu - 2/6/2010 0:56:05

As I read through the chapter, I noticed many real-world applications of Raskin's ideas. When he mentioned that any confirmation step that elicits a fixed response soon becomes useless, I immediately thought of Windows Vista's User Account Control (UAC), which largely attempts to keep users' PCs safe from spyware, viruses, and other threats. However, the implementation was extremely overbearing and annoying, as it required users to confirm numerous actions: deleting files, installing programs, etc. Rather than prevent harmful events from occurring, UAC merely made performing even simple tasks more complicated and time-consuming, and it became pointless after users formed the habit of clicking through all the confirmation dialog boxes without giving any thought to whether or not their actions could be harmful.

Raskin's claim that perceptual memory generally lasts less than 10 seconds also struck me as very interesting, and I can easily relate to his example of error messages. I have often wondered why programmers even bother to include dialog boxes that pop up with cryptic error codes when things go wrong. It is a hassle to research the source of the error because users usually cannot copy and paste the code from the dialog box, and the codes are so cryptic that users may need to write down the code and the entire error message if they wish to search online for answers. I find that I generally ignore the errors and click "OK" to get rid of the dialog boxes until I run into the same problem repeatedly, which is probably not the intent of the programmers.

One thing I don't quite understand is Raskin's assertion that having multiple ways of accomplishing the same task is a poor choice of interface design. It seems to me that a user would eventually develop a habit of using the option he or she prefers most, so even though the other options are present, the user's locus of attention will not shift from the task at hand to the choice of method.


Alexander Sydell - 2/6/2010 14:15:50

Besides the general concept of a human's single locus of attention described in this reading, one point that made an impression is that confirmation dialogs do not help in preventing a user from making a mistake, due to habituation. Because a user usually intends to perform a certain action, she becomes accustomed to pressing or typing "yes" immediately after performing that action. This is a fairly intuitive point, which makes me wonder why most modern operating systems, including Windows, Linux, and OS X, still show so many confirmation dialogs. For example, it would make far more sense, as the author suggests, to keep a deleted file around somewhere in case a user makes a mistake. Then, if the deletion was in fact unintended, the operating system can allow her to recover the file; the option already exists in the form of the Recycle Bin or Trash folder. Why is it that systems still show so many confirmation popups? Has other research been done that shows a positive side to them, or is this simply an idea stuck in the minds of the programmers making these systems that no one has bothered to remove?
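
A minimal sketch of the recoverable-delete pattern this comment (and the reading) describes; the trash location and function names here are hypothetical, not any operating system's actual API:

```python
import shutil
from pathlib import Path

TRASH = Path.home() / ".trash"  # hypothetical trash location

def delete(path: Path) -> Path:
    """Delete with no confirmation dialog: the file is moved to the
    trash immediately, so the action is habit-safe and undoable.
    (A real implementation would also handle name collisions.)"""
    TRASH.mkdir(exist_ok=True)
    trashed = TRASH / path.name
    shutil.move(str(path), str(trashed))
    return trashed  # handle the caller can use to undo

def undo_delete(trashed: Path, original_dir: Path) -> None:
    """Recover a 'deleted' file, as the Recycle Bin/Trash folder does."""
    shutil.move(str(trashed), str(original_dir / trashed.name))
```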


David Zeng - 2/6/2010 21:10:44

In the context of interface design, I believe that the single locus of attention limits the potential capabilities of a user interface. If humans could actually parallel-process many events with an equal amount of effort and ability, then I think the interfaces of today would be much richer in design. Like the article says, the single locus of attention seems to be a physiological limit on the human mind. However, the locus also allows us to be just that much more focused on the task at hand. In that way, the interfaces that we use can be more complex. This is different from the richness I talked about earlier; it is similar to how a single-core computer might be faster than a dual-core computer whose two individual processors aren't as fast. Because of this, we may build interfaces that can do more.

The reading also talked about habit forming, which I mostly agreed with. However, it seems to me that most people have an initial inclination towards habits. Much like learning to play a sport or an instrument, some people pick up on this easily, while others seem to struggle even after a long time. While it would be bothersome to have an interface that one could not pick up quickly, it would be best if the interface reminded us once in a while when we're starting to build up these habits that could lead to problematic behavior.


Richard Mar - 2/6/2010 23:11:02

The last section on interruptions during work made me think about the difference in productivity between a single monitor and dual monitor desktop setup. For certain tasks that require multiple windows that occupy all or most of the screen, the user has to do more mental work switching windows with a single monitor. Not only does the user have to take in the contents of the newly focused window, the user still needs to remember what they were doing in the previous window. With two monitors, it is possible to keep both windows visible, making context switches between windows much faster, and thus increasing productivity.


Vidya Ramesh - 2/6/2010 23:20:36

A lot of what is discussed in this chapter holds interesting design implications. Some of these are pointed out by the author while some are not. One important implication that is pointed out is that users are only able to hold attention on one task consciously at a time. You must always assume that the user cannot remember anything, and allow information to be accessed when it is needed. Even if the information is part of the user's knowledge base, a stimulus must be used to bring it to the forefront of the user's attention. Another implication of human attention span and habit-forming behavior, though not pointed out by the author, is that interfaces can take advantage of a user's habits by mimicking interfaces that the user has already been exposed to. Since humans cannot avoid developing automatic responses, it is essential that interfaces take advantage of this ability, especially for essential tasks.


Jonathan Hirschberg - 2/7/2010 2:15:28

The idea of taking advantage of the human trait of habit development, and allowing users to develop habits that smooth the flow of their work, is similar to some of the ideas covered in the “Direct Manipulation Interfaces” reading pertaining to automation. A beginning Perl user may not know what all the key commands are, may find them very unintuitive, and has to consciously think about them; it is through regular practice that his key presses become habit, so that he can focus simply on the action he wants to perform and automatically press the key to do it without thinking. In this case, Perl may take advantage of the human tendency to form habits, but “Direct Manipulation Interfaces” argues that automation may not be enough. A more intuitive interface, designed according to the principles covered in that reading, would probably take better advantage of habit-forming tendencies, in a way that makes it easier for humans to form habits.


Bryan Trinh - 2/7/2010 13:08:43

One of the central claims in this article is that the human mind cannot consciously think of several things at once. Even when we think we are keeping multiple things in mind, we are really serially keeping track of them. Furthermore, when attention leaves one task, it should be returned to the previous task. Raskin uses this to argue that the current desktop navigation paradigm is not optimal, calling it "inhumane". If this is true, then why are windows-based systems the dominant interface of choice? In general I would say humans prefer this type of system. The windows-based interface makes spatial sense, something I would call very human. We have control over the locality of each window frame, and we can use this to organize thoughts. This method only makes sense with enough screen real estate, though, because otherwise the spatial organization structure is lost. When moving to smaller formats, I think a stack-like paradigm would be better.


Jungmin Yun - 2/7/2010 18:29:34

This reading is interesting and useful. The author starts off talking about two states of mind: the conscious and the unconscious. It is quite psychological. The conscious is whatever you are currently focusing on, and the unconscious contains everything else. When you think about something, it changes state from unconscious to conscious. The cognitive conscious and cognitive unconscious are both related to the design of good user interfaces. I think the unconscious plays an important role in user interface design. As I recall, the previous reading said that a good user interface design should include functions that users expect, even if those functions seem unnecessary, because it is possible for users to reach for them unconsciously.

I think one important thing is the discussion of habituation. Most programs have a lot of features. For example, Windows has tons of features, such as copy and paste or opening the desktop. After getting used to these features, using them becomes habitual. When an application does not have these features, users cannot do their job as they expect, and this failure brings the task back into the user's slow cognitive conscious.


Tomomasa Terazaki - 2/7/2010 19:33:06

I liked the reading about cognitive consciousness and unconsciousness because there were many facts in the chapter that made me say, “that has happened to me before.” For example, the chapter talks about how most people are not used to a new interface and find it hard, but after using it for a while they get used to it, and it becomes unconscious, like walking; these are all habits for humans, so we can do these activities without thinking after a while. It is important to know how human brains work in order to make the best software (interfaces). It was also interesting to read about how the computer asks whether the user really wants to delete a document before it goes into the trash can. Even though this question appears just in case the user accidentally chose the wrong file, most people just press Yes, as the reading said (which I have done many times before). Since even pressing Yes becomes a habit, the question is useless, and I agree. It becomes packaged, like when we walk: we do not think “the left foot is in front, so the next foot is right,” we just walk, never thinking about whether the next foot is left or right. It was also interesting to read that humans cannot concentrate on two things at the same time for most activities. Even though many scientists say human brains do not have a limit on how much knowledge we can take in, we can only use that knowledge piece by piece. When I create my own interface, I will try to make it so the user does not have to do two things at a time, because there is a huge chance that he/she will mess something up.


Victoria Chiu - 2/7/2010 22:09:24

Perhaps human attention works like the human eye: the center of the eye gets the clearest image, and the surroundings of the center start to blur. However, it seems rather unintuitive that when we are performing practiced skills, it is best not to give them "a glancing thought." Also, it seems bizarre at first that the way to prevent habits from forming is to make the task the locus of attention. I know that when we are learning a new task, it takes all of our attention, and after we master the task, we do not need to give much thought to it. But would actively paying attention to a task prevent us from ever learning it as a habit?

Returning to the last place we worked the previous time might be good for viewing PDF files, but it might not always be good, for example in a web browser. Most of the time I quit the browser because I am done with the page, and instead of seeing the same page again when starting the browser the next time, I would want to start with a new page.


Matt Vaznaian - 2/7/2010 22:56:12

After reading this article, I am interested in good and bad examples of keeping the user's locus of attention focused on the application. What kinds of characteristics of an app determine whether it will hold one's attention? Along the same lines, what are examples of applications that prevent bad habits from forming? I guess I am just interested in the more practical side of what they are trying to get at. Discussing the psychology behind it is great, but I want to see some user interface examples! Interesting article, though; this is an aspect of product use I never thought about.


Charlie Hsu - 2/7/2010 23:31:02

This reading especially highlighted the importance of recognizing humans' habit-forming behaviors when designing user interfaces. After explaining the cognitive unconscious and describing how users develop comfort with an interface via habit formation, the article described the dangers of habitual activity and ways to rectify them.

I found the example of deleting files from a system very illuminating. In Windows and Mac operating systems, the user has a method of undoing a delete via the Recycle Bin/Trash Can. However, on the EECS Inst UNIX servers, this is not the case (and it is one of the primary reasons I back up all my files on the EECS servers to my local machine!). Draconian verification methods are certainly not the right way to go, and I agree that the solution of undoing unintentional actions is the optimal method.

I felt that the part of the article describing the single locus of attention and maximizing user absorption was connected to why direct manipulation interfaces, from an earlier reading, seemed so effective. Direct manipulation interfaces allow for complete user absorption; a user physically dragging and dropping an object using a touchscreen cannot be focused on much else. An example that comes to mind is the Maps application on the iPhone.

I also felt that resumption of interrupted work has become a staple of good user interface design as well. The article notes that in current desktop-based systems, a user must always navigate back to the task, but I feel many applications nowadays directly allow for the resumption of interrupted work. Windows has sleep and hibernate options that preserve desktop status even through powerdowns, Firefox has a tab-saving feature, and iPhone Safari, Maps, Messages, Notes, etc. all revert to the same state at which you left the application.
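
A minimal sketch of that save-and-resume pattern (the file name and state keys are hypothetical, not any of those applications' actual formats): the application persists its visible state when interrupted and reloads it on the next launch, so the user resumes rather than re-navigates.

```python
import json
from pathlib import Path

STATE_FILE = Path("app_state.json")  # hypothetical location

def save_state(state: dict) -> None:
    """Called on quit or interruption: persist what the user was doing."""
    STATE_FILE.write_text(json.dumps(state))

def restore_state() -> dict:
    """Called on launch: put the user back where they left off."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"open_tabs": [], "scroll_position": 0}  # first run: fresh start

# Usage sketch, mimicking Firefox-style tab saving across sessions:
save_state({"open_tabs": ["inst.eecs.berkeley.edu"], "scroll_position": 420})
assert restore_state()["open_tabs"] == ["inst.eecs.berkeley.edu"]
```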


Wei Wu - 2/8/2010 0:59:48

Given a person's tendency to associate a certain intention with a specific way of carrying it out, as discussed by Raskin in the section about habit formation, Lewis and Rieman's "plagiarize" step in the interface design cycle makes more sense. Humans become used to doing things a certain way, even if those methods are not the most efficient, so it is important to avoid introducing a completely new way of doing things too abruptly. We are attached to keyboard shortcuts like Ctrl+C for copy and Ctrl+V for paste, and become agitated easily if they suddenly become associated with different commands in some application. Because we are used to the status quo, it is a challenge for existing applications to be upgraded with "improved" features without upsetting the portion of users accustomed to navigating the old interface. It is equally a challenge for new applications on the market to draw users from their competitors, because a level of training is involved for users to get used to the new product, a training process which they often do not have patience for.


Linsey Hansen - 2/8/2010 1:42:29

So, this was another one of those readings that encourages following current designs when making new interfaces, to make it easier for the user to use them (since users are probably accustomed to the current designs thanks to habit). However, like readings before it, it also encourages creating interfaces that make the user's tasks easier and quicker to complete. I feel like there is too much detail on these two things as separate goals when the real task is finding a good balance between them, and that balance might not be very testable. I would imagine that users giving feedback on an interface will have a bias towards interfaces that are more familiar, even if a completely new interface could technically allow the user to perform faster, since the user would need to take more time to teach themselves all the features of the newer, more efficient interface. And while this bias can be countered by making changes gradually, it still limits the creator's ability to make their creation optimal; though perhaps this is just a blind assumption, and users can pick up a completely new interface within the time given. Anyway, this is just something that has been bothering me for a while...


Long Chen - 2/8/2010 2:08:04

Cognetics and the locus of attention really shone a light on how the human brain operates on a practical level, and on how to harness that understanding in constructing productive interfaces. The distinct separation of the conscious and unconscious compartments really clarified the difference between active non-routine decisions and repetitive automatic habits. Coming up with more unconscious-to-conscious change-of-state questions (such as the question about the last character of your name) could really help during the surveying phase of the course!

The singularity of the locus of attention was thoroughly researched and supported by past writings, and the discussion prompted me to think about intelligence. Are smarter people better at analytics and problem solving because their single locus is larger (and can contain more information), or are they better at transitioning between the locus of attention and their unconscious compartment? When I am doing mental arithmetic (such as 24 x 25), I can "remember" the tens and hundreds places of the answer even while my locus is concentrating on the ones place. Does that disprove the singularity of the locus, or does something else, such as short-term memory, come into play? Either way, boosting productivity and simplifying interfaces is the key takeaway of the chapter. Being able to leverage the natural action of forming habits can really be the trait that distinguishes a specific interface.
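
To make those held-in-memory partial results concrete, one common way of working 24 x 25 in the head (this decomposition is just one possible route) is

\[
24 \times 25 = 24 \times 20 + 24 \times 5 = 480 + 120 = 600,
\]

where the partial products 480 and 120 are exactly the kind of intermediate values that sit in short-term memory while attention moves to the next step.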


Richard Heng - 2/8/2010 2:57:23

I disliked the author's praise of exploiting the single locus of attention to make users complacent with load times. The goal should be to make an interface that is automatic, not one that simply misleads the user into believing there is a better interface than there really is. Proper exploitation of a single locus might instead involve streamlining alerts into primary tasks in such a way that there is a natural flow of consciousness from task to task.


Aneesh Goel - 2/8/2010 5:39:54

The discussion of forming habits in the reading brings to mind a discussion that occurred recently in CS 161, the security class. It isn't just that clicking "yes" or entering Y has become a default response to a particular command; people are now habituated to do so for any prompt provided. A warning about a bad SSL signature doesn't reduce phishing rates, because people see it as a message saying "something technical is in your way; click this button to get back to work and not be bothered again, this button to be bothered again in the future, and this button to be unable to do your work." Even people who understand the message have gotten so used to swatting away dialogue boxes that they rarely have time to read a box between it popping up and their selecting "do not warn me again" and "OK". One proposed solution was something akin to the way Firefox handles plugin installations: the confirmation dialogue has a four-second timer before pressing OK is available. This makes automatically okaying the box fail, and interrupts your automatic processing because of the delay. That said, it only works because it's a rare event; if most dialogue boxes did it, waiting four seconds and swatting would be the new response, leaving us back at square one. Always assume the user could make a mistake, and don't let their mistakes be irreversible, as the section concludes.
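
A minimal sketch of that delay-before-OK mechanism (class and method names are hypothetical; this is not Firefox's real implementation): the OK action simply stays disabled for a few seconds after the dialog appears, so a purely habitual click fails.

```python
import time

class DelayedConfirmDialog:
    """Confirmation prompt whose OK action is disabled for a few seconds
    after it appears, defeating the automatic, habitual click."""

    def __init__(self, message: str, delay_seconds: float = 4.0):
        self.message = message
        self.delay = delay_seconds
        self.shown_at = time.monotonic()

    def ok_enabled(self) -> bool:
        # OK stays greyed out until the delay has elapsed.
        return time.monotonic() - self.shown_at >= self.delay

    def confirm(self) -> bool:
        # An immediate, habitual click is simply ignored.
        return self.ok_enabled()

# Usage sketch: the reflexive click right after the dialog appears fails;
# a considered click four seconds later succeeds.
dialog = DelayedConfirmDialog("Install this add-on?")
assert dialog.confirm() is False
time.sleep(4.1)
assert dialog.confirm() is True
```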


Geoffrey Wing - 2/8/2010 7:42:34

Before reading the chapter, I believed that computer experience/skill really made a difference in the way a person approached an interface. I found the discussion of human psychology very interesting, and have realized that it is key in interface design. I definitely find myself having a problem with absorption: I'm often typing and not looking at the screen when a pop-up box appears, but I won't be able to read what it said because my typing made the box go away. There should be better safeguards against this, especially if the pop-up box is for a critical warning/decision. In Windows Vista, one thing I do think is a pretty good example of guarding against this is the User Account Control dialogue box. The whole screen dims while the dialogue window comes into focus. This is a dramatic change, and users won't be able to continue their tasks since the screen is grayed out.


Divya Banesh - 2/8/2010 8:34:02

In his book "The Humane Interface", Jef Raskin talks about the cognitive conscious and the cognitive unconscious. He also talks about habits that humans acquire that become like addictions and that we perform without even knowing what we're doing (cognitive unconscious). The aspect of his book that I found intriguing was the question of whether habits fall into the category of cognitive conscious or cognitive unconscious. Raskin says that in order for something to be in the cognitive conscious, we need to be actively thinking about the thing we want to do, but with most habits, we do the thing without even thinking about it. This seems impossible because, if we need to have the thought in our cognitive conscious in order to think about it or do the action, where do habits fit in, since we usually perform habits while in the cognitive unconscious?


Jordan Klink - 2/8/2010 10:40:48

This reading was highly theoretical, and while it did introduce me to new concepts to strive for, I didn't agree with all of the author's reasoning. Firstly, I do believe that, just like in any programming environment, we need to understand the context in order to take full advantage of the situation. Hence, we should use cognetics to our benefit when designing user interfaces. Namely, we should design so that the user may easily develop good habits with the product and can quickly relocate tasks to his/her unconscious. However, I disagree that intense focus will always cause someone to be so absorbed that he has no awareness. I would argue that people can become so focused, and their awareness so heightened, that they can react to the most minuscule of stimuli. This of course depends on the task, but to generalize and claim that intense focus will usually cause you to become unaware is fallacious. Furthermore, I don't believe that a system should always assume, when a program is killed, that it was "interrupted" and should then be re-displayed. Many users force-kill programs themselves, and it would more than likely irritate them if the system decided to reload the program where it left off. Imagine force-killing a program stuck in an infinite loop, only to have the system reload it. At the very least, the system should ask the user if he/she wants to resume where he/she left off last time.


Alexis He - 2/8/2010 11:42:13

According to table 2.1, it seems most of our daily online experience is based around routine and expected events, thus qualifying most of it as unconscious. For example, it's routine to check email/facebook/twitter daily, and we navigate the interface almost unconsciously due to repetition. However, I was definitely conscious when I first started using Facebook 4 years ago and had to learn a new website layout. I feel like designing UI for software is a tradeoff between being gentle enough that it's not an unpleasant surprise to the consciousness, yet still easily maneuverable unconsciously. For example, keyboard shortcuts become unconscious very quickly but have a very hard learning curve; click-and-drag UIs, instead of keyboard shortcuts, are very easy to learn, but become tedious with repetition.


Hugh Oh - 2/8/2010 12:07:50

Locus of attention is a pivotal part of the human interface design and should be emphasized more during the design phase. The problem with making interfaces is trying to predict what the user will pay attention to. However, if you take advantage of someone’s locus of attention then you can turn prediction into direction. For example, notifications make use of this concept by allowing bouncing icons or flashing lights (not as dramatic as a firecracker but it does the job).


Eric Fung - 2/8/2010 13:26:08

Having one locus of attention has a significant impact on the way programs try to alert you. The article brings up that error messages and tones are potentially ineffective, either because the user dismisses the error out of force of habit, or is so engrossed in their task that they ignore these messages. Though we'd like to think we can multitask, I generally find it difficult to effortlessly switch my full attention between two programs, say, if Growl is giving me notifications that some processes were completed or messages were received in the background.


Dan Lynch - 2/8/2010 13:40:20

The text discusses the interrelated concepts involved in ergonomics and cognetics with regard to humans and user interfaces. An interesting conclusion about design is that it may often be impossible to suit the entire range of needs for a given product. This corresponds to what we have been learning in this course: we want to focus on a particular target audience and home in on them. That means that a good design should have this assumption built in by default.

Poor experiences are also not usually the result of false assumptions about the user's intellect, but of a poorly designed interface itself. This means that we must pay extra close attention to design details when it comes to physical and mental ergonomic design. In doing this, we must understand the ergonomics of the mind, denoted in this book as cognetics. Two terms to discuss are the locus of attention and the focus of attention. The locus is something that is not necessarily willed, while the focus is the subset of our attention that represents volition, our control and will.

We should design interfaces keeping these aspects of the mind in mind (pun intended). That is, we can create interfaces that know what a human will biologically pay attention to when presented with certain stimuli. Also, understanding how perceptions are processed in the human body biologically can provide great insight into the best possible design.

Important aspects to keep in mind are repetition and the decay of perception. These two go hand in hand when forming habits and memories. The decay of perceptions gives insight into how long information is retained, and thus into how to present data in a user interface and for how long. Repetition is something more subtle, but it is needed so that a user can become more efficient over time, performing the same task again and again until the interface is learned and using it is not even a conscious thought.


Andrew Finch - 2/8/2010 14:59:17

This article covers aspects of the human mind and behavior that make designing effective human-machine interfaces difficult. The article argues that too many interfaces are designed with the assumption that the user has some special cognitive abilities, and that this can be avoided in most cases. While I partially agree with this point of view, I personally have more faith in users' abilities, and believe that designers must often make assumptions regarding users' mental skills in order to design a good interface. If every interface assumed complete idiocy and ignorance on the part of the user, interfaces would be drastically limited by redundancies and over-simplification.


Boaz Avital - 2/8/2010 15:46:55

One implication of the reading I found is that user interfaces should always conform to a user's expectations. If your program or interface is at all similar to something that most users are familiar with already, your best bet for making it friendly is to have the interface respond to commands the same way the programs your users already know do, perhaps even when those responses are suboptimal. A good idea, say for a text editor embedded in a larger program, might be to offer different settings for people used to different text editors: an Emacs setting, a Vim setting, a Word setting, etc.
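
A minimal sketch of that keybinding-profile idea (the profiles and bindings shown are illustrative, not any editor's complete keymap): the same editor actions are dispatched from whichever shortcuts the user already has as habits.

```python
# Each profile maps the shortcuts a user already knows to editor actions.
KEYMAP_PROFILES = {
    "emacs": {"C-y": "paste", "C-w": "cut", "M-w": "copy"},
    "vim":   {"p": "paste", "dd": "cut", "yy": "copy"},
    "word":  {"Ctrl+V": "paste", "Ctrl+X": "cut", "Ctrl+C": "copy"},
}

class Editor:
    def __init__(self, profile: str = "word"):
        self.bindings = KEYMAP_PROFILES[profile]

    def handle_key(self, key: str):
        # Dispatch the user's habitual shortcut; returns None if unbound.
        return self.bindings.get(key)

# Usage sketch: each user keeps the habits they formed elsewhere.
assert Editor("emacs").handle_key("M-w") == "copy"
assert Editor("word").handle_key("Ctrl+C") == "copy"
```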


Annette Trujillo - 2/8/2010 15:50:58

I think the idea of interface testing, i.e., testing to see how much energy the brain uses while a person is trying an interface feature, is a great idea to help designers make their interfaces easier for the common user. But this can also pose a problem. This form of interface testing may not be too practical, because it will be very expensive for designers to perform these tests on enough people to get sufficient feedback to make the necessary changes. If a company does decide to do this type of testing, then great, but not all companies will, so we must find another, more cost-friendly way of testing features with users. What about smaller companies that want to provide a great, friendly user interface? They will not be able to afford this method, yet their interfaces may be just as important as those of a larger company.


Joe Cadena - 2/8/2010 16:03:55

I am curious to what extent interface designers utilize studies that focus on the user's cognitive awareness. I may be mistaken, but doesn't cognitive science explore more of the philosophy of an inner being rather than the actions of our habitual nature? In any case, I agree with conducting behavioral studies that target common attributes of the users rather than trying to encompass the whole spectrum.


Jessica Cen - 2/8/2010 16:06:44

I have also experienced the “pitfall of automaticity” that is described at the end of page 9. I am a Windows user, and some of the shortcut commands that I usually use do not have the same effect on Macs. I am used to closing windows with Alt+F4, but on Macs I have to click on the red button at the top of the window. And if I try to do Alt+F4 on a Mac, there are unexpected effects. If I were to learn the shortcut to close a window on a Mac, there would still be a chance that I would continue using the Alt+F4 shortcut at least once. In addition, I would use the Mac shortcut on Windows every now and then. It takes some time for my unconscious to break the habit and think before doing an action.

Furthermore, I believe that automaticity has both its benefits and its dangers. As the reading says, making your actions automatic can save time and work. However, letting your unconscious take over every time is not safe. I must admit that when I delete files, I always press the enter key unconsciously every time the dialog box with the recycle bin icon appears. And I have also watched some people click on “Accept” every time a dialog box appears without even reading its contents. Interface design is very important in order to stop the unconscious from taking over every time, and it is a challenging task because not everyone has the same behavior. So an interface may work well with some users and not so well with others.


sean lyons - 2/8/2010 16:11:31

Raskin points out that unconscious, automatic confirmation of standard obstructions to functionality, such as alerts or dialogue boxes, is quickly instilled by software: "any effective confirmation process is necessarily obnoxious because it prevents the user from forming a habitual response and ever becoming comfortable with it." (Raskin, 23) Many UNIX and UNIX-derived operating systems not only avoid complex accident-prevention mechanisms, but by default do not even present confirmation options. Deleting a file is as easy as reformatting a hard drive, yet these same operating systems are generally viewed as the best operating systems for many tasks. Is there a discrepancy here, or are accidents just intended not to occur?


Raymond Lee - 2/8/2010 16:15:33

Exploiting the human tendency to form habits is very much in line with an idea from the previous reading: it is beneficial to include a feature that users expect even though it is not necessarily part of the main feature set of the application. It is also beneficial to consider leaving things the way they are rather than introducing changes that may potentially confuse users.

I also never realized that there is a fine line between presenting flexibility in multiple ways to perform the same task and confusing users by giving too many options.


Calvin Lin - 2/8/2010 16:18:24

Although the chapter did not address to a great extent how the principles of human thinking translate to user interface design, several thoughts came to mind. It is easy for companies to try to cram as many features as they can onto their web page, but what has proven to work well with consumers is simplicity and lack of clutter. Companies such as Google and Apple understand the concept of users having a single locus of attention, and this is apparent in the products they produce. For example, it is a simple yet brilliant idea for Google’s home page to contain nothing but the search bar when the page first loads. This directs the user’s attention to one singular item to perform one particular task. One major reason Apple’s iPhone has been such a huge success is that Apple has hit the sweet spot when it comes to ease of use and directing the user in how to navigate the phone. While many other phones have more features than the iPhone, it is the simplicity of use that has made it a hit. When a user is presented with a ton of different options on one screen, it can easily turn the user off from the product.

User interfaces and advertisements are very similar in this way, in that designers must aim to direct the user on a journey of sorts. Successful interfaces make it obvious for the user as to where he/she should start, and then the flow of steps that follow are obvious and take advantage of the user’s single locus of attention. Bad designs force users to make more effort in figuring out what to do next. If there is too much going on in an advertisement, a user might quickly pass over it. This is because advertisements often must be able to capture the user’s attention and interest within a second of looking at it. Similarly, successful interfaces have the quality that when a new user looks at it for the first time, he/she should be able to figure out at least basic functionality with minimal effort. Otherwise, users may just give up on the interface and move on to another product.


Saba Khalilnaji - 2/8/2010 16:26:43

When working in a user interface, a user should not be forced to remember things past their short-term memory limit (about 6 random items). It is also interesting that a sound associated with a load time, such as card shuffling in a card game, will decrease the annoyance of the load time, because the locus of attention will be on the sound rather than the wait. Things like this should be carefully considered in UI design. Users can't consciously work on two things at once, so things must take a linear path, or the user has to cease work on one thing to continue on another, and time is wasted in these transitions (about 10 seconds). In essence, a task scheduler does the same thing with threads, since a CPU can only perform one instruction at a time. Also, removing error messages from interfaces seems like a brilliant idea that should have been implemented a long time ago. But I think the reason it hasn't been is that users would become agitated if they had to break their habits to do irreversible tasks; that would make a product safer, yet unfavorable! So the error message is still in play because it gives the excuse of warning users, but with little effect.


Jonathan Beard - 2/8/2010 16:39:52

This reading made me wonder about driving and whether it could be an unconscious task. Sometimes I'll drive some place and just show up somewhere thinking that I wasn't really 'paying attention'. This especially shocks me when something happens abruptly (such as the car in front of me braking) and I feel this whoosh back to consciousness, in which I react to the disturbance. I think that this occurs when the human body becomes so familiar with the car that the car just becomes an extension of the body.

Expanding upon Yogi Berra's point about not being able to think and hit at the same time, I think it is interesting to note that in many tasks, we perform better out of habit than with full concentration. The theory of automaticity seems to support this lofty claim, though, by stating that the more automatic something becomes, the less it competes with other tasks. This theory seems easy to apply to common tasks (breathing, walking, etc.), but I'm still shocked by someone who can slam a fastball and not have to think about it.

I once read about this in a book on how to read faster. It mentioned that the way we learned to read (by sounding out words) eventually limits how fast we can read. The book's reasoning is that our brain must still convert words into sounds before analyzing them. So, in order to read faster, it said that one must teach oneself to read visually and break the habit of viewing words as sounds. The method for breaking the habit was to force yourself to read faster and faster until eventually you transition to viewing words without sounding them out.

Just thought it would be funny, but what if you had to read an SAT passage and answer questions in order to confirm a command. How horrible would that be? I don't think I'd ever delete another file on my computer again!


Daniel Nguyen - 2/8/2010 16:45:33

While the text addresses both the development of bad habits and the danger of displaying alerts during times of heavy focus, it does not seem to address the combination of the two. I have personally experienced developing the habit of ignoring error messages, and while ignoring these messages usually causes no harm, it is only a temporary solution that may lead to more errors in the future. This may be partially addressed, or at least inferable, in the portion concerning the danger of constantly requiring confirmation for everyday tasks and possible solutions to that problem, but it is never explicitly addressed, even though ignoring errors and developing habits are each individually covered. I feel this is something important to consider in designing the interface of an application, because of possible consequences that may affect the durability of the application.


Nathaniel Baldwin - 2/8/2010 16:54:25

I was heartened to read this chapter's discussion of the temporally-limited retention of audio and visual sense memories, because it relates to an argument in favor of the utility of my class group's proposed app - namely, a personal beer rating app. Presumably, if audio and visual sense memory is measured in hundreds of milliseconds, taste memory is similarly limited - and so an app that enables quick note-taking on the subjective experience of taste will truly prove useful in the long term.

Overall the article rang true to me - the cognitive psychology and neurophysiology discussed was in line with information presented in several of the Cognitive Science classes I've taken for my major. I did take issue with one section, though - not on scientific grounds, but because I respectfully disagree. Raskin's claim that "any confirmation step that elicits a fixed response soon becomes useless" is, I think, a bit too harsh. While I've certainly experienced this problem in my daily computer use, I can quickly think of a counter-example. I use a Droid phone and its Gmail app to read my personal email much of the time. The window that shows the contents of an email has 3 fixed buttons at the bottom of the screen, one of which is "delete." This delete does not require a confirmation. Unfortunately, the physical reality of the device (a small object with a touch-sensitive screen) means that, somewhat routinely, my finger, which I intend to be simply holding the device, instead presses the delete button. While the app does follow up with a small "undo" bar, it's easy to miss it and click on another email in confusion, at which point the process of un-deleting the email is time-consuming and tedious. I would prefer at least the *option* of a delete-confirmation dialog. I assume we'll come to this sort of thing at some point: namely, my intuition is that while there are many good practices to follow in a user interface, the best UIs are the ones that include options to do things differently. To take a metaphor from this week's reading, it would be like giving the 5% of extra-small or extra-large people an option to have a different-sized seat, thereby getting closer to pleasing 100% of the people.


Kathryn Skorpil - 2/8/2010 16:54:28

This article, ironically, made me very aware of things that I already knew. For example, the text explained that sometimes when we are doing something out of habit and switch it into our conscious stream, it becomes difficult to do. One thing that I thought of immediately is that when you think about breathing, it becomes difficult because you are trying to think of the correct way to do it. Once you stop thinking about it again, you do not have trouble anymore.

This also made me think about multitasking. When I am doing homework, I usually listen to music and sometimes even watch a TV show while I am working. While I like to think that I can do both at the same time without one ruining the other, I realize afterwards that I missed an important plot point in the TV show, or that I have been staring at the same problem for about 10 minutes without even thinking about it. It is probably better that I dedicate my time to one or the other if I want to actually increase my productivity. Also, in the case where I am listening to music, if the music has lyrics I tend to be less productive than when I am listening to lyricless music like classical music, because sometimes I will be listening to the words of the song and they will pull me out of my concentration on the current task.


Michael Cao - 2/8/2010 16:54:31

I found the definition of the cognitive unconscious from the reading very interesting. The reading describes the unconscious almost the same as a person's actual memory: basically, information that you currently aren't thinking of but could still produce on command. This seems somewhat different from some of the definitions of the unconscious I've learned in a couple of psychology classes I've taken. The unconscious in that field usually refers to something you are thinking of without being aware of it. A classic example is priming. An experimenter could ask you to listen to a list of items that are mostly, say, green in color, and then ask you to name the first color that pops into your head. Most people would say green because they have been primed to say it by the list of items the experimenter read. However, none of these people were aware of the question that was about to be asked, nor were they looking into their memory bank for information or for the correct answer to a question they had heard before.


Spencer Fang - 2/8/2010 16:55:12

The text states that tasks done in repetition slowly become a habit. I believe Apple made an interesting design decision regarding keyboard-shortcut habits. In OS X, Apple has set up some Emacs-style shortcuts by default, namely those that can be expressed using only the "Control" key. They ignore shortcuts that require a "Meta", probably because "Meta" and Apple's "Command" key share the same keyboard position. Their rationale might have been to make Emacs users feel at home, deciding that a partially compatible user interface is better than nothing. But this creates a strange final result, because "Control-v" performs a page-down, but "Meta-v" does not perform a page-up ("Command-v" would instead paste text from the clipboard). Among other inconsistencies, the common shortcut "Meta-w", which copies the text selection in Emacs, could perform a destructive action in some Mac applications ("Command-w" closes the current window).

So is it easier to learn a new set of habits from scratch, or to adapt to a modified set? I argue that it's harder to adapt to a modified version of a familiar interface. I think the adaptation process would be (at least for me): assume that the modified interface is identical to the familiar interface, take a mental note each time I make a mistake, and repeat until a new habit forms. This means that I will only correct the interface discrepancies that I encounter frequently. When I try to do something I do less frequently, the interface will again behave in an unexpected way. Adapting to this modified interface may also cause problems when I go back to the old interface. Ideally, users should be able to form different sets of automatic behaviors that peacefully coexist.


Jeffrey Bair - 2/8/2010 16:57:35

It is interesting how the cognitive conscious and cognitive unconscious relate to computers. They are very different in the sense that a computer wants to keep in memory something that often occurs but the human brain on the other hand tends to forget about the daily routines that happen and remembers the outliers and special events that occur.

Habits that form around typical user interfaces are something that I believe user interface designers need to be well aware of. Luckily for us, we probably also have habits that we know of when we design our own user interfaces for users. However, the market that you produce designs for may not be similar to what you yourself would expect. For example, if you were designing a game for the PlayStation in Japan, you may not realize that the typical confirm button is actually “O” and not “X”. In America it is the exact opposite, and you may not have even thought twice about it, since it seems automatic to assume that “X” is the confirm button. Making sure that the user interface is designed for the target group is especially important, since having a user interface that the target can pick up right away can make or break a design.

Data retention is something that all humans deal with, and the Canon Cat's approach seems like a great design decision made to help people remember what they were doing before being interrupted. Most computers today have a sleep mode that saves the windows in their state and position and loads them back up when the computer is turned on. Saving state is an option that is increasingly prevalent, especially in portable devices. New portable devices such as PSPs, laptops, and cellphones all have features to save your state, since interruptions often occur on the go. I feel that this is an especially important design choice for iPhone apps, which should either be short and concise so that interruption is not an issue, or have this feature implemented.


Esther Cho - 2/8/2010 17:01:00

So I understand what the article is trying to say about habit; it sounds like another article we read, except this one took longer to say it. The article didn't really suggest what would make good habits (it was general on that point, I thought), and there is the case where, if we make it hard for users to complete a task (by making it impossible for the task to become habitual), they'll end up being frustrated (unless it's not something they do often). There is also the case where it would be nice for similar programs to do the same thing given the same shortcut, but then comes the task of agreeing upon it; or is a designer bound to shortcuts already defined because similar software was implemented before?


Bobbylee - 2/8/2010 17:23:26

This book really enlightened me, since I had never thought there was so much knowledge behind user interfaces and people's attention. The book mentions that we have a locus of attention and that we have only one of them at a time. And once you are absorbed in it, it is hard to be distracted from it. To me, this is really practical: when I was playing video games, I would always miss that my mum was calling me for dinner. To refer back to user interfaces, I believe that an excellent user interface should not distract you from working, since humans can only concentrate on one thing consciously. Therefore, a user interface shouldn't have too many error messages popping up; when you are browsing the web, it is best that the browser doesn't allow advertisements to pop up; and the user interface should be easy to use, so that users won't be distracted by having to find the corresponding button and learn how it works.


Wilson Chau - 2/8/2010 17:24:56

Being able to design a good interface requires not only mastering the technological side of things but also understanding how humans work. The reading talked about the locus of attention, which is an item or an idea that you are actively thinking about and focused on. A frightening example of how narrow a human's locus of attention can be is the flight crew whose loci of attention were so preoccupied with changing a light bulb that they failed to notice they were dangerously low to the ground, despite multiple warnings.

Something that we can learn from this is that when designing interfaces, we have to be careful to take our limited locus of attention into account. For example, if we are designing something that is to be used as an aid or alongside something else, we should be sure to make it very simple, because if it is too complicated, it will draw attention away from the task, making it harder, not easier.


Richard Lan - 2/8/2010 17:28:31

The most important argument the author makes is that humans have a complex mind. The mind has conscious and unconscious realms, of which humans are often only partly aware. At the same time, humans can only direct their attention to one object at a time; hence the singularity of the locus of attention. Given this assumption, an argument for interface simplicity can be formed, because simplicity complements the fact that humans can only focus on certain elements at a given time. The interface should therefore minimize clutter, which can also increase its aesthetic appeal.

Another interesting point was the role of habits in determining behavior, particularly the example of a Mac user trying to perform basic commands on a Windows PC. For instance, the Mac user presses the apple key when trying to perform basic functions such as cut, paste, save, and open. On the Windows platform, the same functions are accomplished using the 'ctrl' key. The 'ctrl' key is placed at the left end of the keyboard, and when I use the Macs in the Orchard lab, I sometimes find that I press the key at the left end of the keyboard when instead I should press the apple key, which is closer to the space bar. This is because I had developed the habit of pressing the key at the left end, perhaps becoming unconscious of the key combinations I was using. The development of habits in using interfaces, however, was opposed as a sign of good user interface design in a previous article, as it does not necessarily indicate that the interface has closely replicated the user's intended goals and tasks. In a sense, however, habit development is useful for interface designers because, based on previously designed systems, it points to features and functionality that users will expect and that put the user into a familiar and intuitive environment.

The ideal interface should act as an extension of the human brain: able to respond to human instincts and expectations, and able to extend the limits of cognitive processes to a computer or machine. To do this effectively, the programmer must be aware of several key cognitive traits, including the locus of attention, memory and retention, and habit formation. The programmer should be able to think from the point of view of the cognitive unconscious in order to create a design that complements the user's consciousness. For example, while users may not be conscious of or actively thinking about the wording in prompts or instructions, the programmer must be aware of it and use proper word choice so as not to disrupt the user's thought processes and experience.


Mohsen Rezaei - 2/8/2010 17:32:57

As mentioned in the reading, it takes a human being about 10 seconds to switch contexts. For example, if we are used to driving a certain way to school or work at a certain time, we will most likely drive that same exact route at that time even if we meant to drive to another location. Also, the reading mentioned that we might want to do careful checking before a user can erase a file, or have a strategy to revert the user's action if, for example, they didn't mean to erase it. I still think better warning methods are more effective than being able to revert an action, since if a user goes on to erase or format an entire directory, the action might not be revertible. This is exactly like driving the wrong route at a specific time.


Andrey Lukatsky - 2/8/2010 17:34:57

I agree with the author on the importance of taking users' habit formation into account when designing a user interface. Mobile SDKs (such as Android and iPhone) seem to facilitate this by providing common UI components for use in applications. This allows developers to create interfaces that fit with users' habits. In fact, the Android platform team recommends that third-party developers not attempt to alter core system functionality (e.g., focus and touch behavior) so as not to conflict with pre-established expectations.


Owen Lin - 2/8/2010 17:35:40

It is interesting to find out that interface designers must take into account the fact that users develop habits. It would then be beneficial to take advantage of user habituation when developing our app, and to be wary of it at the same time. For example, we could take advantage of user habituation by making sure that icons/buttons are always in the same place. Many real computer applications share the same broad interface (in the top menu bar, there is almost always a File, Edit, and View), and this makes learning a new application much easier. However, in designing our iPhone app, we must also make sure that user habituation doesn't lead to irreversible or time-consuming errors. We should think about how we can provide easy ways to fix text fields or go back to previous views quickly without losing data that we DON'T need to correct.


Brandon Liu - 2/8/2010 17:35:51

One of Raskin's points was about how error messages and warnings should be displayed to users. He gives an example of a confirmation message that requires the user to type the tenth word in a dialog box backwards. I actually thought this was a really good idea, since in my own interaction with a computer I habitually press confirmations without considering their consequences. Raskin argues that this should never happen, since anything that needs to be confirmed this way should be impossible. I feel that this is too idealistic and restrictive on user interfaces: imagine if it were impossible to overwrite files in Windows Explorer or Finder (something they always ask you to confirm, and necessarily should). Raskin's solution is that an action should either be impossible or readily reversible; in this particular case, a user might not notice their mistake until much later, or at all. There are also physical constraints on keeping a history of all previous actions to make them reversible: for example, how practical is it to keep a log of all changes in the state of the system, or, in a network of computer users, how realistic is it to commit a change visible to others only to reverse it later?
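
A toy sketch of the type-it-backwards confirmation described above (the function name and warning message are hypothetical; only the tenth-word rule comes from the example as related here): the habitual "yes" fails, and only a response that requires actually reading the dialog succeeds.

```python
def confirm_destructive_action(message: str, user_input: str) -> bool:
    """Accept only the tenth word of the message, typed backwards."""
    words = message.split()
    if len(words) < 10:
        return False  # message too short for the rule to apply
    return user_input == words[9][::-1]

msg = ("Warning: this will permanently erase every file in the selected "
       "directory and the operation cannot be undone afterwards")
# The tenth word is "selected", so the reflexive "yes" is rejected...
assert not confirm_destructive_action(msg, "yes")
# ...and only the deliberate, dialog-reading response is accepted.
assert confirm_destructive_action(msg, "detceles")
```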


Angela Juang - 2/8/2010 17:37:45

The fact that people develop habits and continue to rely on them even in different programs implies that when designing an interface for a new application, you might want to take advantage of those existing habits to make your application easier to use. Similarly, it probably wouldn't be a good idea to design something counterintuitive to users' current habits with existing applications, because it would frustrate them before they have a chance to get used to the new interface. However, if we always design new interfaces around people's existing habits, it seems it would be hard to change the way things are done at all, even when changes would ultimately improve efficiency. How do you introduce a new way of working while still catering to what people are already used to? Even though people have an easier time with interfaces similar to what they've used before, I don't think it's always the machine's job to keep things that way, as some articles we've read seem to imply. For people to find better ways to perform tasks, they need to be challenged by new applications to work in ways they haven't tried before, and this means designing interfaces that force users, at some level, to break out of their habits.


Conor McLaughlin - 2/8/2010 17:38:59

I honestly thought the chapter was a little too repetitive and abstract to be of immediately apparent practical value, apart from the citation of the airplane accident. User interfaces naturally cause someone to form habits, because one of the primary indicators of a good interface is the user's ability to navigate it efficiently, intuitively, and quickly. The author states that designers are stuck, because habits form despite one's best efforts, but he doesn't properly follow up with a proposed solution for how a designer can fight a user's bad habits. The only lingering solution seems to be thorough testing, but what happens under time and money constraints? I believe the power of an application comes from its similarity to other applications: a user can immediately get in sync with it even when they have never used that exact application before. Things like the standard File menu in the upper left corner of desktop applications give the user a sense of orientation. That is sufficient justification for the majority of computer programs being similar, even if a user occasionally misuses an application due to subtle differences between two programs (excluding extreme examples like airplane navigation software).


Wei Yeh - 2/8/2010 17:40:13

The idea of the cognitive conscious vs. the cognitive unconscious ties into a principle of secure software discussed in another class I'm currently taking, CS 161 (Security). We discussed the fact that too many security warnings (as in the case of Windows Vista) may be bad, since users will habitually dismiss them without reading carefully. In terms of the reading, overabundant security warnings lead to unthinking acceptance because they invoke only the cognitive unconscious. Ironically, dismissing security warnings without reading them is a bad habit caused by too many warnings! Instead, software developers should aim to implement warnings so that they fall within the user's locus of attention; that invokes the cognitive conscious and makes users think twice before allowing malicious software onto their computers.


Yu Li - 2/8/2010 17:43:23

Everyone has a conscious and an unconscious awareness. Usually when we put on clothes, we don't think about how the material interacts with our body afterwards; this is unconscious awareness. Most likely, though, after reading the previous sentence you now notice how the material of your clothes falls on your body. Reading about this effect has brought the thought of your clothes from unconscious to conscious awareness. I think it's very interesting how, when we keep things in unconscious awareness, our body just takes care of them: everyday tasks like walking, drinking, or even typing on a keyboard. When we try to actively think about typing, however, the task becomes hard to do. So in a way our unconscious stores many things, while our conscious has a very hard time multitasking.


Mikhail Shashkov - 2/8/2010 17:48:22

The first thing that popped into my mind when the chapter discussed automaticity was Vista's administrator confirmation prompt, which is arguably unnecessary because most users run as administrators anyway. This chapter portrays such approaches in a negative light. However, in security (CS 161) we learned WHY Vista does it that way, and it's for good (or at least better) security reasons.

This got me thinking about the balance designers must strike between new interfaces that conform to all these ideas, such as the locus of attention, and conformity to existing designs that we as repeat users have adapted to, designs that may even have overridden the "ideal" norm.

I guess my question to the professors would be to elaborate on design decisions beyond simply "best for the user in an IDEAL situation", taking into account the need to adapt.


Weizhi Li - 2/8/2010 17:49:29

Raskin brings cognitive theory into interface design, which for me is highly refreshing. Others might not appreciate the theory as much, but it is clearly at the core of the science of UI design. The chapter quickly moves on to discuss "cognetics", which he describes as the ergonomics of the mind. Raskin seems to have a negative view of the term "intuitive", but introduces "affordances" as a stand-in. What I take from the chapter is that an affordance is something familiar from our prior experiences, combined with "visibility", both of which are very important for a user-friendly interface.


Vinson Chuong - 2/8/2010 17:50:29

I believe that the concepts discussed in this reading are of the utmost importance in designing a successful user interface. They add a new dimension to user analysis: we must now also consider the interface conventions users already rely on in their daily lives, their pre-existing habits, and how those integrate or clash with the conventions we design.

For instance, when I started using Mac OS X after only ever having used Windows and Linux, I found the experience frustrating. F2 didn't rename files. Shift+Delete didn't bypass the Trash and permanently delete files. Window controls were on the top-left instead of the top-right. Though promoted as an intuitive and superior interface, OS X is pretty lacking in my book, and I would prefer not to use it.

That aside, it's not difficult to see that OS X _does_ have an intuitive, well-designed interface in its own right. The main problem is that it defies pervasive interface conventions that Windows established. That makes all the difference.

Aside from all of that, there's a lot more to be said about taking restrictions on users' consciousness into account while helping them complete tasks efficiently; for example, filling a single view with as much information as possible versus splitting it into two views, each with half the information.

I believe that this reading has given us a driving factor in actually designing effective user interfaces and that the concepts deserve a lot more discussion.


Brian Chin - 2/8/2010 17:51:14

The document talks a lot about the cognitive conscious and the cognitive unconscious, and about how, as people get better at things they do often, the action moves from the cognitive conscious to the cognitive unconscious. This seems to imply that the interfaces easiest for people to pick up and use are the ones that rely mainly on the cognitive unconscious; otherwise the user has to learn something new and will find the process difficult at first. This poses a problem, though, in designing new interfaces: a newer design may be better and more efficient, but if it breaks away too much from the old design, users will not like it, or will not be able to use it well initially.


Kyle Conroy - 2/8/2010 17:51:45

Raskin, in section 2-3-2, addresses the human ability to execute multiple tasks simultaneously, noting that doing so creates interference and results in a drop in productivity. The majority of my computer time is spent on a laptop with an instant messenger client, a web browser, email notifications, and various other applications. After reading this chapter, I worry that all this software contributes to productivity loss. I am sure I am not alone in running all these programs at once, so others must feel the same loss of attention. However, there may be a valid argument that people in my generation, having grown up with computers, are better at multitasking than previous generations. I feel that a study of generational differences in simultaneous task execution could be very enlightening.


Sally Ahn - 2/8/2010 17:53:43

It was interesting to read that people "have only one locus of attention," because new devices seem to treat multitasking as a crucial capability. For example, one of the greatest complaints about the iPad is its inability to run multiple apps at the same time; I too had considered this a fatal flaw. Today's reading, however, made me stop and wonder whether designs that optimize for multitasking are in fact more efficient. If it really takes ten seconds for people to "switch contexts," then perhaps multitasking for efficiency is an illusion, and an interface that discourages multitasking may actually be more helpful to the user.


Peter So - 2/8/2010 17:55:14

As mentioned in the paper, humans are limited to a single locus of attention. This singularity can be used to augment reality the way a magician conceals tricks from his audience, but also as a way to logically direct a person through a set of tasks. While the goal of this paper was to analyze how designers can take advantage of the single-minded user to improve productivity, I am curious how designers can use the user's awareness to enhance a product's experience and the flow from one task to the next. With a powerhouse of sensors integrated into personal devices like the iPhone, how can we use information, say from an accelerometer, to predict what the user will do next, and thus cut the time spent navigating through a list of programs by suggesting one for you? In the case of the accelerometer: when you go running, you can listen to your iPod while all incoming calls are forwarded to voicemail; then, when the sensor detects you are no longer running, it pulls up a list of the messages you received during your run, much like a personal secretary. How can we use sensors to enhance the conscious experience by recognizing user patterns and encouraging good habits?
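A rough sketch of how such an activity-aware mode switch might look, under heavy assumptions: the phone object and its methods below do not correspond to any real iPhone API, and the variance threshold is a placeholder rather than a trained activity classifier.

    import math

    RUNNING_VARIANCE = 0.5  # placeholder threshold, not a tuned value

    def is_running(samples):
        # Crudely guess "running" from accelerometer readings (x, y, z in g's):
        # running shakes the sensor, so the magnitude varies more.
        mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
        mean = sum(mags) / len(mags)
        variance = sum((m - mean) ** 2 for m in mags) / len(mags)
        return variance > RUNNING_VARIANCE

    def on_sensor_update(samples, phone):
        # `phone` is a hypothetical device interface, purely for illustration.
        if is_running(samples):
            phone.forward_calls_to_voicemail()
        elif phone.calls_forwarded:
            phone.stop_forwarding()
            phone.show_messages_received_while_away()  # the "personal secretary"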


Kevin Tham - 2/8/2010 17:57:27

This week's material was interesting. Delving into the areas of human cognition and perception, this reading considers the human end of human-hardware interaction. I want to discuss an application of the material Raskin describes on perception, attention, and memory: the interface for maneuvering an automobile. Raskin says that humans have a single focus of attention and cannot be aware of everything that goes on, and driving is not 100% safe; in many cases it is a life-or-death situation. Much thought goes into safety when designing not only the machine but also its interface, and ergonomics and cognetics are examined as well. I think it is only recently that engineers and designers have given as much thought to the interface as to the construction of the actual machine. You've probably heard about crash-test ratings, and they are probably the first thing you think of when evaluating the safety of a car, but I feel that is the wrong philosophy. Evaluation should also take into account the "features" the driving interface provides. These days you see features like rear cameras that help drivers back up without hitting children, breathalyzers that prevent drunk people from driving, and proposals for electronic windshield displays. There is much more we can do to improve driving safety and experience (to surface information that is "unconscious" to us, or that we have forgotten or are unaware of) if more research is placed in this area, which fortunately seems to be happening today. What seems innovative now will become a standard safety feature in the future.

There was one more thing I found interesting and agree with: the concept of automaticity. Once accustomed to a task, it is hard to change, since the task has become "automatic" to us. This is one thing to consider before setting standards for the field, as in the Windows vs. Mac situation: Windows has its own Windows key, while Macs use Command, and though there are similar keystrokes for certain functions, the spatial arrangement of the keys differs. Imagine memorizing pi to a thousand digits, realizing you were off at the 501st digit, and having to rememorize it; the error would leave a permanent mark, and you would never be as fast with the corrected digits as you were before.


Darren Kwong - 2/8/2010 17:58:50

Delving into the cognitive aspects of user interfaces ties into the idea of recording a user's interaction with an interface, as well as the master/apprentice interview relationship. With the concepts of habituation and the single locus of attention in mind, it is beneficial to study what kinds of habits a user has with an interface. The things that require additional cognition should be reevaluated to make them easier to complete and, eventually, to habituate. Observing users' interactions will reveal these aspects.


Jeffrey Doker - 2/8/2010 17:59:03

The article speaks strongly in support of interfaces that resume at the last setting the user left them at (e.g., TVs that turn on to the channel they showed before being turned off). I think this is a good idea in many cases, but it is important not to confuse continuity of user experience with efficiency of interaction. For example, a browser that automatically loads the most recently visited webpage is both efficient and continuous, whereas a setting that automatically centers the mouse on the default button of a dialog box is efficient but acts as a discontinuity in the user's sense of where the cursor is and who controls it.
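The resume-where-you-left-off pattern is simple to sketch; the state file name and keys below are made up for illustration.

    import json
    import os

    STATE_FILE = os.path.expanduser("~/.myapp_state.json")  # hypothetical app

    def save_state(state):
        # Persist the user's last position (channel, page, etc.) on exit.
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    def load_state(default):
        # On startup, resume where the user left off, or fall back to a default.
        try:
            with open(STATE_FILE) as f:
                return json.load(f)
        except (OSError, ValueError):
            return default

    state = load_state({"channel": 1})  # a TV-style app resuming the last channel
    state["channel"] = 42
    save_state(state)                   # the next launch starts on channel 42

The continuity/efficiency distinction then becomes a question of which pieces of state belong in that file: the last channel, yes; the cursor position over a dialog's default button, probably not.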

Also, it's good to be able to resume your last session when you return to an interface after a break, but I think it is also important to have some sort of home page or home screen you can return to when you want to reset your experience.

With regard to automatization, are there metrics for measuring whether a user's comfort with a given interface versus an alternative is due to the quality of the interface or to the user simply being used to the old one? For example, I find Microsoft Word 2007 confusing and difficult to use. Is it a worse design than Word 97, or am I just so used to the older version that I don't realize the 2007 interface is better?


Long Do - 2/8/2010 17:59:16

If a person cannot truly multitask, and trying to do so actually decreases performance, then designers must take that into account. I believe a hierarchy of alerts would be best, where each higher-priority alert actually takes control of the screen and informs the user in big bold letters. Of course, we should not want this to happen very often, so most things should be handled automatically, or at least unimportant, non-critical alerts should be dealt with once and the same response remembered. If an alert is needed for something critical, the user should get both auditory and visual signals that are bold and not brief. The example of the pilots fiddling with the landing gear could have gone better if a large screen had displayed the danger of their loss of altitude, or if the sound alert had been continuous and grown louder and louder as the danger grew.
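Such a hierarchy could be sketched as follows; the severity levels and the ui methods are invented for illustration and do not come from any particular toolkit.

    from enum import Enum

    class Severity(Enum):
        INFO = 1      # handle automatically or just log; never interrupt the user
        WARNING = 2   # passive, dismissible banner; the response can be remembered
        CRITICAL = 3  # take over the screen: big bold text, continuous sound

    def raise_alert(severity, message, ui):
        # `ui` is a hypothetical toolkit handle; the method names are illustrative.
        if severity is Severity.INFO:
            ui.log(message)
        elif severity is Severity.WARNING:
            ui.show_banner(message, remember_response=True)
        else:
            ui.fullscreen_alarm(message, sound="continuous", volume="escalating")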

Perhaps the iPhone's inability to multitask is actually a deliberate design choice by Apple. Perhaps multitasking would cause users to make more mistakes or decrease their efficiency (even if surfing the web is not important, one should focus on one's task!). Like a human, a processor is often not as fast or efficient when it has to multitask, and until technology that handles it well exists, perhaps multitasking is not as important as many make it seem.


Arpad Kovacs - 2/8/2010 18:05:33

It is unfortunate that, in their quest to add features and flashiness, many programmers of multitasking user interfaces seem to have forgotten the human tendency to focus on a single locus of attention. I find the gratuitous animations and sound effects prevalent in operating systems such as Windows Vista extremely distracting when I am focusing on a task requiring real concentration. Another example of failure to account for user focus is animated "helper" figures such as Clippy in Microsoft Word, which draw the user's attention and divide concentration. Fortunately, some designers at Apple and other progressive companies have embarked on a new trend of minimalism and "clean design" that eliminates as many such distractions as possible.


