Human Information Processing (Perception)

Lecture on Feb 26, 2008

Slides

Lecture Video: Windows Media Stream | Downloadable .zip

Readings

Gary Miguel - Feb 21, 2008 10:38:56 pm

Wowzers, that was long. I certainly found it interesting to see that such a crude model of a human being could so accurately predict so many phenomena. One part that struck me was the analysis of a person solving a fairly complex puzzle. The conclusion was that people are not complicated, but they act in complex ways if it is necessary to achieve their goals. This reminds me of computers, which are basically very simple, but can be made to perform very complex operations. Overall, I found this reading to be in a similar vein to Raskin's reading: to better design interfaces, we should start with a better understanding of how humans work. But I found Raskin's writing much more fun to read and much more relevant. This reading's super low level descriptions of tasks don't seem as applicable to the projects we're working on. I don't think we're expected to take into account how many milliseconds someone's finger takes to use our flickr app, are we?

Brian Taylor - Feb 22, 2008 04:06:41 pm

From that ridiculously long reading, I found the portions about how people remember things most interesting. It was funny to think about how a person might have trouble searching for a paper he saved with the title "light" (in a light vs. dark context) if he had, at the moment of trying to find it, the idea of light as the opposite of heavy inside his head. Along with the importance of considering context when accessing memory, the reading also discussed the idea of memory interference. Specifically, if a person tries to remember some event or task that is incredibly similar to some later event or task differing by only a few details, he may find that he has forgotten the earlier one altogether. I found this strange, because I would have believed that two similar memories would be easier to recall together rather than actually hurting the person's ability to remember the former. Of the text, the calculations about how long it takes for memory to fade, given the number of 'chunks' a person can retain, could potentially be useful for our projects (i.e., when we consider how much information, perhaps with respect to some state, we should display on the "limited" screen). We need to make sure that we do not force the user to remember too much data at any time; however, we also don't want to clutter up the screen. Similarly, the calculator example may also be useful when considering efficiency. Overall, the various calculated examples were fairly interesting to peruse, but I don't think I would have enjoyed them at all if I had read them at 52 words per minute.
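To put rough numbers on the decay calculations mentioned above: a minimal sketch, assuming the chapter's working-memory half-life estimates (roughly 73 s for one chunk, 7 s for three chunks) follow simple exponential decay; the 10-second probe time below is arbitrary.

```python
def fraction_retained(t_seconds, half_life_seconds):
    """Exponential decay: fraction of material still retrievable after
    t seconds, given a working-memory half-life from the chapter."""
    return 0.5 ** (t_seconds / half_life_seconds)

# Chapter's estimates (assumed): ~73 s half-life for 1 chunk, ~7 s for 3.
print(fraction_retained(10, 73))  # ~0.91 -- one chunk survives 10 s well
print(fraction_retained(10, 7))   # ~0.37 -- three chunks are mostly gone
```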

Michael So - Feb 23, 2008 11:44:26 pm

That reading had lots of equations. I am wondering if we need to know those equations for the midterm or something, like how to calculate the time for a user to complete a certain task, such as how fast a user can push a certain combination of keys on some type of keyboard. But maybe this reading just goes to show the amount of math and thinking needed to analyze human-computer interaction. The thing I found most interesting concerned Working Memory and Long-Term Memory. According to the reading, items in Working Memory are more sensitive to acoustic interference. This means that items a person is currently working with in his head will be confusing if they sound alike. I found this interesting because, when I thought about it, I could relate to and agree with that observation. Same thing with long-term memory: items in Long-Term Memory are more sensitive to semantic interference, meaning a person will get confused dealing with items that have similar meanings. The example they give about a user learning a new text editor that has the same functions as the old text editor, except with different command names, was pretty relatable. That would interfere with my ability to remember the old command names. I have had experiences like that, where after using a newer version of some application, I forget how the older application worked even though I had spent many hours with the old one.

Benjamin Lau - Feb 23, 2008 11:46:41 pm

I didn't think the reading was very interesting; honestly, my eyes glazed over after looking at like the 12th bar graph. It brings up a few good points, but most of the observations (as the authors admit) lend themselves to empirical models rather than "guyz this is siriously how it rly works" models. The analogy of processor cycle time seems a bit arbitrary; it could've been anything else, like the time it takes an OS to do a round robin over all processes on the CPU ready queue, so why processor cycle time in particular? I'm not sure what the book the excerpt is from is all about; my guess is it's probably not about interfaces, because as someone else pointed out, a lot of the information is a bit too low level (e.g. auditory/visual decay time for letters) to be relevant, compared to, say, the Raskin reading. A lot of the 'laws' given look interesting until you realize that they're just mathematical gibberish for things we already knew as obvious in real life; e.g., all Fitts's law really says is that the farther away something is and the smaller it is, the harder it is to move your hand to it. That practice time follows a power law is interesting, but I'm not sure what good it is to know. It doesn't really suggest a mechanism for how learning works, nor does it tell us how to improve a user's practice time. On the bright side, Section 2.3 does go into a good deal of detail about the failings of the Human Processor model as well as its ancestors. And page 2 of the PDF is a summary of all the principles, so you won't have to read this all again, hah.
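For what it's worth, the "mathematical gibberish" version of Fitts's law is compact enough to try out. A minimal sketch using the Welford formulation given in the chapter, with its nominal slope of about 100 ms/bit; the two targets below are hypothetical:

```python
import math

def movement_time_ms(distance, width, slope_ms_per_bit=100):
    """Fitts's law (Welford form, as in the chapter):
    T = I_M * log2(D / W + 0.5), with I_M ~= 100 ms/bit nominal."""
    return slope_ms_per_bit * math.log2(distance / width + 0.5)

# Hypothetical targets: a large nearby button vs. a small distant one.
print(movement_time_ms(distance=4, width=2))     # ~132 ms
print(movement_time_ms(distance=16, width=0.5))  # ~502 ms
```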

Gordon Mei - Feb 24, 2008 01:48:43 pm

In discussing motor skills, the Model Human Processor article describes human-computer interaction as consisting of the movement of the hand towards a target, and keystrokes. These are limiting factors that come into play with any interface. Particularly, the article references a pocket calculator with a frequently used "f" button placed in the wrong place. This is akin to a notebook computer manufacturer moving certain keys, like the Windows key or function key, into the far right corner, above the backspace key. Sure, it helps them with their cramped key space at the bottom, but at the expense of greatly increasing the distance of these keys to their combination QWERTY keys, and consequently the hand movement time when using those keys in tandem. This is probably why we see some common layout decisions like the position of the "equals" key adjacent to the number pad. It also explains opposite decisions of selecting combinations far apart, like Ctrl+Alt+Del, whose distant spacing arguably prevents accidental steps towards killing processes.

Additionally, how fast a user can push two keys differs depending on whether they are pressed in sequence by the same hand or by alternating hands. The article printed a diagram of the QWERTY keyboard layout indicating which keys are used the most. It's interesting that they point this out, because it's not something all of us consciously think about. Typing the same-hand keys CV (as part of, say, CVS) is a tad slower by human limits than typing the keys CN (as part of, say, CNN) with alternating hands. Considering that keyboard layouts on touch screens can be reconfigured (e.g. alphabetic vs. QWERTY), this factor is worth a look.

Or regarding response times, if a computer started a long process of deleting files you did not mean to remove, having the prompt ask to hit the prominent "Spacebar" key to cancel would allow for a quicker response and less damage done than if it asked to hit "S" to stop, or "C" to cancel.

And moving away from hand movements, those who read hundreds of words per minute do not actually read every phrase of a text, so when writing instructions in large blocks of text in the UI, it's important to write them in such a way that the skimming user can immediately grasp the gist of the message. Likewise, the placement of such text is equally important, as visual areas large enough to require movement of the fovea of the eye will influence the time to perform a task. All of these elements demonstrate the importance of understanding the limits of the human in order to understand human-computer interaction.

Hannah Hu - Feb 24, 2008 03:57:04 pm

While the mathematical material of the reading made my eyes wander, discussions such as key distance and chunk memorization made for some good insight. Concerning key distance, the section immediately brought to mind the Dvorak keyboard layout. That layout was mentioned in passing, and though the article stated that it was no more efficient than the standard QWERTY layout, I believe that more experiments and wider adoption of the alternative layout would disprove that claim. I have tried out the Dvorak layout, and though I have indefinitely delayed learning it fully, I found it much more efficient than the standard, and moreover it was easier on my wrists. The article seems to ignore the physiology of human hands in discussing human reaction time; it isn't so much the distance covered by fingers on the keyboard as it is having keys arranged in a more natural layout. Nor is it about speed, but rather about ease of use. (As a side note, there is an alternative to QWERTY and Dvorak, called Colemak, which claims to be easier to learn than Dvorak while keeping ergonomics and efficiency in mind.)

On chunk memorization, I found it somewhat ironic that the article isn't written in a chunkifiable format :).

JessicaFitzgerald - Feb 24, 2008 04:40:37 pm

I found the approach of this article quite interesting. It seemed intent on analyzing the time it took for humans to complete certain tasks, and on using that information to support the analogy of a human as a processor. It seemed to me that the author tried to model how humans process things in the same light as a computer does. By understanding what a human can do easily, it seems we can apply this information and use it to our advantage when creating user interfaces. For example, the QWERTY keyboard is shown to have a faster typing rate than an alphabetic one. I was surprised that the rates were so close, because I expected the alphabetic one to lag far behind the QWERTY one. The author's discussion of how memory works was also interesting, especially the part about long-term memory: if we have several things in memory that are very similar, recollection of those memories becomes harder. Also, memorizing things in chunks is something I found that everyone uses without realizing it. I was also unaware that there are only a certain number of chunks that we are able to hold, and that thereafter we forget.

Ilya Landa - Feb 24, 2008 08:24:57 pm

Well, one more unbelievably long reading. Did you people run out of short, funny readings about bad interfaces? I believe there is still much more left to cover on that topic. However, this reading showed some useful methods for designing and analyzing a user interface: how long it would take a user to execute some task, or how long to even enter a command based on a keyboard layout. In older, simpler applications, this information would not be very useful: an application starts -> user enters a command -> something happens -> user enters another command... The time it would take to complete some process depends mostly on the speed of the application, not the user. However, in newer, user-centered applications, much more attention is given to the process of user-application interaction. For optimal design of such applications, designers need to make the user experience as smooth as possible. Also, they may want to cheat a little: if the designers know that it would take a user at least 30 seconds to analyze the output and enter the next command, they may use this time to complete some processes that the user thinks are already done. Still, I do not understand why there has to be such a long article about a seemingly simple concept: if your application has gestures (points for using a term from the lecture) consisting of pressing many buttons, it'll take the user a long time to execute an average action on such an interface. The moral: put as many big colorful buttons on the screen as you can without making it too cluttered.

Chris Myers - Feb 24, 2008 10:00:56 pm

What? They took the metaphor in reverse! Now the human is evaluated as if it was a computer. Um, ok. Interesting data though. Good to know when giving the user messages and expecting them to be remembered. How long to flash notations on the corner of the screen and such. Response time is important too. I've noticed things like autocomplete in a text editor can be completely worthless if it's too slow.

Bo Niu - Feb 25, 2008 01:00:56 pm

There was some interesting and informative data shown in the reading. The metaphor Card, Moran & Newell used to explain the human mind in terms of computers is very effective for computer scientists, I guess. Instead of explaining the not-so-regulated functions of the human brain, the metaphor seemed much easier to manage. It shows us the human brain's capabilities and limitations so that the UI we design is suitable for humans to use. Overall it was a pretty useful reading to keep in mind when developing a UI.

Brian Trong Tran - Feb 25, 2008 01:58:34 pm

It is most definitely different learning about the way the human brain works in terms of formulas. I guess these formulas help us understand how the brain functions well enough to develop interfaces catering to the way humans think. This kind of reminds me of how we can take advantage of the user's locus of attention to do things that they won't notice. I think the article would be more beneficial if it actually showed how the metaphor of the human processor could be applied to user interfaces.

Hsiu-Fan Wang - Feb 25, 2008 01:46:05 pm

I'm reminded of how in the Raskin reading he cautions against getting carried too far with the use of metaphors for the brain... I think this paper takes it a bit far. Things like decay rates seem a sign of a very poorly designed computer (though I understand the point that is made). I thought one of the more interesting things was near the end when they discuss limiting factors, and they describe the debate as the difference between the brain being cpu-bound or memory-bound.

The reading had a number of short discussions about different topics, so I'm having difficulty trying to build toward a larger theme, but I was really curious about the discussion of the rationality principle, as a number of studies have shown that human behavior is irrational. I mostly feel insulted that I can be reduced to some FSA or whatever. I think the idea is primarily useful when considering relatively small tasks, but it begins to break down when one examines more high-level motivations ("I want to make things look cool" as opposed to "I want to add a gradient wipe").

Eric Cheung - Feb 25, 2008 02:29:11 pm

I'd have to agree with some of the comments made about the abundance of formulas. Personally, the calculations and the formulas didn't really help my understanding of the chapter. It seems like the authors would introduce a formula and some numbers and then immediately ignore them for the next example. While I do appreciate the importance of modeling, the model of the human processor seemed a little contrived. I didn't get a good sense of how useful it would be to apply to real-world tasks or how accurate it actually was.

That being said, I did find parts of the chapter useful. Namely, the section about the problem space principle. I think it provides a useful way to generalize how people approach certain problems on a more qualitative level than the previous examples.

Kai Man Jim - Feb 25, 2008 06:49:21 pm

Regarding the three systems in the article, I am very interested in the cognitive system and the perceptual system because these are exactly what we learned in CogSci 101. The way of thinking about how the human brain works like a computer is the main point of this article. For some reasons, computers cannot work 100% the same as the human brain, because our brains are more complicated, with all the neurons and axons firing signals together. But physically, how information is stored in our brain is very close to the way we save data in computer memory; therefore, the metaphor only really makes sense for the perceptual system. On the other hand, I feel like this article is too long to read, with a lot of physics-style formulas that are not really related to CS.

Yunfei Zong - Feb 25, 2008 01:50:15 am

These formulas in the reading remind me of the famous Drake equation, explained most simply here: http://xkcd.com/384/ . Seriously though... summing up a human in terms of simple discrete equations is laughable at best. No further comment.

David Jacobs - Feb 25, 2008 09:22:49 pm

Reading this text, I couldn't help but think of our class discussion of metaphors. I understand that the authors wanted to pose the brain in a framework that can be "easily" understood -- in this case, a system of processors and interconnected memories. However, I think the computer metaphor is severely lacking, in that beyond the existence of something we call memory, it really doesn't seem to map that well. I also have to comment that I find it particularly amusing that the computations the model is used for (like minimal frame rate computation) are likely closely related to the source of the experimental parameters (all the taus and such).

Khoa Phung - Feb 25, 2008 08:17:27 pm

I am actually very impressed by how much research has been done in this field of human mind/process modeling and how accurately one can predict, with these equations, how this model works. The example of Morse code, and how the tones were calculated, was very interesting. However, a lot of the text is very mathematical and not always easy to follow. Having more abstract examples, such as the chunks metaphor, helped a lot in understanding certain ideas behind our human processing.

Gerard Sunga - Feb 25, 2008 09:58:45 pm

This article was interesting, to say the very least, and a bit hard to understand. The authors seem to take the metaphor of a human processor to the extreme, listing what seems to be a hodge-podge of equations in a single chapter and making the reading very difficult. Nevertheless, their quantifying of human brain function was pretty interesting (especially considering the vast amount of time that compiling such a large list would take, particularly the various timing equations). I simply wish the organization were better; dumping this number of equations all at once makes the reading quite a chore.

Randy Pang - Feb 25, 2008 10:13:41 pm

I actually found the quantitative analyses presented to be intriguing at first, though as I kept reading I came to agree with everyone else that it was overly long and not very useful. My biggest qualm with the article was that here was all this structured, thought-out analysis and number crunching, but I felt the article really fell short by not going that one extra step beyond the numbers. For example, the total time to complete the task was 53.15 seconds... so what? What does that tell us about the interface? About the design? What if that number was instead 5315 seconds? What sort of implications would that have? What are the thresholds for human distinction? If the measured variation was 23% higher than calculated, why was that? Was it because there was a mistake in the model? Or perhaps a mistake in the measurements? Or perhaps something completely different? I feel that all these equations, laws, and graphs could have been really insightful if they had just taken the time to analyze them after crunching them. If they wanted to leave the numbers to us as a mental exercise, to think about their ramifications, I could almost see that being somewhat acceptable, except that I really couldn't draw that much insight from most of them (some, like the keyboard wpm example, were easier to draw conclusions from, but as for others, like the reaction time... seriously?).
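One way to get the "so what": totals like 53.15 seconds are meant to be used comparatively, i.e., predict the same task under two candidate designs and keep the faster one. A rough sketch in the spirit of Card, Moran & Newell's keystroke-level operators (the operator times are their nominal published values; the task breakdown and the tab-based alternative are hypothetical):

```python
# Keystroke-level operator times in seconds (nominal values from Card,
# Moran & Newell; the task decomposition below is purely hypothetical).
K = 0.28  # press one key (average typist)
P = 1.10  # point at a target with a mouse
H = 0.40  # home hands between mouse and keyboard
M = 1.35  # mental preparation for a step

# Hypothetical task: select a form field, then type an 8-letter word.
design_a = M + P + K + H + 8 * K  # mouse-based field selection
design_b = M + 2 * K + 8 * K      # tab-key field selection
print(f"A: {design_a:.2f} s, B: {design_b:.2f} s")  # A: 5.37 s, B: 4.15 s
```

The absolute numbers matter less than the difference between the two sums, which survives a lot of error in the individual operator estimates.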

Jason Wu - Feb 25, 2008 10:27:47 pm

So usually isn't the metaphor a computer as a human being? I think that works a bit better, as this metaphor felt very extended. The sheer number of equations was overwhelming and was hard to follow. It seems that the more equations I get explaining the human thinking processes, the less likely I am inclined to give them credit. While I'm not too sure how much practical knowledge I got out of that, it was an interesting thought process at least.

Jonathan Chow - Feb 25, 2008 10:32:40 pm

As I kept reading the paper, I couldn't help but ask myself: what's the point of all this? Really, it seems to me that the only reason you would bother with all of these details when dealing with user interfaces is if you plan on using a "simulated person" as a tester. Other than that, while the numbers are cute and everything, I fail to see the purpose. While taking some of these things into account might help initial iterations of a project, I don't think it helps very much. I suppose that if I were designing a billiards game, it might be nice to know if I can do computations faster than the user would expect a ball to move. Nonetheless, I feel like I could get the same information by just testing users. After all, how else did the authors obtain these numbers? They tested people. So why not just test your product with real people? I'm sure more useful information will come from it.

Andry Jong - Feb 25, 2008 11:32:13 pm

This article by Card, Moran & Newell was surprisingly extensive. And is it really related to user interface design?

There were only a couple of points in this article that I can relate to designing a user interface, one of which was how the most frequently used tools in an interface should be placed closer to each other. Card, Moran & Newell's example of the 'f' button placed closer to the numbers on a calculator proved this point.

Another thing I felt was relevant to this class was the part where they talked about how many names a person can remember when the names make sense to him/her ("CAT" vs. "TXD"). I found this relevant to this class (and other CS project classes) because it relates a lot to file and variable naming conventions when building collaborative projects.

Those are, honestly, the only couple of points in this article that I found interesting and useful. Other than that, it's just annoyingly extensive.

Diane Ko - Feb 26, 2008 01:51:00 am

The perception times and the equations related to them reminded me of an exercise we did in i247 dealing with perception of visual elements. The ability of the human brain to process the differences in certain objects vs certain words is incredible. Processing times of the human brain are especially important for user interfacing. Interacting with a website, for example, is almost instantaneous in terms of perceiving what you can do on the page. A user should be able to quickly look at the page as a whole and see which objects are clearly buttons, which text phrases are links, etc. A poorly designed interface is one that requires linear time sequential processing of information. For instance, on this wikipedia editing page, the 2nd to last button above the text box has some squiggly line that I guess could be construed as a signature. I couldn't figure out what it was until I hovered over the button and read what it actually was. The same is true for the trumpet icon which is supposed to be indicative of media.

The keyboard layout reminded me of a discussion we had in class about whether it was better to have a phone with 2 letters per button in QWERTY format vs. standard phone lettering with 3 letters per button in alphabetical order. It was brought up that it is faster to predictive-text something that is all on the same button, but actually it is much faster to type something that spans different buttons, assuming you are using more than one finger/hand. For instance, typing with one hand is very slow for words that are spread across the keyboard (like "lake") but significantly faster for words that are all on the same side (like "free"). However, typing with two hands, it is faster to type "lake" than to type "was". This is because you can start moving your other hand/fingers before you finish the previous letter in words that span the keyboard. For words that are on the same side of the keyboard, specifically those with many letters that use the same finger, the next movement cannot be made until the previous one has completed. These sorts of considerations make a major difference in the total time someone spends typing. It seems trivial on a small scale, but the times add up; while writing a paper, for example, they create a major difference in overall typing time. By separating letters by their relative frequency, typing becomes much faster once you learn the new configuration of keys.

Jonathan Wu Liu - Feb 26, 2008 09:51:41 am

When I first read this paper, the first thought that came to my head was... this paper oversimplifies things. Can the brain be reduced to a series of formulas and laws? I highly doubt it. My second thought was that this seems like it could be very different for men and women. Other than failing to mention these two factors, the paper made some interesting points, especially about interference with working memory. Acoustic vs. semantic interference is a great concept to watch out for while designing items in a menu. We also have to take into account interference with previously learned material; some standards should just not be overthrown, because every previous application contains them. It is for standards like these that we should shelve our "innovative" desires and go with the status quo.

Nir Ackner - Feb 26, 2008 09:13:46 am

Card, Moran, and Newell seem focused on providing a model of the user to the computer, rather than a model of the computer to the user. It is interesting how this approach compares to the other approaches we have discussed, which emphasize modeling the computer for the user. (ex: conceptual models) Specifically, the idea of quantizing the user might be prone to ignoring common-sense principles that aren't necessarily easy to model by equations. It seems that a hybrid of these two approaches would be especially useful in creating good user interfaces, since any realistic interface needs to take into account both the limitations of the user and of the computer.

Glen Wong - Feb 26, 2008 11:19:32 am

This article is quite intriguing. I would have to echo what some others have said about there being an overabundance of formulas. The model of the human mind presented in the reading was very different from descriptions that I've typically heard. The presentation of the three distinct systems at work went counter to my belief that the human mind was best modeled as a single processor system. I also found the idea of working memory being a subset of long-term memory very fresh. However, all this aside, I feel doing complex mathematical calculations to predict the behavior of an individual on a system is not very conducive to design. I think this article gets too much into the low level details. If we were to design HCI at this low of a level, I think our creativity would be stifled under the calculation overhead.

Edward Chen - Feb 26, 2008 11:47:08 am

While the article was rather long, it was also rather intriguing in the way it presented and modeled the user as if they were a machine, giving formulas that would describe how a user would function mechanically. In addition, it also gave observations of the user like they were specifications of a machine.

While all this may seem useless, from the perspective of an interface designer it could be very useful in determining how the average user will react to an interface or how well they'd learn it. Knowing how the user stores information in their "memory" tells the interface designer how to make the interface more intuitive and easier to learn. Similar to how hardware has specifications so that one knows how to design software for it, Card, Moran and Newell provided specifications of the user so that one knows how to design an interface for them.

Adam Singer - Feb 26, 2008 12:10:34 pm

Admittedly, I didn't read every last word of this paper. The sheer amount of equations and graphs was dizzying. What I did grasp from the paper was the basic analogy of comparing the human mind to a computer, with several processors and memory "units". This was definitely an interesting spin on the "computers are people" metaphor, but I do think the reverse metaphor has some interesting attributes; I particularly enjoyed reading about comparing a human interpreting a letter to a computer reading and processing a keystroke. While this paper was very informative and incredibly detailed, I think the better metaphor to discuss would be "computer is a human" since we aren't trying to make people more computer-like, we're trying to make the experience of using a computer more humane.

Michelle Au - Feb 26, 2008 12:22:38 pm

Despite all the formulas and numbers, this reading brought up some interesting considerations for designing user interfaces. The human ability to learn is an important one, and the discussion of the power law of practice suggests that designers should take into account that, over many trials, users' interactions with the system become quicker and more habitual. This can matter if the application does some sort of intensive background processing that is masked by slower novice interactions throughout the testing phases. As user interactions quicken, that long background processing may become noticeable and can hamper the user experience. Unfortunately, the problem wouldn't be noticed until users become experts with the application, which may not occur during the entire development phase.

Another interesting discussion was in the learning and retrieval section. The example of a programmer remembering the names of a number of files illustrates how the presentation of the filenames affects how effectively the information is transferred from Working Memory to Long-Term Memory. Interface designers have to keep this in mind when considering how to display information in a way that is easier for the user to remember and recall later.
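The power law of practice discussed above is easy to play with numerically. A minimal sketch, assuming the common form T_n = T_1 * n^(-a) with a typical exponent of about 0.4; the 20-second first trial is hypothetical:

```python
def trial_time(t1_seconds, n, alpha=0.4):
    """Power law of practice: time for the n-th trial,
    T_n = T_1 * n ** -alpha, with alpha ~= 0.4 as a typical value."""
    return t1_seconds * n ** (-alpha)

for n in (1, 10, 100, 1000):
    print(n, round(trial_time(20.0, n), 2))
# 1 -> 20.0, 10 -> 7.96, 100 -> 3.17, 1000 -> 1.26 seconds
```

The long tail is the point: per-trial time keeps shrinking for hundreds of trials, well past anything a development-phase usability test would observe.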

Jeffrey Wang - Feb 26, 2008 12:36:05 pm

With regard to human perception, I thought it was very interesting that movies are simply an exploitation of the frame rate of human perception. Although movies are just multiple pictures shown at a fast pace, we perceive them as moving objects. Because of the lag in human perception, the motion picture industry was able to take advantage of this (perhaps without these sophisticated models in the past) and show "moving pictures" as the movies we enjoy today.

One slight problem with the article, with regard to perception, is that it breaks everything into mathematical formulas. But with things such as object recognition, how would this conceptual aspect be broken down into numbers? For instance, distinguishing a glass cup on a table with many plastic cups nearby would involve depth perception, possibly color perception, etc. Is it possible for this to also be described using mathematical models?

Also mentioned in the article are selective attention and perception. I think the most interesting experiment is the Stroop test, which presents an incongruency of color and concept in words. Participants are told to read a list of words (color words presented in different colors). In addition to simply perceiving the word, at a subconscious level, humans are subject to the visual presentation of the word. Even though they are told to pay attention only to the word, they often say the color of the word or take longer to read it. Thus, at the level of perception and cognition, there is a combination of the two. According to these authors, would this suggest a really quick processing rate between cognition and perception, or an inseparable nature of perception/cognition (dual processing)?

With regard to working memory, I studied an experiment showing that a list of items is not likely to be remembered accurately if they are all similar concepts -- semantic congruity (e.g. candy, sweet, lollipop). In these cases, if you ask the participant, "was sugar on the list?", they will most likely answer yes. Would this suggest semantic interference in working memory, or that the words had gotten into long-term memory (where semantic interference is most prevalent)?

Yang Wang - Feb 26, 2008 12:48:39 pm

First of all, that was long. Second, what was the point of this reading again? Yes, the memory processing part is pretty interesting. It teaches us how to feed information to users and not to assume they will remember anything that only goes through short-term memory. But other than that, why? I know there is a limit to how fast humans can process information such as sound, text, and choices. But as far as interface design goes, we would not *nearly* approach that limitation. Why would anyone design a program that flashes letters at a frequency over 10 per second? It was an interesting piece of reading; however, I didn't find it as useful as the previous readings.

Jeremy Syn - Feb 26, 2008 12:53:05 pm

I must agree that this reading had a lot of formulas and equations, like those of a physics book. I don't know how relevant these formulas are to us in the context of this class, but they certainly could be useful, I suppose. As an aside, I thought it was interesting to note that many of the animations and effects we see are just tweakings of our own perceptions. Such is the case with motion pictures, where specific formulas describe what it takes to create the illusion of a moving picture in human perception. I was also amazed at the many different kinds of data they provide, such as how fast a person can read text, a lot of which could be useful when deciding how to design our interfaces.
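For the curious, the motion-picture calculation is short enough to reproduce. The chapter's model says two successive images fuse into apparent motion when the second arrives within one cycle of the perceptual processor, so the minimum frame rate is roughly the reciprocal of that cycle time (a sketch using the chapter's nominal value; the real figure varies with brightness and image content):

```python
# Nominal perceptual processor cycle time from the chapter:
# tau_P ~= 100 ms, with a rough 50-200 ms range depending on conditions.
tau_p_ms = 100
min_frame_rate = 1000 / tau_p_ms
print(min_frame_rate)  # 10.0 frames/s -- the nominal fusion threshold
```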

Max Preston - Feb 26, 2008 12:50:13 pm

Although it is true that the human mind shares similarities to a computer in some ways, I am surprised that the author attempted to define a human in such concrete and deterministic terms. Despite the vast amount of detail in this article, I don't feel as though the content in the article is accurate, especially when considering our own extremely limited knowledge of the workings of our own brains.

Even though our minds do process information, I don't think our brain's processing method is anywhere near similar to that of a silicon-based processor, and I likewise do not agree with many of his other far-too-direct comparisons between human beings and machines. Even though the author bogs down the reader with a tremendous number of equations and facts, I feel like this is a misdirection tactic designed to bring attention away from the fact that he doesn't have anything to actually support his thesis.

William tseng - Feb 26, 2008 01:32:19 pm

I thought the best example of using all the detailed calculations presented in the article was the explanation of why the keyboard looks the way it does today, with the Sholes layout as opposed to an alphabetical one. Even so, I find it interesting that the article also presents the fact that the modern keyboard layout is still only 8% faster than if a keyboard were laid out alphabetically. I do not want to discount the validity or usefulness of approaching user interfaces with this level of analysis, but I think any developer must balance a proposed "improvement" in efficiency against how reluctant users would be to adopt a new interface. Again, the keyboard example brings up the Dvorak keyboard, which is 2.6% better. However, I'm sure that if someone used that layout on a mobile phone instead of the standard QWERTY keyboard, it would be met with significant negative feedback.

Joe Cancilla - Feb 26, 2008 01:43:21 pm

The section on Learning and Retrieval was particularly interesting. Placement of items in a user interface will cause a user to remember certain items over others. From the research, one could guess that items placed at the top of an interface will be remembered better than items placed in the middle. While this seems like common sense, I think it is an important thing to remember in the design of user interfaces.

Zhihui Zhang - Feb 26, 2008 12:27:32 pm

I've seen analogies of humans being treated as computers in various other classes. In fact, outside of CS, it seems to come up quite often (I've seen biology books that tried to make this analogy). But more to the point, I learned in a psych course I once took that the human brain actually works nothing like a computer. For example, humans operate primarily by heuristics and "insight", and these thought processes cannot be described quantitatively.

In regards to Dvorak, I think it's not so much a speed factor as a comfort factor. I've tried the Dvorak layout, and typing seems to be more comfortable -- no bending your fingers at awkward angles.

Benjamin Sussman - Feb 26, 2008 02:29:09 pm

By far the most boring and painful reading yet; I was glad to be finished with it. I am a strong believer that the mind is a computational device; however, the pathetically simple computational models presented in the reading do not do justice even to the statistical machine learning algorithms that current computers are capable of performing. Considering that we are discussing painfully straightforward aspects of human ability, such as reaction time and finger dexterity, these computational models are not wrong; they are simply uninteresting. There are very few interfaces today that butt up against the physical limitations of humans; it is, in my opinion, a solved problem. The much more interesting problem is that of the mental models humans have, as well as developing a model of the human mind itself, and this paper does not provide any insight in that direction.

Zhihui Zhang makes a good point regarding the Dvorak keyboard. While experiments measuring efficiency may show insignificant changes (a point I would contest, as my experience with Dvorak users is that it is far more efficient and speedy), they do not reflect the thoughts of the users. Do they prefer Dvorak, despite this supposedly insignificant change? If so, why? What is it about the layout that draws them?

Cole Lodge - Feb 26, 2008 02:25:26 pm

I found this reading odd and a bit backwards compared to the rest of the readings. All of the humans I have run across definitely do not react logically; they are more the antithesis of a computer. In my opinion, we should consider humans as, well, people. We should not try to put them into a nice logical box; every person will react differently based on the situation.

Richard Lo - Feb 26, 2008 02:32:13 pm

Goodness. What craziness. Not only long, but extremely technical and in-depth, far more than I honestly would have cared to read. I skimmed over many of the extremely technical portions and just tried to pick out the main points. It is definitely an extremely important topic for anyone designing some sort of interface for a human to use, but certain subtopics I felt were more interesting than practically helpful. Many applications won't deal with human reaction time or motor skill recognition, unless you're doing something like a game. Still, an interesting read, and cool to know that people actually have technical equations and formulas for these types of things.

Timothy Edgar - Feb 26, 2008 02:09:23 pm

The reading was a bit dense in the amount of information it had. Between the equations and the math, my eyes did glaze over a bit as I kept asking what the point of all this was. It did explain the point a lot more clearly in the end: figuring out the bottlenecks and the reasons for various optimizations. There are limits on how things are perceived and on how things are recognized and reacted to, and I found the reading and typing aspects the most interesting. Speed readers don't read individual words but handle multiple independent events. It seems this type of analysis is good for optimizing devices or interfaces that are commonly used (such as the evolution of the keyboard); however, for initial designs, I don't think one should worry about it so much. It's more about whether information is being communicated well than about whether the user can do the entire task efficiently.

Katy Tsai - Feb 26, 2008 02:09:24 pm

I thought it was interesting how the reading pointed out the different thresholds of time that affected our perception of motion and visual cues. It shows how timing can greatly alter how we perceive a movie, causality, and sound. However, while it’s nice to know the precise calculations of the thresholds of time, I find that these things become rather intuitive to most people. Because we can measure motion, and analyze the different visual and audio cues within our lives, we can intuitively judge how fluid a sound or a movie is.

Johnny Tran - Feb 26, 2008 02:33:54 pm

I thought the reading was fairly informative, especially when it talked about the human mind in terms of a computer. I find it doubtful whether UI designers nowadays take into account human factors such as the human depth of computing or short-term memory capacity when designing interfaces. While some of the information may be difficult to apply all the time, it is certainly somewhat humorous to read about the human mind in a way that an average computer scientist or programmer would understand: in terms of expansion, complexity, and equations.

It makes me wonder how things would have been different if all UI designers kept these parameters in mind.

Ravi Dharawat - Feb 26, 2008 02:29:16 pm

It was interesting and simultaneously quite painful to read. That interest, however, derives primarily from this observation: it would appear these authors are quite avid aerial architects, if you catch my meaning. That is, they enjoy building castles in the sky. There are quite a few assumptions in this reading, among them that humans are rational (the rationality principle). We do not behave rationally. Another is the uncertainty principle. I know more than a few individuals who, when faced with situations of sufficiently great uncertainty, and able to ascertain that uncertainty, will make whatever decision comes to them first. Many of the assumptions made in this article can be debunked like this. I must say my opinion of this article is low, though there was some redemption in the keyboard example. In closing, it was a fine step in a strange direction, and though such steps often bring us to treasure unknown, it is a finer thing not to be so sure the next step is not into fresh spoor.

Alex Choy - Feb 26, 2008 02:38:15 pm

I agree with some of the previous posts that the article has too many formulas and symbols. I do not feel that the formulas in the chapter helped my understanding of the material. Some of the descriptions of the human mind and how it processes information reminded me of some of the psychology classes I've taken. I felt that the discussion of cognitive memory in the article was interesting. I can relate to the separation of working memory and long-term memory, because not all long-term memory is "active" at one time. While I know many things, I am not conscious of every piece of data at once.

Harendra Guturu - Feb 26, 2008 02:02:25 pm

The metaphor of the brain as a processor was overdrawn and not properly used. Usually metaphors are a technique to clarify confusing material, but in this case the metaphor is badly set up, and the article is hard to follow due to the mixture of terminology and equations describing the brain followed by the metaphor. I feel like this paper should be two separate papers: one covering just the technical aspects with the equations, and one covering the high-level details of the brain so that we can interpret it and apply it to user interface design.

Jiahan Jiang - Feb 26, 2008 02:58:24 pm

The perceptual, motor, and cognitive systems analogy seems to be a fine metaphor, but in practice they are not as clear-cut. Some questions I had were: how do you map/handle all perceptions ahead of time, and what would you do with unidentified perceptions? What goes into motor activities being triggered? Some of the diagrams were quite interesting to look at, though.

Robert Glickman - Feb 26, 2008 02:50:01 pm

I will try to interpret as much as I can from this very tedious, very technical chapter. While it was interesting to learn about the physiology of the human mind and how it processes information the first time, I have taken numerous classes on the subject in the past, and most of them offered better explanations than this. This was simply too unnecessarily technical, and it was so just for the sake of being technical. However, I must say there were a few tidbits that were interesting, mainly having to do with the steps of storing and retrieving information (particularly with Working Memory), but they were few and far between...

Reid Hironaga - Feb 26, 2008 02:58:49 pm

Card, Moran, and Newell provide a very thorough analysis of their model of the human mind as a set of dedicated processors working in parallel. I think it is a good alternative perspective to consider the brain as if it were a designed artifact, but at the same time it almost assumes there was a designer, and nearly avoids the idea that evolution merely resulted in a haphazard arrangement of thought processing that was "good enough" to sustain the species. The graphs and mathematical figures are interesting to go over, and I'm sure each of them could get a thorough examination in its own right. The tests performed on people to see how they compare with the computer model are very interesting, in my opinion. They could be compared to an inverse Turing test, where the human has to fool the computer into thinking it is a processing piece of silicon, but fails utterly. I never previously thought about topics such as the complexity of memory deterioration as roughly a function or system of inputs. Ultimately, the model the authors examine works for many purposes, which is sufficient to constitute a model.

Roseanne Wincek - Feb 26, 2008 03:01:11 pm

I thought this reading was really long and dense. However, I did enjoy it, especially the section on choice reaction time. I'm a physical chemist, so thinking about things in terms of entropy and probabilistic models seems very intuitive to me. Really, it makes you think about what the true underlying processes are. Here, the models applied to reaction times are the same ones applied to seemingly very different things, like a protein folding trajectory, a set of photon emission events, or market price fluctuations.
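The choice-reaction-time model mentioned here (the Hick-Hyman law) is also small enough to sketch, assuming the chapter's nominal slope of about 150 ms/bit and equally probable alternatives:

```python
import math

def choice_reaction_time_ms(n_alternatives, slope_ms_per_bit=150):
    """Hick-Hyman law: RT = I_C * log2(n + 1); the +1 folds in the
    uncertainty about whether any signal occurs at all."""
    return slope_ms_per_bit * math.log2(n_alternatives + 1)

for n in (1, 3, 7):
    print(n, round(choice_reaction_time_ms(n)))  # 150, 300, 450 ms
```

The entropy reading of the law is exactly the one made above: reaction time grows with the information content of the decision, not with the raw count of alternatives.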

Scott Crawford - Feb 26, 2008 03:08:18 pm

I found the experiment whereby the Carnegie-Mellon student trained himself to use an encoding of symbols to increase his effective buffer size particularly interesting. Though the paper only glosses over it as an example, I think it suggests a particularly rich vein of human cognitive ability. That is, it is possible to learn algorithms so as to integrate their use into our normal thought process (not like some complicated math problem that requires you to follow a systematic algorithm broken into stages, but rather an algorithm that performs within the human mind's quantized computational cycle) and so enhance our net performance as a thinking machine. In a theoretical sense, developing algorithms that can be learned in this manner, and then teaching them to people, can actually improve the perceived intelligence of those people. From a UI designer's perspective, it adds another consideration: the processes required to complete a task can (ought to, in fact) be decomposed into responses that can be easily learned in a reactive sense (e.g., I always find that doing things in Photoshop requires deliberation, because the mapping from goal to necessary tasks is cumbersome enough to prevent me from developing a reactive ability with the system; whereas hotkeys/shortcuts are things I am able to learn in a reactive way).

Henry Su - Feb 26, 2008 03:20:33 pm

This paper certainly looks at the human brain from a unique angle. It tries to explain the workings of the brain from a computer science and mathematical perspective, as opposed to a biological perspective. For some purposes, this is actually a fine way to model the brain. For example, the discussion about the processor time draws a nice analogy. I also like the analogy of working memory : CPU registers :: long term memory : hard disk. In terms of access times and capacity, the metaphor really makes sense. I also found the discussion about user performance interesting. In particular, the idea of a learning curve for repetitive tasks was intriguing. Although I've seen the idea many times before, the fact that the article had hard data made it more concrete.

Jeff Bowman - Feb 26, 2008 03:08:26 pm

While the reading was very much in-depth, at times I thought it was a little low-level for the nature of the course. I suppose that the research that Card, Moran, and Newell present is the quantitative data that drives true optimization of human computer interaction, but at the same time I remembered a recent article that noted that users perceived some tasks as faster, even when—by the clock—they were slower. This article does not seem to cover user mentality, or user preference; it seems to prefer hard facts instead. And that's okay: Sometimes you optimize for one, and sometimes you optimize for the other.

This was especially apparent with the calculator example: "Of course," the authors note, "it is important to keep in mind that the design of the entire calculator will entail some trade-offs in individual key locations." That dismissive clause is the sticking point: You can produce an interface that is quantitatively more optimal, but if it doesn't make sense in the user's mind, it will have quantitatively fewer users when other competing systems have more intuitive layouts and interactions.

I would also be curious to find out how much of the authors' research is definitively about the physiology of the human brain, and how much also deals with education and the environment. The causality study, for instance, may have different effects in the real world, or in computers of today's era: Was there any affordance given to the test merely because the computer systems involved were from 1983? Does today's culture of in-your-face video games have any bearing on reaction time, or does the multitasking reality of 2008 affect the interplay between Long-Term Memory and Working Memory? I would be curious to have many of these studies (here published 1983, some citations date back to 1926) reevaluated to see if there is a cultural change across the decades.

Zhou Li - Feb 26, 2008 02:57:35 pm

This reading has much more scientific, researched data backing the authors' ideas compared to all the other ones we have read so far. It is interesting to see a human modeled as a computer, since I have always thought of computers as tools designed to help people achieve, or be more efficient at, tasks our brains are not good at, such as long-term memorization of details and massive computations. As described in the reading, humans have a hard time memorizing a large amount of data at once. It is like the law of diminishing marginal returns from economics: the more you try to cram into your brain within a short period of time, the less you will remember in the long term. The reading explained the reasons behind some interface and I/O designs. For example, keyboards are designed to separate common letters into two groups to maximize hand alternation while typing, so that the average typing time is shortened by half. A successful interface design should take the "human model", including working memory span and response time, into consideration.

Raymond Planthold - Feb 26, 2008 02:53:53 pm

This was next to impossible to read. Pages scanned as images are hard enough to read on their own, but when so much of the material is essentially useless for our purposes, it's hard to justify the effort. It's especially disappointing because the subject is so interesting. Of the stuff I was able to wade through, the part about perceptual causality grabbed me the most, though there was not much there. A well-established perceptual connection between users' actions and their consequences is key for a good interface, yet so often this is ignored.

Siyu Song - Feb 26, 2008 03:22:09 pm

Obviously, as engineers, we can all appreciate the concreteness of the analysis of the human brain provided in this chapter. In an HCI class, I would think it important to use analogies close to computer science; that's just knowing your audience. However, as clear and structured as this analysis was, I think it's flawed in that it makes analogies of the brain to something inorganic and incapable of creativity. A processor won't tell you an interface is bad; a person will.

Maxwell Pretzlav - Feb 26, 2008 03:28:17 pm

I found this article very fascinating, but awfully long. The discussion of directly calculating response time using general cognitive rules intrigued me -- I liked how the article would first calculate how long something should take, and then show how long it has actually been shown to take experimentally. I was surprised and impressed that such a reductive model of the human brain as a machine achieved such accurate results.

Pavel Borokhov - Feb 26, 2008 03:26:31 pm

Having taken an intro psychology course at community college about a year ago, some of the information presented in the reading was familiar to me; what was different was that the information here was a lot more quantitative than qualitative. It is interesting that the rather simple models of the mind's inner workings came fairly close to the empirical data found through experiments, though I wonder how accurate these tests can be even hypothetically. The relevancy of this reading to UI design is certainly not zero, but a bit oblique, because the thing I think is most important is not the specific numbers mentioned but the more basic concepts of human cognition and thought process. For example, the speed at which a key can be pressed repeatedly is probably useful in a very specialized program, such as a game, where the decision might be made that the user should be able to continuously fire a weapon by holding down a key, since the user's ability to repeatedly press the fire key is much lower than the weapon's theoretical/designed firing speed. However, the workings of memory, and more specifically working vs. long-term memory, are much more important in designing interfaces. For example, we want to make sure that the interface is easy enough to navigate that the user does not forget what their intended task was before they started looking for ways to perform it in the interface (imagine going down 10 levels of menus to try to perform some task, only to forget what that task actually was). We also want to keep in mind that certain activities might not need to be "optimized" for speed, because they will be learned with practice. One key thing, imho, that can be taken away is that providing semantically logical cues between successive steps in a multi-step process is crucial, because that is how we keep track of and recall information in our brains.

Tam La - Feb 26, 2008 04:25:52 pm

It's interesting how the article presented human cognition in terms of a "set of memories and processors." The way the perceptual, cognitive, and motor systems relay information can be a useful model when designing a system. Understanding the concepts presented can also be a good tool for designing a good interface. Designers can present information in chunks so it can easily be remembered. They can design features to make use of the "power law of practice" concept. Or they can arrange an interface so that important tasks are readily moved to long-term memory. Speaking of long-term memory, I often wonder why I can't readily write code in a language I learned and used heavily before. The discussion of "interference in long-term memory" explains why.


