Perception

From CS 294-10 Visualization Sp10

Lecture on Feb 1, 2010

Slides

Readings

  • Perception in visualization. Healey. (html)
  • Graphical perception. Cleveland & McGill. (jstor) (Google Scholar)
  • Chapter 3: Layering and Separation, In Envisioning Information. Tufte.

Optional Readings

  • Gestalt and composition. In Course #13, SIGGRAPH 2002. Durand. (1-up pdf) (6-up pdf)
  • The psychophysics of sensory function. Stevens. (pdf)

Jeffrey Patzer - Jan 27, 2010 10:23:52 pm

Tufte: I found the idea of negative space the most interesting topic in this reading. The idea that the shading and spacing of objects can in fact create visual distractions is something I had not really thought much about. When you create these visual distractions you take away from the focus on the data, which accomplishes the exact opposite of what you (presumably) intend. Knowing about the negative effects that negative space can cause makes it easier to design visuals that fade the dividers and allow focus on the data. Although this seems obvious, I don't think it is (case in point: Microsoft Office).

Healey: This article is awesome (despite its length). I think all the various theories on preattentive processing are valid in some way. As I was reading through this article, I couldn't help but wonder why our visual system functions the way it does. Most of the theories address how our system works, but not why. What I mean is: why do we distinguish color better than shape, orientation, or other visual cues? I think I might have an idea, based on the notion that human eyes are designed to see difference. That is, our visual system works best at picking out the differences in a scene. We don't function well when trying to look for things that are extremely similar to one another. This makes sense to me from an evolutionary standpoint: if I am hunting some animal in a forest, I need to be able to see it, and the best way to see it is to look for color, shape, movement, and other differences from the surrounding environment. So maybe the reason our system works the way it does is born of survival necessity. If we were only able to see the forest, then we'd always miss the trees, or the bear, well, you get what I'm saying (I think).

Jiamin Bai - Jan 31, 2010 12:39:09 am

Healey: I think perception in visualization is very important in creating effective and accurate visualizations. Abusing perception tricks can mislead people when they are interpreting data. One example that came to mind was Apple's iPhone market share visualization. It created an illusion (due to perspective) of the iPhone having a larger market share than it does.

Tufte: I think the idea of separating a visualization into different layers produces a profound effect. Using stronger, eye-catching colors/strokes for more important data and a subdued presentation for less important but supporting data allows the reader to focus on interpreting the crux of the visualization and (if needed) the rest of the information.

Danielle Christianson - Jan 31, 2010 11:20:40 am

Healey: I also thought this article was awesome. The information on preattention was particularly interesting, as it highlighted the importance of, and gave a description of, what may first catch the viewer's attention, which is super useful when designing a visual system. I found the Boolean map theory the least compelling, as creating AND/OR combinations seemed like it would be difficult if only one map at a time can be held. I would like more discussion of visual hierarchies; some sort of quick reference guide that synthesizes the literature would be useful to consult when designing (if enough empirical data exists). I think that Healey gives a more detailed explanation of the concepts that Tufte presents in the chapter from Envisioning Information.

Tufte: Good examples of how to maximize focus on important details and minimize attention to less important information, especially concerning unintended focus on negative space. I am a little curious about his reasons for using color for less important details (e.g., the tan highlights on the train schedule on pg. 55, the blue or tan grid of the electrocardiogram tracelines on pg. 59, the blue bounding frames on pg. 64). Color other than (neutral) gray often suggests some sort of meaning, and I'm not sure what it is in these cases. One missing point may be consideration of the final medium, because color is not always an option; it would have been nice to see some examples that correct the emphasis without color. In general it is typically worth considering how a design will look sans hue (e.g., the revised marshalling signal diagram) unless it is certain that color will always be available.

Cleveland & McGill: A difficult read. The takeaway messages for me are the following: 1) testing perception is extremely difficult; 2) assessing the difference between two lines along one axis is easily conflated with the shortest distance between the lines; 3) framed rectangles are a good alternative to shaded statistical maps; and 4) positioning descriptive text close to the described figure greatly improves readability (particularly if on the same page). I found Cleveland and McGill's analysis techniques difficult to understand; I did not grasp why they were logging the error data at all, and especially why base 2. Also, I did not find their test of position vs. length convincing; it seemed like both were different types of length assessments.
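For what it's worth, the error measure in the paper is log2(|judged percent - true percent| + 1/8): the 1/8 keeps a perfect judgment (zero error) defined on the log scale, and the log presumably keeps large errors from dominating the averages. A minimal sketch of the computation, with made-up judgment values:

 import math
 
 def cm_error(judged_percent, true_percent):
     """Cleveland & McGill's log-absolute-error measure.
     The 1/8 offset keeps log2 defined when the judgment is exact
     (absolute error of 0)."""
     return math.log2(abs(judged_percent - true_percent) + 1.0 / 8.0)
 
 # Illustrative values only: a subject judges one bar to be 40% of
 # another when the true ratio is 45%.
 print(cm_error(40, 45))  # log2(5.125), roughly 2.36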

Jaeyoung Choi - Jan 31, 2010 09:22:11 pm

Tufte

The author gives us nice examples for understanding the principles of good layering. One question here: what if you are only allowed to use black and white (and shades of gray)? Is there an effective way to achieve good layering without using hue?

Here are links to two examples of the 'rare exceptions' on p. 61: the Turgot-Bretez map of Paris and the Nolli map of Rome.

Subhransu Maji - Jan 31, 2010 10:03:34 pm

Cleveland and McGill
The authors present a set of quite convincing experiments and conclude that some representations make it perceptually easier to convey the desired information. I found the comparison of bar plots and pie charts quite interesting. I still prefer pie charts when there is a dominant category, as they give a better numerical estimate of its proportion out of 100. I also think that the Playfair charts are unfairly bashed, since often you also care about the values of exports and imports rather than just their difference.

Perception in Visualization
Quite an interesting read. I loved the change blindness demos; though I have seen them before, I still find them fascinating. You have to be looking at exactly the same location and scale, and thinking about the corresponding semantic entity, to detect the change. The change did not draw my preattentive attention, and in many cases it took me a while to detect it.

Jon Barron - Feb 01, 2010 12:05:57 pm

Healey: Really good survey of pre-attentive vision, but I had trouble finding actionable visualization advice. The basic lesson seemed to be that color is most distracting, and shape is less so. Even the section that explicitly talks about visualization advice does a poor job of generating advice from the results of the human-vision community.

Cleveland & McGill: Finally, a paper with results. The ranking of techniques by accuracy is actually useful and actionable. I also like the (unsurprising?) result that length is unambiguous, area is more ambiguous, and volume is most ambiguous. The novel graphical methods they propose are actually very nice, though I don't see a huge difference between bar charts and dot charts, and though the framed rectangle chart of map data works well, I don't know how versatile that method would be in other circumstances.

Tufte: More of the same, not that I didn't enjoy myself. This material seems somewhat redundant with the other book.

Yotam Mann - Feb 01, 2010 11:13:22 am

Healey

I have read an article on preattentive visual processing before, but it is still very fascinating stuff. I could see many implications of this kind of research in visualizations and marketing (as someone pointed out above with the iPhone market share example). Designers could leverage such techniques to direct or misdirect people's attention.

Aaron Hong - Feb 01, 2010 12:16:56 pm

I remember some of the Healey material from CS 160. The fact that there is preattentive processing (perhaps misnamed, since it takes attention to detect these things) makes a lot of sense. Things like a color or shape that is unique within a group will naturally stand out. One of the more interesting observations made later in the paper is "change blindness." It's a common misconception, and I think I still have that misconception ingrained in my head, that our visual system is like our brain taking photographs. It's not true at all; our perception is largely based on what we've focused on. Just trying out the change blindness examples, it's ridiculous how hard it is to find some of the changes.

From Tufte, one important notion that he brings up at the beginning of the chapter is that things don't combine linearly: 1 + 1 = 3 or more. That is so true of design. Once we add elements we have to do so with the utmost caution because of this phenomenon. Color makes such a big difference in separation that without it something can easily become cluttered.

DavidZats - Feb 01, 2010 12:46:53 pm

Today's readings had a very interesting analysis of the human visual system. They described how the human visual system is actually capable of decoding/determining several properties in parallel. For example, we are capable of determining whether squares exist in a sea of circles almost instantaneously. However, we are unable to do the same for an object defined by a conjunction of two properties (each of which is used on its own in other objects in the image). Based on this, as well as other information about how we process visual information, we see that certain graph types are clearly superior to others in terms of accurately conveying information. For example, bar graphs are superior to pie charts because it is much easier to judge position than angle. Additionally, it is often difficult to accurately determine values in statistical maps, as problems such as area differences get in the way, as shown in the United States murder rate map.
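Here is a minimal sketch (assuming numpy and matplotlib, with made-up display parameters) of the two kinds of displays described above: a target that differs in a single feature pops out preattentively, while a target defined only by a conjunction of color and shape has to be found by serial search.

 import numpy as np
 import matplotlib.pyplot as plt
 
 rng = np.random.default_rng(0)
 n = 40
 xy = rng.uniform(0, 1, size=(n, 2))  # random item positions
 
 fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
 
 # Feature search: one square among circles; the unique shape pops out.
 ax1.scatter(xy[1:, 0], xy[1:, 1], marker='o', color='tab:blue')
 ax1.scatter(xy[0, 0], xy[0, 1], marker='s', color='tab:blue')
 ax1.set_title('feature target: pops out')
 
 # Conjunction search: every distractor shares the target's color or its
 # shape, so no single feature identifies the red square.
 half = n // 2
 ax2.scatter(xy[1:half, 0], xy[1:half, 1], marker='o', color='tab:red')
 ax2.scatter(xy[half:, 0], xy[half:, 1], marker='s', color='tab:blue')
 ax2.scatter(xy[0, 0], xy[0, 1], marker='s', color='tab:red')
 ax2.set_title('conjunction target: serial search')
 
 for ax in (ax1, ax2):
     ax.set_xticks([])
     ax.set_yticks([])
 plt.show()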

One question I had about this reading was about the new types of graphs advocated by the authors. Since even bar graphs are considered suboptimal, how often are graphs such as dot charts currently used?
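For anyone who hasn't seen one, here is a rough sketch (hypothetical data, assuming matplotlib) of the dot chart Cleveland & McGill advocate: each value is read by position along a common scale, with a subdued dotted leader line back to the category label.

 import matplotlib.pyplot as plt
 
 categories = ['A', 'B', 'C', 'D', 'E']
 values = [23, 17, 35, 29, 12]
 ypos = range(len(categories))
 
 fig, ax = plt.subplots()
 ax.hlines(ypos, 0, values, linestyles='dotted', color='0.7')  # subdued leader lines
 ax.plot(values, ypos, 'o', color='black')                     # value read by position
 ax.set_yticks(ypos)
 ax.set_yticklabels(categories)
 ax.set_xlim(0, max(values) * 1.1)
 ax.set_xlabel('value')
 plt.show()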

Arpad Kovacs - Feb 01, 2010 12:50:03 pm

Healey: I found the discussion of low-level perception to be very useful for understanding why our attention can be instantly drawn to particular elements of a visualization, while we gloss over other aspects even during active searching. In particular, I was intrigued by the finding that the human eye can pick out a target object that differs in a single characteristic (color, curvature, size, etc) so quickly in the parallel preattentive stage, but can only handle combinations of non-unique features in a conjunction target with slow serial processing. The discussion on change blindness was also a stark reminder of how volatile our visual memories are, and how details that cannot be abstracted away by the mind are easily lost. Both are strong arguments for removing clutter and effectively directing the viewer's eye through a visualization, so that the salient data pops out in a way that can be easily abstracted and retained.

Cleveland & McGill: This paper gave me an appreciation for how difficult it must be to design and perform scientific experiments on psychophysics such as perception. I thought that the most useful part of this paper was the ranking of elementary tasks by accuracy (position along a common scale > position along nonaligned scales > length > angle > area > volume). As a result of this finding, Cleveland & McGill advocate replacing pie and divided bar charts with dot charts, and using framed rectangle charts instead of shading on maps. I found the data reduction techniques an interesting exercise in trying to strike a balance between accuracy and conciseness in describing visual clusters, but ultimately I think that the nonreduced visualization (figure 29) is both the most informative and the easiest to notice trends in.

Tufte: Like in earlier readings, Tufte again provides effective and elegant examples of emphasizing data ("differences that make a difference") by removing chartjunk, in this case through muted/gray grids, and small spots of intense, saturated color. Tufte also shows how the interaction of various elements in an unlayered, undifferentiated surface can create unintentional "optical art", which is confusing and distracts the viewer from the data. Most of this chapter seems to be a rehash of the minimal data-ink ratio advice from last week's reading, although it was nice to see additional examples that further support his conclusions.

Chetan - Feb 02, 2010 08:17:40 am

I enjoyed the readings today on perception and its relationship to visualization. I thought it was apt to group the Tufte reading in here, as the principles of layering and separation stem from how our perceptual system processes visual information.

However, it's important to be careful in how we use studies of perception to craft principles for information visualization. Perception studies are often conducted in a particular context, and the principles learned may only apply to that context; in a different context, the human visual system can behave differently. For instance, many studies are conducted using artificial laboratory stimuli, and if one used natural, real-world objects the findings might be different.

Akshay Kannan - Feb 02, 2010 09:23:06 pm

The Healey paper provided a lot of insights, especially in light of the last lecture, on effective encoding in a visualization. Especially with the upcoming AS2, this reading definitely exposed me to a variety of ways in which information can be encoded, along with the efficiency associated with each method, backed by psychological studies of the effectiveness in each case. I found the discussion of nonphotorealism quite interesting as well. While such imagery may seem misleading at times, distorting the true perception of the original image, the extra information that can be conveyed by removing irrelevant photographic details can be very useful to the viewer, especially in fields such as medical imaging. I was also fascinated by Tufte's discussion of layering and separation. Especially in IBM's copier diagram, I was amazed by how so much information could be effectively encoded in an easy-to-understand manner with the use of color to separate annotation from visualization. Often, when the visualizer attempts to show too much data in a single visualization, it becomes very hard to encode the data in an effective format.

Stephen Chu - Feb 04, 2010 11:00:59 pm

Tufte

The graphic of the marked-up music composition on page 59 reminds me of when my piano teacher would "layer" (mark up my repeated mistakes on) the pages of my sheet music in red. This definitely helped my understanding of the piece I was playing. I like the quote "information consists of differences that make a difference." Visuals shouldn't make us work hard to see the data variation.

Healey

Healey provided very interesting and clear examples that show the impact of preattentive features. When designing visuals I'll focus on these preattentive qualities to make data variation clearer and to aid cognition. However, this is two-sided. A feature like color can be horribly misused and end up distracting from the data. On the other hand, coloring can also be useful in layering to help with differentiation.

Cleveland, McGill

Reading this paper, it seems that the more accurate elementary tasks require less data-ink. For example, position along a common scale is ranked as the most accurate task, while area is fourth. Less accurate tasks like area are often misused, perhaps because of their lower data-ink ratio? Could we then say that increasing the data-ink ratio leads to an expected increase in accuracy from readers?

Anna Schneider - Feb 05, 2010 09:54:22 am

I was interested to see the wide variety of cues that Healey considers preattentive. Some of them were fairly difficult for me to pick out, especially the stereoscopic and lighting ones, which highlights the difficulty of accurately representing 3D aspects of data in 2D plots--3D cues may be better suited to representation in 3D space even if they're technically preattentive in 2D. On the other hand, I was impressed with the strong preattentive effect of closure, termination, and intersection, which are underused in the visualizations I spend my time with.

The boolean map theory in Healey seems closest to the ideas that Tufte advocates. One should make the most important data layer separable by a boolean map from secondary data layers (annotations, grid, etc.) so that the viewer can focus on each layer individually with minimal interference. Within the feature hierarchy model, it makes sense that most of his redesigns focus on increasing the hue or luminance contrast between layers, while none add shape features to the secondary layers.

Mason Smith - Feb 06, 2010 04:20:22 am

Healey: I found the research on post-attentive vision much more interesting than the preattentive discussion, if only because it was significantly more counterintuitive. Whereas the preattentive features might help you decide on the details of a visualization, the post-attentive discussion seems more relevant in guiding the overall presentation of the visualization.

Tufte: As always, I thought the Tufte reading was very engaging. I agree that one large take-away from the chapter might also be to maximize the data-ink ratio, but I think it's different from his chapters in VDQI. Minimizing non-data-ink is, more or less, a corollary of his notions of layering and separation, rather than an axiom or goal in and of itself. The most intuitive way to reduce unwanted interaction is to reduce the components of the interaction. I don't think he advocates in this section (as he does more strongly in VDQI) that this is the only way to go about things.

Prahalika Reddy - Feb 06, 2010 02:45:57 am

In Healey's article, the images illustrating the different preattentive features were pretty interesting to look at. I was actually quite surprised that some of the features shown are considered preattentive, because they didn't seem that obvious at first glance. For example, when I first looked at the "lighting difference" image in Table 1, I couldn't detect the different circle until I actively looked for it. Another hard one was the "length" image; since the difference in lengths of the elements shown is not that great, the presence of an abnormal element was hard to detect.

Also interesting in Healey's article and from lecture was the part on change blindness. Even after reading about it and going over it in class, I'm amazed that we can miss something at first that becomes so obvious once you see it. More interesting, though, is that while I'm looking for the change, I notice a lot of little things that seem to be different between the pictures, but don't see the big thing. For instance, in the image with the soldiers getting on the plane, I kept seeing a slight color change in the hat of one of the soldiers right in front of the disappearing turbine, but I didn't notice the actual turbine.

In Tufte's chapter, "1 + 1 = 3 or more" is a good concept to learn about, but I don't think all the images in the chapter show good examples of how to avoid "1 + 1 = 3". The image of the medical record of a "Mrs K" is said to be easily understandable because of the annotations along the sides. However, I think even with the annotations, it's a very cluttered and busy image. In order to make the data more understandable, the annotations were added, but nothing was done to try and improve the actual data, making the data itself irrelevant except for the few points the annotations refer to.

Jonyen - Feb 07, 2010 10:06:21 pm

Healey: I remember Treisman's feature model from a cognitive psychology class that I took. It's an intriguing idea that we break down what we see into features, which I suppose is a way of top-down processing of visual information. I guess that a lot of it is still theory, and there's still a lot of room for research and understanding exactly how it is that we process this data. I've yet to see any useful applications of this theory though.

Kerstin Keller - Feb 08, 2010 12:15:19 pm

What I really liked about Healey were the apps where you could experience for yourself the assumptions made in the text. Using the application for color/shape perception, it strikes me how quickly I am able to perceive whether the picture has a red dot or not, while shape takes a little longer, and mixed recognition is impossible without going through all of the shapes present.

I also find the change blindness phenomenon intriguing. I always want to believe that I am well aware of everything that is going on around me, and it just amazes me how recalling certain things (as in finding the difference) can be so challenging. And what is even more intriguing is that once you have found the difference, it just seems so obvious.

Priyanka Reddy - Feb 08, 2010 06:22:33 pm

Like Kerstin, I enjoyed the Healey reading because of all the examples that were provided. They enabled us to really understand and test out the theories that were put forth in the text. The examples for change blindness were really fun to do. What I found to be interesting is that when I looked at these images, my eye would usually be drawn to the area of change, but my brain wouldn't be able to understand the change until I looked at the area for a while.

I did have a question about the idea of a feature hierarchy in our visual systems. I'm curious whether the different features have some intrinsic property that makes them harder or easier to detect, or whether the underlying property is how well the features can be differentiated. For example, in the shape vs. color examples, would color always be faster than shape if the two colors were really similar and the two shapes were hugely different? My inclination is to say no.

I think the Tufte reading had some good general rules to follow for layering data, but I didn't really like the examples he used. I felt like there were better examples he could have used.

Shimul - Feb 09, 2010 05:37:53 pm

The Healey reading brought out some good points regarding the limitations of human vision. It is good to know these points in order to determine which parameters to choose when making a visualization. I especially liked the section on change blindness; it was an interesting read and had some good examples. The section reasoning about the "change blindness" concept made some valid points and I would like to read more about it; it seems to be a study of how the mind works. The discussion in class regarding this topic was also fun.

The content in Layering and Separation sounded like a repetition of the content in one of our earlier readings about chartjunk and data ink. The point raised was similar in that one should abstain from representing useless information in a visualization. The argument using the 1+1=3 example was an interesting way to put it though. The McGill reading was rather dry and theoretical.

Boaz Avital - Feb 09, 2010 10:27:09 pm

The Cleveland & McGill reading had a few suggestions for laying out visualizations. I thought it was interesting that some of the things they say are definitely better, like using log base 2 instead of log base 10 in logarithmic graphs, are not used very often. In fact I've never seen a log2 graph. I wonder how much of visualization relies on the continuation of past practices so that people don't have to adapt to something new.
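As an aside, a base-2 log scale is easy to try in practice. Here is a minimal example (hypothetical data, assuming a recent matplotlib version that accepts the base keyword) where "doubles every couple of steps" can be read directly off the tick labels.

 import matplotlib.pyplot as plt
 
 years = list(range(2000, 2010))
 values = [2 ** (i / 2) for i in range(10)]  # roughly doubles every two years
 
 fig, ax = plt.subplots()
 ax.plot(years, values, 'o-')
 ax.set_yscale('log', base=2)  # base 2 instead of the default base 10
 ax.set_xlabel('year')
 ax.set_ylabel('value (log base 2 scale)')
 plt.show()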

The Tufte reading was very similar to the ink reduction section from his other book. I had a feeling he wrote this one second (and a quick Wikipedia check tells me I am right) because he expands on it in a useful way with the concept of layering. You don't often think about what is foreground and what is background in a 2D image if the two layers do not overlap. He introduces the concept of tempering information that should not be at the forefront of the image, and therefore of your attention. It's a good way, I think, to think about what to emphasize and how to emphasize different segments of ink in your visualizations.

Zev Winkelman - Feb 10, 2010 03:40:29 pm

What I appreciate most about this material is that it is a formal treatment of many concepts that intuitively make sense.

The phenomena described, such as preattentive processing, are things that I have felt as both a consumer and a producer of visualizations, but lacked a well defined vocabulary to describe.

Also, seeing how experiments have been conducted to judge the accuracy of perception within a given sense such as vision (position vs. length vs. area vs. volume vs. shading) and across senses (sound, smell, taste, temperature, weight, shock) helps to understand the foundation of certain claims about the efficiency and effectiveness of certain design choices.

RyanGreenberg - Feb 26, 2010 08:24:42 pm

Tufte is obviously one of the prominent individuals in visualization, but it's clear that his view is that data graphics should be the product of hard labor. Even though it's just an itemized hospital bill, the piece replicated from Harper's is clearly the result of a lot of time, both in terms of precise layout and commentary selected from the explaining physician. In a few places in chapter three, he talks about data graphics in terms that are more familiar to artists than engineers: proportion, harmony, weight, negative space. Both of these examples raise the question of what the lessons from Tufte are for automatic visualization. Fortunately, I think that the rest of chapter 3 provides some low-hanging fruit.

Even small, seemingly simple things can screw up a visualization. Making a grid for a chart, for example, seems like one of the simplest things to do: calculate the bounds, divide by some number, and draw lines. But since it seems so simple, lots of people skip thinking about it, producing data graphics with dark grid lines that distract from the overall picture. Remember: use low-contrast grid lines, and try leaving them off when possible to see if it's an improvement.
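A small sketch of that advice (hypothetical data, assuming matplotlib): the gridlines are drawn in a light gray and pushed behind the data, so they guide the eye without competing with it.

 import matplotlib.pyplot as plt
 
 x = list(range(12))
 y = [3, 4, 4, 6, 7, 9, 8, 10, 12, 11, 13, 15]
 
 fig, ax = plt.subplots()
 ax.plot(x, y, 'o-', color='black')
 ax.grid(True, color='0.85', linewidth=0.8)  # light gray, thin gridlines
 ax.set_axisbelow(True)                      # draw the grid behind the data
 plt.show()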

Healey's discussion of preattentive features is a fascinating "reverse-engineering" of the human visual system. After reading this paper, however, I'm not sure how to combine the quick detection of preattentive features with the more intricate displays from Tufte. The data graphics that Tufte describes often require a certain amount of cerebral explanation before understanding, and they yield more information after extended study.


