Identifying Design Principles

From CS294-10 Visualization Sp11

Revision as of 10:05, 17 March 2011 by Mhsueh (Talk | contribs)

Lecture on Mar 16, 2011

Readings
  • Pictorial and verbal tools for conveying routes, Lee & Tversky (pdf)
  • Rendering effective routemaps, Agrawala & Stolte (pdf)
  • Identification and validation of cognitive design principles for automated generation of assembly instructions, Heiser et al. (html)

Optional Readings

  • Designing effective step-by-step assembly instructions, Agrawala et al. (html)


Brandon Liu - Mar 16, 2011 05:43:09 pm

An observation I had on the assembly instructions project was that the computer-generated instructions used a consistent, isometric perspective. I would be interested in seeing the hand-drawn instructions and how the use of a consistent perspective relates to spatial ability. My intuition is that people with high spatial ability would minimize the number of times the drawing changes its perspective; only when a piece goes into an occluded spot does the perspective change. In most cases, it seems that 2-3 perspectives would be enough to cover everything. Another interesting facet of this area is how to use zooming in instructions: in some cases, we may need to 'zoom in' on a component to show more detail. A great dataset for this would be LEGO instruction booklets.

Michael Porath - Mar 16, 2011 07:37:10 pm

The Mapblast project reminded me of a recent conversation I had about mental maps. Most of the maps I come across are drawn to scale. Mental maps, just like Mapblast, recognize that this is not how we perceive the world. While to-scale spatial information is important for many purposes, it doesn't reflect our internal representation of space.

For my final project I'm looking at visualizing people's driving and mobility patterns. One thing this discussion sparked was whether I could show mobility patterns in some way other than spatial distance. An obvious way to do that would be to distort distances based on the time it takes to get from one point to another. The data I'm working with samples a person's location, speed, and velocity based on GPS information. Showing two points equidistant in travel time would make that distortion very obvious: San Francisco and Palo Alto, around 40 minutes apart, would be drawn as far from each other as the two sides of the Bay Bridge are during rush hour. A representation along the same lines could instead distort spatial distances based on CO2 emissions or gas consumption. Does anyone know of seminal papers about map distortions?
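A minimal sketch of the time-distortion idea above, assuming hypothetical route legs and a function name of my own invention: each stop is placed along an axis so that drawn distance is proportional to travel time rather than geographic distance.

```python
# Hypothetical route legs: (waypoint, minutes of driving from the previous one)
legs = [("San Francisco", 0), ("Daly City", 12), ("SFO", 10), ("Palo Alto", 18)]

def time_scaled_positions(legs):
    """Cumulative positions on a 1-D axis where 1 unit = 1 minute of travel."""
    positions, total = {}, 0
    for name, minutes in legs:
        total += minutes
        positions[name] = total
    return positions
```

Swapping minutes for grams of CO2 or liters of fuel would give the emission- or consumption-distorted variants mentioned above.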

Julian Limon - Mar 16, 2011 08:27:56 pm

I was particularly intrigued by the framework proposed in class today for working with design principles. I believe there is a lot of value in applying the concepts of identification, instantiation, and validation of design principles before settling on a final design or algorithm. I think this is particularly true on the web, where multiple experiments can be run at the same time and many variables can be controlled. Of course, not all visualizations are meant to be seen on a computer screen, but where it is possible, this would be really valuable. For example, one could imagine a GPS manufacturer trying out different visualizations and measuring the results.

On a totally different note, I just found a recent blog post that features LineDrive and discusses the motivations of an atlas (general purpose, similar to the wedding map problem) versus the motivations of a set of directions (getting from point A to point B). It's interesting how Google Maps shows the same interface for two different use cases and doesn't default to the most suitable one depending on the situation.

Siamak Faridani 23:50, 16 March 2011 (CDT)

The first two readings were very interesting, and Tversky's paper made me think. In Agrawala's paper, the underlying assumption is that one wants to look at the route or print it; in other words, the diagram is static and won't change.

I am now wondering what happens if we want to build a GPS system that shows effective routes. We can obviously use Tversky's principles, but I wonder whether such a device would be as effective as current GPS systems. These maps lose the linear mapping to the real world, and that might make them harder to understand. Is it still crucial to render the effective route if we have a GPS display that can show routes dynamically? Is there any way to abstract out the unimportant, natural turns of a route and still maintain a mapping from the visualization to reality that does not impose cognitive complexity?

Candidate solution

In the dynamic display setting, perhaps we can maintain a full linear mapping from the local display to the current surroundings and relax the mapping for what comes later, say in 10 minutes. As the car moves through the route, the display can be updated and become more faithful to reality. What bothers me about current GPS systems is that I have a good understanding of the local route, since I can see it on the display, but I have no idea about the whole route, because my GPS does not show both at the same time unless I zoom out. I wonder if we can combine different zoom levels: use a realistic map for nearby places and a zoomed-out map with effective routes for places near the destination. (I own a very low-end GPS, so maybe this problem is solved in other systems.)
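A minimal sketch of the combined-zoom idea, assuming a hypothetical scale function (the name and falloff parameters are my own): segments near the car keep their true scale, while segments farther along the route are progressively compressed, in the spirit of LineDrive's distortion.

```python
import math

def fisheye_scale(dist_from_car, near=1.0, falloff=5.0):
    """Map scale factor: ~1 within `near` units of the car, then
    logarithmically compressed for more distant route segments."""
    if dist_from_car <= near:
        return 1.0
    return 1.0 / (1.0 + math.log1p((dist_from_car - near) / falloff))
```

Multiplying each route segment's drawn length by this factor keeps the local view realistic while the remote part of the route collapses into a compact, LineDrive-like overview.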

Krishna - Mar 17, 2011 01:50:47 am

The more I think about Route Maps, the more convinced I am that the system is a complex, wonderfully hand-crafted smoothing algorithm. This makes me wonder whether the same hierarchy of optimization steps (shape, road/curve, label, context) could be applied to visualizing generic time series. In other words, would it make sense to use a similar algorithm, with almost the same constraints, to summarize a filtered output of a time series, where each turning point corresponds to an event either specified by or inferred from the user's query? Unlike in maps, there won't be intersections, and any adjustment to length should correspond to some distortion of the axis scales. I wonder whether such a strategy would highlight the general trend of the time series in relation to the events.
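A minimal sketch of the turning-point idea, assuming a simple local-extremum test (the function name and `eps` threshold are my own): keep only the points where the series changes direction, which would become the "turns" that a route-map-style layout then emphasizes.

```python
def turning_points(series, eps=0.0):
    """Return indices where the series reverses direction by more than eps,
    always keeping the first and last points as endpoints of the 'route'."""
    pts = [0]
    for i in range(1, len(series) - 1):
        d1 = series[i] - series[i - 1]      # slope coming in
        d2 = series[i + 1] - series[i]      # slope going out
        if (d1 > eps and d2 < -eps) or (d1 < -eps and d2 > eps):
            pts.append(i)                   # local peak or trough
    pts.append(len(series) - 1)
    return pts
```

The segments between consecutive turning points could then be straightened and re-lengthened, with the axis distortion noted explicitly, much as Route Maps adjusts road lengths.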

Michael Hsueh - Mar 17, 2011 04:07:30 am

Lee & Tversky's comparison of depictions versus descriptions of routes uncovered a common conceptual structure that should more or less allow translation between the two domains. Regarding that structure, one could think about the minimal set of "conceptual elements" a toolkit must embody to specify any arbitrary route. Once that set is established, constructing more pragmatically sized toolkits becomes a tidier task, at least in theory: optimal sets of non-essential elements can be added while balancing redundancy, complexity, and general adequacy against size. The paper mentions several examples, including stop signs and u-turns.

Another thought: a picture in the Lee & Tversky paper really helped me put a finger on one of the ways route depictions facilitate cognition. I got the insight (and a chuckle, for that matter) viewing the student-created map that read "Serra St. = TOO FAR." Depiction readily lends itself to specifying features not directly relevant to the target route: contextual clues, if you will. An advantage inherent to depictions, as described in the paper, is the enforced specificity of depicted elements. Contextual clues are of course non-essential, but depictions can incorporate them much more readily. Sprinkling in contextual clues can greatly enhance the effectiveness of route maps, as seen in the demos in class today, and they could be added as additional non-essential elements to LineDrive-type generalization techniques, given appropriate data and parameters.
