From CS294-10 Visualization Sp11
Lecture on Feb 14, 2011
- Postmortem of an example, Bertin (pdf)
- Visual information seeking: Tight coupling of dynamic query filters with starfield displays, Ahlberg & Shneiderman. (html)
- Visual exploration of time-series data, Hochheiser & Shneiderman. (html) (pdf)
- Generalized selection via interactive query relaxation. Heer, Agrawala & Willett. (html)
- Exploration of the Brain's White Matter Pathways with Dynamic Queries. Akers, Sherbondy, Mackenzie, Dougherty, Wandell. Visualization 2004. (html)
- The visual design and control of the trellis display. Becker, Cleveland and Shyu. (ps)
- Fry's zipdecode
- Wattenberg's NameVoyager
- LA Homicides. Heer's example of generalized selection.
Saung Li - Feb 14, 2011 07:24:00 pm
I found the LA homicide interactive visualization to be quite compelling. It packs a lot of information and may seem daunting at first, but the generalized selection really helps to narrow down to specific dimensions and find interesting correlations. Changing the topics with the buttons at the bottom is a nice feature, though it crashed my browser. These interactive tools bring up a relevant and important topic: user interfaces. It is important that users can easily select the parts of the data they want so that they can focus on studying the visualizations. Developing these applications thus involves the dual difficulty of creating both compelling visualizations and simple user interfaces. An interesting challenge would be to develop an interactive tool general enough to take in user-supplied datasets and immediately allow for interaction.
The Homefinder tool reminds me of Redfin's visualizations: http://www.redfin.com/home I think this is a really nice interactive tool for finding homes in a particular area.
Krishna - Feb 14, 2011 08:56:08 pm
Both the Homefinder and the Generalized Selection engine are wonderful examples of how most of the data points and visualization components can be used for querying purposes, which makes me wonder whether, like Tufte's data-ink ratio, we should come up with a query-ink ratio. The question that came to me when I started reading Jeff Heer's paper was how the query relaxation engine can be implemented for arbitrary domains, and how the underlying semantic structures can be visually and interactively expressed by the user across domains. The paper addresses these issues in its later sections; the authors suggest that allowing users to specify such structural, semantic relations and interactions at runtime is one possible approach.
Another thought I had was on the effectiveness of the query relaxation engine when data points are multi-dimensional continuous variables. In other words, the notion of 'like this point' is arbitrary for such data sets; one approach would be to set some threshold of similarity and rank other data points based on the similarity measure. The more I think about this, the more I believe that extending the notion of 'like this point' beyond exact matches, even for discrete variables, would be interesting and challenging for such visualizations.
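The threshold-and-rank idea above can be sketched in a few lines. This is just a toy illustration, not anything from the paper: it normalizes each dimension so none dominates, then ranks points by Euclidean distance to the query point and keeps those under a similarity threshold. The function names and the default threshold are my own invention.

```python
import math

def normalize(points):
    """Scale each dimension to [0, 1] so no single dimension dominates."""
    dims = len(points[0])
    mins = [min(p[d] for p in points) for d in range(dims)]
    maxs = [max(p[d] for p in points) for d in range(dims)]
    return [
        tuple((p[d] - mins[d]) / (maxs[d] - mins[d] or 1) for d in range(dims))
        for p in points
    ]

def like_this_point(points, query_index, threshold=0.25):
    """Return indices of points within `threshold` distance of the query,
    ranked by similarity (closest first)."""
    norm = normalize(points)
    q = norm[query_index]
    dists = [(i, math.dist(q, p)) for i, p in enumerate(norm) if i != query_index]
    return [i for i, d in sorted(dists, key=lambda t: t[1]) if d <= threshold]
```

Loosening or tightening `threshold` is exactly the query relaxation knob: a larger value admits more "like this" matches, and for discrete variables the same ranking works if distance is replaced by a count of mismatched attributes.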
Michael Hsueh - Feb 15, 2011 02:02:39 am
Ahlberg and Shneiderman's idea of fuzzy queries caught my attention. What if, in the spirit of dynamic refinement, a system could generate "random" visualizations? Perhaps a user, grappling with a large and complex data set, just keeps refreshing the software, each time generating starfields or scatter plots that show the relationship between two randomly chosen dimensions? Contrived, perhaps, but what if this "fuzzy" methodology happens under the covers? That is, the software generates these random visualizations, much faster than any human could analyze them, and internally uses heuristics to determine which particular visualizations may be of interest.
To this end, and for the sake of better interactive response times (especially when dealing with huge data sets), I wonder how much has been done to cull data when the software runs interactively-- like level-of-detail (from 3D computer graphics) applied to information visualization. Ahlberg & Shneiderman do talk about zooming starfields, but it seems this was used to handle clutter rather than to reduce the set of data being handled at any given moment. Such techniques recall the idea of tight coupling, requiring only the appropriate amount of data to be shown at each "zoom" level.
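The under-the-covers idea could be sketched roughly as follows; this is purely hypothetical and not from either paper. It scores every pair of dimensions with a stand-in "interestingness" heuristic (absolute Pearson correlation here; a real system would use something richer, like scagnostics), and can sample pairs rather than scoring them all, in the spirit of culling work for interactive response times.

```python
import itertools
import random
import statistics

def interestingness(xs, ys):
    """Toy heuristic: absolute Pearson correlation between two dimensions.
    Real systems use richer measures of scatterplot structure."""
    n = len(xs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return abs(cov / (sx * sy))

def rank_dimension_pairs(table, sample=None, top=3):
    """Score pairs of dimensions and return the most 'interesting'
    candidate scatterplots, highest score first."""
    pairs = list(itertools.combinations(table.keys(), 2))
    if sample is not None:  # cull the search space for responsiveness
        pairs = random.sample(pairs, min(sample, len(pairs)))
    scored = [(interestingness(table[a], table[b]), a, b) for a, b in pairs]
    return sorted(scored, reverse=True)[:top]
```

The system would then render only the top-scoring pairs as starfields, leaving the human to judge the handful of candidates rather than every combination.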
Jessica Voytek - Feb 15, 2011 01:05:15 pm
In addition to being fundamental to a really fun game, the cards used in the game Set are also used in an actual neuropsychological test called the "Wisconsin Card Sorting Task." In the task the test administrator begins by showing matches to the person being tested, but she doesn't explain why they match. Usually the administrator doesn't start with a match requiring all 4 dimensions to match/not match as in the game Set, but rather makes matches based on one dimension: color, shape, shading, or number. She then hands the deck to the person and asks them to make matches. The administrator does not tell the person how to make matches, but she does tell them whether or not the sets are correct. Healthy people are assumed to have a certain normal range of capacity for identifying patterns; if a person falls outside that normal range, it can lead to a more specific diagnosis. According to the Wikipedia article (and backed up by my husband the Neuroscience PhD), the Wisconsin Card Sorting Task is used to assess "strategic planning, organized searching, utilizing environmental feedback to shift cognitive sets, directing behavior toward achieving a goal, and modulating impulsive responding" in people who may be suffering from various ailments. http://en.wikipedia.org/wiki/Wisconsin_card_sort
Michael Cohen - Feb 15, 2011 01:35:22 pm
I found the change blindness examples pretty compelling, and I think they have an important implication for interactive visualizations: if you have any delay during your redraw, don't expect your users to easily identify what's changed (especially if multiple elements are changing). I can think of a couple of strategies to help in cases where identifying changes between different scenarios is an important part of the analysis:
- Smoothly animated transitions, where semantically appropriate (e.g., if a data point is moving, but not if a data point is being replaced by a different data point). I used to consider this pretty superfluous, but the change blindness examples have caused me to reconsider.
- Minimize redraw time. Without the "blank page" flash between the two images, the change blindness effect is much less dramatic.
- Highlight elements that have changed with a different color/brightness/pattern/etc., or use a marker (like an arrow).
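The first and third strategies above can be sketched concretely. This is a minimal, framework-free illustration of my own (not from any of the readings): one function interpolates positions for a smooth transition of elements that persist across a redraw, and another identifies which elements moved, appeared, or disappeared so they can be highlighted.

```python
def interpolate_positions(before, after, t):
    """Linearly interpolate positions for a smooth animated transition.
    `before`/`after` map element ids to (x, y); t runs from 0 to 1.
    Only elements present in both frames are interpolated."""
    frame = {}
    for key in before.keys() & after.keys():
        (x0, y0), (x1, y1) = before[key], after[key]
        frame[key] = (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
    return frame

def changed_elements(before, after, eps=1e-9):
    """Identify elements to highlight: moved, added, or removed."""
    moved = {
        k for k in before.keys() & after.keys()
        if abs(before[k][0] - after[k][0]) > eps
        or abs(before[k][1] - after[k][1]) > eps
    }
    added = after.keys() - before.keys()
    removed = before.keys() - after.keys()
    return moved, added, removed
```

Note that interpolation is only applied to elements that exist in both frames, matching the caveat above: a point being replaced by a different point should be highlighted as added/removed, not animated as if it moved.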
Natalie Jones - Feb 15, 2011 03:29:53 pm
Something I find interesting about the perception and change blindness effects is that the user is most likely not aware, and doesn't need to be aware, of why one graphical representation might be more digestible or pleasing than another. If I were designing and testing an interactive visualization, I would probably ask people to test it and then ask for their feedback. Obviously I would expect to learn some things from the testers' behavior that they wouldn't have been able to articulate, but I would probably also ask them to tell me what they liked and didn't like. While that information could be valuable, I am now more likely to be aware of how much testers themselves are not aware of, and perhaps to be skeptical of what they say and the reasons they give, or at least to look out for clues of perception issues that they might not be able to identify. That's probably one of the reasons people have a relatively easy time saying that they "like" or "dislike" something they're looking at, but a much harder time saying why.
Matthew Can - Feb 15, 2011 03:01:43 pm
I liked Michael Hsueh's suggestion that the system can rapidly generate many visualizations and attempt to determine which are of interest to the user. This is more generally framed as an optimization problem, and it's not unlike Jock Mackinlay's APT system. I think the real challenge here is that it's hard to know what we should optimize for. I don't think we have a solid understanding of what makes an interesting visualization, let alone a way to formalize it for an optimizer. This would be much easier with a strong problem definition. At least then we would have a better understanding of what the space looks like and what parameters our system can manipulate. I think this is a promising area for future visualization research.
TimeSearcher's "query by example" is a great technique for interactive visualization (Hochheiser and Shneiderman). I think in general, it's useful to say "show me more examples that look like this one." Regarding Krishna's point about similarity thresholds, the way TimeSearcher handles this is by creating a timebox for each time point of the queried example. Each timebox extends over some value range around the example at that point. Of course, this is for quantitative data. It gets harder (and more interesting) when you consider data such as graphs or natural language.
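The timebox mechanism described above can be sketched in a few lines. This is my own rough rendering of the idea, not TimeSearcher's actual code; the function names and the fixed `margin` parameter are assumptions (in the real tool the ranges come from boxes drawn interactively).

```python
def timeboxes_from_example(series, margin):
    """Build one timebox (a value range) per time point of the example."""
    return [(v - margin, v + margin) for v in series]

def matches(series, boxes):
    """A series matches if every point falls inside its timebox."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(series, boxes))

def query_by_example(dataset, example, margin):
    """Return names of series that track the example within `margin`."""
    boxes = timeboxes_from_example(example, margin)
    return [name for name, s in dataset.items() if matches(s, boxes)]
```

Widening `margin` relaxes the query, directly echoing the similarity-threshold discussion: the box height is the quantitative answer to "how much like this one?"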
Brandon Liu - Feb 15, 2011 04:25:14 pm
I liked the evaluation section of the paper "Generalized Selection via Interactive Query Relaxation"--especially as a framework for gauging the effectiveness of other visualizations. The part about distinguishing between users with and without previous experience was especially good. I would be interested in evaluation frameworks that assess how well users explore the data, since all the tasks presented in this paper use predefined descriptions, such as "select all victims who were over 60 years old."
Here's an interactive visualization I came across that's similar to the brain imaging one: http://cs.stanford.edu/people/mbostock/iv/dependency-tree.html You can draw a line across the network and it tells you what software components depend on each other.
Dan - Feb 15, 2011 07:03:14 pm
The Visual Information Seeking article was very interesting, and I liked how it started off discussing the information-processing capacity of the human visual system, something I like to call the bandwidth of the human visual system. I find it interesting that the authors then turn to direct manipulation and user interaction to suggest that these increase that bandwidth. I feel that a hybrid system combining query-based filtering with direct manipulation has enormous potential if implemented. Tight coupling is a good concept that provides modal information, or the state of the machine; this is imperative for interfaces of this nature! Tight coupling also seems to share some attributes with direct manipulation.
I think the Bertin paper had a lot of good insights in it. The topics of intrinsic and extrinsic information within visualizations were a fantastic way of describing how choices impact the transmission of underlying data. It was also interesting to learn that the human eye simplifies things. The process by which a visualization can be formed was quite elaborate: 1) defining the problem, 2) defining the data table, 3) adopting a processing language, 4) processing the data: simplifying without destroying. I think the last stage is very important, since the integrity of the data underlies the answer to the problem or question at hand.
Karl He - Feb 15, 2011 11:48:33 pm
The most compelling aspect of the interactive visualizations suggested in the Shneiderman articles is the ability to easily filter upon filtered data. It is much more intuitive to perform a series of actions such as "Apartments in Berkeley" -> "Less than $2000/month" -> "5 people" than to reissue the query each time the user realizes he needs more information. Visualizations specifically designed to allow this type of query drilldown are extremely beneficial. The starfield display described in the VIS paper is, by normal standards, a somewhat poor visualization, being a cluster-hell of a scatterplot. However, the ability to interactively affect the display suddenly makes it a great way to visualize the information, as changes in the data displayed can be easily picked up by the viewer.
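The successive-filtering pattern above is easy to make concrete. Here is a minimal sketch of my own; the apartment listings and field names are hypothetical, just mirroring the Berkeley example.

```python
def drill_down(rows, *predicates):
    """Apply filters in sequence; each predicate narrows the previous
    result, so refining a query never means reissuing it from scratch."""
    for keep in predicates:
        rows = [r for r in rows if keep(r)]
    return rows

# Hypothetical listings illustrating "Berkeley" -> "< $2000" -> "5 people"
listings = [
    {"city": "Berkeley", "rent": 1800, "capacity": 5},
    {"city": "Berkeley", "rent": 2400, "capacity": 5},
    {"city": "Oakland",  "rent": 1500, "capacity": 5},
]
result = drill_down(
    listings,
    lambda r: r["city"] == "Berkeley",
    lambda r: r["rent"] < 2000,
    lambda r: r["capacity"] >= 5,
)
```

In a dynamic-query interface each predicate corresponds to a slider or button, and dragging one re-runs only that stage of the pipeline against the already-narrowed set.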
Sally Ahn - Feb 16, 2011 12:24:51 am
The Visual Information Seeking article discusses several key design principles in interactive software, and it helped me to see the specific features that make interactive tools more effective in information-seeking tasks. I particularly liked the idea of the starfield display--it is such a simple and logical way to maximize efficiency in both displaying and searching the data. I am surprised I haven't seen more examples of this prior to reading this paper. For example, most photo libraries I've worked with organize photos along a single time dimension (sometimes grouped into events), but I would imagine that a starfield display in which the abscissa captures time and the ordinate captures a dynamic, user-specified feature (personal rating, number of "tags" of specific people, event or location name, etc.) would make photo browsing quicker and easier. The light points could be thumbnails of the photos, and clicking on one could show an enlargement. Actually, an even better use case might be viewing files in a directory. I think it would be really helpful if I could specify the ordering parameter of the y-axis (file name, last date modified, etc.).
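The photo-starfield layout sketched above amounts to a simple mapping from records to 2D positions. A minimal sketch, assuming a hypothetical photo schema (the `name`, `taken`, and rating fields are invented for illustration):

```python
from datetime import date

def starfield_layout(photos, y_feature):
    """Map each photo to a 2D position: x = days since the earliest photo,
    y = whatever user-chosen attribute is named by `y_feature`."""
    origin = min(p["taken"] for p in photos).toordinal()
    return [
        {"photo": p["name"],
         "x": p["taken"].toordinal() - origin,
         "y": p[y_feature]}
        for p in photos
    ]
```

Because `y_feature` is just a parameter, swapping the ordinate from rating to tag count (or, for the directory case, to file size or modification date) is a one-argument change, which is exactly the dynamic re-mapping the starfield idea calls for.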
The authors seem to favor their Alphaslider, which is a mechanism I hadn't seen much of prior to reading this paper. I can see how this helps certain applications like the FilmFinder by removing the need for a keyboard, but one also loses control by losing the ability to type specific queries. It is somewhat interesting that the authors of this article seem to favor all-mouse interaction as opposed to all-keyboard interaction (I would think that requiring both would be the least optimal in terms of efficiency and convenience…but keyboard shortcuts are lifesavers even for image-editing programs like Illustrator, where you also need the mouse or tablet pen…I am not so sure anymore). I suppose this is an application-specific issue, but redundant input methods, where users can either use a slider or type in the value directly, would probably be the best way to resolve this precision (sliding to the exact spot on a slider) vs. accuracy (typos) tradeoff.
Siamak Faridani 01:39, 16 February 2011 (CST)
The idea of graphical queries and graphical envelopes seems to be a major visualization idea. One place it could be used is in visualizing demand for retail stores like Walmart. OR analysts typically rely on numbers instead of graphs for analyzing demand for stores, and this idea could help them dramatically. Let's assume we have 5 Walmart locations in the Sacramento area, and these stores receive customers from all the zip codes around the area. We could visualize the flow of these orders with an interactive tool, and if the analyst wants to see orders from specific zip codes to 2 of the locations, he can simply select those two locations in addition to the zip codes and filter out everything else. We could also use the idea of brushing to show the percentage of total demand that comes from those zip codes.
I have recently found this website http://www.visualizing.org/, which seems to be supported by GE. It has a collection of visualizations and features interesting visualization pieces. I'd like to share it with our class.
Michael Porath - Feb 16, 2011 01:35:47 am
When I was reading through the Bertin paper, I was amazed by how much effort and how many manual steps compiling this visualization required. Yet the compilation of the data was done very carefully, which created a lot of value for the hotel manager.
When I compare this to the Business Intelligence dashboards in many enterprises, the data visualizations seem very generic and created without much dedication. The data is digitally available, and systems are in place to collect, process, and transform the data. What this means is that there is a huge potential for interactive data visualization for exploratory data analysis.
We have great tools at hand to create interactive visualizations. It is fairly simple to create something that looks good. The fact that technology is more accessible cannot mean that we're less careful in devising a useful visualization.
David Wong - Feb 16, 2011 02:00:34 am
I also found the discussion and application of change blindness particularly interesting. It is quite related to the work I am doing, and that work, in combination with pre-attentiveness studies, can help a lot in UI design. For interactive visualizations, change blindness can also be considered to ensure that there is a noticeable difference in the visualization. For instance, if one were to use brushing to link more than one set of data across different visualizations, I believe change blindness is a necessary component to consider (aside from interfering and/or correlated dimensions), as the user will be switching between two different areas of the interaction.
Julian Limon - Feb 16, 2011 01:00:57 pm
I agree with Mike Porath about business dashboards. They usually present a single point of view and do not allow the viewer to go further and explore more. I feel that visualization designers often fall into the trap of underestimating the analytic ability of the user. As Tufte warns us, if we can understand it, others will usually understand it as well. A multi-dimensional graph like the one presented in class (Tukey's first efforts with computers) is probably not the most intuitive one, but a matrix such as the baseball-stats graph generated by GGobi could be exploited more deeply in business settings. Of course, that assumes that management cares about the causes and wants to go deeper. In many cases, however, those dashboards are presented as part of a long list of reports that have to be seen but are never analyzed.
On a totally different note, I had a hard time connecting the Gestalt principles to specific techniques in visualization. As the professor said in class, they are more high-level principles to keep in the back of your head when you're creating a visualization--but I'd be interested to hear whether others feel they can somehow be programmed into a system.