Authoring Visualizations and Prefuse

From CS294-10 Visualization Fa07

Lecture on Sep 24, 2007

Slides

Readings

  • Toolkit Design for Interactive Structured Graphics. Bederson, Meyer & Grosjean. IEEE Transactions on Software Engineering, 30(8), August 2004. (pdf)
  • prefuse: A Toolkit for Interactive Information Visualization. Heer, Card & Landay. ACM CHI 2005. (pdf)
  • Review Chapter 1: Information Visualization, in Readings in Information Visualization (particularly from page 17 on). Card et al. (pdf)

Optional Readings

  • Building Highly-Coordinated Visualizations In Improvise. Chris Weaver. IEEE InfoVis 2004. (pdf)
  • Software Design Patterns for Information Visualization. Heer & Agrawala. IEEE InfoVis 2006. (pdf)
  • Past, Present, and Future of User Interface Software Tools. Myers, Hudson, & Pausch. ACM TOCHI, March 2000. (pdf)
  • An Operator Interaction Framework for Visualization Systems. Chi and Riedl. In Proceedings of the Symposium on Information Visualization (InfoVis '98), pp. 63-70. IEEE Press, 1998. (pdf)
  • 2D Graphics Primer (useful for those with little experience in 2D computer graphics)

Demonstrations

Ken-ichi - Sep 23, 2007 11:26:40 pm

Interesting to read about toolkit design. I guess most programmers have a pretty refined appreciation for good toolkit or API design, but it was interesting to read such detailed dissections of design approaches. I really think web service design and API design should receive as much attention as UI design. Engineers are users too, and easier tools just let engineers spend more time making cool stuff and less time doing pushups. The toolkits described look interesting too. Looking forward to hearing Jeff Heer speak.

Omar - Sep 26, 2007 09:19:16 am

At the end of class, Jeff discussed the threshold and ceiling concepts. These are pretty widely used ideas. With respect to APIs, to reiterate: the threshold is like the learning curve, i.e., how hard it is to get going with the API, learn specific features, and become productive with the tool; different API features can have different thresholds. The ceiling refers to how much can be done with the API (its expressiveness) before you hit a barrier.

One thing that wasn't clear was who should consider these concepts. I think it's important that the API designer first consider what tasks are important to complete with her API (maybe gleaned through use of other APIs, developer interviews, or other inquiry methods), then develop the API with threshold and ceiling in mind, and then evaluate it with end users. It's really important to consider the task you're targeting: I've used many APIs that make one thing particularly easy to do (low threshold), but it's not what any users of the API actually use it for! Finally, it's hard for an end user to reflect on threshold and ceiling until they are an expert with the API, though such users can then reflect on future iterations of the API and consider threshold and ceiling issues.

Robin Held - Sep 27, 2007 02:45:03 pm

My favorite part of the Card et al. reading was the concept of the "cost of knowledge." One can easily relate to how difficult it can be to simultaneously keep track of multiple pieces of information while working on a complex task, especially on a computer. Typically one is forced to keep some windows occluded or minimized while focusing on a small number of programs at a time. But there are several obvious ways to reduce the average cost of knowledge in such a scenario. For instance, using extra monitors allows one to display multiple windows at the same time, thereby decreasing the effort required to view each window. Mac OS X also has the Exposé feature, which instantly and automatically arranges all the open windows on the desktop, allowing one to view all of their contents simultaneously. By making every window so quickly accessible, Exposé effectively lowers the average cost of knowledge. Windows Vista's Flip3D feature is similarly helpful, although perhaps not to the same extent: it provides essentially a 3D Rolodex of application windows through which the user can quickly flip to access a desired piece of information.

Mcd - Sep 30, 2007 07:34:17 pm

Congratulations to Jeff for prefuse. As a novice programmer I found most of the discussion of toolkit construction a bit beyond my experience, but like Ken-ichi I think attention to toolkit and API design is important. I'm impressed with the examples (though some demonstration links are broken), and I look forward to playing around with it.

I've used Processing a bit for another class (Tangible User Interfaces, http://courses.ischool.berkeley.edu/i290-13/f07/), and found it easy to learn and use for quick and dirty visualizations of input. The PixelBrush demonstration blew my mind. I recommend it.

Jimmy - Sep 30, 2007 09:48:15 pm

I like the model-view-controller (MVC) concept being applied to the prefuse visualization framework. Originally developed for software engineering, MVC makes the visualization process clear and easy to manage, and gives us a higher-level view of the visual construction. The abstract data (the base model) is filtered into the visual form (a visualization-specific data model), and the filtered data has a set of controllers for rendering purposes. The tiered components in this framework are flexible and reusable.
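
For readers who haven't used the toolkit, here is a minimal sketch of that pipeline in prefuse's Java API, modeled on the standard introductory example (the GraphML file name and the "name" data field are placeholders): load the abstract data, register it with a Visualization (the filtered, visualization-specific model), attach a renderer, and schedule color and layout actions.

    import javax.swing.JFrame;
    import prefuse.Display;
    import prefuse.Visualization;
    import prefuse.action.ActionList;
    import prefuse.action.RepaintAction;
    import prefuse.action.assignment.ColorAction;
    import prefuse.action.layout.graph.ForceDirectedLayout;
    import prefuse.activity.Activity;
    import prefuse.controls.DragControl;
    import prefuse.data.Graph;
    import prefuse.data.io.GraphMLReader;
    import prefuse.render.DefaultRendererFactory;
    import prefuse.render.LabelRenderer;
    import prefuse.util.ColorLib;
    import prefuse.visual.VisualItem;

    public class PipelineSketch {
        public static void main(String[] args) throws Exception {
            // Abstract data model (file name is a placeholder)
            Graph graph = new GraphMLReader().readGraph("socialnet.xml");

            // Filter into the visualization-specific data model
            Visualization vis = new Visualization();
            vis.add("graph", graph);

            // View: render each node as a text label
            vis.setRendererFactory(
                new DefaultRendererFactory(new LabelRenderer("name")));

            // Actions: assign colors once, run the layout continuously
            ActionList color = new ActionList();
            color.add(new ColorAction("graph.nodes",
                VisualItem.FILLCOLOR, ColorLib.rgb(190, 190, 255)));
            color.add(new RepaintAction());
            ActionList layout = new ActionList(Activity.INFINITY);
            layout.add(new ForceDirectedLayout("graph"));
            layout.add(new RepaintAction());
            vis.putAction("color", color);
            vis.putAction("layout", layout);

            // Display: an interactive camera onto the visual items
            Display display = new Display(vis);
            display.setSize(720, 500);
            display.addControlListener(new DragControl());

            JFrame frame = new JFrame("prefuse sketch");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(display);
            frame.pack();
            frame.setVisible(true);

            vis.run("color");
            vis.run("layout");
        }
    }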

I would also be interested in using the prefuse API to develop visualization projects; the usability test for prefuse looks promising. I agree with Ken-ichi that engineers are also users who need good tools, so they can spend less time coding and more time on design.

James O'Shea - Sep 30, 2007 9:55:36 pm

I thought the Card et al. reading ("Information Visualization: Using Vision to Think") provided a good discussion of data and what it means to convert raw data to a "data table" before mapping it to visualizations. Essentially, they are talking about the need to reformat data, and I think this is a critical step in the visualization process that is often overlooked or underestimated. I found this to be the most difficult aspect of our second assignment (creating visualizations with Spotfire or Tableau), but it forced me to think more about the data and gain a greater understanding of what it was and (more importantly) what I could do with it. Additionally, I believe making careful and thoughtful decisions about the data at this early stage (i.e., reformatting the raw data) can greatly simplify and expedite the actual creation of useful visualizations later on. Although there is usually a need for interactive data transformations, I believe the whole process is made easier when extra care is taken to reformat the data initially.
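
To make the "data table" stage concrete, here is a tiny sketch using prefuse's Table class (the column names and values are invented for illustration). Deciding how to reformat the raw data largely amounts to committing to a schema like this:

    import prefuse.data.Table;

    public class DataTableSketch {
        public static void main(String[] args) {
            // The schema decision: what are the columns and their types?
            Table table = new Table();
            table.addColumn("country", String.class);
            table.addColumn("year", int.class);
            table.addColumn("gdp", double.class);

            // Each cleaned-up record becomes a row.
            int row = table.addRow();
            table.setString(row, "country", "Chile");
            table.setInt(row, "year", 2005);
            table.setDouble(row, "gdp", 118.0);

            // (prefuse.data.io.CSVTableReader can load a whole
            // CSV file into a Table in one step.)
        }
    }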

Amanda Alvarez - Sep 30, 2007 09:53:50 pm

Bederson et al.: "Deciding between composition [polylithic] and inheritance [monolithic] is...mainly a matter of identifying who the users of the toolkit are, and what the expected lifecycle of the toolkit will be."

I feel like there is something else that should be contributing to this decision, namely whether on a theoretical level we think that any new functionality we could create really is uniquely different from that which already exists. Is the new widget or interaction feature we want to implement just a re-hashing of an old feature, or is it really new, unique, and completely unrealized in the existing set of features? If the former, it seems we should go with the monolithic architecture; if the latter, the polylithic. I am bringing this up because it seems like there is a whole string of interaction methods (each with its own special buzzword) that boil down to the same thing. Perhaps there are only a limited number of ways in which we would ever need to interact with our visualizations, and these have all already been realized. After all, all the data we ever visualize seem to be describable by a few well-defined categories; perhaps only a radical new data model or data-visual mapping would require the customizability of polylithic toolkits. In any case, if mobility and reconstruction are valued qualities, it seems like there is a clear choice. (Conversely, as Jamie pointed out above, judicious reformatting of the data beforehand may obviate the need for extensive interactions, restricting the mobility of the data prior to visualization and making extensively customizable interaction tools unnecessary.)
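
For non-programmers, the inheritance-versus-composition distinction can be sketched in a few lines of Java (a toy example; none of these classes come from Piccolo or prefuse): inheritance bakes each behavior into a new subclass, while composition keeps behaviors as separate objects that can be combined freely.

    import java.util.ArrayList;
    import java.util.List;

    public class PolyVsMono {
        // Monolithic / inheritance: a new behavior means a new subclass.
        static class Shape {
            void draw() { System.out.println("shape"); }
        }
        static class FadingShape extends Shape {
            @Override void draw() {
                System.out.println("fade in");
                super.draw();
            }
        }

        // Polylithic / composition: behaviors are small separate objects.
        interface Behavior { void beforeDraw(); }
        static class ComposedShape {
            private final List<Behavior> behaviors = new ArrayList<Behavior>();
            ComposedShape with(Behavior b) { behaviors.add(b); return this; }
            void draw() {
                for (Behavior b : behaviors) b.beforeDraw();
                System.out.println("shape");
            }
        }

        public static void main(String[] args) {
            new FadingShape().draw();
            // Combining two behaviors requires no new subclass:
            new ComposedShape()
                .with(() -> System.out.println("fade in"))
                .with(() -> System.out.println("drop shadow"))
                .draw();
        }
    }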

N8agrin - Oct 01, 2007 12:08:17 am

prefuse is certainly a great toolkit, and reminds me somewhat of Processing, not in terms of its purpose, but rather in terms of its mission to reduce the overall code base, allowing programmers to write very little code in order to rapidly produce dynamic visualizations. Conducting a usability study on what is essentially a coding language or API was definitely an interesting approach. I've long wondered if anything interesting would come from a usability study of, say, Ruby and PHP to determine what it is that has made some languages gain popular support so quickly. Back to the visualization software: one aspect that seems to escape quite a few interactive visualizations is that they are great for producing a highly customized data view, but then there is no clear way to export that view. That is to say, they allow the viewer to develop a specific selection of data, but not necessarily to save or export that data. The ability to export data from various views would be an extremely useful feature in many visualization packages.
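
One way such an export could be bolted onto prefuse (a sketch, not a built-in feature; the "name" column is an assumed field of the underlying data) is to walk the currently visible items, copy their source values into a fresh table, and write that table out as CSV:

    import java.util.Iterator;
    import prefuse.Visualization;
    import prefuse.data.Table;
    import prefuse.data.io.CSVTableWriter;
    import prefuse.visual.VisualItem;

    public class ExportVisible {
        // Save the data behind the currently visible items as CSV.
        public static void export(Visualization vis, String path)
                throws Exception {
            Table out = new Table();
            out.addColumn("name", String.class);
            Iterator iter = vis.items();  // all visual items
            while (iter.hasNext()) {
                VisualItem item = (VisualItem) iter.next();
                if (item.isVisible() && item.canGetString("name")) {
                    int row = out.addRow();
                    out.setString(row, "name", item.getString("name"));
                }
            }
            new CSVTableWriter().writeTable(out, path);
        }
    }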

David Jacobs - Oct 01, 2007 01:09:03 am

N8agrin: I agree that being able to export visualizations would be a useful feature to include in these various visualization libraries. The trouble I see there, however, is that as soon as you leave the visualization creation environment, you lose the interactivity. I have a feeling that there are datasets out there that can never really be expressed well as a series of static visualizations or even animations. As I understand it, a lot of the focus for the prefuse project (I might be confusing this with some of Jeff's other projects) is on providing a means for users to share their visualizations in an interactive setting. This way, I can show you the visualization I found interesting, but you still have the power to explore the data in the same neighborhood. If you could export this interactive experience (say, by generating a compact standalone application linking to some standardized prefuse libraries), that I think would be the best-case scenario. But as you describe it, it sounds like we'd be exporting flat CSV files to be imported into something like Excel, or simple screenshots. It's a start, but I think we could do better.

James Andrews - Oct 01, 2007 03:26:59 am

Amanda -- The poly vs. mono decision doesn't seem to relate strongly to the creation of truly new features. At least, it seems to me that a situation where the visualizations are never fundamentally new, but adopt the properties of previous visualizations, may still be better served by a polylithic design: if the new visualizations draw concepts from multiple previous visualizations, one could draw appropriately from each type instead of only from the most similar type. On the other hand, in the case of truly new visualizations, where concepts are not re-used, a monolithic approach may be just as good as a polylithic approach, since in both cases nothing is being reused. So it's really about how much you expect to create 'derivative' visualizations that combine features, more than how much you expect to create completely unique visualizations.

A particularly strong example of the benefit of polylithic systems even in the absence of 'real' innovation would be the visualization that smoothly transitions between visualization types (Jeff showed one that I believe was looking at crime statistics).

Kenrick Kin - Oct 02, 2007 01:42:32 am

Maybe I missed something in the paper, but what is the rationale for organizing the visualization in figure 3 of Jeff's paper from left to right?

Jheer - Oct 03, 2007 09:24:21 am

Kenrick: The tree in Fig. 3 of the prefuse paper is oriented right-to-left because it is showing text in both Farsi and Hebrew, both of which are read in a right-to-left direction. DOI-Trees were designed to be re-orientable, so you can animate changes in orientation (left-right, right-left, top-down, bottom-up) at any time. I forgot to show that in the demo...
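
This re-orientation is exposed directly in prefuse's tree layout; a minimal sketch (the group name and spacing values here are arbitrary, and the ActionList wiring is omitted):

    import prefuse.Constants;
    import prefuse.action.layout.graph.NodeLinkTreeLayout;

    public class OrientationSketch {
        public static void main(String[] args) {
            // The layout takes an orientation constant, which is what
            // lets the tree in Fig. 3 grow right-to-left for Farsi
            // and Hebrew text.
            NodeLinkTreeLayout layout = new NodeLinkTreeLayout(
                "tree", Constants.ORIENT_RIGHT_LEFT, 50, 0, 8);
            // Flip the constant and re-run the layout's ActionList
            // to animate a change in orientation:
            layout.setOrientation(Constants.ORIENT_LEFT_RIGHT);
        }
    }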

Daisy Wang - Oct 03, 2007 11:25:54 am

Judging from the prefuse system architecture, it seems that source data is decoupled from the rest of the interactive system: the data that are needed are preloaded into in-memory data structures, and indexes are built on top of them for efficient query processing. It all sounds like a database's job. Is there a good justification for excluding a database from this data exploration cycle?
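
For context, prefuse does re-implement a slice of that database machinery over its in-memory tables; a small sketch (the column name and values are invented for illustration):

    import java.util.Iterator;
    import prefuse.data.Table;
    import prefuse.data.expression.Predicate;
    import prefuse.data.expression.parser.ExpressionParser;

    public class QueryDemo {
        public static void main(String[] args) {
            Table t = new Table();
            t.addColumn("age", int.class);
            for (int age : new int[] { 21, 35, 42 }) {
                int row = t.addRow();
                t.setInt(row, "age", age);
            }

            // Build an index on the column for efficient lookups.
            t.index("age");

            // SQL-like selection via prefuse's expression language.
            Predicate p = (Predicate) ExpressionParser.parse("age > 30");
            for (Iterator rows = t.tuples(p); rows.hasNext(); ) {
                System.out.println(rows.next());  // matching tuples
            }
        }
    }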

Mark Howison - Oct 08, 2007 02:06:59 pm

James wrote above:

Additionally, I believe making careful and thoughtful decisions about the data at this early stage (i.e., reformatting the raw data) can greatly simplify and expedite the actual creation of useful visualizations later on.

I agree, and I think this also relates to one of Tufte's points that you can't use a visualization to clean up bad data; instead, an effective visualization makes salient any important information that already resides in the data.