A3-KrishnaJanakiraman

From CS294-10 Visualization Sp11



Description and Storyboard

I wanted to visualize the graph of paths that Indian vocalists take when they render melodic phrases from a starting note to an ending note. My goal was to build a visualization similar to the Word Tree from IBM's Many Eyes. My initial storyboard looked as follows.

Storyboard.jpg

In short, I wanted to group the phrases in a collection by their common sub-phrases. I also decided on some controls to filter the vertices in the graph - for example, visualizing only the phrases that pass through a subset of the vertices, and a slider that would let the user easily change the starting and ending notes.
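The grouping idea can be sketched as a prefix tree: phrases that share a common prefix of notes fall under the same branch, which is essentially what a Word Tree displays. This is a hypothetical illustration, not the storyboard's actual data structure:

```python
# Sketch: group phrases by common prefixes using a nested-dict trie.
# Each phrase is a list of note names; shared prefixes share a branch.
def build_tree(phrases):
    root = {}
    for phrase in phrases:
        node = root
        for note in phrase:
            node = node.setdefault(note, {})
    return root

tree = build_tree([["C", "D", "E"], ["C", "D", "G"], ["C", "E"]])
# tree == {"C": {"D": {"E": {}, "G": {}}, "E": {}}}
```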

Data Collection and Processing

I generated the phrase data from a collection of free-form improvisational audio recordings by four Indian musicians - in total, about 40 minutes of vocal music. The fundamental frequency of each recording was computed using the YIN algorithm [A. de Cheveigné, 2002]. Using a tonic frequency that was manually determined for each recording, the resulting data was quantized to pitch classes (C, C#, ..) according to Just Intonation ratios. I wrote simple pattern-matching code to discover phrases between pitch classes, and the phrase collection was then abstracted as a directed graph. A link to the Python code that does most of the processing is provided at the end; a Matlab implementation of the YIN algorithm was used to compute the fundamental pitch.
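The quantization step might look like the sketch below. The ratio table is an assumption (one common 12-tone just-intonation tuning, with the tonic treated as C), not necessarily the exact ratios used in phrase.py:

```python
import math

# Assumed 12-tone just-intonation ratios relative to the tonic
# (one common JI tuning; the actual table in phrase.py may differ).
JI_RATIOS = [1/1, 16/15, 9/8, 6/5, 5/4, 4/3, 45/32,
             3/2, 8/5, 5/3, 9/5, 15/8]
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#",
                 "G", "G#", "A", "A#", "B"]

def quantize(freq, tonic):
    """Map a fundamental frequency to the nearest JI pitch class,
    treating the tonic as C and folding into a single octave."""
    ratio = freq / tonic
    octave = math.floor(math.log2(ratio))   # which octave relative to tonic
    folded = ratio / (2 ** octave)          # fold the ratio into [1, 2)
    # choose the pitch class whose ratio is closest in log-frequency
    idx = min(range(12),
              key=lambda i: abs(math.log2(folded / JI_RATIOS[i])))
    return PITCH_CLASSES[idx], octave

quantize(330.0, 220.0)  # a perfect fifth above the tonic -> ("G", 0)
```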

Changes from Storyboard

Generating a Word Tree-like visualization for a given set of musical phrases was much harder than I initially estimated. I spent a considerable amount of time on this without any progress; my initial idea of building such a visualization using suffix arrays and substring matches didn't work. I then decided to scope down my visualization to a directed graph. The vertices in the graph correspond to the 36 notes (3 octaves, 12 pitch classes). The edges represent transitions between the notes: the strength of a transition is given by the edge thickness, and the color of the edge represents its direction. This changed my visualization as follows.
Storyboard-dag.jpg

Final Implementation

My final implementation shows a simple directed graph that abstracts the phrases between two notes. The visualization was written using Protovis. The graph layout is determined by randomly assigning pitch classes to positions on the x-axis. Ascending and descending phrases (i.e., the directions of the edges) are coded using color: blue for ascending and brown for descending. The graph can be updated dynamically using the text boxes. My initial plan was to implement the start- and end-note selectors with a slider, but I found this difficult to implement in Protovis.
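The dynamic update driven by the text boxes can be sketched as a simple filter over the phrase collection: when the user enters new start and end notes, keep only the phrases that begin and end on them. A hypothetical helper, not the actual code behind the page:

```python
# Sketch: select the phrases that start on `start` and end on `end`,
# i.e. the subset of the graph redrawn after a text-box update.
def phrases_between(phrases, start, end):
    return [p for p in phrases if p and p[0] == start and p[-1] == end]

phrases_between([[1, 2, 3], [1, 3], [2, 3]], 1, 3)
# -> [[1, 2, 3], [1, 3]]
```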

http://people.ischool.berkeley.edu/~krishna/viz/v2/raga.html

http://people.ischool.berkeley.edu/~krishna/viz/v2/phrase.py


It took around 20 hours to get the final visualization out, but I spent a considerable amount of time deliberating on how melodic phrases could be represented like word trees.


