LoFi-Group:RRBG

From CS 160 User Interfaces Sp10



Introduction and Mission Statement (5 pts)

The Whiteboard Planning App helps teachers plan out the layout of the lecture content that they'll be writing on a classroom's whiteboards. It then allows them to view their planned boards while presenting their lecture so they know just where, when, and how to put each element on the board. Our goal is to make lecture planning quick, easy, and effective for inexperienced teachers, and to help them utilize their whiteboards efficiently and in a way that is easy for students to view and follow.


The roles of each team member in this assignment are as follows:

  • Boaz Avital: Attended Test 2 and 3, created video prototypes
  • Daniel Ritchie: Attended Tests 1 and 2, did the interface sketches and Appendix documents, lots of writing
  • Eric Fung: Attended Tests 1, 2, and 3, photographed prototype and scanned notes
  • Richard Lan: Attended Test 1 and 3
  • Spencer Fang: Bought construction materials, attended Test 3
  • Everyone helped build the prototype and write up this document

Prototype (10 pts)

Interface sketch

Below is an early sketch of the interface screens for Editor Mode. Our design changed somewhat between this sketch and the prototype we actually built, but the core components are very similar.

Description

Here's a photo of the entire lo-fi prototype:

The Frame

We built a simple replica iPhone frame inside which all of the prototype interface screens sit. The frame is significantly larger than a true iPhone, but the aspect ratio is very similar.

Main Menu

This is the first screen users of our prototype see. They have the option to either plan a lecture or to start presenting a planned lecture.

Lectures (Plan)

If the user taps the 'Plan' button, the 'Lectures' screen comes up. This screen shows all the user's saved lectures and also gives the user the option to create a new lecture. If the user taps the button to create a new lecture, the 'Lecture Settings' screen comes up. Here the user can enter a name for the lecture and also set timing information (not implemented in this prototype). We also made a replica iPhone keyboard for text entry (such as typing the name of a new lecture).

Boards Viewer

Upon accepting the settings for a new lecture, the app displays the boards viewer, which shows an overview of all the boards that the user has added to the lecture. Before displaying it, though, the app first presents a static "tutorial image" (not shown in the video prototype) that gives the user an overview of how to use the somewhat novel widget central to the boards viewer. This 'rail' widget allows users to drag boards around to rearrange them and to simply drag boards off the rail to delete them. This functionality would have been invisible without a tutorial screen, so we elected to create one.
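The rail's behavior boils down to two operations on an ordered list of boards: reordering within the rail, and removal by dragging off of it. The following Python sketch is purely illustrative; the class and method names (`BoardRail`, `move`, `drag_off`) are our own and do not come from the app:

```python
class BoardRail:
    """Illustrative model of the boards-viewer 'rail' widget."""

    def __init__(self, boards=None):
        # Boards are kept in presentation order.
        self.boards = list(boards or [])

    def move(self, from_index, to_index):
        """Dragging a board along the rail rearranges the order."""
        board = self.boards.pop(from_index)
        self.boards.insert(to_index, board)

    def drag_off(self, index):
        """Dragging a board off the rail deletes it; returns the board."""
        return self.boards.pop(index)
```

For example, `BoardRail(["Intro", "Proof", "Summary"]).move(0, 2)` moves the first board to the end of the rail.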

The wrench icon in the upper right of the boards viewer takes the user back to the lecture settings, where they can revise the name of the lecture or the lecture timing information. The start presentation button at the bottom of the screen allows the user to launch presentation mode, which they'd do when starting to give their lecture in class. This button does the same thing as the large 'Present' button from the Main Menu.

Take a look at Task Video 1 to see the boards viewer in motion.

Layout Chooser

If the user taps the 'Add Board' widget in the boards viewer, the app loads this screen. Here, the user chooses how they'd like to lay out elements on the board. The board is split into multiple regular regions for better organization (we observed expert users doing this with their boards during class). The user swipes the screen to slide through the many different layout options available, tapping the desired layout to select it.

Take a look at Task Video 1 to see the layout chooser in motion.

Board Editor

The board editor is the heart of the app. The app loads this screen once the user selects a layout via the layout chooser, or when the user taps an existing board in the boards viewer. In the board editor, the user drags board elements from the scrollable menu on the right onto the board (displayed in the center of the screen). When the board has no elements on it, a text notification instructing the user to drag elements onto the board is shown (not shown in the video prototype). When the user drags an element over a particular board section, a piece of blue transparency highlights the section to indicate that the element, if released, will lock into place on that section.
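The highlight-and-snap behavior amounts to hit-testing the drag point against the board's section rectangles. Here is a rough Python sketch of that logic under our own naming (`section_under_point`, `drop_element`); the prototype was paper, so none of this is the app's actual code. It also reflects the behavior we observed in testing where a drop onto an occupied section is simply ignored:

```python
def section_under_point(sections, x, y):
    """Return the index of the board section containing (x, y), or None.

    Each section is a rectangle (left, top, width, height). During a
    drag, the matching section would get the blue highlight.
    """
    for i, (sx, sy, w, h) in enumerate(sections):
        if sx <= x < sx + w and sy <= y < sy + h:
            return i
    return None


def drop_element(board, sections, element, x, y):
    """On release, lock the element into the section under (x, y).

    `board` maps section index -> element. Drops off the board, or onto
    an already-occupied section, are ignored (the app does not respond).
    """
    i = section_under_point(sections, x, y)
    if i is None or i in board:
        return False
    board[i] = element
    return True
```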

The user can tap an element to bring up a menu of possible options (delete the element, add text to the element, add a photo, etc.) These interface elements are also shown.

The buttons on the bottom toolbar are: Change Layout (goes back to the layout chooser), Cancel (returns to the boards view, discarding changes to this board), Accept (returns to the boards view, retaining changes to this board), and New Board (creates a new board immediately after this one; a shortcut for going back to the boards view and adding a new board there).

We highly encourage the reader to look at Task Videos 1 and 2 to get a feel for how this screen works.

Page of Notes

The scenario we provided our prototype testers had them lecturing about Maxwell's equations. One task involves the user taking a photo of their notes. We provided the users with this paper defining Maxwell's equations.

Lectures (Present)

If the user taps the big 'Present' button from the main menu, this screen appears. Here, the user chooses a lecture to present from the lectures he/she has previously created.

Presentation

Here's the actual presentation screen (loaded once the user selects a lecture to present). The user can see all of the boards in the presentation and can swipe to transition between them. If a board element has a photo attached to it, the user can tap the small camera icon in the lower left of the element to view that photo. The exclamation point in the upper left of an element, if filled in (and orange), indicates that the element did not work well in class and needs revision. The user can toggle the status of the exclamation point by tapping it.
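The presentation screen's state is simple: a current-board index that swiping advances (clamped at the ends), plus a set of elements flagged for revision that tapping the exclamation point toggles. A small illustrative Python model, with names (`Presentation`, `toggle_flag`) of our own invention:

```python
class Presentation:
    """Illustrative model of presentation-mode state."""

    def __init__(self, num_boards):
        self.num_boards = num_boards
        self.current = 0          # index of the board on screen
        self.flagged = set()      # element ids tagged for revision

    def swipe_next(self):
        """Swipe to the next board; stops at the last board."""
        self.current = min(self.current + 1, self.num_boards - 1)

    def swipe_prev(self):
        """Swipe back to the previous board; stops at the first."""
        self.current = max(self.current - 1, 0)

    def toggle_flag(self, element_id):
        """Tapping the exclamation point toggles revision status."""
        if element_id in self.flagged:
            self.flagged.remove(element_id)
        else:
            self.flagged.add(element_id)
```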

Take a look at Task Video 3 to see this screen in action.

Video Prototype (35 pts)

Narrated Task Videos

The videos are narrated from the point of view of a new teacher, preparing for and delivering a short lecture on Maxwell's Equations. Each video begins with a narration explaining the context of the task, followed by a video of the task being completed along with narration introducing the video and explaining on a high level what the user is trying to accomplish.

Task 1: Create a presentation with 2 new boards - Hard

Media:rrbg_task1.mpeg

Task 2: Attaching a picture of your notes - Medium

Media:rrbg_task2.mpeg

Task 3: Go through a presentation and view notes - Easy

Media:rrbg_task3.mpeg

Complete Prototype Video

This video consists of a concatenation of the three task videos along with some video footage to add extra context.

Media:rrbg_complete.mpeg

Video Creation Process

Tools

The recordings of the tasks were made using stop motion animation with the SAM Animation Studio tool from Tufts University, using the recording tools provided by the professors. The title screen and narration were added using Windows Movie Maker 6, and the videos were converted to .mpeg format using online tools such as Media Converter.

Process

The stop motion animation was recorded in short segments by screen. Each screen interaction was recorded at 10 frames per second (time lapse photography at .1 second intervals). Once the interaction with the current screen was completed, the recording was paused and the screen was rearranged to reflect the application's reaction. Recording was then started again and interactions with the application continued.
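The frame-count arithmetic for this recording scheme is direct: at 10 fps (one photo every 0.1 seconds), each second of continuous interaction requires ten frames. A trivial helper (our own, just to make the arithmetic concrete):

```python
def frames_needed(duration_seconds, fps=10):
    """Frames required to capture an interaction of the given length.

    At the default 10 fps (time-lapse photography at 0.1 s intervals),
    a 3-second scrolling interaction needs 30 photographs.
    """
    return round(duration_seconds * fps)
```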

Difficulties: The most difficult part of recording in this fashion was capturing seamless, continuous interactions (such as scrolling and dragging), as opposed to simple button presses that just swap out the entire screen.

  • One example of this is the scrolling interfaces. To properly replicate iPhone scrolling functionality, great care had to be taken that the scrolling segments of the paper prototype would not get caught on the stationary segments while the application was in use. These scrolling segments had to be recorded continuously to show the fluidity of the interaction, and so often required two people to emulate: one to show the interaction and a second to hold the paper in place and help pull the scrolling segment.
  • A second example is in the planning feature where elements are dragged from the right menu onto the board while the sections are being highlighted. The main difficulty here was dragging a paper element across other loose papers and slick transparencies without everything moving around. This was accomplished by temporarily taping down certain elements and by trying to inconspicuously hold the transparencies down with a thumb while dragging over them. Again, this section had to be recorded in real time (10 fps) to represent its fluidity.

Benefits: The biggest benefit to video prototyping in general and stop motion animation specifically is the ability to represent complex interactions seamlessly and instantaneously. Screen transitions and complexities like the aforementioned element section highlighting can be represented instantly and faithfully, so viewers are more likely to understand their purpose and the desired functionality of the final interface. An additional benefit to stop motion animation is that elements can move and act on the screen without apparent interaction from the user's finger: if the user starts an action, such as sliding an element that continues to move and then snaps into a new position on its own, it can be presented accurately.

Method (5 pts)

General

User Selection

We wanted our participants to be college-level math/science teachers at the beginning of their academic career, so we sought out graduate students. In particular, we looked for students who either are currently teaching or have taught in the recent past.

Procedure

More specific procedure details can be found in the Demo Script document in the Appendix.

We began each test by introducing ourselves to the user and describing the goal of our project and of the test. We then had the user read and sign the Consent Form (see the Appendix). Here, we also took the time to ask the user to think out loud and to explain that we could not help the user through the test. After this, we explained the general scenario in which the user was to imagine him/herself while participating in the test. Having given the user this preliminary introduction, the facilitator introduced each task to the user one by one, since our tasks flow together and result in eventually accomplishing a larger goal (preparing and presenting a simple lecture). Observers took note of critical incidents on 3x5 index cards. At the end of the test, we asked the user to explain any unusual behavior in order to get an insight into the cause of various critical incidents. We concluded by thanking the user for their time and promising to show them our finished interface.

Measures

In particular, we observed how quickly our users could adapt to using our application, since it is largely a unique and custom interface. Our main concerns centered on how well the user could perform the tasks on first use compared to how quickly they could move around the app towards the end of the session. We also wanted to see whether all of our app's functionality was 'discoverable' without relying on tutorials or on showing the user how to use our application outright.

Test 1 (User 1)

Team members in attendance:

  • Daniel Ritchie ("Computer")
  • Richard Lan ("Facilitator")
  • Eric Fung ("Observer")

User 1 is a Mechanical Engineering grad student. She's currently TAing a course and has TAed in three prior semesters at Berkeley. She also has taught for four semesters pre-Berkeley. When she teaches, she makes heavy use of whiteboards and blackboards. She does not own an iPhone or an iPod touch, but she has some familiarity with the devices from using her brother's iPod touch.

Test 1 took place in an open area on the ground floor of Hesse Hall (a Mechanical Engineering building on campus). The participant and all of the team members sat at a large, rectangular table. The prototype was set up by spreading out its components on the ample table space; we did not use any other equipment. The Facilitator sat to User 1's right, the Observer sat to her left, and the Computer stood at the table across from her.

Test 2 (User 2)

Team members in attendance:

  • Boaz Avital ("Computer")
  • Daniel Ritchie ("Facilitator")
  • Eric Fung ("Observer")

User 2 is a third-year Computer Science grad student. He's been a TA twice in the past, but he's not currently teaching. He doesn't own an iPod touch or an iPhone, nor has he gotten familiar with them by e.g. using a friend's device. When he teaches, he makes regular use of the whiteboard.

Test 2 took place in the graduate student lounge on the 5th floor of Soda Hall. The participant and the team members sat on couches around a low, square table on which the prototype was set up; the small space of this table (compared to the table from Test 1) added some difficulty to the Computer's job. The Observer sat next to the participant on one couch, the Computer sat to their left on another, and the Facilitator sat across from them on yet another.

Test 3 (User 3)

Team members in attendance:

  • Boaz Avital ("Computer")
  • Eric Fung ("Facilitator")
  • Richard Lan ("Observer")
  • Spencer Fang ("Observer")

User 3 is a Computer Science grad student, and has taught for 1 semester. He has owned an iPhone since October (six months), so he is considerably familiar with the affordances of iPhone interfaces. He primarily uses the whiteboard when teaching.

Test 3 took place in the alcoves of the 6th floor of Soda Hall. The participant and the team members sat on chairs around a long table on which the prototype was set up. The User sat at one end of the table, the Facilitator to his left, the Computer to his right, and the Observers at the other end of the table.

Results (5 pts)

Overall, our participants had great success with our prototype. They ran into a few difficulties, which we discuss below.

Task 1: Create lecture with 2 boards

This was the longest and most difficult task, so we actually split it up into sub-tasks that we gave to the users one by one.

Create a lecture

Our users had no real problem with this. They all made quick work of adding a new lecture and setting its name. However, most users were not sure what the "Time this lecture" option would do (we did not tell them because it was not part of our tasks). This didn't trip any of them up; they simply ignored it and moved on.

Add a board

This was our biggest problem spot. We were worried that the functionality of the boards view would be invisible, so we provided a one-time pop-up tutorial screen that shows how to use it. Unfortunately, most users did not realize that this screen was a tutorial: they thought it was an interactive part of the app itself. User 1 took two or three minutes to eventually realize this and to find the 'Dismiss' button (which gets rid of the tutorial and brings up the real boards view). Alas, our efforts on behalf of the user ended up confusing them further.

To get a little more insight on the situation, we tried not presenting the tutorial screen to User 3. Since our tasks don't actually require the user to move/delete boards, he had no problem using the interface without having seen the tutorial. After the test, we showed him the tutorial screen. He said that he realized that boards could be rearranged by dragging them, but he wasn't sure how to delete boards until he saw the tutorial. This reinforced our belief that the tutorial is necessary. Its presentation obviously needs to change, though.

Once past the tutorial screen, users had no problem with adding boards and selecting already-created boards.

Add content to a board

This task exercised most of the functionality of our prototype, so this is a long section.

First, the good news. Users were able to get into a "flow" with the board editor, quickly laying out boards by dragging elements from the right menu onto the board. They also knew immediately which elements in the menu corresponded to which board content (proof, graph, list, etc.)

Now for the problems. One minor problem was that none of our users noticed the "New Board" button at the bottom of the board editor; they all went back to the boards view and used the big "Add Board" button to add new boards. This leads to a slow workflow that we hoped to avoid, but it's not a critical usability issue.

We had one user (User 1) who forgot that board sections need elements dragged onto them before they can be annotated. She repeatedly tried to tap a blank section to add notes to it. She eventually remembered that she first needed to drag an element onto the board section, but she said she would have appreciated some form of reminder.

We also had a user (User 3) who thought that board layouts represented the layout of multiple boards in a classroom, not sections of the same board. This conceptual error caused him to try adding multiple elements to the same board section. The app simply does not respond to this action, so he eventually figured out that he could only add one element per section.

Our users also seemed to compulsively annotate every board element they added. We didn't intend for this to happen; our goal was for users to only add text/photos to elements when they need the information to remind them of what to write on the board. When asked about this, users said that they on some level realized that annotating was optional. However, they felt like they "ought to" annotate for the sake of completeness. We may want to look at ways to change this behavior.

Related to the above problem: When adding text to an element, one of our users (User 2) kept trying to type the entire contents of the board section into the text field. We intended for this text to be used as a label; typing huge reams of text here could really slow a user down. User 2 also didn't seem to realize that typesetting issues would prevent him from accurately typing in the content of equations, etc. into the text fields.

Task 2: Annotate board element with photo of notes

Again, this task generally went well. Users had no problem navigating the annotation menu and taking the photo. One user was confused when the small camera icon appeared on the lower-left of the element, but he fairly quickly discovered what it meant by experimentation.

Task 3: Give presentation, call up photo of notes

Here, users had no problems navigating through the presentation. However, they did have a few issues with some of the extra functionality, especially that related to photos.

No users had problems calling up the photos of their notes (by tapping on the camera icon). However, two users had problems dismissing the photo. There's no button for this; we intended for users to tap the screen to dismiss the photo. One user tried to press the home button to go back--this was near-fatal, as it would have quit the app had she held it down. Another user discovered how to dismiss the photo by accident: he tried to rotate the photo by a rotary swipe gesture, which dismissed the photo.

Speaking of the rotary swipe: other users wanted additional functionality when viewing notes. User 3 wanted to zoom in on the notes via the pinch gesture, but we had not intended to support this gesture.

Even though our prototype didn't implement any of the functionality related to tagging things for revision, we were curious to see if users could figure out what the exclamation point icon meant. User 1 guessed correctly, but Users 2 and 3 did not. User 2 thought that it would allow him to edit the board on-the-fly; User 3 thought that it would serve as a bookmark in the event that he had to quit the presentation early.

Discussion (10 pts)

What we learned

Our app generally works quite well for our target users, even those who haven't used iPhones before. Users were able to quickly ramp up and develop a workflow with the app, even after only a few minutes of use. There are a few rough spots in the interface that need ironing out (see the Results section above), but the overall structure of the app seems solid.

What we couldn't learn

Since none of our tasks involved the timing or flag-for-revision functionality of our app, we couldn't really learn much about the effectiveness of these features. As described above, we tried to get some information about flag-for-revision. We learned nothing about timing.

We also couldn't learn if our proposed method for planning whiteboards--declaring layouts and then adding content to these layouts--is the "right" way to plan whiteboard presentations. Our users seemed familiar with this approach (it is similar to the approach taken by software such as PowerPoint), but perhaps a comparative study with a more atypical interface would have allowed them to think outside the box and suggest alternative planning paradigms.

What will change

The boards viewer tutorial, in its present form, clearly doesn't work. In the final version, we will make it clear that this screen is just a static image. We might also give it a header like 'Tutorial' so that this is even more clear. User 1 suggested that this screen not come up unless a 'Help' button is pressed. This would also further reduce confusion.

Users didn't notice the 'New Board' button at the bottom right of the board editor. To encourage them to use this (and thus speed up their workflow), we're thinking of placing a more attention-grabbing icon in its place (instead of the text 'New Board'). A color change might also help here.

To help users remember to drag elements onto board sections before they can be annotated, we plan to have the 'Drag Elements onto Board' notification on every empty board section, instead of just once per board. This might be a very light, greyed-out text that says something like 'Drag Element Here.' This fix should eliminate the problem.

As for User 3's issue with thinking that the layout chooser represented the layout of multiple boards in a classroom, we don't think that we need to change anything here. We think that the problem was with the instructions we gave the user (telling them that the classroom they're teaching in has two whiteboards right before they see this layout chooser screen). Without these slightly conflicting messages, we think that the function of the layout chooser is clear (as it comes right after "Add Board" every time the user taps it). The user confirmed this after the test, saying that on second glance the meaning of the layout chooser was quite clear.

The above conceptual error also, as noted in the Results section, caused User 3 to try adding multiple elements to the same board section. We don't see this as a problem for the final app, as elements that have been dragged onto the board will in some way expand to fill the entire section that they occupy.

We would also like to discourage compulsive annotation. We think that changing the 'Tap to Edit' notification that pops up for each element to something like 'Tap to Annotate (Optional)' might do the trick.

Along similar lines, we'd also like to discourage users from trying to type the entire contents of their boards into the element text fields. We think that both providing a small text box and calling this text something like 'Element Label' will get the message across fairly clearly.

As mentioned above, one user was confused by the appearance of the camera icon after he attached a photo to a board element. The final app will have a much sharper, nicer-looking photo icon, which will help. We also might want to add a confirmation screen where the user accepts/rejects the photo they just took before it gets attached to the board element. Transitioning from the photo back to the board editor by having the photo minimize into the new camera icon would also help clarify the situation.

Our tests clearly indicate that we need to provide a clear way to dismiss photo annotations once they've been brought up. A 'Dismiss' button that comes up when the user touches the screen should do the trick.

Our users also made it clear that we need to support more interactions with photo annotations. We should definitely support pinch zoom, and we might also want to consider rotating the photo with different orientations of the iPhone.

The rudimentary analysis of the flag-for-revision system we conducted indicates that we need a more informative icon than an exclamation point for this feature. One user also suggested adding some text that pops up briefly to say that an element has been tagged for revision.

Appendix (5 pts)

Additional Materials

Critical Incidents

  • User doesn't realize that tutorial screen is just an image, tries interacting with it. Difficulty in spotting the dismiss button right away.
    • Severity: 4. Major usability problem: important to fix
  • User forgets to drag element onto board, tries tapping either a blank region or the next element.
    • Severity: 3. Minor usability problem
  • User doesn't know how to dismiss photo of notes, tries pressing home button.
    • Severity: 3. Minor usability problem
  • User tries to rotate photo of notes by rotary swiping gesture.
    • Severity: 1. I don’t agree this is a usability problem.
    • The swipe gesture is a nonstandard iPhone gesture, and we do not intend to allow the photos to rotate.
  • User tries to zoom into the photo of notes by pinch gesture.
    • Severity: 3. Minor usability problem
  • User does not notice 'New Board' button in the board editor, goes back to the boards overview to add a new board.
    • Severity: 2. Cosmetic problem
  • User tries to drag an element into a section that already has an element in it.
    • Severity: 2. Cosmetic problem
  • User believes that the layout choices refer to the arrangement of boards in the classroom, not arrangement of topics on a single whiteboard.
    • Severity: 3. Minor usability problem
  • User tends to edit all elements on the board, though it's optional.
    • Severity: 2. Cosmetic problem
  • User types all their notes verbatim into the text field of an element.
    • Severity: 2. Cosmetic problem
  • User is unsure what the camera icon is for after taking a photo of his notes.
    • Severity: 1. I don’t agree this is a usability problem.
  • User doesn't know what the '!' icon does in presentation mode. Each user has a different idea of what it might mean.
    • Severity: 3. Minor usability problem
  • User doesn't know what it means to "time this lecture".
    • Severity: 3. Minor usability problem

Raw Data

These are the index cards our observers created during our test sessions. Some of the cards have additional annotations on them that were made immediately after the test session (to ensure that we wouldn't later forget what our notes meant). They otherwise reflect the raw data we gathered during our studies.

User 1

User 2

User 3


