From CS160: User Interface Design Sp12
David Squeri - Greeter, facilitator
Shuqun Zhang - Computer, minor changes/fixes to paper prototype
Arturo Wu-Zhou - Note taker, test-participant
Our application is meant to be an aid for students who are interested in developing their public speaking skills but do not have access to an audience to practice with; it acts as a mock audience that monitors the speaker's performance and provides feedback on factors that the user may have difficulty keeping track of alone.
- Participant A was one of our participants from the contextual inquiry (Person A). She teaches SAT classes once a week as a student instructor and does presentations for a club.
- Participant B is a tutor in the SLC who has to hold tutoring/review sessions. He was a quick learner, and it showed during the paper prototype testing.
- Participant C is a physics student working on group research projects at the LBL labs. He is regularly required to give presentations regarding progress of the experiment, as well as training new, incoming students.
The testing took place in a medium-sized conference room with chairs, a table, and a wall with a large whiteboard. Magnets were used on the whiteboard to hold part of the paper prototype, making it easier for our human "computer." The remaining portions of the prototype were laid out next to the "computer" for easy access, and she was responsible for navigating through pages and holding them up for the participant to interact with.
- Have the participant monitor the length of their speech while presenting (easy)
- Have the participant use the good/bad posture detection (medium)
- Have the participant add a new unwanted gesture and test in a presentation (hard)
We met an hour before the scheduled testing time and finalized our script, made any necessary changes to our prototype, assigned roles, and did a few practice run-throughs. Since we only had 3 people, we assigned one person to be the greeter and facilitator (i.e., doing most of the talking with the participants), one person to be the computer, and one person to be the observer (note-taker), who also acted as our pretend participant in our practice runs.
During the actual tests, each participant was given a quick background briefing on the application and then a brief demonstration of how to navigate the UI. Each participant was then asked to perform the list of easy, medium, and hard tasks that we had laid out for them. We decided it would be more efficient, and make more sense, for all the tasks to be done in one run-through of our app (i.e., one presentation instead of three), so we listed these tasks on the whiteboard in case they needed a reference during the test. The pages were held up by our human computer for the most part, except for the presentation screen, which hung stationary on the whiteboard with a magnet. When the user made a navigation change, the human computer switched the current page based on their actions. Cursor location was also handled by the human computer through the use of a paper cursor on the end of a paperclip: facing the participant with the current UI page, the computer mirrored the cursor to the participant's movements in order to simulate the cursor following the movement of their hand. If the user ever fell silent for a long period of time, we posed questions asking them to comment on the page they were currently using and say what (if anything) was confusing or could be made better.
Photos from our test:
Overall, our experiments showed successful results. All three users were able to complete their given tasks without much (if any) help. Our easy and medium tasks didn't actually require the user to do anything explicit other than simply starting a presentation. The descriptive text on our buttons and screens successfully guided our users to perform our hard task.
Some things stood out throughout the three trials. All of our users had some difficulty reading the text on the buttons and information on the screens because it was too small to read from a distance. On the other hand, the text on the buttons was informative enough that users navigated quickly to accomplish the three tasks. Two out of three users had no difficulty adding a taboo gesture; the third participant did not know when to start registering the taboo gesture (but still successfully performed the task). All three users were also able to get the program to register a bad/taboo gesture during the presentation.
- Squinting: All three users were forced to squint because the detailed descriptions on the buttons and information on pages were too small. We rate this as a Minor usability problem because once the user has used the system a few times, there would be no problems.
- False Starts: Two users started their presentation right away, not realizing there was a countdown before the presentation began. We rate this as a Major usability problem because users would need to redo the first 5 seconds of their presentation once they see the 'Start!' notification pop up, which could be very annoying.
- Saving Video: Two users expected the application to save the video recorded during their presentation. We rate this as a Major usability problem because our users expressed that being able to watch themselves present again would be a needed feature.
- Ending the Presentation: While ending a presentation, one of our users expressed that the gesture to end the presentation was similar to a taboo gesture he had added. We rate this as a Major usability problem because, even though it is rather unlikely, if a taboo gesture happens to overlap with the ending gesture, the user might accidentally end the presentation.
- Back Button: One participant mentioned that he had become accustomed to "Back" buttons being located in the upper left-hand corner of a screen, while ours was in the bottom left. He found our button very quickly, but he still felt that our design had broken the accepted norm. We rate this as a cosmetic problem because we chose to put the back button where it is to keep it consistent throughout all of our screens.
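The gesture-overlap incident above could eventually be caught in software when the user registers a new taboo gesture. As a rough, hypothetical sketch (the trace format, function names, and threshold below are our own illustration, not part of the app), the system could compare the new gesture against the built-in ending gesture and warn the user if they are too close:

```python
import math

# Hypothetical representation: a gesture is a short trace of (x, y) wrist
# positions sampled over time, normalized to the same length.

def gesture_distance(a, b):
    """Mean Euclidean distance between two equal-length gesture traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def conflicts_with_end_gesture(taboo, end_gesture, threshold=0.1):
    """Return True if a new taboo gesture is too close to the ending gesture.

    The threshold is an illustrative value; a real system would tune it
    against recorded Kinect data.
    """
    return gesture_distance(taboo, end_gesture) < threshold

# Toy traces for illustration only.
end_gesture = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)]
similar     = [(0.02, 0.0), (0.12, 0.18), (0.21, 0.42)]
different   = [(0.9, 0.9), (0.5, 0.1), (0.0, 0.8)]

print(conflicts_with_end_gesture(similar, end_gesture))    # prints True
print(conflicts_with_end_gesture(different, end_gesture))  # prints False
```

A warning at registration time would let the user pick a different taboo gesture before it can ever trigger an accidental end-of-presentation.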
Our results showed that our interface was simple and gave users a clear view of how to navigate. However, the information displayed on the buttons and the pages themselves was too small. This means that while the information we supplied to the participants in the UI was sufficient, even for a first-time user, the design will need to be changed in order to supply the same information in a larger, easier-to-read format.
All users seemed to like the two-handed selection: the right hand moves the cursor and a left-hand push activates a button. They also appreciated the large button design that we chose to use. This made navigation much easier, as far less precision was necessary to make a selection.
To address our critical incidents, some changes we might make are:
- Change the countdown to the start of the presentation so that large numbers pop up in the center of the screen, rather than having the timer in the corner count down.
- Change the gesture used to end the presentation so it is less likely to overlap with a user's taboo gestures.
- Add a save video feature.
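For the countdown change, a minimal sketch of the intended behavior (the function name and console rendering are illustrative assumptions; the real app would draw each number large and centered on screen):

```python
# Simulate the proposed countdown: large, centered numbers counting down,
# followed by the 'Start!' notification, instead of a small corner timer.

def countdown_frames(seconds=5):
    """Yield the text to display for each second before the presentation."""
    for n in range(seconds, 0, -1):
        yield str(n)
    yield "Start!"

for frame in countdown_frames(3):
    print(frame)  # prints 3, 2, 1, Start! on separate lines
```

Making the countdown impossible to miss should prevent the false starts we observed, since users would no longer begin speaking before the 'Start!' notification.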
One thing our experiment could not reveal was exactly how well the posture/gesture/voice recognition of the Kinect will work during the presentation. During the test, our human computer picked up gesture and voice input with human senses, but the Kinect may be more limited in its ability to do so.