From CS 160 User Interfaces Sp10
The system being evaluated is a mobile social network for veterinary emergency doctors who treat small, non-exotic animals (such as dogs and cats). The system takes the form of an iPhone client application that allows users to ask questions about disease diagnosis and treatment and receive answers from other physicians in real time. Replies have ratings associated with them, provided by other doctors. Each physician also has a profile that includes the number of questions they've answered and the average rating of their answers.
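The profile and rating structure described above can be sketched as a simple data model. This is our own illustrative sketch, not part of the actual design; all class and field names here are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    """A reply to a question, rated by other doctors (hypothetical model)."""
    author: str
    text: str
    ratings: list = field(default_factory=list)  # numeric ratings from other doctors

    def average_rating(self):
        # An answer with no ratings has no average.
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

@dataclass
class Profile:
    """A physician's profile: questions answered plus average answer rating."""
    name: str
    answers: list = field(default_factory=list)

    def questions_answered(self):
        return len(self.answers)

    def average_rating(self):
        # Average over this physician's rated answers only.
        rated = [a.average_rating() for a in self.answers if a.ratings]
        return sum(rated) / len(rated) if rated else None
```

For example, a profile holding one answer rated 4 and 5 by two colleagues would report one question answered and an average rating of 4.5.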
All group members contributed equally. The majority of the work was done with all group members sitting together at the same table. This allowed for a lot of discussion, which was especially important when analyzing the results of our interactions with the target users.
- Michael Cao - Results, Discussion, Narration for Video Prototype.
- Andrey Lukatsky - Introduction, Mission Statement, Video Prototyping Discussion.
- Victoria Chiu - Prototype description and sketches.
- Anthony Chen - Method, Video Prototype editing and compilation.
Our objective is to create an application that will improve how veterinarians diagnose and treat animals by building a system that facilitates the spread of knowledge. Most of the members of this team either own or have owned pets, and we understand how important it is for our animals, whom we consider part of our families, to get the best treatment. We feel we can make a real, positive impact – not only on the lives of animals, but also on the owners who care deeply for them.
Although, at first glance, it seems like similar offerings exist (Yahoo Answers, Aardvark, etc.), our system is very different: we're creating a targeted application for a very specific user group. Even though we're achieving our goal using a proven mechanism (a social question/answer network), the fact that we're targeting such a distinct niche makes our application incomparable to these general-purpose offerings. Essentially, we didn't want to reinvent the wheel, but rather improve upon it to make it better suited to a particular environment – much like putting snow chains on tires.
Easy task, 0:00 - 0:56 : rating other vets' responses to questions
Medium task, 0:57 - 1:50 : posting a response to other vets' questions
Hard task, 1:51 - 3:37 : posting a question with attachments
The video prototype was created with a combination of paper cut-outs, video camera, and editing software. None of our group members had done anything like this before, so it took us a few iterations to build the low-fidelity prototype and create the video. The whole process (from cutting paper to exporting the final edited video) took us roughly 16+ hours. We learned a lot of skills along the way, from how to design the actual paper prototype (and make it easy for a user to navigate) to specifics of Adobe Premiere Pro CS4.
The prototype was built using standard 8.5x11 printer paper. We considered using construction paper, but after a few test-runs using normal paper, we didn’t find any issues. We used regular scissors and an X-Acto knife to make the cuts. The X-Acto knife proved to be the best tool for the job due to its accuracy – we could make very delicate cutouts.
There were three major difficulties we ran into with the paper prototype. The first involved making the frames transition as smoothly as possible – without having the prototype jump a few centimeters in any direction with each frame. The solution to this problem was three-fold: using a very solid/sturdy surface, fixing the camera in place, and printing out an iPhone frame that we taped to the surface. We picked a heavy table for our working surface because it was very stable and flat. We didn't have a tripod for the camera, so we used old textbooks and tape to suspend the camera (rigidly) above the prototype. The final step was printing out an iPhone in the center of an 8.5x11 sheet of paper, cutting out the screen portion, and taping this to the desk. The significance of cutting out the screen portion will be described next.
The second major difficulty we encountered was creating a scrolling effect where the screen would move with a user’s touch. This problem was solved by using the iPhone frame. Whatever we wanted to scroll, we simply placed it under this frame. For example, to implement the list of questions, we had a piece of paper (with questions written on it) which was the width of the iPhone screen cutout, but the height was much taller than its iPhone counterpart (about twice as tall). We placed this rectangular piece of paper under the iPhone frame we made (we only taped the top part of the frame to the desk, so we could easily lift it up and place things under it). To simulate scrolling, the user would simply drag their finger and the paper under the frame would move – providing a scrolling effect. We found this technique worked quite well.
Finally, the third difficulty we ran into was simulating a user typing – particularly having text appear when the user taps the keyboard letters. Initially we attempted to have the user tap a letter, then write that letter into the text box, and repeat the process. This unfortunately proved too tedious – especially for screens with many text fields. We solved this problem with video editing software: it turned out to be easier to digitally add letters to the text fields.
One final technique we used was recording all the way through instead of in chunks (i.e., instead of stopping the camera, rearranging the prototype, and resuming). Anthony had some prior video-editing experience, and he volunteered his skills to make the whole recording process a lot easier. He chopped up the long video we recorded, grabbed the important chunks, and pieced them back together. He did this while the rest of us were filming the prototype, and whenever he saw problems or needed extra footage, he would come over and tell us what he needed. This method also made it really easy to add custom audio dubbing: once the video was fully edited, we wrote a script and had Michael record narration synced to the video, then overlaid the audio on top of the video to complete it. Although the additional video editing required a little more coordination on our part, we found this approach much easier, and it alleviated the stop-rearrange-start recording problem.
This time, we went again with the emergency vets at the Berkeley Dog and Cat Hospital because it is one of the three main animal hospitals in the Berkeley area (and one of the two emergency hospitals), and they were the only ones we were able to contact who agreed to participate in the test. They were also very enthusiastic about working with us and expressed interest in our proposal to have them demo our prototype in the future.
We conducted our test after we did our own task sketches/demonstrations so that we were very familiar with how our system worked. We then divided up the roles based on what we thought fit us best:
- Andrey - Greeter/observer
- Michael - Facilitator
- Victoria - Computer
- Anthony - Observer
When we rehearsed before the actual test, Anthony played the role of the participant. Through our rehearsals, we refined the tasks/scenarios that we would give the vets during the actual test to align with the tasks they gave us in their feedback from the contextual inquiries.
We set up an appointment with each of the vets individually around half an hour before their shift started. We performed our test in one of the open offices in the hospital on a large desk. The vet would sit on one side of the table and the computer and the facilitator would sit on the other side. We brought all of our paper supplies and had them use the same ones that we prepared for our video.
We reused the three tasks from the prototype video demonstration: rating other users' responses (easy), posting a response (medium), and posting a question with attachments (hard). We decided not to prototype sorting/filtering questions because of the long, tedious, and uninteresting preparation required to reorder/repopulate lists of questions. We also decided not to prototype user profile creation because we felt it was secondary to our app's main purpose.
Before we met up with the vets, we briefly explained our lo-fi prototype and its purpose and also gave them some info on the app through email. We interviewed a couple of them for the contextual inquiry last time, so they were for the most part pretty well informed about our app.
When we met up with each vet, Andrey briefed each one about our interview/test again and had them sign a consent form. We spent quite a bit of time on this step showing how we would conduct the test because they were unsure about how to use the lo-fi prototype and we wanted to make sure that they were not limited in their performance/reactions by the system.
Next, we had Michael go through the tasks with each vet one by one. As the vets started each task, we timed them to collect timing data (none of them consented to audio/video recording). For the most part, the vets used our app pretty easily, but when one did seem to get stuck, Michael would ask some questions so that we could understand what they were having trouble with and then guide them along their intended task.
Finally, after we had them do the tasks, if we had time (we only did for the second vet, who finished every task quickly), we had a short interview with him to see what his impressions were, and to get some of his direct feedback.
We focused on the ease of use of our app because we saw that as the first thing that would cause users to stop using it. As metrics of ease of use, we used the time they spent on each task, the sort of questions they asked Michael, and the kinds of responses they gave when Michael asked them questions. The questions they asked during the test told us how easy or difficult it was for them, at a higher level, to go from the concept of the desired task to actually performing it.
As stated earlier, the three tasks we had our three target users perform were to rate a response, post a response, and post a question with an attachment. Most of the users had relatively little trouble navigating our user interface and performing the tasks we gave them. However, parts of certain tasks did cause some difficulty and minor problems.
Our first target user had an easy time performing almost all the tasks we asked of her, except attaching a file. She was able to get to the Post a Question screen right away. However, after she got there and finished filling out all the information about the question, she checked the box for attaching a file, then just waited, expecting something to happen so she could attach one. Eventually she noticed we weren't doing anything, so she simply finished her question post. That's when the screen for posting an attachment finally came up, allowing her to attach the file. Aside from attaching a file to a post, she performed all the other tasks quickly and intuitively.
The next target user basically had no problem doing any of the tasks we asked of him. He performed every task relatively quickly and did not bother asking for any advice. The only task that he performed a little slower was rating a response. Once he got to the Answers/Questions screen, he saw the response but didn’t realize right away that he could click on the thumbs up sign. He sat there for a couple of seconds before attempting to try clicking it. But clearly, he eventually made the right decision on his own.
The final target user had a couple of small problems while performing our tasks. First, she had the same problem as the first user when trying to attach a file: she also waited for something to happen after checking the box indicating she'd like to include an attachment. She did eventually realize that she needed to submit the question before she was allowed to attach a file. Her other minor problem was getting to the Answers/Questions screen. She spent a couple of seconds looking around the main menu before she realized the list shown there contained the actual titles of all the questions and that she could click on them. Other than those two incidents, she performed everything else efficiently and relatively quickly.
The results we obtained did end up revealing some small issues with our user interface that we had not considered when designing it. The most obvious issue was the way we let users attach a file. Two of our three target users had difficulty understanding how our design for attaching a file worked: both sat and waited after checking the attachment box on the Post a Question screen, expecting something to happen. It took both of them some time to realize they could post an attachment on the next screen, after submitting the question. After learning about this ambiguity, it seemed clear that we might need to rethink our design for attaching a file to make it more intuitive. Instead of having a separate screen for adding attachments, we could put everything on the same screen and let the user browse for files as soon as the attachment box is checked.
Another issue we learned about was the thumbs-up button for rating a response. It wasn't obvious to one user that he could click on it; he thought it was just a visual image showing how many people liked a certain response. We definitely need to do a better job of making it clear to the user that it is a button and not just a passive picture, whether by making the button bolder or by making the thumb look like it's popping out of the screen so it actually looks like a button.
The final issue we learned about was the questions list on the Main Menu screen. It might not be obvious for users that those are just the titles for the questions, and that they could click on them to get to the actual Questions screen. As a result, we can include a Questions title right above the list of question titles to make it clear that it’s a list of questions.
Critical Incidents from Observers
Vet #1:
- Seems to be very tech-savvy
- No iPhone, but has an Android device
- Has used Yahoo Answers
- Moving through the interface at nominal speed
- Slight hiccup when posting a question with an attachment
- A bit confused by the attachment checkbox
- Most likely a one-time occurrence

Vet #2:
- Has a non-smartphone
- But uses a tablet PC regularly (with stylus)
- Says he's heard of Yahoo Answers, but has never tried answering/asking
- Wants to become a member of Just Answer
- Moving through the interface faster than #1, but not by much
- Has a problem rating a response
- Doesn't know what to press
- Thinks tapping on the response itself will either rate it or open something up (not the case in our design)
- Judging by the look on his face, he appears not to be a believer in lo-fi prototyping

Vet #3:
- Not sure what phone she has (we forgot to ask)
- Calls herself tech-savvy
- Has read Yahoo Answers, but has never signed up
- Moving through the interface a bit slower than #1
- Likes our paper prototype (says she's really into art)
- Problem with the attach-file checkbox: expected something to happen upon checking it
- Problem 2: From the main screen (list of questions), didn't realize right away that to get to the answers you have to tap on a question