LoFi-Group:Epileptic Eels

From CS 160 User Interfaces Sp10


Low Fidelity Prototyping, Video Prototyping, Testing, and Analysis


Introduction and Mission Statement

Our application is designed to make grocery shopping easier and more convenient for people who have food allergies, intolerances, or other dietary restrictions. Unlike most people who shop for groceries, members of this group must carefully read through every ingredient on the labels of the items they wish to purchase in order to avoid buying items that are unsafe for them to consume. This process is time-consuming, and creates many chances for the shoppers to make errors. Offending ingredients can be masked by an unusual name, or simply missed. Shoppers who are buying items for friends or relatives are probably not used to shopping for people with food sensitivities, and are likely to make mistakes and be less careful. The goal of our application is to provide a quick, accurate, and simple tool for detecting offending ingredients in food products that any shopper can use effectively.
We constructed a low-fidelity prototype of our proposed interface, and conducted a series of user tests and interviews in order to determine whether our interface achieves our goals for efficiency, flexibility, accessibility, and ease of use.

Mission Statement: The goal of our application is to provide a quick, accurate, and effortless way for people who have food allergies, intolerances, or other dietary restrictions to detect offending ingredients in food products, and thus avoid the tedious and time-consuming process of reading and interpreting ingredient labels.

Group Members and Contributions

  • Divya Banesh: Helped film video and stop motion, helped create lo-fi prototype, appendix, method, and result sections, did general editing and formatting
  • Andrew Finch: Helped film video and stop motion, did video editing, helped conduct user tests and interviews, created interface sketches, helped create introduction, video prototype, and results sections, did general editing and formatting
  • Arpad Kovacs: Helped film video and stop motion, helped conduct interviews, helped create lo-fi prototype, intro/mission statement, result, discussion, and method sections
  • Daniel Nguyen: Helped film video, helped conduct interviews and interview scheduling, helped create lo-fi prototype, discussion, and method sections, wrote informed consent
  • Saba Khalilnaji: Helped film video and stop motion, helped conduct interviews, helped create lo-fi prototype, prototype section, and wrote scripts


Our prototype application is split into 4 tabs, each of which represents a specific task or goal that the user may want to achieve:

Scan Tab
This tab allows the user to scan the product's barcode. When this tab is selected, the camera is activated, and a barcode scanning interface is presented. A live preview of the camera's field of view will be shown, overlaid with an outline box that indicates the preferred position of the barcode for optimal recognition performance. Once the scan button is pressed and the program detects a legitimate, in-focus barcode, it will automatically take a snapshot and attempt to parse the UPC pattern. If barcode capture fails, an error message will appear, and the camera will revert to live preview mode, prompting the user to try again. Once a UPC number is successfully obtained, the user is taken to the 'Analyze' tab, where the product information will be shown and analyzed based on the active profile information.
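The capture-parse-retry flow described above can be sketched as follows. This is a minimal illustration only: the names (`FakeCamera`, `parse_upc`) are hypothetical stand-ins, not actual iPhone SDK calls, and a real implementation would decode the UPC from a camera frame rather than a string.

```python
# Sketch of the Scan tab's capture-parse-retry flow. All names here
# (FakeCamera, parse_upc) are illustrative stand-ins, not SDK calls.

def parse_upc(frame):
    """Toy decoder: accept a 'frame' that is already a 12-digit UPC string."""
    if isinstance(frame, str) and len(frame) == 12 and frame.isdigit():
        return frame
    return None  # decoding failed (blurry / misaligned barcode)

class FakeCamera:
    """Stand-in for the live camera preview used in the prototype."""
    def __init__(self, frames):
        self.frames = iter(frames)
        self.errors = []

    def snapshot(self):
        return next(self.frames)

    def show_error(self, message):
        self.errors.append(message)  # the real UI reverts to live preview

def scan_loop(camera, max_attempts=3):
    """Return a UPC on success (then go to Analyze), or None after giving up."""
    for _ in range(max_attempts):
        upc = parse_upc(camera.snapshot())
        if upc is not None:
            return upc
        camera.show_error("Barcode not recognized - please try again")
    return None
```

On a failed frame the sketch records an error and retries, matching the error-message-then-live-preview behavior described above.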

Design Sketch
This is our original interface design sketch for the Scan tab
Lo-fi Prototype
This is what the different screens of our lo-fi prototype look like for the Scan tab
The arrows show the direction of screen change


1. The green-bordered barcode view is shown after the user properly aligns the barcode in the camera view. The adjacent red-bordered barcode view is shown when the barcode is out of alignment.
2. This window appears while the iPhone is reading the barcode picture and searching for a product.
3. Once the product is found, the screen labeled 3 appears, and the user can press OK to move on to the Analyze tab.

Search Tab
As an alternative to scanning a barcode, the user may go to this tab and enter a UPC number or a textual keyword description of an item into the search box. For user convenience, the textbox will implement autocomplete functionality by providing a list of items (described by UPC code and a textual description) that match the characters the user has entered so far. The matching items will be shown as a scrollable list, with each item identified by a picture (if available), the item's name, and its UPC. The user may select one of these options to immediately go to that product's 'Analyze' page. This tab will also be useful for users who want to browse through or compare a list of products rather than look up a specific item.
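As a rough illustration, the autocomplete behavior amounts to matching the typed characters against both UPC digits and item names. The catalog entries below are made up for the example; the real application would query a product database.

```python
# Illustrative autocomplete for the Search tab. The catalog and UPC
# digits here are invented for the example, not real product data.

CATALOG = [
    ("000000000001", "Peanut Butter"),
    ("000000000002", "Whole Wheat Bread"),
    ("000000000003", "Peach Yogurt"),
]

def autocomplete(query, catalog=CATALOG):
    """Return (upc, name) pairs matching the characters typed so far."""
    q = query.lower()
    return [(upc, name) for upc, name in catalog
            if upc.startswith(q) or q in name.lower()]
```

A numeric query matches UPC prefixes, while a textual query matches anywhere in the item name, so the same search box serves both lookup styles.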
Design Sketch
This is our original interface design sketch for the Search tab
Lo-fi Prototype
This is what the different screens of our lo-fi prototype look like for the Search tab


The screen depicted is simply the search field and the corresponding keyboard and number pad that pops up for this view.

Analyze Tab
Once an item has been scanned and recognized or selected from the list of search results, the user is taken to the 'Analyze' tab, where he or she is presented with a brief item description and the status of its compatibility with the dietary restrictions of each active profile. Each profile is given a "green light" or a "red light" for the product. Tapping on one of these status listings will present the user with more detailed information about the compatibility of the product, including an ingredient list with the problem ingredients highlighted, as well as other nutritional data.
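The per-profile check behind the green/red lights can be sketched as below. The profile structure and the exact-lowercase-match rule are simplifying assumptions for the example; the real application would also handle alternate ingredient names and "may contain" notices.

```python
# Sketch of the Analyze tab's per-profile compatibility check.
# Matching here is a simplified exact (case-insensitive) comparison.

def analyze(ingredients, profiles):
    """Map each *active* profile to a (light, problem_ingredients) pair."""
    results = {}
    for name, profile in profiles.items():
        if not profile["active"]:
            continue  # inactive profiles are skipped entirely
        problems = [i for i in ingredients if i.lower() in profile["avoid"]]
        results[name] = ("green" if not problems else "red", problems)
    return results
```

The list of problem ingredients returned for a "red" profile is what the detail screen highlights.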

Design Sketch
This is our original interface design sketch for the Analyze tab
Lo-fi Prototype
This is what the different screens of our lo-fi prototype look like for the Analyze tab
The arrow shows the direction of screen change


1. The screen labeled 1 appears after the user presses "Scan" in the Scan tab and a scanned product is found. It shows all the profiles and tells the user which people can eat the scanned product. This view matches the design sketch well.
2. From screen 1, if the user presses on a name the next screen shows the product and its ingredients and highlights the problem ingredients for the corresponding profile.

Profiles Tab
The 'Profiles' tab allows the user to manage a profile for each person being shopped for. When a new profile is created, the user enters a name for the person, along with allergens and other items the person wishes to avoid. By default, when an allergen is added, all ingredients that are similar but go by different names, or that contain the allergen, are also added. More specific details about which allergens are added, which nutritional information should be screened for, and the user's preferences can be customized. Profiles can be created, deleted, activated, and deactivated. Only active profiles are used when the application checks a product.
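The default expansion of an entered allergen into its alternate names could work roughly as follows. The synonym table here is a tiny hypothetical sample; a real table would need to cover far more ingredients and masked names.

```python
# Sketch of the default synonym expansion when an allergen is added to
# a profile. SYNONYMS is a tiny hypothetical sample table.

SYNONYMS = {
    "milk": {"lactose", "whey", "casein"},
    "wheat": {"gluten", "semolina", "farina"},
}

def expand_allergens(entered):
    """Return the entered allergens plus their known alternate names."""
    expanded = set()
    for item in entered:
        key = item.lower()
        expanded.add(key)
        expanded |= SYNONYMS.get(key, set())
    return expanded
```

Expanding at profile-creation time, rather than at scan time, keeps the per-product check a simple set lookup.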

Design Sketch
This is our original interface design sketch for the Profiles tab
Lo-fi Prototype
This is what the different screens of our lo-fi prototype look like for the Profiles tab
The arrows show the direction of screen change


1. The first view depicts all the profiles. Profiles can be turned on and off, so an ON/OFF switch is shown next to each profile.
2. From the first view, if the user presses the "+", they move to the second screen, which is for adding profiles. They can type in their profile information in the fields depicted. Touching the "Done" button returns them to screen 1.
3. From the second view, if the user is creating a new profile and presses the "add new" button for intolerances, screen 3 comes up to allow the user to create a new intolerance.
4. From the first view, if the user presses on the name of a profile, screen 4 comes up to show the user that profile's allergen information.


The miscellaneous items of the prototype are pictured below.
Design Sketch
These are the miscellaneous buttons for the analyze tab
Lo-fi Prototype
These are the buttons, icons and nutritional information cards for the various test products we selected

Video Prototype




We began video development by brainstorming several plausible scenarios in which we could integrate the three tasks for our prototype. We selected the best option, and further refined the concept by writing a script to provide a guideline for each scene we would have to film. Our script had one main character and two secondary characters, and incorporated examples of all the key functionality of our app.

Live Action Footage (Scenarios)

We began recording the live action footage in the parking lot of the Berkeley Safeway supermarket on Shattuck, using a portable camcorder. Unfortunately we could not gain permission from the store manager to film inside the supermarket, and had to relocate to a nearby CVS drugstore to shoot the indoor scenes.

Stop-Motion Footage (User Interface)

Once we had captured all of the live footage we needed, we began capturing the user interface shots using the webcam provided by the instructors, and SAM studio. We initially shot the scenes one frame at a time, in the traditional stop-motion manner, but soon found this to be a very slow and tedious procedure that produced choppy results. We eventually discovered the time-lapse mode that SAM studio provided, which automated the shooting of still frames. We found that this technique provided much smoother and more efficient video capture capabilities, and set the time-lapse to shoot 10 frames each second while the user's hand was moving in the scene. We then simply paused the shooting when we wanted to change the interface layout, and then resumed again with the new layout. This process was rather effortless and the results were quite effective. One issue we encountered was that the low-quality camera did not perform well in scenes that were deficient in ambient lighting. Although we tried to rectify the situation by adding additional light sources, we found it hard to provide a sufficient amount of lighting for our scene, which resulted in our stop-motion sequences looking a little darker than we would have preferred.


After we captured all of the necessary live action and stop-motion footage, we proceeded to edit and composite the footage in Final Cut Express. Numerous cuts and transitions were made, as well as adjustments to color, saturation, brightness, and contrast. Audio enhancements and volume adjustments were also implemented.


Participants
We recruited a diverse set of participants from varying backgrounds and with differing dietary restrictions so that the results would be generalizable and applicable to our entire target user group.

Interview 1:

The first subject has serious lifelong allergies to tree nuts and seafood, and does not own an iPhone or any other smartphone. Although the user has used an iPhone once or twice, they are not very familiar with iPhone platform conventions and application paradigms, so intuition contributed heavily to their use of the prototype. In addition, the participant is a vegetarian, which gave insight into possible extensions to the current set of supported dietary restrictions.

Interview 2:

The second participant possessed a mild lactose intolerance that has developed within the past three years. They own an iPod Touch and thus are moderately acquainted with the Cocoa Touch user-interface toolkit and widgets that we are using in our applications. We relied heavily on this interview to measure our adherence to common application practices.

Interview 3:

The third subject had a variety of allergies, of differing severity, including: yeast, eggs, wheat, sugar, vinegar, and mold. They currently own an iPhone, but have only had it for a few months. Thus, they are quite familiar with the built-in/bundled iPhone functions and conventions. Unfortunately, they have had very limited exposure to third-party applications, limiting their knowledge of common application practices deployed by developers besides Apple.

Environment
Unfortunately, we were unable to gain permission from Safeway or CVS to conduct the experiment within a store. Therefore, we had to settle for the next-best alternative: a spacious and well-stocked kitchen with shelves of food that provided a reasonable imitation of a supermarket aisle. In order to maintain the participants' comfort and remove fatigue as a variable, we conducted the majority of the interview while sitting at a large rectangular table. The participant sat at the center of one side of the table, next to the facilitator, who gave the task instructions. On the opposite side of the table was the computer, who manipulated the low-fidelity prototype directly in front of the participant according to the commands given. The greeter and observer sat off to the side and discreetly took notes on the process. The kitchen was relatively quiet and isolated for the majority of the interview, with the exception of an occasional interruption or comment from the kitchen's owners. This reflects the basic environment of a supermarket aisle.

Tasks
Below is the list of tasks in the order in which they were performed:

  1. Add a new allergy profile (Easy)
  2. Scan the barcode of an item. (Easy)
  3. Add a secondary profile and change the active profile (Medium)
  4. Check whether an item is safe to consume for multiple profiles (Hard)
  5. Delete a profile (Medium)
  6. Search for a particular item and check its ingredient restrictions (Hard)
We added a few tasks to be tested in addition to our benchmark tasks because we felt that testing these tasks would give us a better understanding of both what our application is lacking and the user's opinion of our application's functionality as a whole.

Testing Procedure

We split up the roles in the following manner:

  • Computer: Saba
  • Facilitator: Andrew
  • Greeter: Daniel
  • Observer: Arpad
After greeting the participant and introducing each member of the team, we provided the subject with some background information about our project to establish the premise of the experiment. Following this introduction, we provided the participant with some time to read and sign the consent form, as well as ask any questions or voice any comments/concerns about the experiment. Once the test subject had agreed to participate in the experiment, we provided a general overview of the system, as well as a quick demonstration of how to switch between tabs and view a saved product analysis.
By this time, we were ready to begin the actual experiment. We requested that the participant perform one task at a time. Upon successful completion of a task, we moved on to successive tasks. We did not provide any guidance to the participant regarding how to complete the task, but we did ask the participant to voice his/her thoughts and feelings.
Upon completion of the trial, we asked the participant to provide their thoughts and feelings about the interface and also discuss particular critical incidents that were identified during the process. We also requested that the participant provide any suggestions they had on how to improve the interface, and to identify any absent features they would find particularly useful or functionality they thought was deficient.
Checking ingredients against multiple profiles
Entering profile information via keyboard
Scanning a barcode

Test Measures

In each test, we focused mainly on how easy it seemed for the user to perform each assigned task. This was measured in a number of ways, and was often affected by the user's interpretation of the information displayed by the prototype as well as their experience with iPhones. We used the speed of the participant on each task, relative to the speed of the participant overall, to judge which tasks were easier or harder for the user. We also paid close attention to the user's reaction during and after each task to see if the task had been particularly confusing or straining. Almost everything the participants said was recorded, mainly because the participants seemed very focused on providing feedback for the application, whether in the form of criticism or praise. Another factor that proved very helpful in the tests was how many questions the participants asked during each task. When a task was more or less what was expected, the user's comments were very limited and often trivial. However, when a task was confusing, lacking, or generally difficult, the user would ask many questions in an attempt to address their concerns about the functionality of the system.


Results
All three of the participants successfully completed all of the tasks we requested them to perform. However, major usability issues arose during the process of creating and deleting profiles.

The following table shows the time required to complete each task.

Subtask User 1 User 2 User 3 Average
Add Profile 3 min 54 sec 3 min 36 sec 47 sec 2 min 46 sec
Scan Item 1 min 5 sec 48 sec 23 sec 45 sec
Secondary Profile 1 min 34 sec 1 min 12 sec 41 sec 1 min 9 sec
MultiProfile Safe to Consume 1 min 36 sec 1 min 6 sec 53 sec 1 min 12 sec
Delete Profile 3 min 44 sec 2 min 31 sec 1 min 3 sec 2 min 26 sec
Search for particular item 47 sec 32 sec 28 sec 36 sec
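The averages in the table can be double-checked with a quick calculation: convert each time to seconds, average, and round to the nearest second. For example, Add Profile is (234 + 216 + 47) / 3 ≈ 166 seconds, i.e. 2 min 46 sec.

```python
# Arithmetic check of the per-task averages in the table above.

def avg_time(times_sec):
    """Average a list of per-user times (in seconds) as (minutes, seconds)."""
    return divmod(round(sum(times_sec) / len(times_sec)), 60)
```

Applying this to each row of the table reproduces the Average column.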

Task 1: Add Profile (Easy)

In all three trials, users initially did not notice the small buttons located in the navigation bar, outside the main content area. This made the process of creating a new profile a major point of contention for two of the three participants, who found the procedure of saving profile preferences and navigating back to the main profile page unclear. In particular, it took them a significant amount of time to recognize that the "Add" back-arrow in the navigation bar would perform this functionality.
When attempting to perform the "Create a new profile" task, the users were perplexed to find that upon clicking on the "Profiles" tab, they were faced with an empty list, which did not provide any guidance regarding the available options. At this point, the non-iPhone owners appeared dumbfounded, and took almost half a minute to discover the [+] button in the top right-hand corner, and even the experienced iPhone owner hesitated for several seconds before realizing that this button needed to be tapped in order to accomplish this task.

Task 2: Scan Product Barcode (Easy)

Scanning an item's barcode turned out to be quick and straightforward. We realized here that the "Product Found" dialog was an extra, unnecessary step for the user between scanning a product and seeing a successful analysis of it. Also, once the user was taken to the Analyze page, it was not completely clear that the profile could be tapped to reveal more information about the product and its safety for the owner of that profile.

Task 3: Add Multiple Profiles (Medium)

Once the users figured out how to create the first profile in task 1, adding and enabling a second profile proved to be straightforward. The concept of having ON/OFF switches on each profile also seemed relatively clear to the users.

Task 4: Check whether item is safe to consume for multiple profiles (Hard)

Users found this task to be quick and straightforward. However, user #2 initially misinterpreted the meaning of the red X, which is intended to indicate that the item is not fit for consumption. She first thought that it meant that the product did not contain any dangerous ingredients. This was recognized as a critical issue. The misunderstanding was rectified in the next screen, which presented a message warning of "problem ingredients", and highlighted the ingredients that were unsafe for the owner of that profile.

Task 5: Delete a profile (Medium)

Users were confounded by the modality of the delete interface. Users repeatedly clicked on the profiles themselves, and searched for a "Delete This Profile" button, which they could not find. It was unclear that profiles could be deleted from the profiles list itself, once the user clicked the "Delete" button in the navigation bar. Users unfamiliar with iPhone widgets did not comprehend the meaning of a white negative sign enclosed in a red circle next to an item in a list.

Task 6: Search for particular item (Hard)

After completing all the other tasks, the users had very little trouble searching for a particular item. Using this tab seemed to conform to the other standards imposed by the application, so users were comfortable applying previously gained knowledge to accomplish this task. There was confusion, though, about what were valid items to search for. Should the user type "dairy" or "lactose"? Does the user have to be very specific and type "walnuts, pine nuts, etc.", or can the user simply type "nuts"? What about uncommon allergens such as mold, or strawberries?

Discussion
During the course of the experiment, we learned the importance of clearly marking functionality in unambiguous terms, and of always presenting the user with a consistent "action" button that guides him/her through the application flow to the next step, or back to the previous screen, depending on the context. As mentioned in the previous section, two of our users had difficulty discovering how to add a profile because the "Add" button was relatively small and located in the navigation bar rather than the main screen. Both of these users agreed that the button to "Add" a profile should have been labeled "Save", and an even better option would have been to put a discrete "Save and Return to Profiles" button at the end of the form, so the user would not have to hunt around for what to do after filling out the page. Additionally, some indicators we provided within the application proved to be ambiguous, which we had not anticipated. Pairing these visual cues, such as the red "X" that marks incompatible ingredients, with a small amount of text may help users adjust to the application in the beginning. Overall, we have discovered that we may have to sacrifice some spatial and aesthetic qualities of our application in order to make the interface more explicit and user-friendly.
We also discovered that we cannot assume that all users of our application will be familiar with iPhone platform standards. For example, our low-fidelity prototype utilized [+] and [Delete] buttons in the navigation bar to add or delete items from the profiles list, as recommended by the Apple Human Interface Guidelines. However, this proved to be a recurring point of confusion, since inexperienced iPhone users were not accustomed to looking at the navigation bar to find options, and instead would have preferred to click on an action button within the main screen area. To fix this usability problem, we will add a message that clearly states when there are no profiles currently in the system, as well as an "Add New Profile" entry in the list. This ensures the user is not initially confronted with a blank list, and has a clear path of action to take. We are also planning to rectify the confusion regarding the deletion of profiles by adding a "Delete This Profile" button to each profile details screen, to provide an option that is consistent with these users' expectations. However, some of the users' qualms with the interface can be attributed to the limitations of the paper prototype. Certain features they expected, such as auto-completion when entering searches, would have been very hard to mimic in a lo-fi stage, but their comments on their expectations at these points will be very useful for a higher-fidelity implementation of our application.
In response to these findings, we will add more action buttons within the content area of each page (rather than in the navigation bar), to cater to users that are unfamiliar with the iPhone platform, and to provide clearer guidance to users on how to step through the application. This will help make our application more inviting to users of any level of familiarity with iPhone applications.
One user said she would prefer to have an easy vegan/vegetarian option, where she could just tap a button to specify this, instead of adding every offending product, which would be nearly impossible. We thought this was a good idea.
Another set of issues we discovered were ambiguities regarding allergens. How specific does a user need to be? Can he or she enter a general item such as "nuts" and expect the system to recognize every nut, or does the user have to be more specific and enter "Pine nuts, walnuts, etc."? Or should we allow for both levels of specificity? Should there be a difference between an allergy and an intolerance when creating a profile, since right now the outcome is the same for both? How will the application differentiate between a product that contains an offending ingredient, and a product that states something like "may contain traces" of an offending ingredient? Some users must stay away from products that say either of these things, while others are fine with products that state the latter. How will the app handle associations between different names for the same ingredient? If a user enters dairy, will it suggest lactose instead, or just accept dairy? Will we allow the user to specify particular amounts of an allergen that are tolerable? Finally, how will we deal with the realm of allergens that are unusual, such as mold or strawberries? How will we make it clear to the user what an acceptable search term is and what isn't? We do not yet know how we will handle these issues; we need to think them through and discuss them thoroughly, and develop an effective solution.

Appendices
Movie materials:

Movie Script
Experiment materials:
Prototype Pictures
Test Procedure and Script
Consent Form

Raw Data for Interviewee 1
Raw Data for Interviewee 2
Raw Data for Interviewee 3

Critical Incidents:
Level 5 importance:

  • users thought red 'X' in analyze meant product was OK to eat, when it actually meant the opposite
  • trouble figuring out how to finish adding profile (Add button needs to be improved)
  • trouble figuring out how to delete profile

Level 4 importance:

  • looking for ways to search/add very broad/specific food allergies
    • auto-suggestions?
    • what restrictions are supported by the application?
    • how are typos straightened out?
  • differentiating between 'traces', 'may contain' or main ingredient

Level 3 importance:

  • differences between 'allergen' and 'intolerance' ?
  • difference between name of person for profile and name of allergen(s) they're allergic to
  • vegan/vegetarian option?

Level 2 importance:

  • the "Product found" dialog box was not necessary - go straight to analyze
