From CS160: User Interface Design Sp12
Introduction & Mission Statement
Professional quarterbacks who want to improve their skills turn to world-class sports laboratories where their movements can be precisely measured and analyzed for imperfections. Because of the high cost of such facilities, recreational quarterbacks have no access to this kind of analysis. We aim to change that.
Purpose & Rationale
The purpose of our application is to help football quarterbacks improve their posture and positioning during pivotal poses, such as the moment just before making a pass. Learning through repetition works: by performing the same poses correctly every time, a quarterback internalizes the crucial moves executed during game time. Our experiment will consist of football quarterbacks using our application to learn correct poses through repetitive practice. The players will use the application to correct bad posture and learn the right way to perform a move by following the feedback it provides.
Our mission is to help recreational football quarterbacks improve their skills.
- Pedro: Sketches, acting, props, write-up.
- Juan: Sketches, filming, film-editing, write-up, props.
- Sally: Write-up, acting, sketches.
- Jeff: Write-up, main actor, sketches.
The prototype seems very plain and is lacking many things. In our next iteration, we hope to add more features to the UI to make navigation easier for the user. Some functionality that could be added:
- Retrieve scores of football teams from the internet
- Teach the user about different "Game Plans"
- Add recent highlights of talented football players
The initial screen greets the user and asks him to assume the calibration position. At this point, the Kinect adjusts its tilt for optimal performance. As soon as the user strikes the calibration pose, the UI transitions to the main menu. This screen appears only once, since we believe the Kinect needs to be calibrated only once.
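A minimal sketch of how this one-time tilt adjustment might work, assuming the sensor reports normalized joint coordinates; the function name, parameters, and sign convention are all assumptions for illustration (the real Kinect SDK exposes tilt through its own elevation-angle API):

```python
# Hypothetical calibration sketch: estimate how many tilt steps would center
# the user vertically in the frame. Coordinates are normalized to [0, 1] with
# 0 at the top of the image; step size and limits are illustrative values.

def calibrate_tilt(head_y, foot_y, shift_per_step=0.05, max_steps=10):
    """Return the number of tilt steps needed to move the user's vertical
    midpoint to the center of the frame (sign convention is illustrative)."""
    midpoint = (head_y + foot_y) / 2.0
    offset = midpoint - 0.5                  # positive if the user sits low in frame
    steps = round(offset / shift_per_step)   # each step shifts the image by shift_per_step
    return max(-max_steps, min(max_steps, steps))

# User standing low in the frame: head at 0.45, feet at 0.95 -> midpoint 0.70
print(calibrate_tilt(0.45, 0.95))  # 4 steps
```

Once the returned step count is zero, the user is centered and calibration is done.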
The main menu lists the categories of lessons available in the application; Training, Stretches, and Full-Motion Throw are some of the options the user can navigate through. The user swipes downward to select the highlighted option, swipes left or right to move between options, and swipes upward to go back.
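The swipe navigation above amounts to a small gesture-to-action dispatch. Here is a sketch of that mapping; the `Menu` class and gesture strings are hypothetical, standing in for events the Kinect gesture recognizer would deliver:

```python
# Illustrative sketch of the main-menu swipe navigation; not actual prototype code.

class Menu:
    def __init__(self, options):
        self.options = options   # e.g. ["Training", "Stretches", "Full-Motion Throw"]
        self.index = 0           # currently highlighted option
        self.selected = None     # set when the user swipes down

    def handle_swipe(self, direction):
        if direction == "right":                # next option
            self.index = (self.index + 1) % len(self.options)
        elif direction == "left":               # previous option
            self.index = (self.index - 1) % len(self.options)
        elif direction == "down":               # select the highlighted option
            self.selected = self.options[self.index]
        elif direction == "up":                 # go back to the previous screen
            return "back"
        return None

menu = Menu(["Training", "Stretches", "Full-Motion Throw"])
menu.handle_swipe("right")   # highlight "Stretches"
menu.handle_swipe("down")    # select it
```

Wrapping left/right swipes with modular arithmetic lets the user cycle through the options endlessly instead of hitting a dead end at either side.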
After the user has selected a lesson, he is taken to the demonstration page, where the lesson really begins. The avatar on the screen correctly demonstrates all poses required for that lesson, moving slowly and showing step by step how to achieve the perfect posture and pose. The user can pause the playback at any time and replay the lesson as many times as he'd like before trying it out for himself. While watching the demonstration, he should follow along by doing the poses himself. After the demonstration has ended, the user is prompted to try for himself.
After the user watches the short animation, the screen changes to the next stage, known as "Your Turn". During this stage, the application prompts the user to start the exercise or to replay the previous animation clip (this option is not shown in the prototype, but while building it we realized it would be handy). The user raises his right hand and holds it for a second to notify the application that he is ready to begin. Once the application starts tracking the user, it notifies him whether he is doing a good job or performing the exercise incorrectly. When the user completes the full exercise, the application notifies him that he has passed the exercise and automatically returns to the previous menu so he can select another training or stretch exercise.
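The "raise your right hand and hold for a second" trigger can be sketched as a debounce over skeleton frames: the gesture counts only if the hand stays raised for a full second of consecutive frames. The frame rate and threshold below are assumptions, not values from our prototype:

```python
# Hypothetical sketch of the hold-to-start gesture described above.

FRAME_RATE = 30          # assumed skeleton frames per second
HOLD_SECONDS = 1.0       # how long the hand must stay raised

def detect_hold(frames, hold_frames=int(HOLD_SECONDS * FRAME_RATE)):
    """Return True once the right hand has been raised for hold_frames
    consecutive frames, i.e. the user has signalled he is ready to begin.
    `frames` is a sequence of booleans: was the hand raised this frame?"""
    consecutive = 0
    for hand_raised in frames:
        consecutive = consecutive + 1 if hand_raised else 0
        if consecutive >= hold_frames:
            return True
    return False

print(detect_hold([True] * 30))                       # a full second: ready
print(detect_hold([True] * 15 + [False] + [True] * 15))  # interrupted: not ready
```

Requiring consecutive frames keeps a brief, accidental hand raise from starting the exercise early.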
Discussion of Video Prototyping
How did you make it?
We started by brainstorming how to improve the interface that we presented in the Contextual Inquiry assignment. Once we reached a consensus on what the interface would look like, we began building our prototype on A4 paper. We soon realized that drawing each interface on a single page was a waste of time, since one mistake was enough to ruin that page. We then switched to prototyping each element of our interface on a separate piece of paper. After agreeing on which elements we would use, we assembled each page of our interface and took pictures of it. To generate the animation, we moved the elements slightly and took several pictures; sequencing the pictures of the UI and the slightly changed UI creates the feeling that the elements are animated. We also shot live-action footage to make the story more realistic and give the viewer the context of the application. The combination of live-action footage with traditional hand-drawn animation lets the viewer see a real interaction that might take place when a user is using our application. We then edited the final video and added narration.
The video was edited using Windows Movie Maker.
Any new interesting techniques you came up with?
Several interesting techniques came up as we built our prototypes. For instance, to create the feeling of a television, we built a TV frame out of A4 paper and some cardboard flyers, then colored it with markers. This prop will come in handy when we do Wizard of Oz testing, since it allows us to quickly switch from one interface to another without giving up the feeling that the interface is being shown on a TV screen. Another interesting technique was prototyping each element individually instead of prototyping each full interface. This modularity let us easily change our interface without having to redraw it. The modular elements were also critical in creating the animations; without them, we would have had to use another animation technique. The combination of these techniques played a pivotal role in our prototype, giving us a better sense of how our interface worked and of its shortcomings. Thanks to its modularity, we were able to quickly iterate and improve.
What was difficult?
One difficulty in making the prototype was showing the animations of our application, for example, how the screen slides as the user selects an option by swiping in some direction. It was also difficult for the group to decide on one main user interface; although there were many disputes, we now have the basic idea of what user interface our app needs. The most difficult task was figuring out how to show the example stretch or throw for the user to follow. When a user selects a workout, the application shows a small clip of what the user is supposed to do, so we decided to draw out every frame of a stick figure throwing a football or doing a workout as the clip shown to the user. The hardest logistical part was scheduling group meetings: some of us had midterms to study for and some had to travel over the weekend, which limited our filming resources, so we had to make do with what we had.
Additionally, we had problems drawing since most of us weren't talented artists.
What worked well?
The animation worked very well, conveying how elements move in our UI. Considering our very limited editing skills, we believe the video turned out better than expected (take into consideration that we didn't have a proper camera, so we could only take short shots and then stitch them together to create the video). Upon watching the final video, we were all happy that it conveys the intended message. The actors and narration are up to Hollywood's best standards, the special effects are on par with made-for-TV sci-fi, and our four-minute script has more content than most one-hour scripts written nowadays.