Input Devices

From CS 160 User Interfaces Sp10


Slides


Readings

Ken Hinckley, Input Technologies and Techniques, in Handbook of Human-Computer Interaction, ed. A. Sears & J. Jacko. Revised online version; read Sections 1-3 and 7-8.

Bill Buxton, Multi-Touch Systems that I Have Known and Loved.


Jason Wu - 2/15/2010 2:41:47

I agree with Hinckley's suggestion that the mouse will continue to be the dominant input device for desktop graphical interfaces. In my opinion, trackballs, joysticks, and touchpads are far inferior to mice, especially for desktop computers that often have resolutions up to 1920x1080 pixels, since the amount of clutching needed for these other devices is exasperating. Even newer touch-based devices, such as the ones mentioned by Hinckley or Buxton, are not really suitable for desktop computing, as people would likely get tired fairly quickly if they had to continuously move their arms across the screen rather than move a mouse in a radius of several inches.

I also found Hinckley's point about keyboard layouts quite interesting. He claims that QWERTY is unlikely to be replaced any time soon, but he also mentions that the Dvorak layout offers about a 5% performance gain over QWERTY. Retraining to use a different layout will certainly be an issue, but the 5% gain adds up over the years. In an 8-hour workday (ignoring the fact that the vast majority of people do not continuously type for 8 hours at work), an experienced Dvorak user could potentially type an amount that would take a QWERTY user an extra 24 minutes to type. Dvorak is even believed to be less likely than QWERTY to cause carpal tunnel syndrome [1]. With Dvorak now supported on most popular operating systems, I don't see why more people don't make the switch from QWERTY.
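
To make the arithmetic behind that 24-minute figure explicit (a back-of-the-envelope check, assuming the 5% speed advantage applies uniformly across the whole day):

    480 \,\text{min} \times 0.05 = 24 \,\text{min}

That is, the text a Dvorak typist produces in an 8-hour (480-minute) workday would take a QWERTY typist roughly 24 extra minutes to type.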


Charlie Hsu - 2/15/2010 13:04:35

I found the "device acquisition time" terminology defined in the Hinckley reading enlightening; from personal experience, I am always trying to find ways to decrease acquisition time between my mouse and keyboard in Windows Explorer. Oftentimes I will find myself trying all sorts of combinations of Tab/Alt/Shift to avoid picking up my mouse, attempting to get an item in Windows Explorer highlighted so I can navigate with the arrow keys. Acquisition time feels like wasted time to me, and I am sure one important component of design in systems involving text input and cursor pointing is to diminish the need to switch devices at all, or to offer alternatives.

I also found the "bimanual input" section in the Hinckley reading useful. I realized that this may be why many commonly used keyboard shortcuts are not only reachable with one hand, but also sit on the left side of the keyboard! (Examples that come to mind are Ctrl+C, V, W, T.) Since most of the population is right-handed, the mouse will be to the right of the keyboard, and a user can copy-paste selected text using both hands, without ergonomic difficulties (imagine if the shortcuts were on the right side of the keyboard).

The idea that caught my eye most in the multitouch reading was the need to be physically viewing a touchscreen in order to interact with it: "if you are blind, you are simply out of luck." This is true, but what of the mention of teenagers who can text without looking, thanks to the mechanical shape of the keys? What if someday touchscreens could provide some way of communicating dynamic heights instead of simply being flat screens?


Divya Banesh - 2/15/2010 18:08:27

This week's reading by Ken Hinckley is about various input devices such as the mouse, keyboards, joysticks, tablets, touchpads, etc. and how these input devices are used to perform various tasks. Recently, I learned about a new technology that uses different input devices innovatively to accomplish various goals. For example, the inventor of the new "Sixth Sense" technology, Pranav Mistry, discusses how he used the rollers from mice (the older type with mouse balls) to create a finger motion detection device. He also discusses how he used a camera and 4 finger markers as input/output devices to bring the computer world into the physical world and vice versa. His technology shows a unique approach to making and using input devices.

http://www.youtube.com/watch?v=YrtANPtnhyg&fmt=6


Alexander Sydell - 2/15/2010 18:54:45

One of the things that surprised me from the readings was the fact that it's very difficult to improve on the time it takes to move to a menu at the edge of the screen on a tablet computer and back. It would seem like context menus that appear right under the finger/stylus would help, but the author brings up the points that it takes time for a user to figure out which command to use in such a menu, and that users use the time spent moving back from the menu to their task to figure out what to do next. A few questions arise for me. Is this because users are accustomed to "normal" menus on their desktop computers, and thus already know where to go in the same menu on their tablet PC, and could the time needed to use the menu decrease if users became accustomed to a new type of menu? Also, I wonder how this will be handled on the iPad - will Apple keep some sort of menu near the top of the screen as in OS X and the iPhone OS, or try something new for this finger-only device?

The other thing that stood out from the readings is how long multi-touch has been researched and developed. It was put into the spotlight by the iPhone, but Buxton's article showed that most of the concepts that Apple used have been around for a very long time. It seems to me that Apple did not improve much on the state of the art of multi-touch devices, but instead did many other things very well (good OS, stylish device, great marketing, etc.) that brought multi-touch into today's devices with them.


Kyle Conroy - 2/15/2010 19:56:59

While these readings covered the basics and history of multi-touch input devices, I feel the pieces failed to cover an important group of devices that offer both a "soft machine" and hardware buttons. The iPhone may only have one hardware button, but all Android phones have four: a menu button, a home button, a back button, and a search button. Over the weekend, I had the opportunity to use an Android device and found myself frustrated with the inability to discover interface functions. Many functions were mapped to the physical buttons, and I was never sure what action a button press would cause. This ambiguity does not happen often on the iPhone, as the physical buttons do the same action regardless of state. That is not entirely true, but pushing the home button in any app will always cause that app to close. I found the ambiguous buttons on the Android frustrating because the buttons' actions were not consistent. If I press the search button in this app, what will happen? If that button were inside the application, I could be sure that it would search within the app, but when the button is moved to the phone itself, one can never be sure.


Wei Wu - 2/15/2010 22:53:52

While Buxton clearly details the cons of a multi-touch device, he only briefly glosses over the pros. One point that he only grazes but that is worth more credit is that "multi-touch greatly expands the types of gestures that we can use in interaction." Multi-touch devices like the iPhone have greatly closed the "articulatory distance," as Hutchins et al. coined it, between the meanings of expressions and their physical form. They introduce a range of hand gestures and motions that imitate actual actions, which were previously impossible to convey through just a mouse and keyboard. For example, to flip through pages on the iPhone, one can run multiple fingers across the screen as if actually turning the pages of a book. The pinching motion for zooming is also a very intuitive motion for the user to understand. Multi-touch has thus created a whole new batch of metaphors between real-life actions and HCI.


Annette Trujillo - 2/16/2010 19:01:53

Regarding the differences between a QWERTY keyboard on a cell phone and a soft keyboard operated by a stylus: the author did not mention that in most cases, I think, QWERTY keyboards on cell phones still demand visual attention, because the user must look at the keyboard to know which button to press and which finger is closest to that button. The advantage the phone keyboard would have over soft keyboards is that over time a user might get used to it and not need to look every time (keyboards usually have a small bump on the F and J keys that helps with typing without looking). With soft keyboards, it is easier to accidentally touch the wrong key if you are not looking, since you won't be aware of the mistouch (the whole surface feels the same when you touch it).


Daniel Ritchie - 2/16/2010 20:14:21

In his article on multi-touch interfaces, Bill Buxton points out the primary downside to the "soft machine" interface approach: the lack of tactile feedback. While a multi-touch display can reconfigure its appearance to mimic any input device, it cannot do the same for the tactile sensation it offers.

However, this lack of tactile feedback may not be a problem for much longer. Recently, I came across an article called "Dynamic Displays: Touchscreens, Meet Buttons" in the Fall 2009 edition of ACM Crossroads. Written by two students from Carnegie Mellon University, the article describes a new approach to touch displays: using pneumatic actuation to physically deform the touch screen into shapes matching, for example, on-screen buttons. This capability allows the user not only to see a button-like icon on the screen but to feel something button-like when she presses down with her finger.

The current implementation is limited in scope: the device is fixed-function, in that it supports only a few buttons at predetermined locations. The idea, though, has a lot of potential. Imagine a future when technology like this has reached a very small scale--perhaps so small that the actuation of each screen pixel can be independently controlled. And imagine that the system were programmable, so that interface designers could synchronize their display elements with tactile elements. Given the above, the system would truly be the "soft machine": able to recreate any input device in both look AND feel.

I'm really excited about the possibilities.


Eric Fung - 2/16/2010 21:46:39

With all the excitement over touch devices becoming a mainstream product in the near future, it was very grounding to read that touch interfaces are not the best solution for all situations. For instance, as the Buxton reading mentioned and as I know from personal experience, it's nearly impossible to operate a touch device without looking at it, and it's only slightly less impossible to use a touch device with one hand--try unlocking the iPhone with one hand while it's still in your pocket. Though this difficulty is a strength when considering its ability to prevent inadvertent actions (an issue with several of the other gesture-based interfaces), it requires the user to shift attention to the screen. I've found that typing on the iPhone keyboard requires a considerable amount of attention, since the screen is a continuous surface and you don't get the discrete feedback you do with individual buttons on a keyboard. Factor in the aggressive typo correction algorithm and the user's typing rate slows considerably.

I think bimanual (or two-handed) interfaces have been somewhat overlooked and have the potential to be more powerful than currently imagined. Considering that controls for first-person shooters--left hand on keyboard, right hand on mouse--can be learned quite quickly and allow for flexibility within the game, perhaps there is a common daily-task situation in which a similar setup would prove useful.


Matt Vaznaian - 2/16/2010 22:23:47

Out of the many input devices I have used, I find the newest MacBook trackpad and the trackball mouse to be the most efficient. The MacBook trackpad has virtually no limitations when it comes to input functionality from a 3" x 3" square. The built-in multi-touch recognizes one, two, three, or four fingers, which allows users to perform far more operations than with a standard mouse. When you add tapping, double tapping, and pinching with multiple fingers into the mix, you get a combinatorial range of functionality. I especially find the ability to perform different operations by clicking on specific corners of the trackpad to be a very innovative feature. The trackball mouse is also a very efficient tool. Its biggest advantage over the mouse is that you do not need to move it around: the cursor input comes from movement of the ball, which provides virtually the same feel as a mouse (two buttons, middle scroll button) without needing the space for cursor movement. With a mouse, I often need to physically pick it up and move it back to a neutral position before moving it again in order to get the cursor from one side of the screen to the other; a simple flick of the trackball takes care of that for me.


Victoria Chiu - 2/16/2010 22:30:49

It seems rather hard not to have any modes for input devices. For example, the input of a stylus can be treated as either ink content or gesture commands. In the reading, one way to switch between ink content and gesture commands is to have the non-preferred hand press a button to toggle between the two modes. This method seems to work better than the mouse state-switching situation mentioned in Raskin's text: by pressing an extra button with the non-preferred hand, the switching act is more likely to become part of procedural memory. It is also important to consider users' comfort. The efficiency of the input is important, but so is considering what is best for the human body; bad postures can often lead to more serious health problems.


Richard Mar - 2/16/2010 23:07:13

The multi-touch overview article made me think of this: http://10gui.com/ A person has ten digits to use for interaction, not to mention the palms and the myriad gestures that can be made with the hands. Handling all those gestures would be quite a hassle, and it could overload the user with command choices.


David Zeng - 2/16/2010 23:18:02

First, I must say that the first reading was a bit confusing because it referenced a lot of things that I didn't know.

As a tablet computer user, I have the option of many different types of input. As the second article said, each device is good for some things and bad at others. When I first got my computer, I wanted to use it as much as possible, so I used the tablet function a lot. As I used it more, I began to be able to differentiate between when I should go into tablet mode and when I should be typing. In fact, one of the most important things was mentioned in the first article: the inability of the stylus to elegantly preserve all the functions of a mouse despite having all the states. Especially prevalent was the right-click function, which needed a noticeable pause to activate, and I often made errors when I first attempted it. Over time, I was able to chunk together different actions depending on whether I was using the stylus or the touchpad. I think some of these problems may be mitigated by screens that accept both a stylus and a finger touch, but there still needs to be significant improvement before we replace the mouse/touchpad as a primary source of movement.


Long Do - 2/17/2010 0:16:02

I did not realize that touch screens had such an extensive history. I feel that the iPhone really helped create this rush towards touch screens, and I think that is for the better. I dislike the fact that touch screens are not as precise and that the user's hand blocks part of the screen, especially when I play games on the iPhone and it forces me to keep a finger on the screen to continuously move. But even with the downsides, a touchscreen helps to bridge the gulf of execution by allowing the user to interact easily with objects on the screen, and his expectations of what will happen are often very straightforward. Learning to use a touchscreen is often easier than learning to navigate through a series of menus with buttons that are overloaded with functions, if my anecdotal evidence from watching my parents use various phones is any indication.


Daniel Nguyen - 2/17/2010 0:18:41

After reading Buxton's article about multi-touch systems, I think that there will be a big market for multi-touch interaction on personal computers. While the Hinckley article cites the mouse as the best pointing device for desktop graphical interfaces, multi-touch offers many features and extended interactions that just can't be done with the single pointer of a mouse. One of the problems Hinckley has with touchscreens, the limited number of states and events, is easily solved by multi-touch due to the increase in the number of ways interactions can be analyzed when multiple inputs are received. Multi-touch provides a much higher level of direct manipulation than the mouse, and once multi-touch technology becomes more widely available at the desktop level, I think it will replace the mouse altogether.


Linsey Hansen - 2/17/2010 1:22:02

So, for starters, I will say that the Bill Buxton article was really interesting and fun. My two favorite older things were, first, the bimanual input, which I initially thought looked cumbersome and confusing but which actually makes sense to me considering how much I use my keyboard with one hand and my mouse with the other; and second, the flip keyboard. Having something like that built into a laptop (or even by itself) would be cool: it could be a keyboard on one side and a tablet-like thing with some shortcut buttons on the other (though I guess you could also just buy a tablet; still, it just seems so fun).

Anyways, I can't say I agree so much with the Hinckley article, since I feel like he was overly harsh and critical of certain input devices. For starters, he mentioned at some point that pen styluses aren't as effective as mice. I have a really cheap Wacom Bamboo tablet (I am pretty sure these were around at the time of the article), and if anything I feel like it is more effective than a mouse for most computer tasks (aside from games). It is small, and each point on the tablet maps to one on the screen, so it is really easy to move around; the stylus has a button on it to serve as the right mouse button, and the tablet accepts finger input as well. The pen input is also more comfortable in a lot of cases, since I don't feel like I am getting carpal tunnel. I suppose that his job is to be critical, but I just feel rather defensive of my darling little tablet (I'm sure many people hate tablets and don't agree, but oh well).


Jungmin Yun - 2/17/2010 9:32:27

Bill Buxton talks about the multi-touch systems he has used and shows the differences between them and their history. Personally, I did not understand how to distinguish touch-tablets from touch screens. Bill Buxton says that it is a difference of directness, but I still don't quite get it; I thought touch-tablets had touch screens. I had assumed touch systems were a pretty new invention, but their history is a lot longer than I thought: touch screens started to be developed in the second half of the 1960s. Through my experience with touch systems, I like them, even though they are sometimes a little annoying because of the kinds of limitations Bill Buxton describes, such as degrees of freedom and pressure sensitivity. For example, I have a MacBook Pro with a touch pad that is a lot different from what my old computer had. I can use anywhere from one to four fingers to invoke a lot of different functions. At first it took me a long time to get used to it, but now I do not even use a mouse at home because the touch pad is really handy and easy to control. One thing I do not like is my phone's touch keys. My phone is a slider and has touch keys instead of physical number buttons. This phone's UI is really bad, so the touch keys are annoying: sometimes the phone gets muted because I touch one of the keys accidentally, and whenever that happens I need to call back. Overall my experience with touch systems is pretty positive. I wish my navigation system had a touch screen.


Weizhi - 2/17/2010 10:17:37

Bill Buxton, in his "Multi-Touch Systems That I Have Known and Loved," reviews the history of computer input devices and gives us some hints of the multi-touch future to come. He outlines degrees of freedom, a concept central to expanding the boundaries of how we interact with computers, which opens up nearly endless possibilities for one-surface computing based on your actions: discrete or continuous, horizontal or vertical orientation, pressure sensitivity, angle of approach, friction, and so on, all influenced by the single or multiple points and gestures you use. It may not be time yet to ditch your keyboard and retire your mouse. But sometime between the iPhone and Surface table-top computing, laptop and desktop multi-touch applications will emerge.


Bryan Trinh - 2/17/2010 11:15:04

I would have to agree with Hinckley that the mouse and keyboard will be a very hard-to-beat physical interface for the desktop personal computer, but there are certain applications that afford the use of other tools, like a tablet and pen for painting. I have found that transitioning between these different input devices has been an issue for me, particularly when I am working on different things at once. Moving from tablet to mouse to keyboard is troublesome because it necessitates moving objects around to fit ergonomically for use.

As touch screens with multi-touch become more predominant in consumer electronics (if Apple lets them), the software that we use with these devices will be better understood and better designed. A very good example of this is the gestures used in many touch interfaces. These wouldn't work so well with a mouse interface, because that would necessitate clutching of the pointer. Even with a tablet-and-pen interface, gestures are just not as intuitive as with the direct interaction of a finger on the screen.


Hugh Oh - 2/17/2010 12:58:15

Hinckley focuses much of his attention on touch and voice inputs. If you focus on a single sense, then human overhead will start becoming an issue, but combining touch and voice will create a more efficient means of input. However, why should we focus only on touch and voice when we have so many other ways of communicating (i.e., body language, eye contact, etc.)? As technology continues to advance, we will figure out how to transform more human-to-human interactions into human-to-computer interactions, thus improving our ability to utilize computers.


Long Chen - 2/17/2010 13:46:36

Input Technologies and Techniques: This Microsoft publication really strives to answer all questions related to design principles and the purpose of input devices. The brief history of pointers was very insightful and lays out a nice pathway for the development of future pointing devices. The design decision of using the mouse over all other input devices, such as joysticks and trackballs, can be debated, and the article provides much useful background information. The particularly useful diagrams in Figures 3 and 4 convey the complexity of user interface difficulties and demands, and these diagrams can even be applied to current-day problems associated with touch input. Sections 7-8 introduced the inherent problems of textual and other methods of input. I never realized how complex this field of inputs was. There are so many design considerations that need to be made before deciding which mode of input is used. Based on the reading, everything has its pros and limitations, and the most important job of a designer is to understand the subtle differences and make an informed decision.

Multi-Touch Systems: Bill Buxton has seen the entire spectrum of development and growth in multi-touch systems, and this article gives a great brief narration of that history. His point about our ocular-centric bias, and how displays are much more advanced than other features of machines, is something I have not been exposed to before. That idea implies that there is much more growth potential in the field of input devices, and it also links back to my previous idea of using the smartphone as a ubiquitous mobile input device. His "Some Framing" segment is definitely helpful and a useful reference for documentation purposes in the future. I particularly liked the section about the limitations all designers face when dealing with touch screens. These difficulties need to be accounted for and understood in order to really design a great product. Finally, I feel that just reading the brief history of multi-touch sensors is incomplete, due to his long-nose-of-innovation concept. Much new technology is under incubation by private companies or the government and won't be revealed to the public for some time. Although the equipment already out there is exciting, there are already so many more prototypes being developed that could lead to a paradigm shift in input devices. Who knows how people will interact with their electronic environment in the near future.


Jonathan Hirschberg - 2/17/2010 13:53:30

The need to design an interface that allows a more direct match between the user's tasks and the low-level syntax of the individual actions, as discussed in the input technologies and techniques reading, is much like bridging the gulf of execution. In either case, you want a more direct match between the user's goals and the low-level means to carry them out, so that tasks are more intuitive and easier to perform. Bridging the gulf between user goals and the way they must be specified in the system can be done by changing either the user side or the system side. The reading on input technologies and techniques advocates changing the system side by choosing and designing a device. Since the rest of the article describes the pros and cons of each input device and the situations each is appropriate for, choosing the right device is important: it relates to the need to take advantage of affordances, the idea that certain input devices are better suited for certain situations.


Vidya Ramesh - 2/17/2010 14:00:05

The second reading, Input Technologies and Techniques, dealt with the different technologies that can be used for input and the techniques that work best for each of them. The author brings up a very interesting point about elemental tasks. He explains that an elemental task is a small, self-contained task that is usually put together with other elemental tasks to create processes and complete larger tasks. He points out that from the user's perspective a series of elemental tasks may seem like a single task; for the program, however, the task is made up of multiple elemental tasks. This makes me wonder whether the program is taking the wrong approach by building larger tasks out of elemental ones. Maybe it would be better not to have such small tasks, and rather have them preprogrammed to occur together? Considering the rest of the reading and its discussion of text entry, though, it seems unlikely that the current system, which works rather well, will change. It is interesting to point out that both the first and second readings note that it is hard to use touchscreens without grabbing the user's visual attention, which makes it very unlikely that touchscreens will replace keyboards and other physical entry devices.


Michael Cao - 2/17/2010 14:24:30

The article on input technologies and techniques talked about different forms of mobile text entry. While I do believe that text entry on a mobile device is slower than on a normal computer keyboard, it also depends on how accustomed a user is to the device. I'm sure there are users out there who can type faster on a mobile device than others can on a keyboard. Also, it seems foolish to expect that typing on a mobile device can ever be as fast as typing on a normal keyboard. Mobile devices are meant to be small, otherwise people wouldn't carry them around, so it's understandable that users can only use a few fingers when typing on them. This is in comparison to keyboards, where users can use all their fingers to type, making them much faster. I also feel that keyboard text input will always be much faster than the pen and gesture input the reading mentioned. It would be extremely tedious for a new user to learn and memorize so many different gestures just to be able to write a note or a phone number. Not to mention the user would only be able to use one hand at a time to input text, whereas QWERTY keyboards allow you to use two.


Jordan Klink - 2/17/2010 14:34:17

I found this to be a much easier read than the selections from "The Humane Interface," so it was a most welcome change. It was also very enlightening to see the history behind different forms of user interfaces and the sheer multitude of different devices that exist or have previously existed. The reading has opened my eyes to all of the design choices that are available to me as a developer, and to the fact that practically every device ever made has its own unique interface that must be considered when developing for it.

Furthermore, when considering which platform to develop on, I have an enormous number of devices to choose from, and it will be difficult to pick the one best suited for my application in mind. Perhaps one day there will be so many hardware choices, or the hardware will be so flexible, that the tables will turn in this software-hardware battle, and I'll be able to develop my program first and then customize the hardware itself to run it in the most optimal manner.


Wei Yeh - 2/17/2010 14:50:22

I found quite a few things in the reading amusing. First, I was surprised to learn that touch technologies have such a long history, especially multi-touch. Most people would readily believe that Apple invented multi-touch, which is rather sad; I just wish credit would more often go where credit is due. Second, Hinckley suggested that physical keyboards on a mobile device are superior to software keyboards. I disagree -- although physical keyboards provide tactile feedback, they are likely slower to type on than a software keyboard once the user gets used to the latter. Finally, I found it interesting that Hinckley claims the QWERTY keyboard is good, that it is "unlikely to be supplanted by new key layouts." I have read many articles online claiming that QWERTY was specifically designed to slow typing down so that typewriters could keep up with the user. The suggested alternative keyboard layout is often Dvorak.


Jeffrey Doker - 2/17/2010 14:56:07

I really enjoyed the input device paper because it was concise and precise about things that people are often very vague and redundant about. In other readings, a concept would be described using imprecise language and discussed redundantly for several paragraphs, whereas in this paper I felt that I got a simple but clear explanation of a huge number of concepts in a very short space.

I have been curious throughout this course about what sorts of mathematical metrics exist to measure aspects of interface usability, and I was delighted to finally see some actual equations, such as the power law of practice.

I also appreciate the level of detail used to parse the subtle but crucial differences between mice, tablets, and touchscreens--differences that, as far as I can tell, are largely overlooked in many applications and interfaces. My instinct is to assume that this is a brand-new emerging field; however, the writings on Bill Buxton's webpage lead me to suspect that this precisely defined and analytical view of interfaces has probably been around for quite some time.

I am particularly taken with Buxton's mantra that every interface is the best for some task and the worst for another. I am extremely wary of new interfaces that claim to "eliminate the need for a mouse/keyboard/physical book," without fully analyzing the costs of abandoning these tried and true interfaces. The issue that bugs me the most is the idea that a digital reader is an absolute upgrade over a physical book. Proponents ignore the fact that with a book the user has 10-finger interaction capabilities for marking and navigating pages, for example. I would love to talk about this with Bill Buxton, or if not him, then with our class.


Kathryn Skorpil - 2/17/2010 15:16:17

Both of these readings show how important it is to consider the input device and input capabilities of the users when we design our projects. Particularly for the course project, since we are developing on the iPhone, we have several input options: touch-based, motion-based, and camera/video-based input. These inputs offer separate challenges, but it appears that for certain applications they work very well and are easy to learn quickly. Most of the applications that have failed in the App Store (at least based on the many reviews I have read) tend to lack originality in these areas or have made the inputs overly complicated.


Bobbylee - 2/17/2010 15:38:06

Before reading Bill Buxton's paper, I had never thought that touch screens could have so many drawbacks, as he mentions in the section "There is no free lunch".

I used to think that touch screens would eventually replace the mouse and stylus in a short period of time. However, after reading this paper, I know that touch screens also have some restrictions. For instance, blind people cannot use a touch screen, while a mouse might be more useful for them. Or maybe one day the screen will shrink to a size at which our fingers cannot make accurate movements, in which case a stylus might be useful.


Angela Juang - 2/17/2010 15:41:09

The topic of tablets is pretty interesting, because from an interface point of view it's important to consider the sensitivity of the tablet to strokes that the user makes with a finger or stylus. From personal experience, I've seen that tablets are often extremely sensitive to small motions, which can cause writing to look messy, especially since the surface of a tablet is smoother than paper. In this case I think it would be better for the tablet to have a bit more tolerance for changes in position so that writing comes out smoother. When painting on the computer, however, higher sensitivity can be important for more delicate techniques. The desired sensitivity seems to depend on what the user is using the tablet for, but the people making the tablet need to know what to support when they create it. Is it the application's job or the tablet driver's job to control the tablet's sensitivity level?


Nathaniel Baldwin - 2/17/2010 16:15:44

Because both of today's readings were largely enumerations of various interface techniques and their pros and cons, it's hard to comment terribly substantively on them. It was interesting - for some reason, I had never thought of Microsoft as having a "research" division, but of course it makes sense for them to. I also had no idea that research into multi-touch interfaces had such a long history. I did notice that there was no discussion here of non-physical interaction methods (assuming you categorize speech as somewhat physical), such as methods for controlling things directly with your brain. I suppose such methods are in their relative infancy, and as such would not fit in well with a discussion of existing, practical methods. One thing I will say is that as far as the "jack of all trades, master of none" concern that Buxton brings up, I personally find that a trade-off I'm willing to make for the sake of consolidation. For example, I have a programmable remote with a small LCD display that can show different labels for various buttons depending on their current functionality. It doesn't do a perfect job of being a remote for any of the devices in my home theater, but the benefit of having only one remote around (and not having to switch between remotes to control different devices) is well worth it.


Joe Cadena - 2/17/2010 16:26:57

With American pop culture buying into the touchscreen craze, devices such as the iPhone, iPod Touch, Droid, and iPad are raking in sales. But as Bill Buxton mentioned in his multi-touch writeup, a compromise between "soft" and "hard" touch input devices is essential in order to address the requirements and "likes" of the user. Also, regarding the practicality of having two pointing devices for a single computer, I believe multi-pointing devices would be helpful for multitasking and data interaction. For example, my idea of full data immersion stems from the interface John Anderton used in "Minority Report." His use of both hands and all fingers to interact with data was an efficient use of available tools.


Calvin Lin - 2/17/2010 16:44:53

Touch-based input technology has certainly been on the rise, and devices have been selling like hotcakes all over the world. However, there is a great challenge in having software adapt to this new input model. As seen with current devices and their applications, usage is simplified to more basic tasks, and interfaces are simpler, with a narrower and more directed focus. How will complex programs and interfaces such as Photoshop, movie editing software, and even word processors adapt? Apple is trying with iWork for the iPad, but users immediately reacted to how difficult and cumbersome text entry seems.

As tech improves, we get bigger and bigger screens. People like things to be bigger. But bigger screens mean more physical movement and thus more fatigue. What makes the mouse and keyboard still relevant is the ratio of compact physical movement to quickness of tasks on the screen. What I wonder is: will touch-based devices reach a plateau, where anything bigger or more complex will be too difficult for users to handle since they have to be more physical?


Boaz Avital - 2/17/2010 16:48:05

Hinckley: I'm not sure what the reading means by "clutching". Does it mean repositioning your hand? I was glad that the section on speech was realistic. Speech recognition for interacting with your computer was supposed to be the "next big thing" for a very long time, and now, even though actual speech recognition is excellent, it's still not a widely used method for interacting with your computer. The paper mentions that this is partially because it's non-private, which is correct, but I believe it's also because 1) people are uncomfortable speaking to their computers, even (or especially) when they're alone, and 2) when you're using silent input devices, you can do one thing while thinking about the next thing you need to do. When you're speaking, you need to focus a lot more on what you're saying, i.e., your current action, and your actions become very serialized in your head and don't flow easily.

Buxton: I'm pretty sure Apple has patent rights to the pinch gesture. If it's true that the gesture was published before, they shouldn't be allowed to patent it. The rest of the article was an interesting overview, and I was pleased to see an acknowledgment and acceptance that interfaces are not perfect: they are designed to be good at some things and so are necessarily bad at others.


Saba Khalilnaji - 2/17/2010 16:50:00

When reading about the tablet PC pen with its 3-state transitions, I thought: why not have a button on the pen so you can know when to drag and when to move the cursor? Then I quickly realized this would just end up being a mouse, and hard to use, so I suddenly understood why multi-touch is so useful: you can mimic those 3 states with different numbers of fingers on the screen, with many more possible states. Furthermore, by the power law of practice, input devices should not have steep learning curves or they will ward off new users. The steep curve of the current keyboard is what keeps my mom from using computers. Technically, keyboards were the beginning of multi-touch technology, and the steep learning curve associated with them is partly explained by their age. Also, one interesting fact about touch technology is that your fingers are not transparent. We cover what we are looking at when we press a keyboard or touch an item on the touch screen. This can drastically increase the time it takes to perform the next action, because we have to remove our hand to see the screen again and then place it back over the screen. This is avoided when the button is out of the way of the interface, like on the edge, but most common iPhone apps have this interesting issue that never crossed my mind.


Mohsen Rezaei - 2/17/2010 16:56:13

Arguments about multi-touch vs. single-touch, soft keyboards vs. hard keyboards, and similar user interactions have been around for a while, and which devices users prefer always factors into these discussions. One person would argue that they sometimes lose track of where the mouse cursor is, while on a touch screen they could just point to a place and do the same action faster and more easily. Others would argue that it is hard to keep our hands up in the air to work with touch-screen/multi-touch devices, while we could rest our arms and hands on a keyboard and desk and do the same work more efficiently. Moreover, even though a lot of tablet computer users have complained about the hardships of using tablets, companies like Apple announce their newest tablet with even more software-oriented functionality. As we read in "Multi-Touch Systems that I Have Known and Loved," the history of touch-screen/multi-touch devices goes back to around the 1970s, and the problems have been pointed out; but still, depending on what users can adopt, it seems that multi-touch devices might be taking user interfaces and human-computer interaction to the next level.


Spencer Fang - 2/17/2010 17:02:06

It is very easy to see the limitations of each type of input device when their interactions are formalized into finite state machines. I have not thought of them in this way before.
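
For anyone who wants to play with those diagrams: Buxton's three-state model, which the Hinckley reading uses, distinguishes state 0 (out of range), state 1 (tracking), and state 2 (dragging). Below is a minimal Python sketch of the idea; the device names and transition tables are illustrative only, not a model of any real driver.

    # Buxton's three-state model of graphical input, as sketched here:
    # states: 0 = out of range, 1 = tracking, 2 = dragging.
    TRANSITIONS = {
        "mouse": {                    # a mouse never leaves range: 1 <-> 2
            (1, "button_down"): 2,
            (2, "button_up"): 1,
        },
        "touchscreen": {              # a direct touchscreen senses 0 <-> 2:
            (0, "touch"): 2,          # no tracking state, hence no hover
            (2, "release"): 0,
        },
        "pen": {                      # a proximity-sensing pen hits all three
            (0, "enter_range"): 1,
            (1, "exit_range"): 0,
            (1, "tip_down"): 2,
            (2, "tip_up"): 1,
        },
    }

    def run(device, start, events):
        """Replay an event sequence and return the final state."""
        state = start
        for event in events:
            state = TRANSITIONS[device].get((state, event), state)
        return state

    # The pen passes through tracking on its way down; the touchscreen
    # jumps straight to dragging, which is why hover-dependent techniques
    # (tooltips, cursor preview) are hard on touch-only devices.
    assert run("pen", 0, ["enter_range", "tip_down", "tip_up"]) == 1
    assert run("touchscreen", 0, ["touch"]) == 2

Writing the model out this way makes the limitation mechanical: any interaction technique that needs a state a device cannot sense simply has no row in that device's table.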

It is interesting that efforts have been made to detect the angle of the user's finger in a multitouch interface. Aside from specifying vectors in 3D space as the reading suggested, this could be very useful in table-top interfaces and other large displays where multiple users can interact with the interface simultaneously. The software can potentially use the angles of the touches to figure out which inputs came from the same user. This information could let each user have his or her own "Undo" stack, even if multiple users are working in close proximity to one another.

The reading stated it was unfortunate that no systems take advantage of bimanual input for tasks like positioning/scaling or selection/navigation. I think this interface need is adequately addressed by mice that have scroll wheels or trackballs built in.


Richard Heng - 2/17/2010 17:10:55

I found the representation of mouse clicks as states in Hinckley's article to be an interesting way to think about pointing devices. I disagreed about the lack of flexibility of a pen-based input device. The main argument against it was the complexity of adding a right click. I thought a good solution to the problem would be to simply add a button on the pen that the user could press, so that when the user puts the pen down, a different click is interpreted. You could simply ignore the clicks in other situations, as modern computers do with untimely right clicks.


Tomomasa Terazaki - 2/17/2010 17:13:14

To be completely honest, I did not enjoy reading either of the articles that much. Our other articles were usually analyses of user interfaces, but these articles were mostly facts, and the authors did not really talk about what they think could be changed to make things better.

The first reading talked about input devices, which are things like mice, keyboards, or even touch screens. These are basically the things that users rely on as the hands of their laptop. In real life we write with pen and paper, but we use keyboards for computers. It was a little boring when the article started talking about how doing things on a computer is faster than by hand, because some of these things are just so obvious. Reading the article, I believe there will soon be something much easier to use than mice or touch screens. I think in the future there will be computers that catch the user's eye movements and click on the screen wherever the user is looking. This way, people with disabilities or the elderly can use computers easily. Another idea is talking to the computer and having it type or do whatever the user wants to accomplish. Saying something is faster than typing it, so this would make the work even faster.

The second reading focused on multi-touch systems. Multi-touch is famous today on gadgets like the iPhone and iPod Touch, but the system was created back in 1984. This article talked about the many factors the computer uses to determine what the user is trying to do with their fingers on the screen. I was always surprised by touch screens because they know what users are trying to do by how we touch the screen. Maybe in later years how hard the user presses the screen will come into play (as in video games, where how hard the user presses a button changes the action). After doing the project, I now have even more respect for the first people who created this system, because it is revolutionary and making it actually work was very difficult.


Chris Wood - 2/17/2010 17:20:32

New touch technology definitely has its positives and negatives, as both of this week's readings suggest. I have found from personal experience that a touch interface's attempt to simplify life often only serves to cause frustration and irritation. The iPhone OS is the best touch interface I have ever used, for many reasons: the screen is prominent, the touch is very responsive and accurate, and the required actions are realistic and easy to perform. This goes along with the points covered in the second article, where many of the key aspects of a good touch interface are bulleted. The bullets "Degree of touch/Pressure sensitivity" and "Angle of Approach" both exemplify why the iPhone interface is so successful.


Wilson Chau - 2/17/2010 17:20:32

Hinckley: One overlooked part of interface design might be input technologies and techniques. There are so many of them that it is worth looking into the pros and cons of each and trying to understand the different uses of different inputs. One of the sections that interested me was Section 7, on keyboards and text entry. By analyzing the different WPM rates that different inputs offer, we can better understand what types of inputs we should attach to different devices. If we are choosing an input for something that will primarily be used for writing long documents, nothing beats a keyboard, which offers a much higher WPM than other input devices.

Buxton: This was an interesting article that went through all the different types of inputs in chronological order, from the first input, the keyboard, to the latest, multi-touch sensing. It is also interesting how in-depth Buxton goes into the art of inputs: the different aspects that must be taken into account, like the degree of touch or even the angle of approach.


Raymond Lee - 2/17/2010 17:26:46

One thing that strikes me is that despite three decades of research into multi-touch, it has not been that widespread until relatively recently. I think the ability of multi-touch devices to act as several types of device, simply by presenting a different image, is already present in devices like the iPhone and Android phones with apps. Perhaps it is the versatility of this aspect, in addition to improving touch technologies, that is making multi-touch so widespread. Despite the advances in multi-touch, I believe it will be difficult for multi-touch to supplant the mouse in a standard, extended-use productivity setting, due to the fatigue that might set in with an interface that uses more muscles than the mouse and keyboard.


sean lyons - 2/17/2010 17:27:13

Obviously, all of the input devices covered in Buxton's "Multi-Touch Systems that I Have Known and Loved" are haptic. Nearly all of the input devices that exist are haptic, from the touch displays he covers to mice, keyboards, trackballs, joysticks, and light guns. Are hand-controlled input devices universally better? Is there a defined hierarchy of usability among input devices, which handicapped users must traverse down until they find a device they can suitably manipulate? A foot mouse seems as though it would be more intuitive and easier to control than the breath-based or blink-based input devices that paralyzed users are constrained to, but neither seems particularly desirable for someone who can use their hands. Relatively esoteric devices, such as EEG headsets, look like they show promise, and given the rapid ability of humans to accommodate themselves to the tools they possess, it seems likely that they could function better than any of the other devices that require physical input. Are there any other devices that suitably meet Hinckley's usability criteria and are physically capable of matching the input speed of mice and keyboards?


Andrey Lukatsky - 2/17/2010 17:33:56

Hinckley spoke about Procedural Memory and cited the example of a keyboard. I found it particularly interesting how he used the power law of practice to suggest that the benefits of a Dvorak layout come at a large time cost. However, I didn't quite understand what values he plugged into the equation to come to this conclusion. Perhaps it can be discussed in class.
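
For reference, the equation Hinckley is drawing on is the standard power law of practice from the HCI literature; his exact fitted constants are not given here, so the interpretation below is only illustrative:

    T_n = T_1 \cdot n^{-\alpha}

Here T_n is the time to perform the task on the n-th trial, T_1 is the time on the first trial, and \alpha is an empirically fit learning-rate constant (values around 0.2 to 0.6 are typical). Because improvement slows as a power of the number of trials, a long-time QWERTY typist sits far out on the nearly flat part of the curve, and a switcher to Dvorak must re-climb the steep early portion before the roughly 5% asymptotic gain pays off.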

Buxton distinguished between multi-person vs multi-touch. Previously, I hadn't considered such a distinction - primarily because when I hear "multi-touch" I think of a small screen. In what kind of applications can such a distinction be useful? At first glance, it seems that if you design with a single-person in mind (perhaps with more than 2 hands), you shouldn't have to worry about this.


Geoffrey Wing - 2/17/2010 17:36:35

The Hinckley reading definitely illuminates some concepts discussed in the Contextual Inquiry readings. The QWERTY keyboard was designed to slow typists down, because jams often occurred. However, today many people type very quickly with QWERTY keyboards, so this seems to be an example of habituation on a bad interface. Also, for tablet PC inputs, the stylus allows for some direct manipulation--circling a paragraph and drawing a line to a new location is a clear example of the verb-noun model. On the other hand, tablet PC input creates modes (ink content vs. gesture), so it is not the best UI, according to Beyer and Holtzblatt.


Vinson Chuong - 2/17/2010 17:50:35

As discussed by Bill Buxton, multi-touch interfaces have the potential for high-bandwidth, direct interaction with users. When I read the earlier piece that discussed direct interfaces, I wasn't sure exactly how the idea of directness mapped to software applications, probably because there were few mainstream general-use applications featuring direct interaction. I see now, given all of the history behind multi-touch interfaces, that a truly direct interface may not be so abstract. Going further, we may soon see 3D holographic interfaces with full support for every hand/body gesture imaginable. Such an interface could be truly direct.

As things stand now, it feels like today's multi-touch interfaces are merely toys, trivially mapping gestures to pre-determined functions. We have yet to see a multi-touch interface that really gives us extra degrees of freedom in our interactions--as opposed to mimicking one or two mouse pointers.


Dan Lynch - 2/17/2010 17:51:57

The author Bill Buxton discusses the various technologies that have been developed, along with the timeline and progression of these devices. One important point he makes about devices such as the iPhone is that they approach being a "jack of all trades but master of nothing." The idea of "all look and no feel" is also a big deal. This is true, but I have to say that the iPhone does have a feel to it that is quite unique: the fact that a user can use various touch strokes to interact with a "soft" device is itself the "feel." Maybe there are ways to come up with devices that do both. I thought it was very interesting that electronic music machines predated the PC (very cool).

The second article discusses different input technologies, tasks, and the interactions needed to complete those tasks. It first compares a pen to a mouse, using state machine diagrams to show why a mouse supports more functions than a pen. I think this method of analyzing the states and functions of input devices is very thorough: it lets you detect possible errors in a systematic way, as opposed to stumbling into them randomly. Another part of the article that I liked is the brief discussion of the keyboard and procedural memory. Not having to think about an action, and being able to focus on the topic or idea, is critical in order to transmit your thoughts to others. This is also important for user interfaces--being able to interact without thinking about the interaction itself and focus on the task at hand.


Jeffrey Bair - 2/17/2010 17:52:50

I find it interesting that the reading by Bill Buxton mentions that we have had touch screens for a while, yet we have gone backwards in terms of feel and have emphasized LOOK much more. This is especially the case with things like the iPhone and the iPad: there is no tactile feedback, which has alienated some older users who are used to feeling their buttons rather than seeing them pressed. The BlackBerry Storm was a phone that was supposed to bring back the idea of tactile feedback with its springy screen. However, the idea never really caught on; that may not be entirely due to the feedback feature, but the feature was certainly not enough to save the phone from quickly falling by the wayside. I believe that because we are so used to not touching and damaging our screens, it is difficult for people to come to terms with the idea of tactile feedback on a screen that supposedly breaks quite easily if pushed too hard. With this idea ingrained in our minds, I find it hard for us to break the habit of not expecting tactile feedback on touchscreens.

In Ken Hinckley's article I realized that there have been a multitude of different input devices over the years. With each new improvement, there are people who are not satisfied with the new direction input devices are taking. It is a never-ending see-saw of change, since what one person considers an improvement may prove to be a drawback for another user. Remembering that touching, holding, and moving items is the basis of all human interaction is the true foundation we should keep in mind when designing new input devices.


Jessica Cen - 2/17/2010 17:55:08

The touchscreen device that I have been using most regularly now is my iPod Touch. At first I thought it was going to be easy to use, since it only requires touches from my fingers and both input and output share the same screen. However, I noticed that a touchscreen doesn't have all the same functions as a mouse, such as hovering. I am used to hovering over a button to see its description before I click on it, but when I see an unknown icon on my iPod Touch, I don't know what it does until I tap on it. So I understand what Hinckley is saying when he mentions that even though the mouse and a touchpad each support only two states, it is still difficult for both devices to share the same interaction techniques. Furthermore, I agree with Hinckley when he says that sensors are a promising design space (p. 41). The accelerometer on an iPod Touch makes it easy to switch the screen to landscape mode whenever I feel the keyboard is too tight for writing. It's also amazing how Buxton gives us a chronicle of multi-touch devices and how technology has advanced to sophisticated devices such as the iPhone. I am really grateful that the iPhone has multi-touch capability, since it allows the user to perform more functions, such as zooming by sliding two fingers apart on the screen. This can't be done with either a stylus or a mouse, and with a mouse it would take time to change the cursor from pointer to zoom.


Andrew Finch - 2/17/2010 17:56:56

In "Input Technologies and Techniques", Ken Hinckley implies that the mouse is the best input device for desktop PC's for a number of reasons--its movements translate well to the cursor on the screen, it is moved perpendicular to the axis that it rests on, and it remains in the same place when you let go of it. These are all valid points and most likely do contribute to why the mouse has become the most popular pointing device, but Hinckley neglects the factor of user acclimation. Mouses were one of the most technologically feasible pointing devices that could be manufactured at the time they got popular, and all users of computers with GUI's were forced to use them, and they got used to them as a result. If multi-touch trackpads or some other pointing device was the standard device shipped with every computer, then users would have gotten used to that, and it might've become the most popular input device.


Brandon Liu - 2/17/2010 17:57:31

The first reading briefly discussed transfer function gain for input devices. For me, this is the most important aspect of an input device, so I was surprised the paper glossed over it. For example, on a full-screen touchscreen device one has to exert a lot of motion to reach different parts of the surface, while with a mouse a minimum of movement is required to move across the entire screen. The paper says that, experimentally, gain doesn't make a difference in the time to perform pointing movements, which was also a surprise; I would have expected higher-gain devices to result in faster input.
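
To make "gain" concrete: control-display (CD) gain is the ratio of display motion to device motion, and pointer acceleration makes that ratio a function of device speed. Here is a minimal sketch; the curve shape and constants are made up for illustration, not any operating system's actual profile.

    # Display motion = CD gain * device motion, where the gain itself
    # depends on how fast the device is moving (pointer acceleration).
    def cd_gain(device_speed_mm_s):
        """Control-display gain that grows with device speed, clamped
        between a floor (for precision) and a ceiling (for fast travel)."""
        base, slope, lo, hi = 1.0, 0.02, 1.0, 8.0
        return max(lo, min(hi, base + slope * device_speed_mm_s))

    def pointer_delta(device_delta_mm, dt_s):
        """Map a device movement over dt_s seconds to display movement."""
        speed = abs(device_delta_mm) / dt_s
        return cd_gain(speed) * device_delta_mm

    # Slow, precise motion stays near 1:1, while a quick flick is
    # amplified -- this is how a small desk footprint covers a big screen.
    print(pointer_delta(1.0, 0.1))    # 1.2 mm of display motion
    print(pointer_delta(20.0, 0.1))   # 100.0 mm of display motion

A direct touchscreen, by contrast, has its gain pinned at 1 by the very directness that makes it appealing, which is exactly the trade-off the comment above points at.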

Second reading: I was interested in the aspects of touch other than contact and position. For example, the TrackPoint device on IBM ThinkPads is basically a friction/force vector input, but it's stationary. It would be interesting to see this merged with a multi-touch screen.


Yu Li - 2/17/2010 17:57:44

We use input devices every day without much thought; things such as a mouse, or even a pencil or pen, can be considered input devices. We have also come a long way since joysticks and trackballs. Today we have touch-sensor input devices in our cell phones, but the ability of an input device also depends on (and is sometimes constrained by) the UI it comes with. I have a Bamboo tablet, and it's a great input device, but it's only as good as the system and underlying hardware it runs on. For example, if I had a slow computer or my computer didn't have access to Photoshop, there would not be that many things I could use my tablet for. Additionally, the design of the input device makes a big difference in how it is used. A large touch screen is a great input device, since it registers all the user's motions, but if it is designed to hang on a wall, then the amount of time a user can actually use the device is very limited (human physical constraints).


Mikhail Shashkov - 2/17/2010 17:59:19

One idea that struck me while reading Bill Buxton was his summarizing remark that "my general rule is that everything is best for something and worst for something else," given that contexts and interactions are so quickly evolving and changing.

This brought me back to modes. Despite their purported evil, I think one previously unconsidered benefit of touch inputs over device (keyboard, mouse) inputs is that you could have different ways of interacting with the same UI (even on the same view) for different users, depending on their culture, context, and so on. Basically, the idea would be to have custom UIs (views and input options).

Has this been researched at all?


Alexis He - 2/17/2010 17:59:47

Bill Buxton reasons that the gap between display technology and touch technology is largely due to humans being ocular-centric. I disagree, because I have observed that young children (especially toddlers) tend to use their sense of touch much more than sight: toddlers will naturally touch their toys and put them in their mouths. I think the reason touch I/O has largely lagged is that computers have always been driven by business needs, where information needs to be visualized rather than touched. Perhaps if the gaming industry drove the market more, touch capabilities would be more advanced?


Sally Ahn - 2/17/2010 18:04:22

The discussion about multi-touch input devices was interesting. The multi-person vs. multi-touch distinction suggested an interesting realm of user interfaces where many people can work on a single large surface. Such a device could be useful for meetings and could encourage collaboration.


Arpad Kovacs - 2/17/2010 18:12:34

I agree with Bill Buxton's assertion that a single cursor is beneficial in a multi-touch technique. 10/GUI (http://10gui.com/background/) is a recent HCI concept that combines multi-touch and bimanual input modalities to increase interaction bandwidth. Unfortunately, using all 10 fingers gives 20 possible degrees of freedom of continuous action, which could detract from the user's ability to focus on a single locus of attention. Furthermore, the key to the concept is using multiple fingers at once to perform gestures, rather than point-based manipulations. Combined with the fact that the touchpad is separate from the screen, the focus on displaying all 10 fingers at once causes a lack of directness and focus compared to traditional single-point touchscreens.


