Engineering Interfaces I: Layout, Widgets and Events
From CS160: User Interface Design Sp12
- Event Handling. Building Interactive Systems. Chap 3. Olsen. Read through page 66.
Douglas Treadwell - 2/19/2012 15:31:03
I found the discussion of the evolution of event handling and the reasons for that evolution to be very interesting. I haven't previously seen a relatively comprehensive history of event handling, from event dispatch loops to event listeners to delegates. In addition, the chapter gave a summary of method dispatch techniques from procedure calling to method maps to virtual tables. On the other hand, we've already seen similar ideas to the "model of interaction" from the first part of the reading, so that part didn't add much additional value.
Lichen Han - 2/22/2012 23:37:11
This week's reading provided extensive coverage of event handling. It was particularly interesting to explore the methodology behind how event handling is implemented. One of the points that struck me is that there is no actual main method or program. The user defines the steps that must be executed in the main program that correspond to the goals in their mind. I thought this point is very significant because it explains the flexibility built into event handling. To be more specific, event handling is implemented so that the software can be expected to behave reliably independent of the order in which events arrive. The event handling implementations and issues discussed later in the reading all tie back to this point: how to represent events and views such that users are allowed to execute events freely and with consistent results.
Raphael Townshend - 2/24/2012 23:54:23
I was already familiar with the concept of Event Listeners and inheritance/interfaces, but the rest of the discussion was entirely new material for me. Especially enlightening was the discussion of the weaknesses of listeners and how delegates could be used to focus events in a more efficient manner. Also, the historical discussion of event loops and event tables helped clarify the nuts and bolts of event handling, and how these were built upon to create today's state-of-the-art systems. The final chapter on interpreted expressions was probably the part I had the most trouble with, as I had no idea such a system existed or how it would work, and I had to do additional research to grasp the concept.
Elena Gasparini - 2/25/2012 14:48:27
The user must cross the gulf of execution in order to tell the computer what she would like to get done. She must cross the gulf of evaluation in order to understand the information given in the presentation by the computer. Dan Olsen argues that there is a common model or pattern of human-computer interaction. He then describes multiple events, event handlers, listener classes and listener mechanisms.
Hywel Lo - 2/25/2012 16:39:29
After reading the "Event Handling" article, I now understand more about how the design process works for different event handlers. Before this reading, I didn't know there were so many different kinds besides the ones I have coded in web development, such as mouse and keyboard interaction. What interested me were focus, inheritance and the event loop, which allow programmers to do many different things to make the user interface easier to use. Event handlers help us organize the main screen into many sub-classes, which gives a sense of organization and hierarchy. Without them, everything would be a mess, and performing actions on the computer would not resemble the everyday actions that we as users can easily identify with. By binding such event handlers, we can do many new things that will help improve future technologies.
Whitney Lai - 2/25/2012 19:12:36
I love that front-end development and design engineering typically involves the event listen/trigger model. This is true in both web development and application development and definitely caters to how users interact with the actual interface. It isn't a standard software program someone might write in an intro CS class, where the main() method just executes automatically and then terminates after performing some function. A well-designed application must listen for and react appropriately to user input to give a good experience.
Tobit Narciso - 2/25/2012 19:36:16
This reading begins with an introduction to the model-view-controller architecture, which is used in designing programs and user interfaces. The model is a representation of the information stored in the computer, the view is how this information is displayed to the user, and the controller receives the user input and dispatches the necessary instructions to the model or the view. The reading also talks about how the gulf of evaluation can exist between the user and the view, and likewise the gulf of execution between user and controller.
The second part of the reading talks about event-handling and how it is used in interface design. It talks about windowing systems and different methods of extracting input events. It gives examples for these methods through code snippets that explain how they work.
Jessica Chou - 2/25/2012 20:35:56
All human-computer interaction begins with a model of the information, a view that presents the model's state and displays updates in accordance with changes, and a controller that feeds user input to the model. The user perceives the presentation and forms a mental model, but this could be different from the actual model due to the gulf of evaluation. The three problems related to event handling include 1) receiving and dispatching events to the correct application/window, 2) associating events with the code to process them, and 3) updating model changes so that the presentation can be redrawn. The windowing system provides a separate drawing interface for each view, and its tasks include updating the display, receiving input events from the OS, and dispatching them to widgets.
Jonathan Sulistio - 2/25/2012 22:34:44
Event handling has greatly evolved to accommodate a rapidly increasing number of events. The majority of these events are not directly generated by input devices, but rather result indirectly from actions performed by the input devices. More specifically, operating systems, widgets, and other software bring about these indirect event creations. Over time, the number of events increased from just about ten or fifteen to several thousand, which makes sense considering that as software became more robust and complex, more user input options and consequently more events would transpire. This in turn led to growth in memory use as virtual tables became larger in order to store more events, which then necessitated new, more efficient event handling mechanisms in the form of generator and listener objects.
Kenneth Do - 2/26/2012 0:26:24
Brandon Young - 2/26/2012 7:10:51
The Olsen reading is a description of different types of events and their implementation, particularly in object-oriented programming, as well as a general description of the "translate-change-notify-present" cycle common to all interfaces. It covers events using various types of physical interfaces, such as mice, and the windowing system, which allows many applications to run simultaneously. The first interfaces I made were by playing around with Pygame, where I used the "very primitive" switch statement mechanism described on page 53. I rewrote the program using an object-oriented paradigm later because it was easier to follow the program logic, but I wasn't aware until this reading that this caused the program to run faster as well. Although I originally thought it was inconvenient how the methods invoked with click events had to be explicitly created for buttons in WPF, after reading how events are programmed in many Windows applications I see it is better to avoid dead code.
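The "very primitive" switch-statement mechanism mentioned above can be sketched roughly as follows. This is an illustrative Python sketch with a made-up Event class, not Pygame's real API:

```python
# Switch-style event dispatch: every event type is matched in one big
# if/elif chain (hypothetical Event class, not Pygame's actual API).

class Event:
    def __init__(self, kind, data=None):
        self.kind = kind
        self.data = data

def run_event_loop(events, state):
    """Process a sequence of events with switch-style dispatch."""
    for ev in events:
        if ev.kind == "MOUSE_DOWN":
            state["clicks"] = state.get("clicks", 0) + 1
        elif ev.kind == "KEY_DOWN":
            state["last_key"] = ev.data
        elif ev.kind == "QUIT":
            state["running"] = False
            break
        # every new event type means editing this chain by hand --
        # the maintenance problem that object-oriented dispatch avoids
    return state

state = run_event_loop(
    [Event("MOUSE_DOWN"), Event("KEY_DOWN", "a"), Event("QUIT")],
    {"running": True},
)
```

The long if/elif chain is the "switch"; rewriting it as methods on widget objects is the object-oriented reorganization described in the comment above.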
Alvin Chang - 2/26/2012 13:18:10
The article's model of HCI closely relates to the action cycle described in class. By combining these ideas and bridging the gulfs of execution and evaluation, a better design can be created through iteration. The key focus explained in the article also matches the user's locus of attention as described before.
Benjamin Le - 2/26/2012 13:18:41
I thought this reading was very informative in teaching us how certain programs handle events. It would have been very useful to read at the start of class, before we began implementing interfaces. A question I have about the reading is: how do web servers handle events? I believe the Rails framework uses a similar approach, where controllers connect view elements to underlying code, but I wish the author had touched a bit on how PHP and Rails handle web events.
Darren Sue - 2/26/2012 14:31:58
I'm glad that this reading went over the gulf of evaluation and execution again. Sometimes I would have difficulty separating the two. The gulf of evaluation is the difficulty in forming the correct mental model for the system. The gulf of execution is the difficulty in conveying tasks to the system.
I am handling focus in a manual way for programming assignment 3. Obviously it would be much easier to do it in the object-oriented way outlined in the readings, although I don't know whether the Kinect API offers this functionality, so manual might be the only option anyway.
Anyhow, many of these specific event discussions are not relevant due to the Kinect's lack of state. The discussion of mouse down and mouse up is more relevant to conventional inputs, and may help explain why HCI is so eager to discount nontraditional inputs. I am also running into some problems with gesture recognition, so there may be that too.
Eugenia Lee - 2/26/2012 16:05:30
The reading was on event handling and window/widget systems. In general, event handling involves receiving user input (such as key strokes or mouse clicks) and sending that information to the correct window, running the corresponding code to handle that event, and updating the view accordingly. In a windowed system, a certain widget has key focus, and a user should be able to use key strokes to transfer the focus to other widgets (such as tabbing through a form to fill it out or hitting enter to submit) without having to reach for the mouse. There are multiple ways to bind code to events; a simple example would be a switch-case statement, or a window event table.
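A window event table like the one mentioned above can be sketched as a lookup keyed by window and event type. All the names here (windows, handlers) are illustrative, not from any real windowing system:

```python
# Sketch of a window event table: the windowing system indexes a table
# by (window, event type) to find the handler code to run.

def on_button_click(event):
    return f"button clicked at {event['pos']}"

def on_text_key(event):
    return f"typed {event['key']}"

# the "table" the windowing system consults for each incoming event
event_table = {
    ("ok_button", "MOUSE_DOWN"): on_button_click,
    ("text_field", "KEY_DOWN"): on_text_key,
}

def dispatch(window, event):
    """Look up and invoke the handler bound to this window/event pair."""
    handler = event_table.get((window, event["type"]))
    return handler(event) if handler else None

result = dispatch("ok_button", {"type": "MOUSE_DOWN", "pos": (10, 20)})
```

A switch-case statement does the same binding inline in code; the table version moves it into data, so handlers can be registered without editing a central dispatch routine.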
Bei He - 2/26/2012 16:39:44
Interface elements interact with each other through events that mostly originate from user inputs. A large number of interfaces use separate windows that pass events and call each other's event handlers to pass information. Some of the common events include basic mouse and keyboard events as well as interface-specific events such as window resizing. These events are then passed along in queues or similar constructs.
Yu Gan - 2/26/2012 18:58:08
Shu-Chen Chen - 2/26/2012 20:26:31
The reading on Olsen’s interactive systems focused heavily on the interaction between the user and the “View”, or the computer screen. For every interface you have the computer model and a mental model, and the important part of the interface is making sure the two models can accurately interpret each other. For example, the book uses a thermostat: making sure the user knows what the current temperature is, what temperature they are setting the system to, and that the system understands the command and is acting on it. The reading also explains the different systems of user interaction with widgets and mouse and key focus. It is important to let your user know which window they are actively interacting with and where the cursor or functional buttons are. For example, when a save window pops up, you don’t want the user to accidentally hide the window or make changes by clicking on the original screen. By controlling the focus of the mouse and keyboard, you can keep the user in the correct interface screen and improve the usability of the product. The reading also talks about event handling and how to keep events and windows correctly handled during use. There are many events that a user could trigger in a widget that may or may not interact with other widgets. There are multiple ways of handling this, such as Callback, WindowProc, and Inheritance.
Samuel Zhu - 2/26/2012 20:58:28
User interaction is very important, and how you handle events is important as well. I thought this reading taught a lot about the fundamentals of handling these interactions. It is a fairly basic breakdown, but it is all relevant, if a bit dry.
Shuqun Zhang - 2/26/2012 21:06:49
All human-computer interaction has a Model-View-Controller architecture. The model is the information to be manipulated, the view is the presentation of the current state of the model, and the controller is what translates user input into model changes. One main challenge of programming user interfaces is event handling. There are three main issues that occur in event handling: 1) receiving events from the user and dispatching them to the appropriate application (windowing system), 2) matching up an event with the correct code to process that event, and 3) notifying the view of model changes so it can change the presentation accordingly. The windowing system is responsible for providing each view with its own window, making sure the views get updated with any model changes, and receiving input events and dispatching them to the appropriate widgets. There are many types of input events, one for each type of input, such as buttons, mouse, keyboard, other windows, etc. There are several ways events get bound to code. The early Macs used event queues and type selection, the GIGO/Canvas system used windowed event tables, X Windows used callback event handling, Smalltalk-80 used inheritance event handling, and, more relevant to modern technologies, Java uses listeners, where each event has its own listeners which 'listen' for specific events to trigger.
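The listener idea mentioned at the end can be sketched in a few lines: a widget keeps a list of registered listener objects and notifies each one when its event fires. This is an illustrative Python sketch of the pattern Java popularized, not Java's actual API:

```python
# Sketch of the listener pattern: a widget keeps a list of registered
# listeners and notifies every one of them when an event fires.

class Button:
    def __init__(self):
        self._listeners = []

    def add_action_listener(self, listener):
        """Register a callable to be invoked on every click."""
        self._listeners.append(listener)

    def click(self):
        # simulate the windowing system delivering a click to this widget
        for listener in self._listeners:
            listener("button clicked")

log = []
button = Button()
button.add_action_listener(log.append)                      # listener 1
button.add_action_listener(lambda msg: log.append(msg.upper()))  # listener 2
button.click()
```

Unlike inheritance-based handling, no Button subclass is needed: any number of independent objects can observe the same widget.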
David Squeri - 2/26/2012 21:13:42
I can consider myself fairly familiar with the Windowing System, but the reading’s description of top-down vs bottom-up vs bubble-up strategies managed to confuse me for a little bit, as I couldn’t figure out what the difference between the three actually was. I finally understood what was being said when I realized that the author describes every subsection of a window as its own window. Therefore a bottom-up strategy would send a mouse click to whatever item (or window) was directly underneath the mouse, while the top-down approach would first send the mouse signal to the frontmost window (referring to the program that is actually being run) and then that window (or program) would decide what to do with the signal. I’m still not sure what bubble-up means… I’m also very glad that I can’t remember a time when I was forced to use a computer that didn’t implement key focus. That sounds awful.
Kurtis Freedland - 2/26/2012 21:30:37
I think that modeling humans as computers is really interesting. It allows us to understand how they might interact with other "computers" and how our users will process the data that we deliver to them. From these analogies, we can design systems that best fit human beings. I think that event-based models for human interaction are probably the most accurate, as I find myself focusing on things that happen in my environment. Outside stimuli influence me to make the majority of my decisions, and this is exactly what event-based programming seeks to model.
Arturo Wu-Zhou - 2/26/2012 21:33:22
Three issues in event handling: demultiplexing user input and dispatching those inputs to the correct application/window; associating an event with the correct code to process it; and notifying the view to change its presentation. There are different ways to dispatch an event: bottom-up, top-down, bubble-out, and focused.
Bottom-up: many inputs are associated with the mouse. The event is directed to the lowest, front-most window (the one that the user perceives to be directly under the mouse location). If that window has no use for the event, it propagates the event up to its parent, and so on until a window is found that can use it.
Top-down: similar to the previous one, except the event is passed to the front-most window, and this window decides how to dispatch the event to its children (not its parent, as before). This gives the application software more control over what happens.
Bubble-out: used when there is no clear nesting of windows; this is sort of a combination of the previous two. The window tree is traversed top-down, and when one of the objects can use the event, it propagates a notification back up the tree.
Focus: (1) Key focus, used with a keyboard: windows are associated with the key focus, and when a keyboard event fires, the window with the key focus gets that event. (2) Mouse focus: when a mouse-down event fires, the window under the mouse at that moment gets all subsequent input from the mouse.
Input events generally carry three pieces of information: the event type (button/mouse, keyboard), the mouse/pen location, and some modifier button information.
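Bottom-up dispatch with parent propagation can be sketched as a small window tree. The window names and the `handles` sets here are illustrative, not from any real toolkit:

```python
# Sketch of bottom-up dispatch: the event first goes to the window the
# user perceives under the mouse; if that window can't use it, the
# event propagates up to its parent until some window handles it.

class Window:
    def __init__(self, name, parent=None, handles=()):
        self.name = name
        self.parent = parent
        self.handles = set(handles)   # event types this window can use

    def dispatch_bottom_up(self, event_type):
        if event_type in self.handles:
            return self.name          # this window consumed the event
        if self.parent is not None:
            return self.parent.dispatch_bottom_up(event_type)
        return None                   # nobody in the chain handled it

root = Window("root", handles={"RESIZE"})
panel = Window("panel", parent=root)
label = Window("label", parent=panel)   # the window under the mouse

# the label handles nothing, so a RESIZE propagates up to root
result = label.dispatch_bottom_up("RESIZE")
```

Top-down dispatch would invert this: the root receives the event first and decides which child to forward it to.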
Binding an event to the code that processes it:
Event queue and type selection: the programmer is responsible for binding events to code.
Window event tables: the windowing system indexes a table in order to bind an event to code, with the handler code at static addresses.
Callback event handling: same as event tables, but the addresses of the handler code are registered dynamically.
WindowProc event handling: same as the table approach, but with only one entry per window; each window has its own procedure whose switch statement handles the events that window cares about.
Inheritance event handling: an object-oriented way of dispatching events; each widget/window subclasses some base class and overrides the methods for the events it handles.
Object-oriented event loop: similar to the previous one.
Listeners: the same idea, but there is no enforced class hierarchy, so no new widget subclasses need to be created just to handle events.
Model/view notification: the controller processes the event and notifies the model; the model changes itself and then tells the view to change; finally, the view tells the windowing system to update its presentation.
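The model/view notification chain can be sketched with a minimal observer setup. The thermostat-flavored names are illustrative:

```python
# Sketch of model/view notification: the controller changes the model,
# the model notifies every attached view, and each view redraws.

class Model:
    def __init__(self):
        self._temp = 20
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_temperature(self, value):
        """Called by the controller after translating user input."""
        self._temp = value
        for view in self._views:        # notification phase
            view.model_changed(self._temp)

class View:
    def __init__(self):
        self.display = None

    def model_changed(self, temp):      # "redraw" phase
        self.display = f"{temp} degrees"

model = Model()
view = View()
model.attach(view)
model.set_temperature(23)   # the controller translating a user action
```

Note the model never names a concrete view class; it only calls `model_changed`, which is what keeps model and presentation decoupled.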
Lida Wang - 2/26/2012 21:47:23
I thought it was interesting to read about the why and how of event-driven progragmming because it helps make sense of what we've been doing with WPF. It was also interesting to read up on the history of event binding and it was cool to learn how C# does event-binding, through virtual tables. The pros/cons of various event binding techniques (listeners, interpreted expressions, delegate event model, etc) was also good to read up on, as it helps to understand the inner workings of the abstraction that you are working on top of.
Connie Guo - 2/26/2012 22:23:03
In an interactive model, the program must effectively deliver information regarding the state of a process to the user through the view. The concept of event handling supports this process, yet there are fundamental challenges associated with it. Events must be correctly interpreted from the user and delivered to the correct application, be processed by the right code, and finally be reflected in the presentation view.
WenJie Zhou - 2/26/2012 22:34:41
Reading response for "Building Interactive Systems", Chap 3.
Through this reading, I learned the anatomy of an event. An event encapsulates the information a handler needs to react to input: the event type, such as mouse moved or key down; the event source, such as the input component; a timestamp, which records when the event occurred; and modifiers, such as Ctrl, Shift, Alt.
The reading also introduced the event dispatch loop: an input event such as "mouse moved (t, x, y)" goes into the event queue, a queue of input events. The event loop, run in a dedicated thread, then 1) removes the next event from the queue, 2) determines the event type, 3) finds the proper component(s), 4) invokes callbacks on the components, and 5) repeats, or waits until an event arrives. The component's callback method updates the application state and requests a repaint if needed, which finally reaches the output device, such as the screen.
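The dispatch loop steps above can be sketched as follows. The event shapes and the component mapping are illustrative assumptions:

```python
# Sketch of the event dispatch loop: remove an event from the queue,
# determine its type, find the matching component, invoke its callback.

from collections import deque

def make_loop(components):
    """components maps an event type to that component's callback."""
    queue = deque()

    def post(event):                 # e.g. "mouse moved (t, x, y)"
        queue.append(event)

    def run():
        handled = []
        while queue:                        # 5. repeat while events remain
            event = queue.popleft()         # 1. remove next event
            kind = event["type"]            # 2. determine event type
            callback = components.get(kind) # 3. find proper component
            if callback:
                handled.append(callback(event))  # 4. invoke its callback
        return handled

    return post, run

post, run = make_loop({"MOUSE_MOVED": lambda e: ("moved", e["x"], e["y"])})
post({"type": "MOUSE_MOVED", "x": 3, "y": 4})
results = run()
```

A real toolkit would run `run()` forever on a dedicated thread and block waiting for new events; this sketch drains the queue once so the result can be inspected.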
Benjamin Shapiro - 2/26/2012 23:02:22
This week's readings provide a short survey of event handling in design frameworks (specifically software). The event handling mechanisms are all described as occurring within a Model-View-Controller framework much like (or rather, exactly like) those used for entire modern-day web systems (common MVC frameworks include Django, JSP, and Flask, to name a few).
MVC frameworks seem like a useful way to describe design implementation and event handling, but they also seem very implementation specific. The description in the reading gets very detailed about software because most of the MVC metaphors it uses to describe how events get triggered map exactly onto the software implementations it uses. However, MVC isn't the only framework out there. I would be interested in finding out if there are other design and event-handling "frameworks", that is, ways of thinking about event handling that are not so implementation-dependent. Is this the only mode of thinking when it comes to event handling or are there other alternative ideologies?
Huan Do - 2/26/2012 23:02:34
This reading covers a lot of material. I would argue that some elements of this reading may be less relevant and might be better left out: for example, the runtime organization of programming languages that use interfaces and virtual tables, or the concept of reflection. I believe this reading would be better off if it stuck to the MVC model, events and delegates.
Chenkai Gao - 2/26/2012 23:03:49
The reading talks about the architecture of interactive systems. The software presents its state through the "view". Not only must the view present the state of the model, but it must also update its presentation in response to model changes from the user or some other activity. In event handling, there are three issues. The first is the process of receiving events from the user and dispatching them to the correct application/window. The second is that, eventually, an event must be associated with the correct code to process it. The third is the problem of notifying the view and the windowing system of model changes so that the presentation can be correctly redrawn. There are also various event handling mechanisms. The last part of the article talks about how object-oriented languages have implemented event dispatching.
Lu Cheng - 2/26/2012 23:45:22
The model-view-controller model provides useful layers of abstraction between the user and the computer. I thought that this was a good concept for keeping clean code and maintaining separation between the user and computer, so that the backend model design is distinct from the user interface design. This article brought up interesting points about how focus is important to user interface design and about some of the events a programmer needs to bind to make a user interface more interactive and provide more user feedback. For example, bringing an object into focus helps the user understand that they are invoking, or about to invoke, an action that has to do with that object.
Camilo King - 2/26/2012 23:49:59
The main focus of this reading was the topic of event handling and various approaches to this process. There are different kinds of events such as input events (i.e. the user doing something with an input device) or even button events. Much of the reading consists of information on window events (the result of having a window based system). In addition there are various types of event handling such as callback, windowproc, and inheritance handling. I found it quite interesting how the reading made several references to Java and its interface.
Joseph Schadlick - 2/27/2012 0:06:33
Event handling is composed of receiving events from the user, relaying them to the proper part of a program, associating the event with the correct code that deals with it, then notifying the user that the event has been processed via some form of display. This process can be done in numerous ways. One way is through a windowing system. In an operating system, it is common that the front-most window will receive the focus, that is, accept and interpret the events given to it by the user via an input device. An application must also take into account the type of input event it is receiving. A common way of binding code to an event is through a program-wide while loop that has different cases to deal with the different events that can occur during the life of the program. For more complicated applications, listeners, or objects that wait for certain events to be generated, are preferred.
Rohan Cribbs - 2/27/2012 0:12:41
This reading focuses primarily on events and their functions. The reading explores various implementations of event handling and in addition discusses their weaknesses and strengths. The reading also implicitly forms a good model for event handling and the aspects which can improve efficiency. Numerous examples and explicit code are used to make points in the reading.
Brennan Polley - 2/27/2012 1:23:47
My individual programming assignment 3 had a lot of event handling in it. I mean a lot! So I found chapter three of the reading interesting. It discusses three issues that often occur in event handling: receiving input from the user and properly dispatching the information to the application, associating the input information with the correct code necessary to process it, and updating the view and windowing system once the information has been processed. These input events usually carry three main pieces of information: the event type, the location of the input device where the event occurred, and "some modifier button information." These input events can come from the mouse clicking, mouse moving, keyboard, etc. The reading then goes into each of the three event handling issues and the event input types. It definitely would have helped to have read this before starting project 2, but a good read nonetheless.
Rohan Ramakrishnan - 2/27/2012 1:51:08
I thought the section regarding Key Focus in this week's readings was particularly interesting for several reasons. First off, the fact that losing focus is actually part of how the program creator must handle key focus events was not immediately obvious to me, and while the reading could have phrased that more clearly it made sense when it started talking about tabbing away. Also, I thought it was odd that while Olsen noted that using the Tab key exclusively for switching key focus was a bad idea because word processors (or rather, word processing software) use the tab key for indentation as well, he didn't mention the simple fix that is enabled on virtually every modern computer of using the Alt+Tab combo to switch focus, which negates any chance of accidentally tabbing away while intending to do something else like indentation entirely.
Jingwei Qi - 2/27/2012 1:56:02
The reading first talks about the architecture of interactive systems, the MVC model. Then it talks about input events like button events, mouse movement events, keyboard events, window events, etc. Later, it talks about event handling, which basically covers callback event handling, windowproc event handling and inheritance event handling.
Minzhi Zhao - 2/27/2012 2:02:27
All human-computer interaction has the architecture Olsen describes: it begins with a model of the information to be processed; the information is then presented to the user; the user perceives the information and then formulates a series of actions. In addition to the HCI architecture, Olsen also introduces three main issues in event handling: the process of distributing events to the correct application after receiving them from users; associating an event with the correct code to process it; and the problem of notifying the view and the windowing system.
Sahana Rajasekar - 2/27/2012 2:24:20
This week's reading was very relevant to individual programming assignment 3. I enjoyed how the event handling process was split into 3 main groups or issues. The first is that the windowing system has to detect and recognize user input. I had some trouble getting the Kinect to recognize a button press, so this topic was very interesting to me. Second is connecting the event to the code that will perform the necessary action. Visual Studio made this pretty smooth for me. Lastly, the windowing system must change the initial screen to accommodate changes from the action.
As I mentioned above, most of these ideas directly related to the work I've been doing for 160 and my research work.
Peter Beardshear - 2/27/2012 2:42:32
The reading provides an informative overview of the event-driven design paradigm, and the MVC architectural style. However, for those familiar with the notion of callbacks, event binding, handlers, and delegates, the article was primarily review. It would have been more compelling if the author had placed more emphasis on the user interaction benefits of event-driven programming, rather than its technical implementation.
Sally Lee - 2/27/2012 2:49:27
This article talks about the architecture of interactive systems and how people respond to them. Then it explains the input process of event handling. The only experience I have is with event handlers in Java when I was creating GUIs. The reading was pretty consistent with my own event handling experiences in Java. The article was an interesting read, but it didn't really help me with any particular project that I'm working on for this class. Also, I thought the reading was more abstract and harder to read than the other articles from the past.
Jeffrey Yu - 2/27/2012 3:13:51
In Event Handling, the author goes into great detail about how event handling works and the input process. It was very comprehensive, covering everything from widget placement to event queues. Since I have experience with Java event handlers and the ActionListener class, the material in the reading was not completely new. I thought the reading was pretty straightforward, like when it discussed different events (e.g. button events, etc).
Neel Rao - 2/27/2012 3:25:21
There are many aspects of event handling to keep in mind. There is the actual appearance and reaction to the user and also the framework where the code is handled. In a windowed system each view has its own separate interface. Users are provided with feedback when changes to each view are made. Maintaining good practice with element 'focus' will keep users on track and less confused. For example, using tab to switch between form elements changes the focus and lets the users keep their hands on the keyboard.
There are also many ways of handling events on the code side: event tables, callbacks, and event listeners can each be implemented for different styles of handling. There are downsides and advantages to each method. For example, in an advanced interface module event listeners are typically used because they scale better as the number of events grows. The downside is that it's not always easy to figure out the best way to handle the events in relation to other code.
Kelvin Jie Lam - 2/27/2012 3:28:15
From the reading, I found it interesting how many of the terms described in the article are also a part of visual studio such as event handling, input events, etc. The view, controller, model system is also interesting as I’ve also seen that in databases as well. While this view seems to encompass how we interact with systems now, I wonder if there will be a significant enough change in the future that may alter this interactive system model that we have right now.
JinWoo Roh - 2/27/2012 3:32:59
This week's reading talks about event handling during the design process. The author mentions that there are many benefits of event handling, and I especially liked the window events. I agree with the statement that every interactive system has a windowing system that is responsible for the sharing of screen drawing and input device resources (51). From this week's reading, I learned a lot about event handling and its importance.
Yuki O'Brien - 2/27/2012 3:45:50
This reading touched on many concepts that I have encountered in this course as well as in other programming experiences I have had, making it quite relevant. Different models of event handling were presented, which made me think more deeply about exactly how WPF manages events, as well as how the iPhone deals with events. Although the syntax is different, both use multiple threads to handle UI events. The view/model/controller concept of computer interaction has been presented to me before and seems to be the gold standard for building a user interface, and for good reason.
Robert Marks - 2/27/2012 3:48:10
Today's readings covered a tremendous amount of information about the structure of event handling in various operating systems and languages. I was interested to see that, of the dozens of languages that can handle events, there are comparatively few ways to handle them. It was also interesting to see how the different languages implemented things such as bubbling up of events and delegation. I had trouble understanding the diagrams for virtual methods in the C++ explanations. I wish the article had provided an example of C's use of procedure pointers, rather than just mentioning that it exists.
I found the use of a magic elf rather odd, but it did convey the decoupling of models, views, and controllers in the MVC style.
Ahmed Afifi - 2/27/2012 3:58:58
User interfaces are often difficult to program because they don't have a central program, but instead have many windows that are driven through event dispatching. It can be difficult to tell which object on the screen is supposed to receive the event, especially with a keyboard. Focus comes into play here, allowing the user to have an idea of which object is activated. There are various types of event handlers, and they are all useful for different things.
Matthew Leung - 2/27/2012 4:01:23
"Event Handling" talks about different models of event handling. Callback event handling has each window store procedures under simple string names, which become callbacks; the window can then access whichever piece of code it needs in order to handle an event. Another example of an event handler is a listener, which takes user feedback from input devices and performs the correct (and usually simple) operations. The reading also talks about the Gulf of Evaluation and the Gulf of Execution and why a good design bridges both gaps so that users are both comfortable and proficient with the design.
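The callback-by-string-name idea mentioned above can be sketched as a small table per window. This is a hedged illustration: the event names "open" and "close" and the `CallbackTable` class are made up for this example, and the policy of silently ignoring unknown names is just one possible choice.

```java
import java.util.HashMap;
import java.util.Map;

// A window's callback table: event names (plain strings) mapped to
// the code that should run when that event arrives.
public class CallbackTable {
    private final Map<String, Runnable> handlers = new HashMap<>();

    public void bind(String eventName, Runnable handler) {
        handlers.put(eventName, handler);
    }

    // Dispatch an incoming event by name; unknown names are ignored
    // here (a real toolkit might log or raise an error instead).
    public void dispatch(String eventName) {
        Runnable h = handlers.get(eventName);
        if (h != null) h.run();
    }

    public static void main(String[] args) {
        CallbackTable window = new CallbackTable();
        window.bind("open",  () -> System.out.println("opening document"));
        window.bind("close", () -> System.out.println("closing document"));
        window.dispatch("open"); // prints "opening document"
    }
}
```

The weakness the reading attributes to this style is visible even here: the binding between event and code is a runtime string lookup, so a typo in a name fails silently rather than at compile time.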
Bhavik Singh - 2/27/2012 4:09:01
I think this reading did a good job of explaining the "gulf of execution" and "gulf of evaluation". What I found most interesting was the possibility of writing a controller that can "understand all possible human forms of communication and translate them accurately". What would that sort of controller even entail? The computer would have to interpret not only the 5 human senses, but also more subtle things like human emotions and implied meanings.
This discussion expands further to event handling. Our current event handling is based solely on very few inputs such as a keyboard and mouse. With applications such as the Kinect we can expand this to include gestures as events. I wonder if in the future computers will have the ability to handle more random events such as me walking into a room, or falling asleep. This also brings up an interesting question "For computers to be able to interpret all human input and listen for all possible events, will this require them to always be on and watching us all the time?"
Pedro Tanaka - 2/27/2012 5:45:43
In the selected excerpts of “Building Interactive Systems: Principles for Human-Computer Interaction”, Dan R. Olsen Jr. discusses various programming paradigms that support the architecture of interactive systems. The author starts by defining a simple model of interaction in which the user perceives the system through a view and expresses his or her will through some sort of input device. Inside the machine, the message is translated and interpreted, and the result is then communicated back to the user. In chapter 3, Olsen discusses the underlying programming concepts that are used to program interfaces.
Sherman Ng - 2/27/2012 6:21:52
The idea of using the events an average user experiences to drive application development is, surprisingly, rarely taught early. While teaching proper programming and functionally correct code is important, the other half, building the right thing, is a skill that deserves just as much study. This piece illustrates that point by showing that even the simplest schemes can benefit greatly if designed with the user in mind. The fact that software development companies are moving toward better user satisfaction through the agile design cycle illustrates this as well. Event handling, the abstraction of a user who provides arbitrary input that must be processed, shows that even though one can never perfectly fulfill every single user need, a development team can build a good product by giving users what they want.
Yian Shang - 2/27/2012 6:24:44
The view should be separate from the controller. Event handling apparently follows an MVC model, similar to various forms of web programming that I've seen before. Events can include any type of input, which the system then processes.
Praneet Wadge - 2/27/2012 6:35:30
I found this reading fairly dry but nonetheless useful due to the level of technical detail it provided, such as its discussion of event tables and queues. I especially found useful the explanation of event handlers and listeners, as even after countless computer science projects, I have never received a full-fledged explanation of their mechanics. One question I had out of pure curiosity was how the database of listeners is implemented; if done relationally, which seems natural, I believe designers could gain good insights by studying this database while developing interfaces, as they could find the most efficient combinations of event handlers.
Andrew Wun - 2/27/2012 7:14:32
As there is no true main program, there must be event handling for whatever the user decides to do. Each application shares screen space and input devices but maintains its own private world in which to function. Handling focus, mainly between key and mouse, is needed in implementing the windowing system, with input events of all types arriving at any time. The screen-keyboard-mouse approach currently dominates because of all the tools and techniques mapped onto it, but other models are maturing. There is a challenge, however, in associating a design with the code that processes its events. Object-oriented techniques simplify windows and their widgets to allow direct implementations of events and methods. An event involves two types of objects: generators that produce events and listeners that want to be notified. There are many event/code binding mechanisms in various languages to help develop user interface design tools.
Kate Greenwood - 2/27/2012 7:22:25
This reading discusses the general structure of human-computer interaction as always being derivable from the basic model-view-controller architecture. This structure just says that there is always first a "view" that displays the current state to the user, there is always some way of "modeling" the state of the program so that it can be interpreted by the computing facilities of the computer, and finally there is a "control" by which the user may change the current state of the system to match their desired state.
I think this is a really helpful way of understanding and breaking down the potentially ambiguous task at hand when an engineer sets out to build something that is truly accessible to users. As a very visual person, this helps me greatly.
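The model/view/controller split described above can be sketched in a few lines of Java. All the class names here are invented for illustration; the point is only the shape: the model holds state and notifies observers, the view renders on notification, and the controller translates user actions into model changes.

```java
import java.util.ArrayList;
import java.util.List;

// Model: owns the state, knows nothing about views or controllers,
// and notifies registered observers whenever the state changes.
class CounterModel {
    private int value = 0;
    private final List<Runnable> observers = new ArrayList<>();

    int getValue() { return value; }
    void setValue(int v) { value = v; for (Runnable o : observers) o.run(); }
    void addObserver(Runnable o) { observers.add(o); }
}

// View: subscribes to the model and redraws when it changes.
class CounterView {
    private final CounterModel model;
    CounterView(CounterModel m) {
        model = m;
        model.addObserver(this::render);
    }
    void render() { System.out.println("count = " + model.getValue()); }
}

// Controller: a user gesture (say, a button press) arrives here and
// is translated into a model update; the view updates indirectly.
class CounterController {
    private final CounterModel model;
    CounterController(CounterModel m) { model = m; }
    void increment() { model.setValue(model.getValue() + 1); }
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        new CounterView(model);
        new CounterController(model).increment(); // prints "count = 1"
    }
}
```

Notice that the controller never talks to the view directly; decoupling them is exactly what lets several views share one model.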
Ashley Hsu - 2/27/2012 7:30:51
In the chapter about Event Handling, the reading talked about the different ways that events can be handled by different systems. One system that was discussed is the windowing system, which I thought of as being like the setup in many photo editing programs, such as Photoshop. There are many different windows on one screen that relate to different functions, and each click of the mouse might relate to a different window, selected either by window focus or mouse focus. What I found interesting in the reading was the section on "event/code binding," because the logic of linking one event to another and looking forward as well as backward to see what may come next has always been tricky for me.
Timothy Zhu - 2/27/2012 7:32:23
Regarding the description of event handling approaches, I felt some information was missing that would have made it a comparison rather than a list. If a plain global event queue is so fast, why does no one use it? (Because it's a pain.) C# delegates are faster than Java listeners because you don't have to construct an anonymous class. But how fast are they compared to earlier systems? By what order of magnitude? Which ones does everyone use nowadays (or as of the time of writing), and why?
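For reference, the "plain global event queue" being asked about is the oldest style the reading covers: one loop drains a queue and dispatches on event type in a single big switch. A minimal sketch, with invented event kinds, shows why it is "a pain" — every new event type means editing this one central routine:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class EventLoop {
    enum Kind { KEY, MOUSE, QUIT }

    // A bare-bones event record; real systems carry much more data.
    static final class Event {
        final Kind kind;
        final String detail;
        Event(Kind kind, String detail) { this.kind = kind; this.detail = detail; }
    }

    // The classic main loop: pull events off the queue and dispatch
    // them in one centralized switch until QUIT arrives.
    static void run(Queue<Event> queue) {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            switch (e.kind) {
                case KEY:   System.out.println("key: " + e.detail);   break;
                case MOUSE: System.out.println("mouse: " + e.detail); break;
                case QUIT:  return;
            }
        }
    }

    public static void main(String[] args) {
        Queue<Event> q = new ArrayDeque<>();
        q.add(new Event(Kind.KEY, "a"));
        q.add(new Event(Kind.MOUSE, "click (10, 20)"));
        q.add(new Event(Kind.QUIT, ""));
        run(q);
    }
}
```

Dispatch here is a single branch, which is why it is fast; the cost is that all handling logic funnels through one place instead of living with the widgets it concerns.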
Can Zhang - 2/27/2012 8:19:58
This is one of the more CS-heavy readings of the course so far, though it mostly describes how to work with a windowing API, which in this case looks suspiciously like Java. It teaches how to handle event-based dispatching and handling, as well as the concept of focus. Interestingly, there are many different kinds of event handling, which I did not expect. I thought one model would have been more than sufficient.
Danube Phan - 2/27/2012 8:25:09
The biggest issue with user interfaces is that there is no "main program". Instead, the "main" is just the initial steps of the interface that the user is interacting with. The user must learn how to interact with the user interface in order to reach his or her goals. If a user wants to do something, such as play music, he or she can interact with windows that help navigate to the correct window and eventually reach the goal. This is a windowing system, which is responsible for ensuring correct application/window navigation by receiving input from keys, mouse, buttons, etc.
Christopher Nguyen - 2/27/2012 8:42:41
Today's reading discusses the Model-View-Controller abstraction present in event handling. The first part talks about users understanding the interface ("Gulf of Evaluation") and then users carrying out a plan ("Gulf of Execution"). Problems can arise when users do not correctly evaluate the affordances of the interface. The second part of the article talks about three issues that arise in event handling: receiving and dispatching input, associating an event with the correct code, and then updating the view of the changes. For me, the most interesting discussion was the talk on the windowing system to handle the first issue, and when it was appropriate to use a bottom-up, top-down, or focused windowing system. For instance, a top-down approach was good for writing interactive test cases. At the same time, many of the technical details gave more insight into how the MVC model applies to Kinect programming (i.e. with listeners, event handlers, etc.)
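The bottom-up, position-based dispatch mentioned in the comment above can be sketched as a containment search over the window tree: given a mouse position, find the deepest window whose bounds contain it. All names and geometry here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class Dispatch {
    static class Window {
        final String name;
        final int x, y, w, h;                     // bounds in screen coords
        final List<Window> children = new ArrayList<>();
        Window(String name, int x, int y, int w, int h) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
        }
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // Recursively prefer the deepest child that contains the point;
    // if no child does, the event goes to this window itself.
    static Window pick(Window root, int px, int py) {
        if (!root.contains(px, py)) return null;
        for (Window child : root.children) {
            Window hit = pick(child, px, py);
            if (hit != null) return hit;
        }
        return root;
    }

    public static void main(String[] args) {
        Window frame  = new Window("frame",  0,  0, 200, 200);
        Window button = new Window("button", 10, 10, 50,  20);
        frame.children.add(button);
        System.out.println(pick(frame, 15, 15).name); // prints "button"
    }
}
```

If the picked window declines the event, a real system would then "bubble" it up through the parents found along this same path.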
Eric Mao - 2/27/2012 8:52:36
The reading shed new light for me on how to design interfaces that grasp the user's attention. One must synchronize what the user intends to do with the interface with what the designer wants the user to perform. In the big picture, human-computer interaction comes down to very specific, subtle, and often overlooked interactions such as mouse movements and keyboard actions. To be a successful designer, one must be meticulous in analyzing the gamut of possibilities users may take.
Jessica Miller - 2/27/2012 8:53:41
This reading discussed the basic principles behind event handling for building user interfaces. The four main approaches to event-based systems include bubble-up, bubble-down, focused, and bubble-out. These systems use a window-based setup to keep the state of the program and ensure that the correct output is displayed. The article also discussed the different ways that input can be detected. For example, when using a button to detect an event, clicks on the button indicate whether or not to raise an event. OnClick and doubleClick are two different mouse click events. Mouse movement like hovering or mouseDrag can set off certain events as well. In addition, the article finished with a discussion of code binding and inheritance. With a listener implementation, one interface will 'forward' all events to your desired class containing all the members and methods of the event. The listener class is an added layer of abstraction for detecting the event. In the reflection mechanism, certain methods and objects are discovered at run time. This allows objects to take on methods later, which allows for flexibility.
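The reflection mechanism mentioned above can be sketched with Java's own reflection API: a handler method is looked up by its string name at run time rather than being wired up at compile time. The class and method names (`ReflectDemo`, `onSave`, `onQuit`) are invented for this sketch.

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    // A handler that a framework could discover by name at run time.
    public void onSave() { System.out.println("saved"); }

    // Invoke a zero-argument handler by its string name, if present.
    // Returns false when no such public method exists on the target.
    static boolean fire(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            m.invoke(target);
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ReflectDemo d = new ReflectDemo();
        fire(d, "onSave");                           // prints "saved"
        if (!fire(d, "onQuit")) {
            System.out.println("no handler for onQuit");
        }
    }
}
```

This is the flexibility the comment describes: objects can gain or lack handlers without the dispatching code changing, at the cost of moving the error from compile time to run time.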
Lingbo Zhang - 2/27/2012 8:54:45
Human-computer interaction is based upon an information flow from the model to the view to the mental model to the controller and back to the view. Before the mental model is formed, the user has to perceive the model based on what the view presents, leading to a "Gulf of Evaluation" when the user misinterprets the model. Similarly, the user has to express his desired actions for the controller to translate, leading to a "Gulf of Execution" if the user is unable to express those actions in the language of the controller. There are three main issues in event handling: associating an event with the correct application/window, associating an event with the correct code, and communicating changes in the windowing system to the view (the reading only addresses the first two). Concepts such as key focus, input events, the window event table, listeners, and the delegate event model present ways of solving these problems.
Tamzid Islam - 2/27/2012 8:57:04
The reading assignment for Engineering Interfaces discusses various event handling approaches that have been implemented in different ways over time. Event handling is the procedure that asynchronously manages inputs received by a program. The critical part is that software has to be constructed to work reliably no matter which event arrives, even if the event is inappropriate. To handle this, an effective event handler is crucial to the performance of a system.
Kaiyuan Deng - 2/27/2012 10:06:41
This reading demonstrates the view and controller models. Event handling is very important because the program needs to understand user input. With the mouse the action is simple, but with modern multi-touch input, the device is constantly polling for gestures or actions. The listener model provides a good design pattern that lets programmers piggyback logic and separate the design model from the business logic.