As part of our interaction with the real world, we perceive, plan and act. These three activities are intimately connected and deeply affected by context. Many of our actions, as well as aspects of how we perceive our environment, influence each other inadvertently, i.e., without our deliberate planning or knowing. We show that computational models connecting perception, planning and action allow us to predict human intentions and infer users’ context-specific thoughts. As a specific example of this phenomenon, we show that eye gaze and hand movements in the context of pen-based and gesture-based interaction can be modeled to infer user intent and actions. In another example, we describe how humans’ cognizance of anomalous events can be read from multimodal signals pertaining to the interaction. Our work shows that, using state-of-the-art machine learning and a psychology-inspired model of multimodal user interaction, we can build computational models of the user. Such models have great potential for advancing the state of the art in Brain-Computer Interfaces by complementing existing systems with models of human behavior, environment, task and context.
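As a purely illustrative sketch of the kind of intent inference described above, the example below fuses two hypothetical multimodal features (gaze fixation duration and hand-movement speed) and trains a simple logistic-regression classifier on synthetic data to distinguish two made-up intents ("select" vs. "browse"). The feature names, class labels, and the assumption that selecting involves longer fixations and slower hand motion are illustrative assumptions, not details from the work itself.

```python
# Hypothetical sketch: fusing gaze and hand-movement features to infer
# user intent. All feature statistics below are invented for illustration.
import math
import random

random.seed(0)

def make_sample(intent):
    # Assumed pattern: "select" (1) shows long fixations and slow hand
    # motion; "browse" (0) shows short fixations and faster motion.
    if intent == 1:
        fixation = random.gauss(0.6, 0.1)   # fixation duration (s)
        velocity = random.gauss(0.1, 0.05)  # normalized hand speed
    else:
        fixation = random.gauss(0.2, 0.1)
        velocity = random.gauss(0.5, 0.1)
    return [fixation, velocity], intent

data = [make_sample(i % 2) for i in range(200)]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Logistic regression trained with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y  # gradient of the log loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Training accuracy on the synthetic, well-separated classes.
accuracy = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
```

In practice, the modeling in this line of work is far richer (temporal signals, context, and psychology-inspired structure); this sketch only shows the basic idea of mapping fused behavioral features to a discrete intent.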

For more information, see: