PI: James Landay
In the era of ubiquitous computing, users are surrounded by intelligent devices, ranging from consumer products, such as learning thermostats, to systems used for monitoring and control in manufacturing or agriculture. These systems are complex, distributed in space, and evolve over time through machine learning. Consequently, they challenge our traditional notions of good design, such as congruence with the user’s mental model. In these dynamic systems, not only is it difficult for designers to evaluate users’ mental models in advance, it is also difficult to conceive of a model to present to users, as even designers struggle to predict how the system will behave over time. We investigate an approach in which changes in the system are exposed to users as a means of evaluating and improving their mental models over time, and of eliciting feedback to disambiguate uncertainties in the inferences the system makes. We explore this approach in the context of IoT devices and determine when such information should be exposed, based on the expected benefit, the system’s uncertainty, the cost of intervention, and the cost of error. Based on these findings, we compile a set of guidelines for the design of intelligent computing systems.