Affordance is a key concept in usability. When well-designed objects “suggest how to be used”, they eliminate the need for training and enable walk-up use. Physical objects, for example, use visual and tactile cues to suggest their possible range of uses to the user. Unfortunately, physical objects cannot easily communicate use that involves (1) motion, (2) multi-step processes, and (3) behaviors that change over time. A spray can, for example, is subject to all three limitations: (1) it needs to be shaken before use, but cannot communicate the shaking, (2) it cannot communicate that the shaking has to happen before spraying, and (3) once the spray can is empty, it has no way of showing that it can no longer be used for spraying (and should instead be thrown away).
As pointed out by other researchers, the underlying limitation of this type of physical object is that it cannot depict time. The spray can is inanimate; motion, multi-step processes, and behaviors that change over time, however, are phenomena in time. One way of addressing this is to give objects the ability to display instructions, e.g., using a spatial augmented reality display. To offer a more “direct” way for objects to communicate their use, researchers have embedded sensors and actuators into objects, allowing them to be animated. This approach works, but at the expense of substantial per-object implementation effort.
In Affordance++, we propose a different perspective. While animating objects is a way of implementing object behavior, we argue that affordance is about implementing user behavior. The reason is that some of the qualities of an object lie not in how the object behaves, but in how the user behaves when making contact with the object.
A good part of communicating how the user is supposed to operate an object, however, takes place before the user even touches it. Users operating a door handle do not simply touch the handle and then re-adjust their hand position based on its tactile properties; rather, the object communicates its use while the user’s hand is still approaching it. The haptic quality of the handle itself ultimately does play a role, but by the time the hand reaches the door handle, it is already posed correctly to grip the handle.