Hasso-Plattner-Institut
Prof. Dr. Patrick Baudisch
 

Affordance++

allowing objects to communicate dynamic use

We propose extending the affordance of objects by allowing them to communicate dynamic use, such as (1) motion (e.g., a spray can shakes when touched), (2) multi-step processes (e.g., a spray can sprays only after shaking), and (3) behaviors that change over time (e.g., an empty spray can no longer allows spraying). Rather than enhancing objects directly, however, we implement this concept by enhancing the user. We call this affordance++. By stimulating the user's arms using electrical muscle stimulation, our prototype allows objects not only to make the user actuate them, but also to make the user perform the required movements while merely approaching the object, for example withdrawing from objects that do not "want" to be touched. In our user study, affordance++ helped participants successfully operate devices of poor natural affordance, such as a multi-functional slicer tool or a magnetic nail sweeper, and stay away from cups filled with hot liquids.

What is Affordance++?

We call this concept of creating object behavior by controlling user behavior affordance++.

Conceptually, there are many ways of implementing affordance++, generally by applying sensors and actuators to the user's body, such as the arm. In this paper, we actuate users by controlling their arm poses using electrical muscle stimulation, i.e., users wear a device on the arm that talks to their muscles by means of attached electrodes (we describe the device in detail in the section "Prototype"). This allows for a particularly compact form factor and is arguably even more "direct" than the indirection through a mechanical system. However, the concept of affordance++ need not be tied to any particular means of actuating the user; what defines it is that we actuate the user instead of the objects the user interacts with.

Figure 1 illustrates affordance++ using the example of the aforementioned spray can. Affordance++ allows the spray can to produce a range of different types of behavior. In the shown example, when the user grasps the spray can, the spray can causes the user to shake it. Our prototype implements this either using an optical tracking system or using a sensor in the user-worn device that recognizes a marker inside the spray can. Once the object is recognized, the prototype plays back the desired behavior into the user's muscles.
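To make "playing back a behavior" concrete, here is a minimal sketch of what such a playback could look like, assuming a hypothetical EmsDriver whose pulse() stimulates one electrode channel; the channel numbers, intensities, and timings are illustrative placeholders, not our prototype's actual API:

import time

class EmsDriver:
    # Stand-in for the real stimulator hardware: just log each pulse.
    def pulse(self, channel: int, intensity: float, duration_s: float) -> None:
        print(f"pulse ch={channel} intensity={intensity:.2f} for {duration_s}s")
        time.sleep(duration_s)

def play_shake(ems: EmsDriver, repetitions: int = 6) -> None:
    # Suggest a shaking motion by alternating wrist flexor and extensor channels.
    FLEXOR, EXTENSOR = 0, 1  # assumed electrode placement
    for _ in range(repetitions):
        ems.pulse(FLEXOR, intensity=0.3, duration_s=0.15)
        ems.pulse(EXTENSOR, intensity=0.3, duration_s=0.15)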

As we illustrate in the following section, affordance++ allows objects to produce multiple types of behaviors, including behaviors that start prior to physical contact. By storing information about objects’ states, affordance++ allows implementing not only motion, but also multi-step processes and behaviors that change over time.
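The following sketch illustrates how stored object state enables this; the class and field names are ours, for illustration only. The same grasp maps to different behaviors depending on whether the can has already been shaken and whether any paint is left:

from dataclasses import dataclass

@dataclass
class SprayCan:
    shaken: bool = False
    paint_left: int = 100  # arbitrary units

    def behavior_on_grasp(self) -> str:
        if self.paint_left == 0:
            return "repel"         # empty can: spraying is no longer afforded
        if not self.shaken:
            return "shake"         # step 1 of the multi-step process
        return "press_nozzle"      # step 2: spraying is now afforded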

On a technological level, the electrical muscle stimulation technology we use is able to make users perform certain motions. However, affordance++ intentionally avoids this and instead suggests how to use objects by actuating the user's hand at low intensity. This keeps users in the loop, allowing them to decide when to follow a suggestion and when to override it.
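In terms of the sketches above, this amounts to capping the stimulation intensity below the level at which the contraction would overpower the user's own motion; the ceiling value here is a placeholder (in the actual system it would be calibrated per user):

SUGGESTION_CEILING = 0.35  # fraction of the calibrated per-user maximum (placeholder)

def suggest(ems: EmsDriver, channel: int, requested_intensity: float) -> None:
    # Play a movement suggestion the user can choose to follow or resist.
    ems.pulse(channel,
              intensity=min(requested_intensity, SUGGESTION_CEILING),
              duration_s=0.2)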

Limitation of traditional affordance = depicting “time phenomena”

Affordance is a key concept in usability. When well-designed objects “suggest how to be used”, they avoid the necessity for training and enable walk-up use. Physical objects, for example, use their visual and tactile cues to suggest the possible range of usages to the user. Unfortunately, physical objects are limited in that they cannot easily communicate use that involves (1) motion, (2) multi-step processes, and (3) behaviors that change over time. A spray can, for example, is subject to all three limitations: (1) it needs to be shaken before use, but cannot communicate the shaking, (2) it cannot communicate that the shaking has to happen before spraying, and (3) once the spray can is empty, it has no way of showing that it cannot be used for spraying anymore (and instead should now be thrown away).

As pointed out by other researchers, the underlying limitation of such physical objects is that they cannot depict time. The spray can is inanimate; motion, multi-step processes, and behaviors that change over time, however, are phenomena in time. One way of addressing the issue is to provide objects with the ability to display instructions, e.g., using a spatial augmented reality display. To offer a more "direct" way for objects to communicate their use, researchers have embedded sensors and actuators into objects, allowing them to be animated. This approach works, but at the expense of substantial per-object implementation effort.

In Affordance++, we propose a different perspective. While animating objects allows implementing object behavior, we argue that affordance is about implementing user behavior. The reason is that some of the qualities of an object are not in how the object behaves, but in how the user behaves when making contact with the object.

A good part of the process of communicating how the user is supposed to operate the object, however, takes place before users even touch the object. Users operating a door handle do not just touch the handle to then re-adjust their hand position based on the handle’s tactile properties; rather, the object communicates its use while the user’s hand is approaching it. The haptic quality of the handle itself ultimately does play a role, but by the time the hand reaches the door handle, the user’s hand is already posed correctly to grip the handle.

Affordance++ in action (more examples in the paper)

In Figure 2, affordance++ helps users handle an unfamiliar object. This "nail sweeper" allows users to pick up and drop objects with the help of a magnet, an example of a multi-step process. (a) The tool suggests grasping the handle by repelling the user's hand when the user tries to grasp any other part. (b) Afterwards, the device suggests sweeping up the nails by slowly rocking the wrist back and forth.

 

The critical moment occurs when collecting the screws. Here, users typically reach for the lever below the handle, assuming this is how one collects the screws. This assumption, however, is false, and affordance++ repels the user's hand from grasping the lever and continues the sweeping motion. (c) Only when the user hovers over the container does affordance++ loosen the user's closed fist, allowing the user to grasp the lever; (d) by pulling the lever, the user makes the magnet release the screws.

The next example, depicted in Figure 3, shows a patented kitchen tool with multiple functions. The challenge here is to find out what each part does and when to use it. (a) As the user explores the unfamiliar tool and tries to grasp it, affordance++ repels the user's hand from the knife blade and affords grasping only the other end. (b) After grasping, affordance++ suggests cutting with the blade by gently rocking the wrist back and forth.

The next step is to remove the pit from the avocado, shown in Figure 4. The kitchen tool supports this with a set of blades inside a hole that extract the pit. (a) Affordance++ repels the user from removing the pit with the knife tip and (b) instead waves the tool back and forth parallel to the pit. This suggests a slamming motion, which the user performs in order to extract the pit.

The last step is to slice the avocado into pieces. While the conventional way is to peel off the skin and then slice each piece individually with a knife, this tool does it in one step. However, as we found in our user study, this use is not easily discoverable. Affordance++ helps here by (a) gently releasing the grasp so as to suggest grasping the other end and (b) moving the tool towards the avocado to suggest a scooping motion. (c) Users respond by performing the scoop, which slices the avocado in the same motion.

Implementation of Affordance++

Figure 5 shows the simple prototype setup we used to explore the concept of affordance++. It uses electrical muscle stimulation, which provides us with a particularly direct way of instrumenting users. The tracking component of the shown version is based on optical motion capture. Figure 6 shows the gestures we used for our user study (more details in the paper).
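Continuing the hypothetical sketches above, the overall control flow of this stationary version could be sketched as follows; tracker, scene, and the behavior names stand in for the motion-capture system and our object database, and are illustrative only:

def play_repel(ems: EmsDriver) -> None:
    # Open the hand / pull it back by briefly stimulating the finger extensors.
    ems.pulse(channel=2, intensity=0.3, duration_s=0.3)  # assumed extensor channel

def control_loop(tracker, scene, ems: EmsDriver) -> None:
    while True:
        hand = tracker.hand_position()        # from optical motion capture
        obj = scene.nearest_object(hand)      # tagged object closest to the hand
        if obj is None:
            continue
        behavior = obj.behavior_for(hand)     # may depend on stored object state
        if behavior == "repel":
            play_repel(ems)                   # e.g., hot cup, knife blade
        elif behavior == "shake":
            play_shake(ems)
        obj.update_state(hand)                # advance multi-step processes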

Besides the stationary version based on optical tracking, we also created a simple wearable prototype based on RFID (Figure 7) for tracking which object is in contact with the user. We find this interesting because, with the rise of the "Internet of Things" (IoT), objects increasingly ship with simple embedded sensors, which fits nicely with the vision of affordance++.
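In this wearable variant, the optical tracker is simply replaced by an RFID reader near the hand. Again reusing the playback sketches above, with tag IDs invented for illustration:

BEHAVIORS = {
    "spray_can_01": play_shake,   # tag IDs and behavior scripts are illustrative
    "hot_cup_02":   play_repel,
}

def rfid_loop(reader, ems: EmsDriver) -> None:
    while True:
        tag = reader.read_tag()   # ID of the tag in range, or None
        if tag in BEHAVIORS:
            BEHAVIORS[tag](ems)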

Lopes, P., Jonell, P., and Baudisch, P.
Affordance++: allowing objects to communicate dynamic use
In Proc. CHI'15. Full Paper.
  CHI 2015 BEST PAPER AWARD

PDF (3.7MB) | Slides (19.6MB) | Slides as PDF (33.2MB) | Video | Talk Video