Manipulating trust propensity to elucidate the person-to-object-trust-development process

PI: Erin MacDonald


Person-to-product trust has two sides: trust propensity (a person's willingness to trust in general, and this product specifically) and trustworthiness (the product's assessed integrity or benevolence) [1]. Research on product trust has thus far focused on trustworthiness: manipulating the product's design, for example by anthropomorphizing an autonomous vehicle, and measuring changes in trust. This project flips the usual approach, manipulating a person's propensity to trust. In doing so, we expect to reveal insights into the development of person-to-product trust, rather than focusing simply on improving it. To accomplish this, we build on our past successes with priming exercises that reveal product insights.

Autonomous products are the future. Engineers have dedicated tremendous effort to addressing their technical challenges, and their capabilities are quickly becoming a reality. However, consumers are unlikely to make the best use of these products without an appropriate level of trust: not too much, and not too little. We strive to reveal the cognitive and affective processes involved when people decide to trust or distrust products. This understanding will facilitate, and perhaps enable the intentional design of, trust-development processes that build a healthy level of trust in autonomous products.