
03.07.2019

News

Interview with Dr. Pedro Lopes

Dr. Pedro Lopes received his doctorate at the Hasso Plattner Institute in 2018; in his doctoral thesis he researched interactive systems based on electrical muscle stimulation. He was supervised by Prof. Patrick Baudisch, head of the Human Computer Interaction research group at HPI. Shortly after his doctorate, Lopes was appointed Assistant Professor at the University of Chicago, where he heads the Human Computer Integration Lab.

Günther and Jakobs present prize to Lopes
At the 2019 graduation ceremony of the University of Potsdam, President Oliver Günther (l.) and Jann Jakobs (r.), former Mayor of Potsdam, presented Pedro Lopes (m.) with the prize for the best doctorate of the University of Potsdam in 2018. (Photo: UP/Ernst Kaczynski)

"My vision is that the future of human computer interaction is that devices connect more directly with your body"

Interview

Hello Dr. Lopes, a lot has changed for you, and we would be happy to hear more about your new environment, the development of your work, and what you are up to these days in general.

Can you give us an update on what has happened over the last few months?

I would be happy to. Which aspect of my work is most interesting for you?


How do you like your work at the University of Chicago? Have you been able to continue your research on interactive muscle stimulation and evolve it together with your student group?

I am really happy at the University of Chicago. There, I started a lab (https://lab.plopes.org) dedicated to building interactive devices that connect at a very physical level to our body's sensors and actuators. The result of this direct connection is that these interactive devices are unlike the ones we have today (smartphones, smartwatches, and so forth), because our body becomes an integral part of the device. With these devices we can experience computation in new ways, such as through the sense of touch or the sense of our body's position (also known as proprioception), rather than just by seeing or hearing computers' responses, which is the traditional way we receive information from computers. Using new senses to interact with computers creates new benefits; one that we explore a lot is that by relying on proprioception instead of our eyes, we can build devices that harmonize well with humans in motion, because the displays they create are not visual.


Will humans gain new ways of experiencing their surroundings, besides touch, smell, sight, and sound?

Fantastic question! Let me start at the physical origin of our experiences. At its core, our sensory system gives us access to the world by letting us feel some of its properties. Examples are how neurons in our eyes allow us to perceive some of the incoming photons that bounce off objects in our surroundings, or how neurons in our skin allow us to perceive some contacts with physical matter. Note that I say "some" because we, like probably most beings, don't perceive the full spectrum of our environment but only what our senses afford.

Now, one way to answer your question is to say that we already experience beyond our physical sensory limits, since we invented tools that afford access beyond our sensory thresholds. A canonical example is a high-resolution microscope, which images a very small number of photons bouncing back from very small pieces of matter; the adjective "small" here is relative to our human scale and sensory limits. These tools are truly transformative. For example, in my most recent paper, which I just published with colleagues from UCL, FU Berlin and HPI, we used such a tool (magnetic resonance imaging) to peek at neuronal processes inside a functioning human brain that our bare senses could otherwise not examine.

However, another take on this is to argue that these devices are translational devices, not new sensory organs. The microscope amplifies the world below our visual threshold into an image that our eyes can process -- hence, it is a sensory amplification, not a new sense we gained. Building a new sense from the ground up might be a much harder challenge, as it requires the creation of specialized neuronal pathways to process the incoming information; still, I don't think it is an unattainable one. Those who study the evolution of beings have long shown that senses do evolve, so natural selection is constantly creating and reshaping our sensory system. As for the much harder challenge of truly creating a new sense by means of augmentative technology, it becomes more approachable if you consider a human also as the integration of their body and their tools; this is why my lab is called Human Computer Integration Lab.


How important can those "new senses" be, e.g. for remote surgery or research on faraway planets?

Let's look at some examples of the "new senses" I just brought up. Whether one considers tools to be extensions of our bodily apparatus or not, they are truly transformative when they afford access to the world beyond/beneath our sensory thresholds. Imaging technologies such as magnetic resonance imaging, electroencephalography and microscopes shaped our knowledge, and even an HCI lab like the one I direct at UChicago uses these tools! Now, most of these tools are stationary and thus don't move with us all the time, so they end up being specialized objects you can only access when you are in the right place (the lab, the hospital, etc.). But let's imagine for a moment that you could use any of these anytime, anywhere; how much value would you derive from them? My intuition tells me it would be a lot.

Sensory augmentation devices, which might be our best approximation of a "new sense", can really transform the way you experience the world. One powerful use case is how sensory substitution devices shape the experience of those who do not have all their senses intact, such as visually impaired people (blind, low vision, and so forth). Researchers have engineered sensory substitution devices that can, for instance, translate the image of a camera worn on a blind person's head into vibration patterns played on their forehead, much like a "tactile image". Adaptation to this new sense is by no means instantaneous, but slowly, through lots of practice, the wearer starts to rewire their neuronal pathways so that they can better use the vibration patterns to act on the world.
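To make the camera-to-vibration idea concrete, here is a minimal sketch of such a mapping in Python. It is an illustration only, not the implementation of any specific device: the function name, grid size, and the assumption of a grayscale camera frame feeding a rectangular grid of vibration motors are all ours.

```python
import numpy as np

def frame_to_vibration(frame: np.ndarray, grid_shape=(10, 10)) -> np.ndarray:
    """Downsample a grayscale camera frame (values 0-255) into a coarse
    grid of vibration intensities in [0, 1], one value per motor."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Crop so the frame divides evenly into grid cells, then average
    # the pixels that fall into each motor's cell.
    cells = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)) / 255.0

# Example: a 480x640 synthetic frame mapped onto a 10x10 motor grid.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
intensities = frame_to_vibration(frame)  # shape (10, 10); brighter = stronger
```

In a real device, each intensity would drive one motor's amplitude, and, as Lopes describes, the wearer gradually learns to interpret these patterns through practice.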


Can we soon feel VR?

This is a complex question that can be answered in a multitude of ways. From one pragmatic perspective, one might argue that Virtual Reality (VR) has never felt this realistic, and that we are taking many steps in the right direction towards even more realism. If you look at the field of Human Computer Interaction (say, at two of its top conferences, ACM CHI and ACM UIST), you see dozens of labs worldwide inventing new ways to feel touch and forces in VR.

On the other hand, one can also argue that this is only a small step forward, since touch (and we aren't even close to a high-fidelity emulation of it) is just one of the many senses you would have to digitize to get "perfect" virtual reality; which is an oxymoron, since a virtual experience indistinguishable from reality would just be reality. So, in short, we can already feel part of VR really well at home (visuals), other parts are a research lab away (forces and touch), and others are many years away (smell, taste, etc.). At the extreme of this scale, some might argue that one will never be able to fully render a perfect digital/virtual experience, as that would likely entail a full understanding of the human (e.g., how our brains encode our senses).

Another, completely orthogonal way to think about this question, which some of my students are fans of, is that VR can be used both for realistic depictions of reality that are useful for training, and for depictions of "new realities", in which case we don't need to be bound by the principles of our everyday reality.


Are you working on product designs, or is your approach purely theoretical?

My approach is neither theoretical, nor are we building products. It is precisely between these two poles that a lot of our invention and engineering happens. What we mostly do in my lab is build functional prototypes to test our ideas. Creating physical prototypes allows us to conduct studies with participants to better understand the efficacy of these prototypes. While these are fully functional prototypes that undergo a typical engineering pipeline, we don't think of them as products, because making a product to put on supermarket shelves is beyond our interests (and most likely beyond our expertise too).


What are you working on these days?

These days, we are trying to answer what we think are the most important questions in HCI. For instance, we have been working on the question of agency. We've been curious about the many haptic devices out there that are powerful enough to actuate one's body, including my own haptic devices made while at HPI, and we started wondering: how does it feel to be moved by an external force? Recently, in our CHI'19 paper, we showed that while these devices have the potential to accelerate our body's reaction time, which is useful for physical activities like sports, they can also feel alienating, because you might feel that you did not move but, instead, were moved. In this paper, we found a way around this paradox. It turns out that the timing of the external actuation matters a lot for our sense of agency. We found that actuating the participant's muscles 80 ms faster than their own reaction time resulted in a faster-than-human reaction time, yet preserved a lot of their sense of agency.
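As an illustration of the timing logic described above, here is a minimal sketch in Python. It is not the paper's actual implementation: trigger_ems stands in for a hypothetical stimulator driver call, and the baseline reaction time would be measured per participant beforehand.

```python
import time

LEAD_S = 0.080  # actuate 80 ms before the user's own reaction would occur

def preemptive_actuation(baseline_reaction_s: float, trigger_ems) -> None:
    """Fire electrical muscle stimulation slightly before the wearer's
    own reaction, so the response is faster yet still feels self-caused."""
    stimulus_shown = time.monotonic()  # moment the stimulus is presented
    fire_at = stimulus_shown + baseline_reaction_s - LEAD_S
    delay = fire_at - time.monotonic()
    if delay > 0:
        time.sleep(delay)  # a real system would use a hardware timer here
    trigger_ems()          # hypothetical driver call that actuates the muscles
```

The design point of the study is captured in the offset: firing too early makes the movement feel externally imposed, while firing just ahead of the wearer's own reaction keeps the sense of agency largely intact.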

Other than this particular project, we spend our time on many different activities at the lab. The majority of our time is spent engineering and building new haptic devices. For example, just before I traveled here to receive the dissertation award, we spent most of our time engineering a new type of nerve stimulator; I'm very excited about this one. The rest of our time goes into different activities, such as learning new skills from each other (e.g., we learned a lot about silicone from our last visiting student, who is a material scientist), studying how the human body perceives haptics (we spend a lot of time running user studies that differ very little from what you would see a cognitive neuroscience lab running), teaching (we started a new curriculum at the University of Chicago, comprising three courses on human computer interaction) and exploring the city; both I and the majority of my students are new to Chicago, so we spend a bit of our time exploring it, going to restaurants, fairs, indoor farms and Japanese supermarkets!


What is your vision for your work at the University of Chicago?

My vision is that the future of human computer interaction is that devices connect more directly with your body -- that's why my lab is called "human computer integration". One example of this is the interactive muscle stimulation devices I built during my PhD at HPI. At my lab at the University of Chicago, together with my students, I investigate what other parts of the human body our devices could connect to. For instance, we think there might be more to Virtual Reality than just visuals, sound and touch, so we are exploring how to digitally stimulate some of our other senses too.


Thank you very much for your time. It has been a pleasure talking to you.

Now we are even more curious to follow your future work and the discoveries your field will make.