Professor Scott Hudson has received the SIGCHI Lifetime Research Award, an award presented to individuals for outstanding contributions to the study of human-computer interaction. This award recognizes the very best, most fundamental and influential research contributions, and is awarded for a lifetime of innovation and leadership.
Due to the pandemic, the 2021 CHI Conference and its SIGCHI Awards will be presented virtually this year. Hudson's talk abstract and award talk (in 2 parts) are available below.
Being honored with something like the Lifetime Research Award tends to make you look back on a career. But I am much more interested in what I should be doing (and the fun things I’m going to be able to do) next week than in what I did 20 or 30 years ago. And so, in this talk I want to consider the future – specifically, the future of technology-oriented HCI research. I want to consider some ways we may not be seeing all the opportunities of a “new future”, and suggest a few ways we might think differently about how we go about our work.

HCI has grown substantially, and there are now several quite different kinds of work being done together under the interdisciplinary HCI umbrella. Within that, technological HCI research is specifically concerned with the invention of new things for the use and benefit of people. This endeavor is inherently shaped by new technological opportunities. In fact, through the whole history of HCI we have seen what seems like an utterly massive amount of change in our primary technological focus – computing – and, with the help of HCI, this has had a profound impact on our everyday lives. The inventions of technical HCI were enabled (but not made inevitable) by the tremendous growth in computing power that has occurred. But I will argue in this talk that this past change, as big as it seems, is actually small in comparison to what we should expect to see in the future – in essence, “we haven’t seen anything yet”.

Although it seems mundane, when we consider change in computing technology we must confront “the elephant in the room” of Moore’s law. This, of course, predicts a doubling, approximately every two years, of the number of transistors we can expect to see on a chip or at a given price point; this change has been a key driving factor behind much of the technological innovation that affects us.
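The compounding behind that doubling is easy to state and easy to underestimate. As a rough back-of-the-envelope sketch (not from the talk itself; the function name and the 30-year figure are illustrative assumptions):

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor in transistor count after `years` years,
    assuming a doubling roughly every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over a 30-year working life, that is 2**15, i.e. roughly a
# 32,768x increase in transistors at a given price point.
print(f"{moores_law_factor(30):,.0f}x")
```

The point of the sketch is simply that exponential growth over a career-length span is not a modest multiplier but four to five orders of magnitude.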
Because the familiar corollary of Moore’s law – ever-increasing processor clock speeds – stopped holding a number of years ago, many may think Moore’s law is dead; it most definitely is not. And because this rate of change has persisted for an extraordinarily long time – for nearly everyone reading this, their entire working life – it has fallen into the background: most of us feel we understand it and have properly incorporated it into our thinking. In this talk I will present two quick thought experiments to try to convince you that you really don’t understand the implications of Moore’s law, that this really does matter, and that you should perhaps be thinking a little differently about your work as a result. (Spoiler alert: you are dramatically underestimating how much change in computing power is ahead of you, and probably underutilizing its potential.)

Based on this, the core of my talk will consider what we might be missing in how we go about our work, and discuss several exemplars of where a different view of a “new future” might lead in terms of specific research directions. With these exemplars as motivation, I will offer some more general thoughts about the methodologies we use in our work.