
HCII at UIST 2023

News

Vivian Shen facilitating a demo at UIST 2023. [This image is from the official ACM UIST photo album]

The 2023 ACM Symposium on User Interface Software and Technology (UIST) was held in San Francisco, California from October 29 to November 1, 2023. 

UIST unites researchers from diverse areas of human-computer interaction, including graphical and web user interfaces, tangible and ubiquitous computing, virtual and augmented reality, and new input and output devices, among other areas. 

Human-Computer Interaction Institute (HCII) researchers had more than a dozen papers accepted to UIST 2023, and several papers and demos received awards and recognition during the event. 

Our community of researchers is already looking forward to the next conference, UIST 2024, which will be held here in Pittsburgh, PA, from October 13-16, 2024. 
 

Lasting Impact Award

The UIST Lasting Impact Award recognizes UIST papers published at least 10 years ago that have had a long-lasting influence on the field of user interface software and technology. This impact can be measured broadly in terms of new research directions, wide acceptance in industry, or large societal impact. 

The 2023 Lasting Impact Award was presented to HCII Associate Professor Jeffrey Bigham and co-authors for their work on this UIST 2010 paper: 
VizWiz: Nearly Real-Time Answers to Visual Questions
Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samuel White, Tom Yeh
 

Best Demo Awards 

Best Demo Jury's Choice Award

Constraint-Driven Robotic Surfaces, At Human-Scale
Jesse T Gonzalez, Sonia Prashant, Sapna Tayal, Juhi Kedia, Alexandra Ion, and Scott E Hudson

Best Demo People's Choice Honorable Mention & 
Best Demo Jury's Choice Honorable Mention

Fluid Reality: High-Resolution, Untethered Haptic Gloves using Electroosmotic Pump Arrays
Vivian Shen, Tucker Rae-Grant, Joe Mullenbach, Chris Harrison, and Craig Shultz

 

Awards and Work from HCII Authors at UIST 2023 

Papers that received a Best Paper Award or a Best Paper Honorable Mention at UIST 2023 are labeled below.

___ 
 

A list of papers with CMU contributing authors is available below. 

GenAssist: Making Image Generation Accessible (Best Paper Award)
Mina Huh, Yi-Hao Peng, and Amy Pavel

Blind and low vision (BLV) creators use images to communicate with sighted audiences. However, creating or retrieving images is challenging for BLV creators as it is difficult to use authoring tools or assess image search results. Thus, creators limit the types of images they create or recruit sighted collaborators. While text-to-image generation models let creators generate high-fidelity images based on a text description (i.e. prompt), it is difficult to assess the content and quality of generated images. We present GenAssist, a system to make text-to-image generation accessible. Using our interface, creators can verify whether generated image candidates followed the prompt, access additional details in the image not specified in the prompt, and skim a summary of similarities and differences between image candidates. To power the interface, GenAssist uses a large language model to generate visual questions, vision-language models to extract answers, and a large language model to summarize the results. Our study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all.
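
As a rough illustration of the three-stage pipeline described in the abstract, the sketch below stages an LLM question generator, a vision-language model answerer, and an LLM summarizer. The functions call_llm and call_vlm are hypothetical placeholders for whatever model APIs one might use; this is not the authors' code.

# Minimal sketch of a GenAssist-style description pipeline (illustrative only).
# call_llm and call_vlm are hypothetical stand-ins for an LLM and a
# vision-language model; the staging follows the abstract above.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API of your choice")

def call_vlm(image_path: str, question: str) -> str:
    raise NotImplementedError("plug in a vision-language model of your choice")

def describe_candidates(prompt: str, image_paths: list[str]) -> str:
    # Stage 1: turn the text-to-image prompt into visual verification questions.
    questions = call_llm(
        "List short questions that would verify whether an image follows "
        f"this prompt: {prompt!r}"
    ).splitlines()

    # Stage 2: answer each question for each candidate image with a VLM.
    answers = {
        path: {q: call_vlm(path, q) for q in questions if q.strip()}
        for path in image_paths
    }

    # Stage 3: summarize similarities and differences across candidates.
    return call_llm(
        "Summarize how these generated images are similar and how they differ, "
        f"so a blind creator can pick one:\n{answers}"
    )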
 

KnitScript: A Domain-Specific Scripting Language for Advanced Machine Knitting
Megan Hofmann, Lea Albaugh, Tongyan Wang, Jennifer Mankoff, Scott E Hudson

Knitting machines can fabricate complex fabric structures using robust industrial fabrication machines. However, machine knitting's full capabilities are only available through low-level programming languages that operate on individual machine operations. We present KnitScript, a domain-specific machine knitting scripting language that supports computationally driven knitting designs. KnitScript provides a comprehensive virtual model of knitting machines, giving access to machine-level capabilities as they are needed while automating a variety of tedious and error-prone details. Programmers can extend KnitScript with Python programs to create more complex programs and user interfaces. We evaluate the expressivity of KnitScript through a user study where nine machine knitters used KnitScript code to modify knitting patterns. We demonstrate the capabilities of KnitScript through three demonstrations where we create: a program for generating knitted figures of randomized trees, a parameterized hat template that can be modified with accessibility features, and a pattern for a parametric mixed-material lampshade. KnitScript advances the state of machine-knitting research by providing a platform to develop and share complex knitting algorithms, design tools, and patterns. 
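
To give a flavor of what "computationally driven knitting designs" means in practice, here is a toy Python sketch of a parameterized pattern generator. This is not KnitScript syntax and does not use its API; it only illustrates the idea of deriving a stitch grid from parameters rather than editing it by hand.

# Illustrative Python sketch only: NOT KnitScript code, just the general idea of
# generating a stitch grid from parameters.

def ribbed_band(circumference: int, rows: int, rib: int = 2) -> list[list[str]]:
    """Build a knit/purl grid for a simple ribbed band (e.g., a hat brim)."""
    pattern = []
    for _ in range(rows):
        row = ["knit" if (col // rib) % 2 == 0 else "purl"
               for col in range(circumference)]
        pattern.append(row)
    return pattern

# A wider hat just changes a parameter instead of a hand-edited pattern.
print(len(ribbed_band(circumference=120, rows=8)[0]))  # -> 120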
 

Pantœnna: Mouth Pose Estimation for AR/VR Headsets Using Low-Profile Antenna and Impedance Characteristic Sensing
Daehwa Kim, Chris Harrison

Methods for faithfully capturing a user’s holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body-tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer’s body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.
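
The sensing-to-pose mapping can be pictured as a regression from an impedance sweep to 11 keypoints in 3D. The sketch below fits a simple linear map on placeholder data; it is not the paper's model and the feature dimensions are invented for illustration.

# Minimal sketch, not the paper's model: learn a linear map from an antenna
# impedance sweep (feature vector) to 11 mouth keypoints x 3 coordinates.
import numpy as np

n_samples, n_freq_bins, n_outputs = 500, 64, 11 * 3

# Placeholder training data: impedance features and ground-truth keypoints.
X = np.random.randn(n_samples, n_freq_bins)   # impedance characteristics
Y = np.random.randn(n_samples, n_outputs)     # mouth pose from motion capture

W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # least-squares regression
pred = X @ W                                  # predicted keypoints

# Mean 3D error per keypoint, analogous to the 2.6 mm metric reported above.
err = np.linalg.norm((pred - Y).reshape(-1, 11, 3), axis=-1).mean()
print(f"mean 3D keypoint error (arbitrary units): {err:.3f}")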
 

Parametric Haptics: Versatile Geometry-based Tactile Feedback Devices
Violet Yinuo Han, Abena Boadi-Agyemang, Yuyu Lin, David Lindlbauer, Alexandra Ion

Haptic feedback is important for immersive, assistive, or multimodal interfaces, but engineering devices that generalize across applications is notoriously difficult. To address the issue of versatility, we propose Parametric Haptics, geometry-based tactile feedback devices that are customizable to render a variety of tactile sensations. To achieve this, we integrate the actuation mechanism with the tactor geometry into passive 3D printable patches, which are then connected to a generic wearable actuation interface consisting of micro gear motors. The key benefit of our approach is that the 3D-printed patches are modular, can consist of varying numbers and shapes of tactors, and that the tactors can be grouped and moved by our actuation geometry over large areas of the skin. The patches are soft, thin, conformable, and easy to customize to different use cases, thus potentially enabling a large design space of diverse tactile sensations. In our user study, we investigate the mapping between geometry parameters of our haptic patches and users’ tactile perceptions. Results indicate a good agreement between our parameters and the reported sensations, showing initial evidence that our haptic patches can produce a wide range of sensations for diverse use scenarios. We demonstrate the utility of our approach with wearable prototypes in immersive Virtual Reality (VR) scenarios, embedded into wearable objects such as glasses, and as wearable navigation and notification interfaces. We support designing such patches with a design tool in Rhino.
 

Reprogrammable Digital Metamaterials for Interactive Devices
Yu Jiang, Shobhit Aggarwal, Zhipeng Li, Yuanchun Shi, Alexandra Ion

We present digital mechanical metamaterials that enable multiple computation loops and reprogrammable logic functions, making a significant step towards passive yet interactive devices. Our materials consist of many cells that transmit signals using an embedded bistable spring. When triggered, the bistable spring displaces and triggers the next cell. We integrate a recharging mechanism to recharge the bistable springs, enabling multiple computation rounds. Between the iterations, we enable reprogramming the logic functions after fabrication. We demonstrate that such materials can trigger a simple controlled actuation anywhere in the material to change the local shape, texture, stiffness, and display. This enables large-scale interactive and functional materials with no or a small number of external actuators. We showcase the capabilities of our system with various examples: a haptic floor with tunable stiffness for different VR scenarios, a display with easy-to-reconfigure messages after fabrication, or a tactile notification integrated into users’ desktops.
 

SmartPoser: Arm Pose Estimation With a Smartphone and Smartwatch Using UWB and IMU Data
Nathan DeVrio, Vimal Mollyn, Chris Harrison

The ability to track a user’s arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Moving beyond prior work, we take advantage of more recent ultra-wideband (UWB) functionality on these devices to capture absolute distance between the two devices. This measurement is the perfect complement to inertial data, which is relative and suffers from drift. We quantify the performance of our software-only approach using off-the-shelf devices, showing it can estimate the wrist and elbow joints with a median positional error of 11.0 cm, without the user having to provide training data.
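
The key idea, combining a drift-prone relative estimate with an absolute UWB range, can be sketched as a simple correction step. The following toy function is not the authors' algorithm; it only shows how an absolute phone-to-watch distance can constrain an IMU-derived wrist position.

# Toy sketch (not the SmartPoser algorithm): correct a drifting, IMU-derived
# wrist position using the absolute phone-to-watch distance from UWB ranging.
import numpy as np

def fuse(wrist_est: np.ndarray, phone_pos: np.ndarray, uwb_range_m: float,
         alpha: float = 0.5) -> np.ndarray:
    """Nudge the wrist estimate toward the sphere of radius uwb_range_m
    centered on the phone; alpha blends IMU and UWB information."""
    offset = wrist_est - phone_pos
    dist = np.linalg.norm(offset)
    if dist == 0:
        return wrist_est
    corrected = phone_pos + offset * (uwb_range_m / dist)
    return (1 - alpha) * wrist_est + alpha * corrected

print(fuse(np.array([0.2, 0.4, 0.1]), np.zeros(3), uwb_range_m=0.5))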
 

Soundify: Matching Sound Effects to Video
David Chuan-En Lin, Anastasis Germanidis, Cristóbal Valenzuela, Yining Shi, Nikolas Martelaro

In the art of video editing, sound helps add character to an object and immerse the viewer within a space. Through formative interviews with professional editors (N=10), we found that the task of adding sounds to video can be challenging. This paper presents Soundify, a system that assists editors in matching sounds to video. Given a video, Soundify identifies matching sounds, synchronizes the sounds to the video, and dynamically adjusts panning and volume to create spatial audio. In a human evaluation study (N=889), we show that Soundify is capable of matching sounds to video out-of-the-box for a diverse range of audio categories. In a within-subjects expert study (N=12), we demonstrate the usefulness of Soundify in helping video editors match sounds to video with lighter workload, reduced task completion time, and improved usability.
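
The spatial-audio step mentioned above, panning and volume that follow the on-screen object, can be approximated very simply. The sketch below is illustrative only; the mapping from position and size to pan and gain is an assumption, not Soundify's implementation.

# Minimal sketch of per-frame spatialization (illustrative only): derive stereo
# panning from an object's horizontal position and gain from its apparent size.

def spatialize(x_center: float, frame_width: float, box_area: float,
               frame_area: float) -> tuple[float, float]:
    """Return (pan, gain): pan in [-1, 1] (left..right), gain in [0, 1]."""
    pan = 2.0 * (x_center / frame_width) - 1.0
    gain = min(1.0, box_area / frame_area * 4.0)   # louder when object is larger
    return pan, gain

# Example: a car detected on the right third of a 1920x1080 frame.
print(spatialize(x_center=1500, frame_width=1920,
                 box_area=200 * 120, frame_area=1920 * 1080))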
 

SPEERLoom: An Open-Source Loom Kit for Interdisciplinary Engagement in Math, Engineering, and Textiles
Samantha Speer, Ana P Garcia-Alonzo, Joey Huang, Nickolina Yankova, Carolyn Rosé, Kylie A Peppler, James McCann, Melisa Orta Martinez

Weaving is a fabrication process that is grounded in mathematics and engineering: from the binary, matrix-like nature of the pattern drafts weavers have used for centuries, to the punch card programming of the first Jacquard looms. This intersection of disciplines provides an opportunity to ground abstract mathematical concepts in a concrete and embodied art, viewing this textile art through the lens of engineering. Currently, available looms are not optimized to take advantage of this opportunity to increase mathematics learning by providing hands-on interdisciplinary learning in collegiate classrooms. In this work, we present SPEERLoom: an open-source, robotic Jacquard loom kit designed to be a tool for interweaving cloth fabrication, mathematics, and engineering to support interdisciplinary learning in the classroom. We discuss the design requirements and subsequent design of SPEERLoom. We also present the results of a pilot study in a post-secondary class finding that SPEERLoom supports hands-on, interdisciplinary learning of math, engineering, and textiles.
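
The "binary, matrix-like" nature of weaving drafts is easy to see in code. The small example below, which is illustrative and not part of SPEERLoom's software, builds a simple twill draft where 1 means a warp thread is lifted for a given pick and 0 means it stays down.

# Worked example of a binary weave draft (illustrative, not SPEERLoom code).
import numpy as np

def twill_draft(warp_threads: int, picks: int, shift: int = 1) -> np.ndarray:
    """Generate a simple 2/2 twill draft by shifting a base row each pick."""
    base = np.array([1, 1, 0, 0])
    rows = [np.roll(np.resize(base, warp_threads), pick * shift)
            for pick in range(picks)]
    return np.array(rows)

print(twill_draft(warp_threads=8, picks=4))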
 

Sustainflatable: Harvesting, Storing and Utilizing Ambient Energy for Pneumatic Morphing Interfaces (Best Paper Honorable Mention)
Qiuyu Lu, Tianyu Yu, Semina Yi, Yuran Ding, Haipeng Mi, Lining Yao

While the majority of pneumatic interfaces are powered and controlled by traditional electric pumps and valves, alternative sustainable energy-harnessing technology has been attracting attention. This paper presents a novel solution to this challenge with the development of the Sustainflatable system, a self-sustaining pneumatic system that can harvest renewable energy sources such as wind, water flow, moisture, and sunlight, convert the energy into compressed air, and store it for later use in a programmable and intelligent way. The system is completely electronic-free, incorporating customized energy harvesting pumps, storage units with variable volume-pressure characteristics, and tailored valves that operate autonomously. Additionally, the paper provides a design tool to guide the development of the system and includes several environmental applications to showcase its capabilities.
 

Synergi: A Mixed-Initiative System for Scholarly Synthesis and Sensemaking
Hyeonsu B Kang, Tongshuang Wu, Joseph Chee Chang, Aniket Kittur

Efficiently reviewing scholarly literature and synthesizing prior art are crucial for scientific progress. Yet, the growing scale of publications and the burden of knowledge make synthesis of research threads more challenging than ever. While significant research has been devoted to helping scholars interact with individual papers, building research threads scattered across multiple papers remains a challenge. Most top-down synthesis (and LLMs) make it difficult to personalize and iterate on the output, while bottom-up synthesis is costly in time and effort. Here, we explore a new design space of mixed-initiative workflows. In doing so we develop a novel computational pipeline, Synergi, that ties together user input of relevant seed threads with citation graphs and LLMs, to expand and structure them, respectively. Synergi allows scholars to start with an entire threads-and-subthreads structure generated from papers relevant to their interests, and to iterate and customize on it as they wish. In our evaluation, we find that Synergi helps scholars efficiently make sense of relevant threads, broaden their perspectives, and increases their curiosity. We discuss future design implications for thread-based, mixed-initiative scholarly synthesis support tools.
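
A rough sketch of the pipeline shape described above follows: expand user-chosen seed papers along the citation graph, then have an LLM organize the result into threads. The functions fetch_citations and call_llm are hypothetical placeholders, and this is not the authors' code.

# Rough sketch of a Synergi-style mixed-initiative pipeline (illustrative only).

def fetch_citations(paper_id: str) -> list[str]:
    raise NotImplementedError("plug in a citation graph API")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM")

def build_threads(seed_paper_ids: list[str], hops: int = 1) -> str:
    # Expand the user's seed threads along the citation graph.
    frontier, collected = set(seed_paper_ids), set(seed_paper_ids)
    for _ in range(hops):
        frontier = {c for pid in frontier for c in fetch_citations(pid)} - collected
        collected |= frontier
    # Ask an LLM to organize the expanded set into threads and sub-threads.
    return call_llm(
        "Group these papers into research threads and sub-threads, with one-line "
        f"summaries per thread: {sorted(collected)}"
    )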
 

The View from MARS: Empowering Game Stream Viewers with Metadata Augmented Real-time Streaming
Noor Hammad, Erik Harpstead, Jessica Hammer

We present MARS (Metadata Augmented Real-time Streaming), a system that enables game-aware streaming interfaces for Twitch. Current streaming interfaces provide a video stream of gameplay and a chat channel for conversation, but do not allow viewers to interact with game content independently from the streamer or other viewers. With MARS, a Unity game’s metadata is rendered in real-time onto a Twitch viewer’s interface. The metadata can then power viewer-side interfaces that are aware of the streamer’s game activity and provide new capacities for viewers. Use cases include providing contextual information (e.g. clicking on a unit to learn more), improving accessibility (e.g. slowing down text presentation speed), and supporting novel stream-based game designs (e.g. asymmetric designs where the viewers know more than the streamer). We share the details of MARS’ architecture and capabilities in this paper, and showcase a working prototype for each of our three proposed use cases.
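
To make the idea concrete, the sketch below shows the kind of per-event metadata a game might forward to a viewer-side interface, and how a viewer click could be matched against it. The field names and message shape are invented for this illustration; they are not taken from MARS.

# Hypothetical illustration of game metadata powering a viewer-side interaction
# (field names invented for this sketch, not MARS' actual schema).
import json

unit_visible_payload = {
    "stream_time_ms": 812_440,   # aligns metadata with the video frame
    "event": "unit_visible",
    "unit": {"id": "archer_07", "hp": 42, "screen_x": 0.63, "screen_y": 0.22},
}

def on_viewer_click(payload: dict, click_x: float, click_y: float) -> str:
    """Return contextual info if the viewer clicked near the reported unit."""
    u = payload["unit"]
    near = abs(u["screen_x"] - click_x) < 0.05 and abs(u["screen_y"] - click_y) < 0.05
    return f"{u['id']}: {u['hp']} HP" if near else ""

print(on_viewer_click(json.loads(json.dumps(unit_visible_payload)), 0.64, 0.23))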
 

VegaProf: Profiling Vega Visualizations
Junran Yang, Alex Bäuerle, Dominik Moritz, Çağatay Demiralp

Domain-specific languages (DSLs) for visualization aim to facilitate visualization creation by providing abstractions that offload implementation and execution details from users to the system layer. Therefore, DSLs often execute user-defined specifications by transforming them into intermediate representations (IRs) in successive lowering operations.

However, DSL-specified visualizations can be difficult to profile and, hence, optimize due to the layered abstractions. To better understand visualization profiling workflows and challenges, we conduct formative interviews with visualization engineers who use Vega in production. Vega is a popular visualization DSL that transforms specifications into dataflow graphs, which are then executed to render visualization primitives. Our formative interviews reveal that current developer tools are ill-suited for visualization profiling since they are disconnected from the semantics of Vega’s specification and its IRs at runtime.

To address this gap, we introduce VegaProf, the first performance profiler for Vega visualizations. VegaProf instruments the Vega library by associating a declarative specification with its compilation and execution. Integrated into a Vega code playground, VegaProf coordinates visual performance inspection at three abstraction levels: function, dataflow graph, and visualization specification. We evaluate VegaProf through use cases and feedback from visualization engineers as well as original developers of the Vega library. Our results suggest that VegaProf makes visualization profiling more tractable and actionable by enabling users to interactively probe time performance across layered abstractions of Vega. Furthermore, we distill recommendations from our findings and advocate for co-designing visualization DSLs together with their introspection tools.
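
The core instrumentation idea, attributing runtime to elements of the declarative specification so timings can be inspected at several abstraction levels, can be sketched generically. The decorator below is an illustration of that idea only; it is not VegaProf's or Vega's actual code.

# Generic illustration of spec-attributed timing (not VegaProf/Vega internals).
import time
from collections import defaultdict

timings: dict[str, float] = defaultdict(float)

def profiled(spec_path: str):
    """Decorator that attributes a function's runtime to a spec element."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[spec_path] += time.perf_counter() - start
        return inner
    return wrap

@profiled("marks[0].encode")
def encode_marks(n: int) -> int:
    return sum(range(n))

encode_marks(100_000)
print(dict(timings))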
 

Poster

Using LLMs to Customize the UI of Webpages
Amanda Li, Jason Wu, Jeffrey P Bigham

LLMs can understand both natural language and code, which makes them a strong candidate for user-driven customization of webpages. A process driven by natural language can be useful for people who are less technologically literate. In this paper, we explore the potential of using LLMs to modify webpages, and the kinds of opportunities and challenges that come with it. We observe that specific prompts referring to colors or targeted components can succeed, while vague requests and more complex websites tend to perform poorly.
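
The basic prompting approach can be sketched as follows: hand the model the page's HTML plus a natural-language request and apply whatever modified HTML it returns. The call_llm function is a placeholder for any LLM API; this is an illustration, not the authors' implementation.

# Minimal sketch of LLM-driven webpage customization (illustrative only).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API of your choice")

def customize_page(html: str, request: str) -> str:
    prompt = (
        "You are modifying a webpage. Apply the user's request and return the "
        "full, valid HTML with no commentary.\n"
        f"Request: {request}\n"
        f"HTML:\n{html}"
    )
    return call_llm(prompt)

# Specific, targeted requests (e.g. the color of a named component) tend to work
# better than vague ones, per the observation above.
# customize_page(open("page.html").read(), "Make the navigation bar dark blue")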

 

Related People
Jeffrey Bigham, Chris Harrison, Alexandra Ion, Scott Hudson, Yi-Hao Peng, Lea Albaugh, Daehwa Kim, Violet Han, David Lindlbauer, Vimal Mollyn, Nathan DeVrio, David Chuan-en Lin, Sherry Tongshuang Wu, Hyeonsu Kang, Noor Hammad, Erik Harpstead, Jessica Hammer, Dominik Moritz

Research Areas
AR / VR / XR, Computational Fabrication, Wearables, Accessibility