Spooky SCS: A Round-Up of Chilling Research
Mutant cats, troll hunting and walking through spiderwebs in a haunted forest — some work in the School of Computer Science is downright scary … and it's not 15-151: Mathematical Foundations for Computer Science.
Here is a sampling of some of the spookiest research happening in the halls of Gates, Newell-Simon and beyond. Venture further at your own risk.
No Cats Were Harmed in the Making of This Research
Jun-Yan Zhu's latest work with generative models to create images has veered into an unknown land where cats transform to have rabbit ears, devil horns or sinister alien eyes. Zhu, an assistant professor in the Robotics Institute, worked with robotics Ph.D. student Sheng-Yu Wang and David Bau, an assistant professor at Northeastern University, to warp a generative model so it can create a whole new species of digital creatures.
The trio's research proposes a method that allows users to edit only a few images with the desired warped attributes. Those edits are then generalized so the model can reproduce the warped outcome across many different images.
Deep generative models such as generative adversarial networks (GANs) and diffusion models are powerful machine learning tools for digital content creation. Train a generative model on a set of data, such as a collection of images, and it will generate new images similar to that data set. Generative AI-powered content creation tools like Canva and DALL-E 2 are used to edit and manipulate photos and videos, and they are the technology that lets a person create an emoji-like avatar.
But generative models such as GANs take lots of data to train and can only be trained with data that actually exists. Zhu, Wang and Bau could not use a traditional GAN to generate countless examples of their long-eared, rabbit-like cats because scores of photos of those cats don't exist. Instead, the trio looked for a way to train a GAN using as little data as possible. In their research, it takes only about a dozen edited images to train their warping model.
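The few-shot idea behind the method can be illustrated with a deliberately tiny stand-in: a hypothetical "generator" that is just a linear function, fine-tuned on a dozen user-edited outputs so the edit generalizes to inputs it was never shown. This is a conceptual toy, not the team's actual model-warping technique, which operates on deep generative models.

```python
# Toy illustration of few-shot model editing (NOT the actual GAN method):
# a "pretrained generator" is a linear map; a user edits about a dozen of
# its outputs, and plain gradient descent fine-tunes the model so the
# edit carries over to unseen inputs.

a, b = 1.5, 0.5                       # hypothetical pretrained parameters
generate = lambda z: a * z + b        # the stand-in "generator"

# A dozen latent codes whose outputs the user has edited by a +2.0
# offset (standing in for "add rabbit ears" across many images).
latents = list(range(12))
edited = [1.5 * z + 0.5 + 2.0 for z in latents]

lr = 0.01
for _ in range(3000):                 # gradient descent on mean squared error
    grad_a = sum(2 * z * (a * z + b - y) for z, y in zip(latents, edited)) / len(latents)
    grad_b = sum(2 * (a * z + b - y) for z, y in zip(latents, edited)) / len(latents)
    a -= lr * grad_a
    b -= lr * grad_b
```

After training on only those twelve examples, the toy generator applies the same edit to a latent code it never saw, which is the "edit a few, generalize to many" behavior the research achieves at full scale.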
And while that's a great tool for creating an island inhabited by mutant cats, it also opens up image editing with generative models to a broader audience. The trio calls it DIY machine learning.
Visit their website for more information about the research.
Hunting Online Trolls
Trolls, whether under bridges or on the internet, are no good. The bridge dwellers delay travelers with riddles, extort them for tolls and possibly eat them if they don't comply. Internet trolls lurk behind screens and keyboards to disrupt and harass conversations on anything from politics to pop culture.
At its worst, internet trolling can be weaponized in high-stakes scenarios like elections and international conflict. In those cases, there's an urgent need both for a clear understanding of online trolling and for tools that can quickly identify this manipulative behavior. TrollHunter, a social cybersecurity framework developed by Kathleen Carley, Joshua Uyheng and J.D. Moffitt in the Software and Societal Systems Department, uses psycholinguistic features to predict the likelihood that a given account is a troll. Their research found that trolling resembles hate speech and cyber-aggression, but that trolls might also intend to distract or even humor the recipient. Their messages are not always hateful, but they are always disruptive.
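A toy sketch of the general idea, surface-level linguistic cues feeding a weighted score, might look like the following. The features and weights here are invented for illustration; TrollHunter's actual psycholinguistic feature set and model are described in the team's paper.

```python
import re

def extract_features(text):
    # Hypothetical surface features loosely inspired by psycholinguistic
    # cues: shouting (all-caps words), targeting the reader (second-person
    # pronouns) and exclamation density. Not TrollHunter's real features.
    words = text.split()
    n = max(len(words), 1)
    caps = sum(1 for w in words if len(w) > 1 and w.isupper())
    second_person = sum(
        1 for w in words
        if re.sub(r"\W", "", w).lower() in {"you", "your", "youre"}
    )
    return {
        "caps_ratio": caps / n,
        "second_person": second_person / n,
        "exclaim": text.count("!") / n,
    }

# Made-up weights for illustration; a real system would learn these.
WEIGHTS = {"caps_ratio": 2.0, "second_person": 1.5, "exclaim": 1.0}

def troll_score(text):
    f = extract_features(text)
    return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)
```

Under this sketch, an aggressive, reader-directed message scores higher than a neutral one, which is the shape of prediction a trained classifier makes with far richer features.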
The team used TrollHunter to learn more about what trolls post, where they come from and who they target. Additionally, the research aimed to distinguish trolls from bots, which are automated where trolls are interactive. Each of these digital threats has a distinct impact, and both can fuel polarization and disinformation in society.
Insights from the continued application of TrollHunter could help both policymakers and platform developers establish effective strategies to circumvent online trolling in the future.
For more information about TrollHunter, check out a recent paper published in Information Processing & Management.
A Lip-Tingling Experience
Imagine walking through a dark forest with trees so thick you have to crawl your way through them. As you do, spiderwebs cling to your face. You pick them off and brush away others but not in time. A giant spider leaps from its web at your face. You open your mouth to scream, but the spider jumps right in.
Soon, gamers might not have to imagine this. They'll feel it.
A virtual reality headset developed by the Human-Computer Interaction Institute's Future Interfaces Group (FIG) uses ultrasound waves aimed at a user's lips, teeth and tongue to create haptic effects. Vivian Shen, a second-year Ph.D. student in the Robotics Institute, worked on the project with HCII postdoc Craig Shultz and Chris Harrison, an HCII associate professor and the FIG Lab's director. They designed an array of 64 tiny ultrasound-generating transducers arranged in a flat, half-moon shape and attached it to the bottom of a VR headset so it rests just above the mouth. The ultrasound waves can be directed at the mouth and modulated to produce the desired haptic effect.
Researchers have previously used ultrasound waves to deliver sensations to the hands, creating haptic effects such as virtual buttons that users can feel themselves pushing. Lips, gums and the tongue are second only to fingertips in nerve density, making the mouth an attractive area for heightening the virtual experience.
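The core trick behind such arrays, timing many small emitters so their waves arrive at one point together, can be sketched in a few lines of delay-and-sum focusing. The geometry and parameters below are illustrative assumptions, not FIG's published design.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(element_positions, focal_point):
    """Per-element firing delays (seconds) so that every element's
    wavefront arrives at focal_point at the same instant, concentrating
    energy there (classic delay-and-sum focusing)."""
    dists = [math.dist(p, focal_point) for p in element_positions]
    farthest = max(dists)
    # Elements closer to the focus wait longer, so all waves coincide.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# Hypothetical geometry: 8 transducers in a line at 5 mm pitch,
# focusing on a point 30 mm in front of the array's center.
elements = [(i * 0.005 - 0.0175, 0.0) for i in range(8)]
delays = focus_delays(elements, (0.0, 0.03))
```

Sweeping the focal point across the lips, or pulsing it on and off, is what turns a static array into moving taps, swipes and vibrations.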
The haptic effects consist of point impulses, swipes and persistent vibrations targeted on the mouth and synchronized with visual images. This can simulate the sensation of drinking water, brushing your teeth or even a kiss. For VR horror games, this technology could simulate a whole range of sensations, from bugs crawling across your lips and into your mouth to whatever else the twisted mind of a developer thinks of.
More information is available on the FIG Lab website.