Category Archives: Mobile Computing

Popular Science Honors Four Projects as “Best of What’s New”

Four inventions that trace their origins to the School of Computer Science, and particularly the Robotics Institute, have been honored in Popular Science’s annual Best of What’s New Awards.

This year’s winners, published in the magazine’s December issue, include the Flex System, a neck surgery tool based on snake robot research; 360fly, a panoramic video camera; 3-D Object Manipulation Software, a photo editing tool; and LiveLight, a method for automatically editing out the boring parts of personal or security videos.

Undergraduates design iPad app to track pressure ulcers

As part of Professor Anind Dey’s Designing Human-Centered Software course, a team of undergraduates designed and prototyped an iPad app to help nurses track, analyze, and treat clinical pressure ulcers. Their tool helps nurses collect data and photos of an ulcer over time, step through existing tests, keep track of repeated treatments, and analyze everything later.
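
As a rough illustration only, here is a minimal Python sketch of the kind of longitudinal record such a tool must maintain; the class and field names are hypothetical and not taken from the team’s actual design.

```python
# Hypothetical sketch of a longitudinal pressure-ulcer record; none of these
# names come from the team's actual app.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    """One bedside assessment of a pressure ulcer."""
    when: date
    length_cm: float
    width_cm: float
    photo_path: str        # photo captured at the bedside for later comparison
    treatment: str         # treatment applied during this visit

@dataclass
class UlcerRecord:
    """All observations of a single ulcer, kept in order for later analysis."""
    patient_id: str
    site: str              # e.g. "sacrum" or "left heel"
    observations: list[Observation] = field(default_factory=list)

    def is_improving(self) -> bool:
        """Crude trend check: has the wound's surface area shrunk over time?"""
        if len(self.observations) < 2:
            return False
        first, last = self.observations[0], self.observations[-1]
        return last.length_cm * last.width_cm < first.length_cm * first.width_cm
```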

While learning essential HCI methods such as contextual inquiry, the team spent months interviewing and shadowing physicians, researchers, and nursing staff at local hospitals and nursing homes. They identified problems in existing workflows and gained a clear understanding of the constraints of working in a hospital environment.

The team comprised undergraduates Jessica Aguero, MacKenzie Bates, Ryhan Hassan, Sukhada Kulkarni, and Stephanie Yeung.

Tiramisu App Wins FCC Chairman’s Award

The Carnegie Mellon research team that created Tiramisu, a smartphone app that enables transit riders to generate real-time information about bus schedules and seating availability, has won this year’s Federal Communications Commission Chairman’s Award for Advancement in Accessibility in the Geo-Location Services category.

The crowdsourcing app was launched in Pittsburgh in 2011 and is now also in use in Syracuse, NY. Preparations are underway to deploy it in Brooklyn, NY.

Tiramisu Transit was developed by researchers in the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT), funded by the National Institute on Disability and Rehabilitation Research. The work is also supported in part by CMU’s Traffic21 initiative and the US Department of Transportation.

WorldKit: Ad Hoc Interactive Applications on Everyday Surfaces

Creating interfaces in the world, where and when we need them, has been a persistent goal of research areas such as ubiquitous computing, augmented reality, and mobile computing. The WorldKit system makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sat down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Further, it is extensible to new, custom interactors in a way that closely mimics conventional 2D graphical user interfaces, hiding much of the complexity of working in this new domain.
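
To make the abstraction concrete, here is a minimal Python sketch of what a custom interactor painted onto a sensed surface might look like. All names here (Region, Interactor, Button, on_touch) are hypothetical illustrations; the published WorldKit API may differ.

```python
# Hypothetical sketch of a WorldKit-style interactor abstraction: widgets that
# mimic 2D GUI controls but live on real, sensed surfaces.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Region:
    """A rectangle on a physical surface, in projector coordinates."""
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

class Interactor:
    """Base class: subclasses behave like 2D GUI widgets on everyday surfaces."""
    def __init__(self, region: Region):
        self.region = region

    def on_touch(self, px: int, py: int) -> None:
        raise NotImplementedError

class Button(Interactor):
    def __init__(self, region: Region, action: Callable[[], None]):
        super().__init__(region)
        self.action = action

    def on_touch(self, px: int, py: int) -> None:
        if self.region.contains(px, py):
            self.action()   # fire, exactly as a 2D GUI button would

# "Painting" an interface: the user sweeps a hand over the desk, the depth
# camera reports the swept rectangle, and we bind a button to it.
desk_lamp = Button(Region(x=120, y=80, w=200, h=100),
                   action=lambda: print("toggle the desk lamp"))
desk_lamp.on_touch(150, 120)   # a simulated touch event from the sensor
```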

Using Shear as a Supplemental Input Channel for Rich Touchscreen Interaction

Touch input is constrained, typically providing only a finger’s X/Y coordinates. To access and switch between different functions, valuable screen real estate must be allocated to buttons and menus, or users must perform special actions, such as touch-and-hold, double tap, or multi-finger chords. Even so, this adds only a few bits of additional information, leaving touch interaction unwieldy for many tasks. In this work, we suggest using a largely unutilized touch input dimension: shear (force tangential to a screen’s surface). Similar to pressure, shear can be used in concert with conventional finger positional input. However, unlike pressure, shear provides a rich, analog 2D input space, which has many powerful uses.
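
As a sketch of how that 2D input space could be put to work, the toy Python function below maps a shear vector to a command by direction and to an analog parameter by magnitude, much like a marking menu. The sensor interface, units, and thresholds are illustrative assumptions, not the paper’s prototype.

```python
# Illustrative mapping from a 2D shear vector to input events; units and
# thresholds are made up, not taken from the paper's hardware.
import math

def classify_shear(fx: float, fy: float, deadzone: float = 0.15):
    """Map a shear force vector (arbitrary sensor units) to an input event.

    Direction selects the function, like a marking menu; magnitude is a
    continuous parameter (e.g. scroll rate), the analog half of the space.
    """
    magnitude = math.hypot(fx, fy)
    if magnitude < deadzone:          # ignore incidental shear from ordinary taps
        return None, 0.0
    angle = math.degrees(math.atan2(fy, fx)) % 360
    if 45 <= angle < 135:
        command = "scroll-up"
    elif 135 <= angle < 225:
        command = "back"
    elif 225 <= angle < 315:
        command = "scroll-down"
    else:
        command = "forward"
    return command, magnitude

print(classify_shear(0.1, 0.9))    # ('scroll-up', 0.905...)
print(classify_shear(0.05, 0.02))  # (None, 0.0): below the deadzone
```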

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a body-worn projection/sensing system that enables graphical, interactive, multitouch input on everyday surfaces. Our shoulder-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and even their own bodies (e.g., hands, lap). This approach allows users to capitalize on the tremendous surface area the real world provides. For example, the surface area of one hand alone exceeds that of a typical smartphone; tables are often an order of magnitude larger than a tablet computer. If these ad hoc surfaces can be appropriated in an on-demand way, users could retain all of the benefits of mobility while simultaneously expanding their interactive capability.
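
Below is a minimal sketch, under assumed thresholds, of the depth test a projector/depth-camera system needs in order to decide whether a tracked fingertip is touching the surface behind it or merely hovering. The values and pipeline are illustrative, not OmniTouch’s published parameters.

```python
# Illustrative depth-based contact detection; thresholds are assumptions,
# not OmniTouch's published values.

CONTACT_MM = 10   # fingertip within ~1 cm of the surface counts as a touch
HOVER_MM = 40     # beyond this, treat the finger as merely hovering

def finger_state(finger_depth_mm: float, surface_depth_mm: float) -> str:
    """Classify one tracked fingertip against the surface behind it."""
    gap = surface_depth_mm - finger_depth_mm   # finger is nearer the camera
    if gap <= CONTACT_MM:
        return "touch"    # drive a multitouch event at this position
    if gap <= HOVER_MM:
        return "hover"    # show a cursor, as on a conventional desktop
    return "away"

# A fingertip 6 mm in front of a table 800 mm from the shoulder-worn camera:
print(finger_state(794, 800))   # "touch"
print(finger_state(770, 800))   # "hover"
```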

Zoomboard: A Diminutive QWERTY Keyboard for Ultra-Small Devices

The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact that they are capable computers. We created a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We based our design on a QWERTY layout so that it is immediately familiar to users and leverages existing skill. As the ultimate test, we ran a text entry experiment on a keyboard measuring just 16 × 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, allowing them to enter phone numbers and searches such as “closest pizza” and “directions home” both quickly and quietly.
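
The zooming step can be sketched in a few lines of Python. This toy model assumes a simplified two-tap interaction and made-up geometry helpers; the paper’s actual zoom factors and timing details differ.

```python
# Toy model of iterative zooming for key selection on a tiny keyboard.
QWERTY = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
SCREEN_W, SCREEN_H = 16.0, 6.0   # keyboard footprint in mm, as in the study

def key_at(x_mm: float, y_mm: float) -> str:
    """Map a point in original keyboard coordinates to the key under it."""
    row = min(int(y_mm / SCREEN_H * len(QWERTY)), len(QWERTY) - 1)
    keys = QWERTY[row]
    col = min(int(x_mm / SCREEN_W * len(keys)), len(keys) - 1)
    return keys[col]

def zoom_about(tap_x: float, tap_y: float, factor: float = 3.0):
    """First tap: enlarge the keyboard about the tap point. Returns a function
    mapping later on-screen taps back into original keyboard coordinates."""
    def to_keyboard(px: float, py: float) -> tuple[float, float]:
        return (tap_x + (px - tap_x) / factor,
                tap_y + (py - tap_y) / factor)
    return to_keyboard

# Tap 1 lands near 't': the keys are too small to trust, so zoom in there.
to_keyboard = zoom_about(tap_x=7.0, tap_y=1.0)
# Tap 2 on the enlarged keys is now accurate; map it back and commit the key.
print(key_at(*to_keyboard(8.5, 1.5)))   # -> t
```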

FingerSense: Enhancing Finger Interaction on Touch Surfaces

Six years ago, multitouch devices went mainstream and changed the industry and our lives. However, our fingers can do much more than poke and pinch at screens. FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle, or nail. This opens several new and powerful interaction opportunities for touch input, especially on mobile devices, where input bandwidth is limited by small screens and fat fingers. For example, a knuckle tap could serve as a “right click” for mobile touch interaction.
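
As a toy illustration of the classification step, the Python sketch below distinguishes tip, knuckle, and nail with a nearest-centroid rule over two made-up features of the tap’s impact signal. FingerSense’s real pipeline, features, and training data are not reproduced here.

```python
# Toy nearest-centroid classifier for touch type. The features and centroid
# values are illustrative assumptions, not FingerSense's actual model.
import math

# Hand-picked centroids for (spectral centroid in Hz, peak amplitude) of the
# tap's impact signal. A real system would learn these from training data.
CENTROIDS = {
    "fingertip": (400.0, 0.2),   # soft pad: low-frequency, quiet impact
    "knuckle":   (900.0, 0.6),   # bony: sharper, louder transient
    "nail":      (2000.0, 0.4),  # hard: high-frequency click
}

def classify_touch(spectral_centroid_hz: float, peak_amplitude: float) -> str:
    """Nearest-centroid guess at how the finger struck the screen."""
    def dist(label: str) -> float:
        hz, amp = CENTROIDS[label]
        # Scale Hz down so both features contribute comparably.
        return math.hypot((spectral_centroid_hz - hz) / 1000.0,
                          peak_amplitude - amp)
    return min(CENTROIDS, key=dist)

# A sharp, loud tap classifies as a knuckle: the mobile "right click".
print(classify_touch(950.0, 0.55))   # -> knuckle
```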