The Romibo Robot Project is an evolving robot for motivation, education and social therapy. Our goal is to improve research and therapy techniques through the use of social robots. The robot has been designed around applications for individuals with conditions including autism, traumatic brain injury and dementia. Romibo includes features taken from other therapeutic robots currently used in research, such as Keepon, Pleo and Paro. The project stands out by offering a low-cost development platform that still provides the features needed for a wide range of social therapies. The platform is fully customizable, allowing for individual creativity, ease of assembly and experimentation. Romibo is a social robot, able to convey emotions, communicate socially, and form relationships with individuals.
Undergraduates design iPad app to track pressure ulcers
As part of Professor Anind Dey’s Designing Human-Centered Software course, a team of undergraduates designed and prototyped an iPad app to help nurses track, analyze, and treat clinical pressure ulcers. Their tool helps nurses collect data and photos of an ulcer over time, step through existing tests, keep track of repeated treatments, and analyze everything later.
While learning essential HCI methods such as contextual inquiry, the team spent months interviewing and shadowing physicians, researchers, and nursing staff at local hospitals and nursing homes. They identified problems in existing workflows and gained a clear understanding of the constraints of working in a hospital environment.
The team comprised undergraduates Jessica Aguero, MacKenzie Bates, Ryhan Hassan, Sukhada Kulkarni, and Stephanie Yeung.
HCII Ph.D. Student Receives Microsoft Research Fellowship
Jeff Rzeszotarski, a Ph.D. student in the Human-Computer Interaction Institute, is one of 12 students at U.S. universities to receive a 2013 Microsoft Research Ph.D. Fellowship.
Rzeszotarski studies how crowds of people generate content online and how to improve the content that they create. By looking at the behavior of people as they produce content, his research identifies places where people may be going wrong, so interventions can be developed to help them make better contributions.
The two-year fellowship covers all tuition and fees for the 2013-14 and 2014-15 academic years and includes a travel allowance, the offer of a paid internship, and a $28,000 annual stipend.
Tiramisu App Wins FCC Chairman’s Award
The Carnegie Mellon research team that created Tiramisu, a smartphone app that enables transit riders to create real-time information about bus schedules and seating, has won this year’s Federal Communications Commission Chairman’s Award for Advancement in Accessibility in the Geo-Location Services category.
The crowdsourcing app was launched in Pittsburgh in 2011 and is now also in use in Syracuse, NY. Preparations are underway to deploy it in Brooklyn, NY.
Tiramisu Transit was developed by researchers in the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT), funded by the National Institute on Disability and Rehabilitation Research. The work is also supported in part by CMU’s Traffic21 initiative and the US Department of Transportation.
Spontaneous Design Studio!
Professor Haakon Faste created Spontaneous Design Studio in Fall 2012 in response to a perceived lack of design-oriented elective courses in the Master’s in HCI curriculum. While traditional HCI courses tend to focus on targeted topics and areas of existing knowledge, the aim of this course is to build creative confidence, intuition, motivation, empathy, teamwork and fulfillment while working on unconstrained and ambiguous projects. To this end, the course has no syllabus or pre-determined plan. Instead, the first assignment is to design the second assignment and everything else happens spontaneously thereafter.
Projects students have worked on in this course include mobile shopping applications, Jack-o-lanterns, philanthropy networks, self-driving cars, talking refrigerators, elegant shoes, life philosophies, and large interactive public displays (specifically: Robowall, the interface you’re looking at right now!)
WorldKit: Ad Hoc Interactive Applications on Everyday Surfaces
Creating interfaces in the world, where and when we need them, has been a persistent goal of research areas such as ubiquitous computing, augmented reality, and mobile computing. The WorldKit system makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sat down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Further, it is extensible to new, custom interactors in a way that closely mimics conventional 2D graphical user interfaces, hiding much of the complexity of working in this new domain.
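To give a flavor of what such an abstraction might look like, here is a minimal, hypothetical sketch in Java: an interactor owns a painted region on a physical surface and receives the touches sensed inside it. The names (Interactor, PaintedButton, SurfaceRegistry) and the event flow are illustrative only, not the actual WorldKit API.

// Hypothetical sketch of a WorldKit-style interactor abstraction (names are
// illustrative, not the real WorldKit API). Each interactor owns a painted
// region on a physical surface and receives touch events sensed within it.
import java.util.ArrayList;
import java.util.List;

interface Interactor {
    boolean contains(double x, double y);   // region "painted" on the surface
    void onTouch(double x, double y);       // called when a sensed touch lands inside
}

class PaintedButton implements Interactor {
    private final double cx, cy, radius;
    private final Runnable action;

    PaintedButton(double cx, double cy, double radius, Runnable action) {
        this.cx = cx; this.cy = cy; this.radius = radius; this.action = action;
    }

    public boolean contains(double x, double y) {
        return Math.hypot(x - cx, y - cy) <= radius;
    }

    public void onTouch(double x, double y) {
        action.run();
    }
}

class SurfaceRegistry {
    private final List<Interactor> interactors = new ArrayList<>();

    void add(Interactor i) { interactors.add(i); }

    // In the real system this would be driven by depth-camera touch sensing;
    // here we simply feed in surface coordinates directly.
    void dispatchTouch(double x, double y) {
        for (Interactor i : interactors) {
            if (i.contains(x, y)) i.onTouch(x, y);
        }
    }
}

public class WorldKitSketch {
    public static void main(String[] args) {
        SurfaceRegistry surface = new SurfaceRegistry();
        // "Paint" a light switch next to the couch by registering a circular region.
        surface.add(new PaintedButton(0.30, 0.55, 0.05,
                () -> System.out.println("Lights toggled")));
        surface.dispatchTouch(0.31, 0.54);  // a sensed touch inside the region
    }
}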
Using Shear as a Supplemental Input Channel for Rich Touchscreen Interaction
Touch input is constrained, typically only providing finger X/Y coordinates. To access and switch between different functions, valuable screen real estate must be allocated to buttons and menus, or users must perform special actions, such as touch-and-hold, double tap, or multi-finger chords. Even so, this only adds a few bits of additional information, leaving touch interaction unwieldy for many tasks. In this work, we suggest using a largely unutilized touch input dimension: shear (force tangential to a screen’s surface). Like pressure, shear can be used in concert with conventional finger positional input. However, unlike pressure, shear provides a rich, analog 2D input space, which has many powerful uses.
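As a rough illustration of how shear could work alongside position, the Java sketch below assumes a hypothetical touch sample that carries tangential force components next to the x/y coordinates; the event shape, threshold, and interpretation are our own simplifications, not the instrumented prototype described in the work.

// Minimal sketch of a 2D shear vector supplementing touch position.
// The ShearTouch shape and the 0.1 threshold are assumptions for illustration.
public class ShearInputSketch {
    // A touch sample: finger position plus tangential (shear) force components.
    record ShearTouch(double x, double y, double shearX, double shearY) {}

    // Example policy: position picks the target; shear angle and magnitude
    // supply a continuous 2D parameter, like a tiny joystick under the finger.
    static String interpret(ShearTouch t) {
        double magnitude = Math.hypot(t.shearX, t.shearY);
        if (magnitude < 0.1) {
            return "plain tap at (" + t.x + ", " + t.y + ")";
        }
        double angle = Math.toDegrees(Math.atan2(t.shearY, t.shearX));
        return String.format("shear gesture: %.0f degrees, strength %.2f", angle, magnitude);
    }

    public static void main(String[] args) {
        System.out.println(interpret(new ShearTouch(120, 200, 0.0, 0.0)));
        System.out.println(interpret(new ShearTouch(120, 200, 0.4, 0.3)));
    }
}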
OmniTouch: Wearable Multitouch Interaction Everywhere
OmniTouch is a body-worn projection/sensing system that enables graphical, interactive, multitouch input on everyday surfaces. Our shoulder-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and even their own bodies (e.g., hands, lap). This approach allows users to capitalize on the tremendous surface area the real world provides. For example, the surface area of one hand alone exceeds that of a typical smartphone; tables are often an order of magnitude larger than a tablet computer. If these ad hoc surfaces can be appropriated in an on-demand way, users could retain all of the benefits of mobility while simultaneously expanding the interactive capability.
Zoomboard: A Diminutive QWERTY Keyboard for Ultra-Small Devices
The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact that they are capable computers. We created a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We based our design on a QWERTY layout, so that it is immediately familiar to users and leverages existing skill. As the ultimate test, we ran a text entry experiment on a keyboard measuring just 16 x 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, enough to enter phone numbers and searches such as “closest pizza” and “directions home” both quickly and quietly.
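The core of the iterative-zoom idea can be sketched in a few lines of Java. The single zoom step, the 10 x 3 grid, and the window size below are simplifications for illustration and do not reflect the published design’s exact layout or parameters.

// Rough sketch of ZoomBoard's iterative zooming, simplified to one zoom step
// on a 10 x 3 QWERTY grid in normalized (0..1) keyboard coordinates.
public class ZoomBoardSketch {
    static final String[] ROWS = { "qwertyuiop", "asdfghjkl;", "zxcvbnm,. " };
    static final double WIN = 0.5;       // the zoom window covers half the board

    private boolean zoomed = false;
    private double winX, winY;           // top-left corner of the zoom window

    // Returns the entered character once the second tap selects a key, else null.
    Character tap(double x, double y) {
        if (!zoomed) {
            // First tap: zoom into a window centered on the touch point.
            winX = clamp(x - WIN / 2, 0, 1 - WIN);
            winY = clamp(y - WIN / 2, 0, 1 - WIN);
            zoomed = true;
            return null;
        }
        // Second tap: map through the zoom window back to keyboard coordinates.
        double kx = winX + x * WIN;
        double ky = winY + y * WIN;
        zoomed = false;
        int col = Math.min((int) (kx * 10), 9);
        int row = Math.min((int) (ky * 3), 2);
        return ROWS[row].charAt(col);
    }

    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        ZoomBoardSketch board = new ZoomBoardSketch();
        board.tap(0.25, 0.2);                     // first tap zooms toward the upper-left keys
        System.out.println(board.tap(0.5, 0.2));  // second tap selects a key ('e' here)
    }
}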
FingerSense: Enhancing Finger Interaction on Touch Surfaces
Six years ago, multitouch devices went mainstream and changed the industry and our lives. However, our fingers can do so much more than just poke and pinch at screens. FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. This opens several new and powerful interaction opportunities for touch input, especially on mobile devices, where input bandwidth is limited due to small screens and fat fingers. For example, a knuckle tap could serve as a “right click” for mobile device touch interaction.
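As a toy illustration of what this makes possible, the Java sketch below simply dispatches on an already-classified touch type; the classifier itself, which is the substance of FingerSense, is left out, and the action mapping is just an example.

// Illustrative dispatch on a classified touch type, in the spirit of FingerSense.
// The enum and action mapping are assumptions; classification is not shown.
public class FingerSenseSketch {
    enum TouchType { FINGERTIP, KNUCKLE, NAIL }

    static String handleTap(TouchType type, double x, double y) {
        switch (type) {
            case FINGERTIP: return "primary action (the usual tap)";
            case KNUCKLE:   return "context menu (a mobile \"right click\")";
            case NAIL:      return "alternate tool, e.g. annotate or erase";
            default:        return "ignored";
        }
    }

    public static void main(String[] args) {
        System.out.println(handleTap(TouchType.KNUCKLE, 100, 240));
    }
}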