I’ve seen this projector already, and it is pretty impressive!
Find out more at: www.ricoh.eu
Beamatron is a new augmented-reality concept that combines a projector and a Kinect camera on a pan-tilt moving head. The moving head is used to place the projected image almost anywhere in a room. Meanwhile, the depth camera enables the displayed image to be correctly warped to the shape of the projection surface, and the projected graphics to react in physically appropriate ways. For example, a projected virtual car can be driven on the floor of the room but will bump into obstacles or run over ramps. As another application, we consider the ability to bring notifications and other graphics to the attention of the user by automatically placing the graphics within the user’s view.
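To get a feel for the warping step, here is a minimal sketch. Beamatron corrects for arbitrary surface shapes using the Kinect depth map; in the simplified case of a flat wall viewed obliquely, the correction reduces to a planar homography `H`. Pre-warping the image with `H⁻¹` before projection cancels the distortion. The matrix values and resolution below are made-up illustration, not Beamatron's actual calibration.

```python
import numpy as np

def homography_apply(H, pts):
    """Apply a 3x3 homography to Nx2 points (inhomogeneous in and out)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out w

# Hypothetical mild keystone distortion between projector and wall.
H = np.array([[1.0, 0.1,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])

# Corners of a 640x480 image we want to appear undistorted on the wall.
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)

# Pre-warp with H^-1, then let the physical projection apply H:
# the viewer sees the original, undistorted rectangle.
prewarped = homography_apply(np.linalg.inv(H), corners)
recovered = homography_apply(H, prewarped)
print(np.allclose(recovered, corners))  # → True
```

In the real system the depth camera supplies a full 3-D surface model, so the per-pixel warp is more general than a single homography, but the idea is the same: distort the image in advance so the surface undoes the distortion.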
This project is a depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond a shoulder-worn system, there is no instrumentation of the user or the environment. On such surfaces, without calibration, Wearable Multitouch Interaction provides capabilities similar to those of a mouse or a touchscreen: X and Y locations in 2-D interfaces and whether fingers are “clicked” or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3 centimeters in diameter. Thus, it is now conceivable that anything one can do on today’s mobile devices can be done in the palm of a hand.
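The clicked-versus-hovering distinction can be sketched very simply: a fingertip counts as “clicked” when the depth camera sees it within a small tolerance of the surface behind it, and “hovering” otherwise. The function name, units, and the 5 mm threshold below are assumptions for illustration, not the project's published parameters.

```python
# Assumed tolerance: a fingertip within 5 mm of the surface reads as a touch.
CLICK_THRESHOLD_MM = 5.0

def finger_state(finger_depth_mm, surface_depth_mm):
    """Classify one fingertip as 'clicked' or 'hovering'.

    Depths are distances from the depth camera, so the finger is
    in front of the surface and has the smaller depth value.
    """
    gap = surface_depth_mm - finger_depth_mm
    return "clicked" if gap <= CLICK_THRESHOLD_MM else "hovering"

print(finger_state(598.0, 600.0))  # → clicked  (2 mm above the surface)
print(finger_state(560.0, 600.0))  # → hovering (40 mm above the surface)
```

Combined with the fingertip's X/Y position on the surface, this is enough to drive ordinary touchscreen-style interfaces.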
PPP introduces a new way to play with toy blocks. Using image recognition and projection mapping, different images are projected onto the blocks depending on how you stack and arrange them.
[ link ]
And again we are back with a cup of Augmented Reality (AR).
This time it’s something very cool: a light bulb :D
LuminAR reinvents the traditional incandescent bulb and desk lamp, evolving them into a new category of robotic, digital information devices. The LuminAR Bulb combines a pico projector, camera, and wireless computer in a compact form factor. This self-contained system provides users with just-in-time projected information and a gestural user interface, and it can be screwed into standard light fixtures everywhere.
The LuminAR Lamp is an articulated robotic arm, designed to interface with the LuminAR Bulb. Both LuminAR form factors dynamically augment their environments with media and information, while seamlessly connecting with laptops, mobile phones, and other electronic devices. LuminAR transforms surfaces and objects into interactive spaces that blend digital media and information with the physical space. The project radically rethinks the design of traditional lighting objects, and explores how we can endow them with novel augmented-reality interfaces. LuminAR was created by Natan Linder and Pattie Maes from the Fluid Interfaces Group at the MIT Media Lab. Video produced by Paula Aguilera and Jonathan Williams.