Posts Tagged Surface
Disturbance offers a first demo, “Design lab experience 001”, which introduces a “touchless” interactive table created in a few hours.
A glass table, an IR sensor, a webcam, a video projector, a mirror and a Mac: that is everything needed for this prototype’s convincing behavior and potential. No contact is necessary; the sensor analyzes gestures in space and transforms them into information.
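The sensing side of such a setup can be sketched very simply: threshold the IR camera frame and locate the bright region a hand or finger produces. This is an illustrative minimal sketch, not the demo’s actual code; the function name and threshold are assumptions.

```python
import numpy as np

def brightest_blob_centroid(ir_frame, threshold=200):
    """Naive gesture detection on an 8-bit IR camera frame:
    keep pixels brighter than the threshold and return the
    centroid of that blob, or None if nothing is detected."""
    ys, xs = np.nonzero(ir_frame > threshold)
    if xs.size == 0:
        return None  # no hand/finger in view
    # Centroid of the bright pixels, in (x, y) image coordinates.
    return float(xs.mean()), float(ys.mean())
```

In a real prototype this centroid would then be mapped from camera coordinates to projected-image coordinates and smoothed over time.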
A long series of exciting technology/design experiments should follow soon!
Impressively thin table surface for Surface
– Surface 2.0 is mountable.
– No cameras included. Each pixel is a camera. Yup, PixelSense technology.
– Surface has the biggest piece of Gorilla Glass ever produced.
No word on pricing and availability. Steve Ballmer will reveal that at the CES keynote, I guess.
The Next Generation of Microsoft Surface – LCDs That Can ‘See’
Also during Ballmer’s keynote speech, Microsoft unveiled the next generation of Microsoft Surface, built upon a new technology that enables thin LCD screens to “see” without the use of cameras.
Created in partnership with Samsung, the Samsung SUR40 for Microsoft Surface is a major step forward in the surface computing category. It incorporates all the key features of the original Surface product – a massive multi-touch experience, the ability to recognize fingers, hands, and objects – as well as a new technology that has enabled a more flexible form factor.
Just Scratching the Surface
Looking at the year ahead, Ballmer noted that it is an exciting time for Microsoft, partners and customers across the board. Even with all the amazing experiences talked about at CES, from what’s next in Windows to the latest capabilities with Kinect, “the best is yet to come,” he said.
We all know Microsoft Surface. Though it’s an amazing technology and works pretty well, it still hasn’t found its way into the consumer mainstream, because it’s costly, huge and not portable. Now here comes its cousin, Mobile Surface, which I think is going to be the future of computing: the next generation of interfaces, NUI. It is a novel interaction system for mobile computing. From the project description: “Our goal is to bring the Microsoft Surface experience to mobile scenarios, and more importantly, to enable 3D interaction with mobile devices. We do research on how to transform any surface (e.g., a coffee table or a piece of paper) into Mobile Surface with a mobile device and a camera-projector system. Besides this, our work also includes how to get a 3D object model in real time, augmented reality and multiple-layer 3D information presentation.”
Microsoft Research always comes up with cool concepts and prototypes. Two researchers from MSR’s Redmond lab have come up with this cool concept, LightSpace, taking surface computing to the next level. Here is the description from MSR’s Projects Page. I tried to embed the Silverlight Smooth Streaming player to show the video, but it didn’t work.
Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may “pick up” the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and “drop” the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.
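The core geometric step the abstract describes, calibrating cameras and projectors to shared 3D world coordinates so graphics land correctly on any visible surface, can be sketched with standard pinhole-camera math. This is a minimal illustration under common calibration assumptions (a projector modeled as an inverse camera); all names and matrices here are hypothetical, not LightSpace’s actual API.

```python
import numpy as np

def depth_pixel_to_world(u, v, depth_m, K_cam, T_cam_to_world):
    """Back-project a depth-camera pixel (u, v) with depth in meters
    into 3D world coordinates, using the 3x3 camera intrinsics K_cam
    and the 4x4 camera-to-world transform from calibration."""
    fx, fy = K_cam[0, 0], K_cam[1, 1]
    cx, cy = K_cam[0, 2], K_cam[1, 2]
    # Pinhole back-projection into camera coordinates.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    p_cam = np.array([x, y, depth_m, 1.0])
    return (T_cam_to_world @ p_cam)[:3]

def world_to_projector_pixel(p_world, K_proj, T_world_to_proj):
    """Project a 3D world point into projector pixel coordinates.
    The projector is modeled as an inverse camera with its own
    intrinsics and extrinsics."""
    p = T_world_to_proj @ np.append(p_world, 1.0)
    uv = K_proj @ (p[:3] / p[2])  # perspective divide, then intrinsics
    return uv[:2]
```

Chaining the two functions is what lets a system like this draw a graphic at the exact projector pixels covering a point a depth camera saw, whether that point is on a tabletop, a wall, or a user’s hand.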