Posts Tagged Research

Digital Narratives From Microsoft Research

The Microsoft Rich Interactive Narratives (RIN) project aims to combine traditional forms of storytelling with new visualization technologies to create compelling interactive digital narratives. Experience sample narratives created using the technology at http://www.digitalnarratives.net. The RIN project is an undertaking by MSR India in collaboration with the Interactive Visual Experience group in MSR Redmond and the MSR External Research group. RIN supports a wide range of media formats: standard audio and video formats, text, as well as newer formats such as Microsoft® Photosynth™ and Deep Zoom™. New visualization formats can be incorporated by adding dynamically-loadable plugins.

Technology

The Rich Interactive Narratives (RIN) technology consists of:

  1. The RIN Data Model – a structured, extensible, and platform-independent representation for narratives (its XML manifestation can be considered a kind of “HTML for RINs”; see the sketch after this list)
  2. A Silverlight RIN player
  3. Silverlight plugins for “foundational Experience Streams”, each of which brings a specific visualization experience (such as maps, panoramas, Deep Zoom images) into the realm of RINs
  4. Authoring tools to create RIN content
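
Public details of the data model are thin, but to give a flavor of it, here is a minimal sketch of what a narrative manifest might look like. Every type and field name below is a guess for illustration, not the actual RIN schema:

```typescript
// Hypothetical sketch of a RIN-style narrative manifest.
// All names here are illustrative, not the actual RIN data model.

interface Keyframe {
  offsetSeconds: number;           // position on the narrative timeline
  state: Record<string, unknown>;  // provider-specific view state (camera, zoom, ...)
}

interface ExperienceStream {
  id: string;
  providerId: string;              // e.g. "map", "panorama", "deepzoom"
  data: Record<string, unknown>;   // provider-specific payload (tile URLs, sources, ...)
  trajectory: Keyframe[];          // a "generalized trajectory" through the experience
}

interface Narrative {
  title: string;
  duration: number;                // seconds
  streams: ExperienceStream[];     // concurrent audio, video, text, visualization streams
}

const demo: Narrative = {
  title: "Sample narrative",
  duration: 120,
  streams: [
    {
      id: "es1",
      providerId: "deepzoom",
      data: { source: "http://example.com/collection.xml" },
      trajectory: [
        { offsetSeconds: 0, state: { zoom: 1 } },
        { offsetSeconds: 30, state: { zoom: 8 } },
      ],
    },
  ],
};
```

The XML manifestation mentioned above would presumably serialize an equivalent structure.
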

While we have invested in a Silverlight player and plugins, there is nothing in the technology that precludes creating a player and plugins for other platforms, such as HTML5/JavaScript.
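
To make the plugin idea concrete, here is a hedged sketch of what a dynamically-loadable provider contract could look like in a hypothetical HTML5/TypeScript player. The interface is an assumption, not the actual RIN API:

```typescript
// Hypothetical plugin contract for an HTML5 RIN player. A provider
// plugin renders one experience stream and is driven by the player's
// shared narrative clock.

interface ExperienceStreamProvider {
  providerId: string;
  load(container: HTMLElement, data: Record<string, unknown>): Promise<void>;
  seek(offsetSeconds: number): void;  // jump to a point on the trajectory
  play(): void;
  pause(): void;
  dispose(): void;
}

// The player resolves providers by id at load time, so new
// visualization formats can be added without changing the player.
const registry = new Map<string, ExperienceStreamProvider>();

function registerProvider(p: ExperienceStreamProvider): void {
  registry.set(p.providerId, p);
}
```
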
RIN is prototype technology; it has its origins in the “Sri Andal Temple project” from 2008. The temple demo featured an application that led users through an immersive, interactive, narrated walkthrough of Photosynth and HD View stitched imagery of a temple in Tamil Nadu (video). Several core RIN concepts, such as “Experience Streams” and “Generalized Trajectories”, originated in that project.
Keep checking www.digitalnarratives.net for updates. You can find details about the RIN technology at http://research.microsoft.com/en-us/projects/rin/


Microsoft’s Tablet 2.0 Work Behind The Scenes?

[Embedded Silverlight video demo]

Michel talks about the “fat finger problem” and shows us a few demos on Surface and a Tablet PC. Things get really interesting when he whips out a smartphone and uses motion to change the UI on the phone, for example moving the phone up and down to zoom. For those of you who follow this kind of work, you may have seen Bill Buxton show some of it.
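
To give a feel for the motion-to-zoom trick, here is a toy browser sketch (not Microsoft's implementation) that folds vertical acceleration from the standard DeviceMotion API into a zoom factor. All the constants are arbitrary:

```typescript
// Illustrative sketch only: map vertical phone motion to zoom,
// loosely in the spirit of the demo. Thresholds are made up.

let zoom = 1.0;
let velocityY = 0;

window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
  const ay = e.acceleration?.y ?? 0;    // m/s^2, gravity removed
  const dt = (e.interval ?? 16) / 1000; // seconds between samples

  velocityY = velocityY * 0.9 + ay * dt; // integrate with decay to tame drift
  zoom = Math.min(8, Math.max(1, zoom * (1 + velocityY * 0.5)));

  document.body.style.transform = `scale(${zoom})`;
});
```
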

Source: MSR


Microsoft Research’s Street Slide Now Live in Bing Maps For Mobile

Microsoft Research showed off its research project called Street Slide just a few months ago, and it is already implemented, at least in part, in Bing Maps for Mobile. Take a look at the Streetside demo shown at the Bing Search Summit yesterday, and then watch the video below that Microsoft Research released months ago.


ImageFlow: A New Way To Search Images From Microsoft Research

Microsoft has released a paper on a new way to search and browse images on the web. Here is the description from the paper:

Traditional grid and list representations of image search results are the dominant interaction paradigms that users face on a daily basis, yet it is unclear that such paradigms are well-suited for experiences where the user’s task is to browse images for leisure, to discover new information or to seek particular images to represent ideas. We introduce ImageFlow, a novel image search user interface that explores a different alternative to the traditional presentation of image search results. ImageFlow presents image results on a canvas where we map semantic features (e.g., relevance, related queries) to the canvas’s spatial dimensions (e.g., x, y, z) in a way that allows for several levels of engagement, from passively viewing a stream of images, to seamlessly navigating through the semantic space and actively collecting images for sharing and reuse. We have implemented our system as a fully functioning prototype, and we report on promising, preliminary usage results.
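
The paper does not ship layout code, but the core mapping idea is easy to sketch. In the snippet below, the feature-to-axis choices and weights are my assumptions, not values from the paper:

```typescript
// Hypothetical mapping of semantic features to canvas axes, in the
// spirit of ImageFlow: x ~ related-query similarity, y ~ a secondary
// image attribute, z ~ relevance rank.

interface ImageResult {
  url: string;
  relevance: number;       // 0..1, from the search engine
  querySimilarity: number; // -1..1, position along a related-query axis
  attribute: number;       // 0..1, some secondary attribute (size, recency, ...)
}

function toCanvasPosition(r: ImageResult, canvasW: number, canvasH: number) {
  return {
    x: ((r.querySimilarity + 1) / 2) * canvasW,
    y: (1 - r.attribute) * canvasH,
    z: 1 - r.relevance, // more relevant images start closer to the viewer
  };
}
```
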


This is what it does:

We present and implement ImageFlow, a novel image search interface that explores an alternate approach to satisfying core user activities around image search tasks. ImageFlow streams images towards the user in a 3D-like environment and supports both the passive and active exploration of a search result set. A user can type an initial query into a search box and then passively observe images as they flow towards the user. A user can also interact with the system by steering through the flow of images with the mouse. ImageFlow also introduces a new way to explore different semantic and image attributes by mapping them onto its canvas’s spatial dimensions.
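
And a matching sketch of the passive “flow” behavior: each frame, images drift toward the viewer along z while the mouse steers the stream. It reuses toCanvasPosition() from the previous sketch; the constants are again illustrative:

```typescript
// Illustrative animation loop: results flow toward the viewer, and the
// mouse steers the whole stream left/right through the result set.

interface FlowItem { x: number; y: number; z: number; url: string }

const items: FlowItem[] = []; // positions filled from toCanvasPosition() above
let steer = 0;                // -1..1, derived from the mouse x position

window.addEventListener("mousemove", (e) => {
  steer = (e.clientX / window.innerWidth) * 2 - 1;
});

function step(dt: number): void {
  for (const it of items) {
    it.z -= 0.2 * dt;        // drift toward the viewer
    it.x += steer * 50 * dt; // steer the stream with the mouse
    if (it.z <= 0) it.z = 1; // recycle passed images to the back
  }
  requestAnimationFrame(() => step(1 / 60)); // assume ~60 fps
}

step(1 / 60); // start the flow
```
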


More details here.


Aided Eyes: Eye Activity Sensing for Daily Life

Another cool project from Microsoft Research: Aided Eyes, demoed at the Innovation conference in Asia.


Here is the description from the blog post:

Our eyes collect a considerable amount of information when we use them to look at objects. In particular, eye movement allows us to gaze at an object and shows our level of interest in the object. In this research, we propose a method that involves real-time measurement of eye movement for human memory enhancement; the method employs gaze-indexed images captured using a video camera that is attached to the user’s glasses.

We present a prototype system with an infrared-based corneal limbus tracking method. Although the existing eye tracker systems track eye movement with high accuracy, they are not suitable for daily use because the mobility of these systems is incompatible with a high sampling rate. Our prototype has small phototransistors, infrared LEDs, and a video camera, which make it possible to attach the entire system to the glasses. Additionally, the accuracy of this method is compensated by combining image processing methods and contextual information, such as eye direction, for information extraction. We develop an information extraction system with real-time object recognition in the user’s visual attention area by using the prototype of an eye tracker and a head-mounted camera.
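
The paper's actual signal processing is not reproduced here, but the limbus-tracking idea can be illustrated with a toy sketch: the dark iris reflects less infrared than the white sclera, so differential phototransistor readings shift as the eye rotates. The calibration constants below are made up:

```typescript
// Toy sketch of corneal limbus tracking: gaze angles estimated from
// differential IR reflectance seen by phototransistor pairs.
// All scale factors are hypothetical.

interface SensorFrame { left: number; right: number; up: number; down: number }

function estimateGaze(f: SensorFrame): { yaw: number; pitch: number } {
  const h = (f.right - f.left) / ((f.right + f.left) || 1); // -1..1
  const v = (f.up - f.down) / ((f.up + f.down) || 1);
  return { yaw: h * 30, pitch: v * 20 }; // degrees; scale is made up
}
```
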

We apply this system to (1) fast object recognition by using a SURF descriptor that is limited to the gaze area and (2) descriptor matching against a past-images database. Face recognition by using Haar-like object features and text logging by using OCR technology are also implemented. The combination of a low-resolution camera and a high-resolution, wide-angle camera is studied for high daily usability. The possibility of gaze-guided computer vision is discussed in this paper, as is the topic of communication by the phototransistor in the eye tracker and the development of a sensor system that has a high transparency.
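
Finally, a sketch of the gaze-limited recognition idea: crop the head-mounted camera frame to a window around the gaze point before running any expensive descriptor matching, so recognition only touches the attended region. The matcher callback stands in for SURF-style extraction and past-image matching, and the window size is arbitrary:

```typescript
// Sketch of gaze-guided recognition: restrict feature matching to a
// small region of interest around the estimated gaze point.

interface Rect { x: number; y: number; w: number; h: number }

function gazeWindow(
  gx: number, gy: number,          // gaze point in frame coordinates
  frameW: number, frameH: number,
  size = 200                       // arbitrary window size in pixels
): Rect {
  return {
    x: Math.max(0, Math.min(frameW - size, gx - size / 2)),
    y: Math.max(0, Math.min(frameH - size, gy - size / 2)),
    w: size,
    h: size,
  };
}

function recognizeAtGaze(
  frame: ImageData,
  gx: number,
  gy: number,
  match: (roi: Rect) => string | null // placeholder for SURF matching
): string | null {
  // Only the attended region is handed to the (expensive) matcher.
  return match(gazeWindow(gx, gy, frame.width, frame.height));
}
```
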

Source
