Project | Persons | Year | Tags
The Haunted Book | Camille Scherrer (EPFL CVLab) | 2009 | desktop, graphic image recognition, graphic images, animation, multi/single user, small field of view, reactive to object position, linear narrative, user can turn the page and start animation, in front of any computer with webcam, poetry, book, student work, markerless
Diploma project by an ECAL Media & Interaction Design student, carried out with the EPFL CVLab. An artwork that relies on recent Computer Vision and Augmented Reality techniques to animate the illustrations of a poetry book. Because we don't need markers, we can seamlessly integrate real and virtual elements to create the desired atmosphere. The visualization is done on a computer screen to avoid cumbersome head-mounted displays, and the camera is hidden inside a desk lamp to further ease the spectator's immersion.
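The project's markerless tracker is the CVLab's own, but the general idea of recognising a known page in a webcam feed and registering an animation frame onto it can be sketched with off-the-shelf tools. The following is a minimal illustration only, assuming OpenCV with ORB features, a webcam on device index 0, and hypothetical files reference_page.png (a photo of the printed page) and overlay.png (one frame of the animation); it is not the system used in The Haunted Book.

# Minimal sketch of markerless page tracking: detect a reference page in the
# webcam image via feature matching, estimate a homography, and warp an
# overlay image onto it. Filenames and parameters are assumptions.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

page = cv2.imread("reference_page.png", cv2.IMREAD_GRAYSCALE)
overlay = cv2.imread("overlay.png")  # one frame of the animation to composite
kp_page, des_page = orb.detectAndCompute(page, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_frame, des_frame = orb.detectAndCompute(gray, None)
    if des_frame is not None and len(kp_frame) > 10:
        matches = sorted(matcher.match(des_page, des_frame),
                         key=lambda m: m.distance)[:50]
        if len(matches) >= 10:
            src = np.float32([kp_page[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                # Warp the overlay into the camera image and blend it in.
                warped = cv2.warpPerspective(
                    cv2.resize(overlay, (page.shape[1], page.shape[0])),
                    H, (frame.shape[1], frame.shape[0]))
                frame = cv2.addWeighted(frame, 1.0, warped, 0.8, 0)
    cv2.imshow("augmented page", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()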
The Whisper Deck | Craig Kapp | 2009 | head-mounted display, marker recognition, marker, generative animation, single-user, 360 large field of view, reacts to head position, non-linear informational narrative defined by user, any surface, Flickr, Google, search, informational, voice
The Whisper Deck is a voice-controlled augmented reality data visualization tool that immerses users within a fluid information ecosystem of their own design. Using an off-the-shelf Vuzix CamAR head-mounted display, users can look around their local environment. A special symbol visible in the environment causes the 3D interface of the Whisper Deck to appear. Users can then speak commands that make the system search the Internet and return relevant information, including spoken definitions from Wikipedia, images from Picasa, Flickr and Google Images, and search-term comparisons from Google Trends.
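As an illustration of the "speak a term, hear a definition" loop described above, the sketch below wires a microphone to Wikipedia's public REST summary endpoint and reads the result aloud. The speech_recognition and pyttsx3 packages and the endpoint used here are assumptions made for the sketch; they are not the Whisper Deck's actual stack.

# Hedged sketch: listen for a spoken term, fetch a Wikipedia summary, speak it.
import requests
import speech_recognition as sr  # needs a working microphone (PyAudio)
import pyttsx3

def wikipedia_summary(term: str) -> str:
    # Wikipedia REST API: plain-text summary of the best-matching article.
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + requests.utils.quote(term))
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "No summary found.")

def listen_for_term(recognizer: sr.Recognizer) -> str:
    with sr.Microphone() as mic:
        recognizer.adjust_for_ambient_noise(mic, duration=0.5)
        audio = recognizer.listen(mic)
    return recognizer.recognize_google(audio)  # free web recognizer, needs network

if __name__ == "__main__":
    recognizer = sr.Recognizer()
    tts = pyttsx3.init()
    term = listen_for_term(recognizer)
    tts.say(wikipedia_summary(term))
    tts.runAndWait()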
TINMITH | Wayne Piekarski (University of South Australia) | 2006 | head-mounted display, gloves, markers, pinch gloves, videofeed, generative stills, single-user, 360 large field of view, generative non-linear narrative, could be programmed anywhere, Metro, geometry, outdoor
We have written a number of applications that use Tinmith technology to perform outdoor augmented reality tasks. Tinmith-Metro is our main application, demonstrating the capture and creation of 3D geometry outdoors in real time by leveraging the user's physical presence in the world. Tinmith-Metro can also render existing 3D models such as VRML, 3DS, and DXF for visualisation purposes. The software's real strength is its 3D modelling capability, the most advanced of its kind available, which lets the user make changes to the environment.
We The Citizens | Paul Lincoln (Multimedia Art Asia Pacific) | 2004 | handheld display, desktop, shape recognition, markers, animation, multi-user, 360 small field of view, reacts to hand position, linear narrative, space-specific, MAAP, conditioning, installation
A visual allegory for existence in Singapore, this installation thematically revolves around air conditioning, a physical condition noted for its importance in Singapore's great economic development through the conditioning of ambient temperature. "We the Citizens" is about us, Singaporeans, and endeavors to confront the audience with issues of our comfort and the meanings of unity under the comfort of government. The installation utilises Mixed Reality technology.
Wrigleys 5 Gum Music Mixer | Boffswana, Exposure (Wrigley) | 2009 | desktop, shape recognition, markers, generative sound and animation, single-user, 180 small field of view, interactive non-linear narrative, used on any desktop computer with webcam, ARvertising, music, gum
The site targets both the visual and auditory senses: printed markers held up to a desktop webcam drive generative sound and animation.
[syn]aesthetics_09 | Halvor Høgset | 2009 | handheld display, shape recognition, markers, videofeed, animation and sound, multi-user, 360 small field of view, reacts to hand position, linear narrative, anywhere indoors, long setup, XGA, Galleri ROM
The installation uses the whole exhibition space and consists of three handheld monitors (one cordless, two cabled) equipped with progressive XGA video cameras, headphones, and buttons for interaction. Through these monitors, users explore the space, which is augmented with digital structures and spatialized sounds, and interact with them in real time.