Results 1 - 15 of 15
Project Persons Year Tags
ARhrrrr Augmented Environments Lab (GVU) 2009 handheld phone, graphic image recognition, image, map, generative animation, sound, single-user, 360 choice of perspective, small field of view, reactive to hand position, physical objects, first-person shooter game, conscious choice about narrative, physical objects trigger events, any surface, anywhere, quick setup, skittles, game, zombies
ARhrrrr is an augmented reality shooter for mobile camera-phones. The phone provides a window into a 3D town overrun with zombies. Point the camera at the special game map to mix virtual and real world content. Civilians are trapped in the town, and must escape before the zombies eat them! From your vantage point in a helicopter overhead, you must shoot the zombies to clear the path for the civilians to get out. You can use Skittles as tangible inputs to the game, placing one on the board and shooting it to trigger an explosion.
Bamzooki Jason Garbett, Rupert Harris (BBC) 2010 television, 'Free-D' marker recognition, video, marker, generative animation, sound, multi-user, small field of view, game show, linear narrative, on any television, television, game, show, entertainment
Children's entertainment show in which teams of kids design creatures called zooks which are put to the test against each other. The BBC’s Virtual Studio technology was used to enable real-time compositing of the 3D rendered graphics with live camera feeds. Each studio camera has a dedicated render PC to render the virtual scene from that camera’s perspective. To know what a studio camera’s perspective is, each camera is fitted with a second ‘Free-D’ camera which points towards the ceiling. On the ceiling are reflective, circular bar codes. The Free-D camera data is fed to a computer system that identifies the targets on the ceiling and calculates that camera’s position and orientation 50 times a second.
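The pose-recovery step described above can be illustrated with a simplified sketch: given the known ceiling positions of a few markers and the same markers' positions measured in the camera's coordinate frame, a rigid transform (rotation plus translation) can be fitted with the Kabsch algorithm. This is an analogy, not the Free-D implementation — the real system solves pose from 2D barcode detections in a single upward-facing image, and every marker coordinate and pose value below is made up for illustration.

```python
import numpy as np

def estimate_pose(world_pts, cam_pts):
    """Fit R, t such that cam_pts ~= R @ world_pts + t (Kabsch algorithm).
    Stands in for the pose solver that turns marker observations into a
    camera position and orientation."""
    cw = world_pts.mean(axis=0)           # centroid of ceiling markers (world)
    cc = cam_pts.mean(axis=0)             # centroid as seen by the camera
    H = (world_pts - cw).T @ (cam_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cc - R @ cw
    return R, t

# Hypothetical ceiling-marker positions (metres), slightly non-coplanar
world = np.array([[0, 0, 3.0], [1, 0, 3.1], [0, 1, 2.9],
                  [1, 1, 3.0], [0.5, 0.5, 3.2]])
theta = np.deg2rad(30)                    # made-up ground-truth pose
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 0.1])
cam = (R_true @ world.T).T + t_true       # synthetic camera-frame measurements

R_est, t_est = estimate_pose(world, cam)  # recovers R_true, t_true
```

Run at 50 Hz over live detections, an estimator of this kind yields the per-frame camera pose that the render PCs need.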
Displacements Michael Naimark 2005 spatial projection, filmed video-mapping, white room, people, video, multi/single user, small field of view, linear narrative, specific for that white room, projection mapping, installation
Displacements is an immersive film installation. An archetypal Americana living room was installed in an exhibition space, and two performers were filmed in it using a 16mm motion picture camera on a slowly rotating turntable in the room’s center. After filming, the camera was replaced with a film loop projector and the entire contents of the room were spray-painted white, turning the room into a projection screen of exactly the right shape for projecting everything back onto itself. As a result everything appears strikingly 3D, except for the people, who of course weren’t spray-painted white and consequently appear ghostlike and unreal.
EyePet Nicolas Doucet (PlayStation) 2009 tv screen, motion detection, user, generative animation, multi-user, flat small field of view, non-linear interactive narrative, any tv with playstation, playstation3, game, kids
The game uses the PlayStation Eye camera to allow a virtual pet to interact with people and objects in the real world. Using augmented reality, the simian, gremlin-like creature appears to be aware of its environment and surroundings and reacts to them accordingly. The player can place objects in front of the animal and the game will interpret what the object is and respond to it.
PTAM (Robotics Research Group, University of Oxford) 2007 desktop, environment tracking and mapping, videofeed, animation or still, single user, 360 small field of view, no narrative, anywhere, ISMAR, research project
Video results for an augmented reality tracking system: a computer tracks a camera and builds a map of the environment in real time, which can then be used to overlay virtual graphics. Presented at the ISMAR 2007 conference.
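Once a tracker of this kind has recovered the camera pose for a frame, overlaying a virtual graphic reduces to projecting its 3D points through a pinhole model, x ~ K[R|t]X. A minimal sketch of that final overlay step — the intrinsics and pose values are invented for illustration, not taken from PTAM:

```python
import numpy as np

def project(points_w, R, t, K):
    """Project world points into the image with a pinhole camera model."""
    pc = (R @ points_w.T).T + t           # world frame -> camera frame
    uv = (K @ pc.T).T                     # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]         # perspective divide

# Made-up intrinsics: 800 px focal length, principal point at (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                             # identity pose for the example
t = np.array([0.0, 0.0, 0.0])
pt = np.array([[0.0, 0.0, 2.0]])          # a point 2 m straight ahead

pixels = project(pt, R, t, K)             # projects to the principal point
```

In a live system the same projection runs every frame with the tracker's freshly estimated R and t, so the overlay stays pinned to the real scene.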
Rockpaperscissors t-shirt (T-Post, Moment 77) 2009 desktop, graphic image recognition, marker logo, generative animation, single-user, small field of view, reactive to object position, non-linear game, in front of any computer with webcam, t-shirt, fashion
Once you have your shirt on, stand in front of your web camera and play a game of Rock, Paper, Scissors against a computer-generated arm that extends from the shirt itself.
ScavengAR (Porter Novelli, Metaio) 2010 smart phone, gprs geo tag, compass, graphic image recognition, graphic markers, animation, multi-user, walk through the city, multi-player hunt game, points can be earned, pictures and texts can be placed, space specific in the city of Austin, game, quest, twitter
Players open up the junaio application, available on the iPhone App Store, and tune into the ScavengAR channel. The player's camera then shows a live view, displaying a range of pre-set geo/AR tags created for the game within about 30 feet. Tags first appear as question marks; when close enough, players tap the question-mark geo/AR marker to collect the clue and score points.
Sgraffito in 3D Joachim Rotteveel (AR+RFID LAB) 2008 desktop, graphic image recognition, marker logos, still, multi-user, small field of view, reactive to marker position, still, in front of any computer with camera, museum, informative, educational, history
From October 25, 2008 until January 4, 2009 the Museum Boijmans van Beuningen in Rotterdam exhibited its wonderful collection of sgraffito objects from the period 1450-1550. Sgraffito is an ancient decorative technique in which patterns are scratched into the wet clay. The Dutch plates, bowls and cooking pots are part of the Van Beuningen-De Vriese collection. The artist Joachim Rotteveel has made this archaeological collection accessible in a spectacular way using 3D reconstruction techniques from the worlds of medicine and industry, including an AR application provided by the AR+RFID Lab.
The Haunted Book Camille Scherrer (EPFL CVLab) 2009 desktop, graphic image recognition, graphic images, animation, multi/single user, small field of view, reactive to object position, linear narrative, user can turn the page and start animation, in front of any computer with webcam, poetry, book, student work, markerless
Diploma project by an ECAL Media & Interaction Design student with the EPFL CVLab. An artwork that relies on recent computer vision and augmented reality techniques to animate the illustrations of a poetry book. Because no markers are needed, real and virtual elements integrate seamlessly to create the desired atmosphere. The visualization is done on a computer screen to avoid cumbersome head-mounted displays, and the camera is hidden in a desk lamp to further ease the spectator's immersion.
Triceratops Georg Klein (University of Oxford) 2009 handheld screen, environment mapping, video, still image, semi multi-user due to large screen, 360 choice of perspective, small field of view, reactive to hand position, still, informational about the physical object, after mapping, space-specific, natural museum oxford, museum, education, informational
University of Oxford Natural History Museum Augmented Reality Tour. A map is made around a triceratops skull in the museum and an AR model is added. This work extends Georg Klein's Parallel Tracking and Mapping system to allow it to use multiple independent cameras and multiple maps. This allows maps of multiple workspaces to be made and individual augmented reality applications associated with each. As the user explores the world the system is able to automatically relocalize into previously mapped areas.
U-Raging Standstill CREW 2007 head-mounted display and headphones, camera, performers, pre recorded and live videofeed and sound, videofeed and sound, single-user, 360 large field of view, linear narrative chosen by the performers, could be set up anywhere, long setup, theatre, choreography, video, performance
The visitor of 'U' becomes the protagonist of his own story: a story in which he partly loses himself and eventually, almost physically, finds himself again.
Unmakeablelove Sarah Kenderdine, Jeffrey Shaw (Museum Victoria, UNSW iCinema Centre, EPIDEMIC) 2008 round screen, infrared light, the viewers, filmed and virtual imagery, multi-user, 180 large field of view, no narrative, anywhere, long setup, torch, flash-light
To explicitly articulate the conjunction between the real and virtual spaces in this work, the viewer’s virtual torch beams penetrate through the container and illuminate other viewers who are standing opposite them on other sides of the installation. This augmented reality is achieved using infra-red cameras that are positioned on each screen pointing at its respective torch operators, and the video images are rendered in real time onto each viewer’s screen so as to create the semblance of illuminating the persons opposite them. The resulting ambiguity experienced between the actual and rendered reality of the viewers’ presences in this installation reinforces the perceptual and psychological tensions between ‘self’ and ‘other’.
[syn]aesthetics_09 Halvor Høgset 2009 handheld display, shape recognition, markers, videofeed, animation and sound, multi-user, 360 small field of view, reacts to hand position, linear narrative, anywhere indoors, long setup, XGA, Galleri ROM
The installation uses the whole exhibition space, and consists of 3 handheld monitors (1 cordless and 2 with cable) equipped with progressive XGA video cameras, headphones and buttons for interaction. Through these monitors the users explore the space, augmented with digital structures and spatialized sounds, and interact with them in a real-time experience.
?
… augmented reality solutions. Together with a camera and display screen, the software lets LEGO packaging reveal its contents fully assembled within live 3D animated scenes.
?
The user interacts with the system by creating, modifying, and presenting sketches to the camera.