Anne Deane Berman, PhD
& Friends

Ashes to Ashes was intended for performance in a six-sided CAVE located at Iowa State University's Virtual Reality Applications Center (led by Principal Investigator Carolina Cruz-Neira), in which images appear on the floor and ceiling as well as the walls, creating a fully immersive experience.

The first performance of the prototype for Ashes to Ashes was held at the 2002 International Supercomputing Conference, with a subsequent installation at the 2004 Mundo Internet Conference in Madrid, Spain. A portable virtual-reality system consisting of four self-contained display modules was used at the conferences. With four modules, the system can be configured as a CAVE-like device with four walls or as a single 32-foot-long display.

DIGIVATIONS co-founders Steven Lee Berman and Anne Deane Berman, with Dawn Haines, interviewed first responders and survivors in February 2002. This source material collected in New York City, the witness stories, was transformed into music, a kind of animated poetry that provided the content for the installation. Larry Tuch, Head Writer and Creative Consultant for the Paramount Simulation Group, collaborated with me on the narrative design drawn from the source interviews.


A participant can choose to take the fastest, most linear route through the narrative, experiencing the basic, fundamental story being told about 9/11. Or, at any particular scene, they can choose to hear more stories from other witnesses describing their experience of that moment in the larger story.


Additionally, in making the piece, we wanted to give viewers control over how many stories they heard in certain acts, so that they would not feel emotionally trapped by material that is so intense in the immersive context. The interaction time required to move from story to story gives the viewer time to reflect, offering some distance from the material. Alternately, some stories are mandatory, giving the piece its dramatic shape.


The interactive narrative design of Ashes to Ashes can be compared to a tree, its structural trunk represented by the basic 9/11 story as told by "Billy." Billy was the sole survivor of his company of firemen from Engine 6, who responded to the first tower after it was hit by the plane. A participant could experience the piece hearing only from Billy.


This would give the basic dramatic story, which by itself is astonishing. However, the interactivity of the work enables participants to make additional choices within each Act, allowing them, if they choose, to listen to several of the other witnesses with varying points of view.


These secondary stories act as the branches of the tree: horizontal stories that give a richer picture of the scene, since multiple witnesses offer different perspectives on the same event.


For instance, after hearing Billy describe his experience walking up the stairs of the first tower, reaching the 36th floor before the second tower collapsed and he was ordered to evacuate, the participant could choose to hear from some of the survivors, the evacuees who met the firemen on the stairs as they made their way down.


Since all the stories are told by the actual witnesses in the first person, the viewer is able to become "Billy" and then to imagine what it might have been like to be one of the survivors as they evacuated.


These secondary stories are by no means less significant than Billy's story and, in fact, are sometimes more poignant in their dramatic impact. The people coming down the stairs speak about the grim faces of the exhausted firemen as they trudged up the stairs carrying hundreds of pounds of fire hoses and equipment on their backs. The survivors also speak about how they allowed the burn victims from higher floors to go past them on the stairs, unable to tell the race of the victims, only seeing their eyes.


These perspectives enlarge Billy's account, just as a tree's branches give the tree its distinction. In creating spaces filled with sonic events, with many different narrative paths, participants decide how the piece will be told, actuating its unfolding. In other words, each viewer experiences their own unique version of the work, depending on how many, and which, branches of the narrative they choose.
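The tree-shaped narrative described here can be sketched as a simple data structure: trunk scenes that are always heard, with optional branch stories attached to each. This is an illustrative reconstruction, not the installation's actual code; the scene names and traversal logic are assumptions.

```python
# Illustrative sketch of the tree-shaped narrative: trunk scenes are
# always heard; branch scenes are optional witness stories. Scene names
# are invented for illustration, not the installation's identifiers.

class Scene:
    def __init__(self, name, mandatory=False):
        self.name = name
        self.mandatory = mandatory  # mandatory scenes shape the drama
        self.branches = []          # optional witness stories at this moment

def walk(trunk, choices):
    """Yield scene names for one traversal of the piece.

    `choices` maps a trunk scene name to the branch indices the
    participant elects to hear before moving on.
    """
    for scene in trunk:
        yield scene.name            # trunk scenes are always heard
        for i in choices.get(scene.name, []):
            yield scene.branches[i].name

# Trunk: Billy's account; branches: other witnesses at the same moment.
act1 = Scene("billy_ascends_stairs", mandatory=True)
act1.branches = [Scene("evacuee_on_stairs"), Scene("burn_victim_witness")]
act2 = Scene("billy_ordered_to_evacuate", mandatory=True)
trunk = [act1, act2]

# Fastest, linear route: no branches chosen.
linear = list(walk(trunk, {}))
# Richer route: hear both stairwell witnesses before moving on.
rich = list(walk(trunk, {"billy_ascends_stairs": [0, 1]}))
```

Because the trunk is fixed and only the branch choices vary, every traversal preserves the piece's dramatic shape while producing a different telling.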


Once the stories and their ordering were identified, I experimented with sound-processing methods to build new "instruments." I combined these instruments to create musical excerpts that enhance the emotional aspects of the stories. Coloring the bare voices in an interplay of sound images encourages the listener to focus on the poignant aspects of the stories.


The piece can be implemented using two kinds of visual display devices: stereo glasses and a head-mounted display. The person who drives the piece wears a wireless head-mounted tracker. This magnetic tracker is used to monitor the driver's position and orientation.


This maintains the driver's correct perspective, which is used to calculate a true stereoscopic view as the driver moves into and around objects that appear within the virtual space. The system also tracks the location of various input devices, such as wands and gloves.
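The core of head-tracked stereo is that the two eye positions are derived each frame from the tracker's reported head pose. The sketch below shows a minimal version of that derivation; the coordinate convention (viewer faces -Z at zero yaw) and the 6.3 cm interpupillary distance are illustrative assumptions, not details of the actual system.

```python
# Hedged sketch: derive left/right eye positions from a tracked head
# pose by offsetting along the head's "right" axis. Conventions and the
# default IPD are assumptions for illustration.

import math

def eye_positions(head_pos, yaw_deg, ipd=0.063):
    """Return (left_eye, right_eye) world positions from a tracked pose.

    head_pos : (x, y, z) position reported by the tracker, in metres
    yaw_deg  : head orientation about the vertical axis, in degrees
    ipd      : interpupillary distance, in metres
    """
    yaw = math.radians(yaw_deg)
    # With the viewer facing -Z at yaw 0, the head's right axis is +X
    # rotated about the vertical (Y) axis.
    rx, rz = math.cos(yaw), -math.sin(yaw)
    hx, hy, hz = head_pos
    half = ipd / 2.0
    left = (hx - half * rx, hy, hz - half * rz)
    right = (hx + half * rx, hy, hz + half * rz)
    return left, right

# A driver standing at the origin, head 1.7 m up, facing straight ahead:
left, right = eye_positions((0.0, 1.7, 0.0), 0.0)
```

Rendering the scene once from each eye position is what produces the "true stereoscopic view" as the driver moves around virtual objects.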


The first prototype used a hand-held wand that allowed the driver to choose among stories by selecting from different spheres floating above a virtual platform. Onlookers witness the piece using lightweight stereo glasses.
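Selecting a floating sphere with a tracked wand is typically done by casting a ray from the wand and intersecting it with each sphere. The sketch below is standard ray-sphere intersection, offered as an illustration of the technique rather than the prototype's actual code; the sphere layout is invented, and the ray direction is assumed to be unit length.

```python
# Minimal sketch of wand-based selection: cast a ray from the tracked
# wand and test it against each floating story sphere; nearest hit wins.

def pick_sphere(ray_origin, ray_dir, spheres):
    """Return the index of the nearest sphere hit by the ray, or None.

    spheres: list of ((cx, cy, cz), radius) story spheres.
    ray_dir must be normalised.
    """
    best_i, best_t = None, float("inf")
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    for i, ((cx, cy, cz), r) in enumerate(spheres):
        lx, ly, lz = cx - ox, cy - oy, cz - oz
        t = lx * dx + ly * dy + lz * dz   # projection of centre onto the ray
        if t < 0:
            continue                      # sphere is behind the wand
        d2 = (lx * lx + ly * ly + lz * lz) - t * t  # squared distance to ray
        if d2 <= r * r and t < best_t:
            best_i, best_t = i, t
    return best_i

# Two story spheres straight ahead of the wand; the nearer one is picked.
spheres = [((0.0, 0.0, 5.0), 1.0), ((0.0, 0.0, 10.0), 1.0)]
picked = pick_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), spheres)
```

Taking the nearest hit rather than the first hit found makes the selection unambiguous when spheres overlap along the wand's line of sight.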

-excerpt from Immersive Sonic Environments, ICMC 2005, Anne Deane Berman

This work explores how emerging technologies can be employed to serve expressive and social goals. We focus on the rich opportunities for synthesis across disciplines within music, film, games, theater, media installation, architecture, and systems engineering.

Ashes to Ashes/911 Memorial Virtual Reality Exhibit, Virtual Reality Applications Centre, Iowa State University

Beloved Mnemosyne - Immersive Space about Alzheimer's

For my first experimental immersive space, Beloved Mnemosyne, I collaborated with UCLA’s Film and Television Department’s Hypermedia Studio (Jeff Burke) and a visual artist in Santa Barbara (Bill McVicar).


The prototype combines wireless participant tracking technology, embedded wireless sensors, and computer-controlled production equipment in the creation of an interactive piece about memory in which lighting and sonic events are triggered as participants walk through the space and manipulate the objects they find there.


Centered on interviews with nine brothers and sisters whose birth dates span three important decades of American history (the 1950s, '60s, and '70s), the piece uncovers their relationships with their now-deceased parents.


Their stories help the youngest child uncover the elusive memory of the father: for the last years of his life, the father lost his memory to Alzheimer's disease, so the youngest grew up with a father who had no memory. Sonic material taken from interviews with the siblings is used to create 15- to 30-second sonic tone poems that can be arranged over time in multiple ways, so that the more a person interacts with an object, the deeper into the story they go, with very little repetition.
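One simple way to arrange short tone poems "with very little repetition" is to keep a per-object shuffle queue and recycle it only after every fragment has played once. The sketch below illustrates that scheduling idea; the fragment names and the queue strategy are assumptions, not the installation's actual logic.

```python
# Sketch: sequence short tone poems so repeated interaction digs deeper
# with no repeats until every fragment has been heard once.

import random

class TonePoemPlayer:
    def __init__(self, fragments, seed=None):
        self.fragments = list(fragments)
        self.rng = random.Random(seed)
        self.queue = []                 # fragments not yet played this pass

    def next(self):
        """Return the next fragment; reshuffle only when all have played."""
        if not self.queue:
            self.queue = self.fragments[:]
            self.rng.shuffle(self.queue)
        return self.queue.pop()

# Hypothetical fragments drawn from the sibling interviews:
player = TonePoemPlayer(["father_1950s", "father_1960s", "father_1970s"], seed=1)
heard = [player.next() for _ in range(3)]   # first pass: no repeats
```

This "shuffle bag" pattern guarantees that a visitor who keeps interacting hears every fragment before any one of them returns.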


The participant wore a wireless microphone that picked up ultrasonic frequencies from hidden speakers at each of the four corners of the space, allowing the system to track their position. Other sensors were mounted on objects so that when the participant touched them, different sound and lighting events would unfold.


Wireless headphones gave the user an intimate experience of the music, as if it were happening inside their head. Headphones also enabled several individuals to experience the installation at once. The technology enabled an unprecedented level of flexible, rapid control over the production environment, all of it transparent to the user.


Common, everyday objects displayed within an enclosed space inspired the sculptural elements of the installation. Relying on the viewer's implied relationship with them to elicit interaction (e.g., a water fountain, a chair with a photograph album), the objects allowed the viewer to become an active participant in the unfolding of the piece.


In addition to touch sensitivity, proximity to objects could trigger different sound and lighting events. As participants walked through the space and moved toward each object, the stories about the objects unfolded. The more a participant interacted, the more information was revealed. If an object was touched or picked up, new material offered a transitional moment in the story, a kind of climax or moment of closure.
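The layered triggering described above, where proximity starts an object's ambient story and touch reveals deeper material, can be sketched as a small per-object state machine. The trigger radius, event names, and depth counter below are illustrative assumptions, not the installation's actual control logic.

```python
# Hedged sketch of proximity/touch triggering: nearness fires an ambient
# story cue; touching the object advances to deeper, transitional material.

class InteractiveObject:
    def __init__(self, name, position, trigger_radius=1.5):
        self.name = name
        self.position = position            # (x, y) floor position, metres
        self.trigger_radius = trigger_radius
        self.depth = 0                      # how far into the story we are

    def update(self, participant_pos, touched=False):
        """Return the event to fire this frame, or None."""
        dx = participant_pos[0] - self.position[0]
        dy = participant_pos[1] - self.position[1]
        near = (dx * dx + dy * dy) ** 0.5 <= self.trigger_radius
        if touched:
            self.depth += 1                 # touch reveals deeper material
            return ("climax_cue", self.name, self.depth)
        if near:
            return ("ambient_story", self.name, self.depth)
        return None

fountain = InteractiveObject("water_fountain", (0.0, 0.0))
```

Running `update` every frame with the tracked position means a participant who merely approaches the fountain hears its ambient story, while touching it advances the narrative depth.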


Lighting events were triggered in different ways as participants moved through the space.  Lights hanging from above or projecting out from within the object created an immersive effect. Lighting events helped guide the participant through the space and define various “memory rooms.” When a participant came within a certain proximity to an object, a light followed their movements. 


When the participant touched the object, the light cues changed accordingly to create an intimate atmosphere around it. The result was a lovely, intimate, engrossing setting for the soundscape.


The installation premiered at the Ex'pression Center for New Media in the Bay Area, September 8–10, 2000, as a feature of the MB5 2000 Conference. The premiere was received with such an enthusiastic response that the piece was then invited to DIGIVATIONS, a global technology and content conference presented by the University of California, and to the Art & Technology Network conference at Silicon Valley's Tech Museum, presented by GroundZero and New York's The Kitchen. In 2002, the prototype was remounted at the International Computer Music Conference at Sweden's Göteborgs Konstmuseum.


The technology was a seamless part of the piece, acting as a metaphor for memory. As users moved through the space, they could sometimes retrieve a certain memory by revisiting a certain spot, and sometimes not. Many respondents reported that their intermittent success in retrieving certain memories reminded them of what the experience of having Alzheimer's might be like.



-excerpt from Immersive Sonic Environments, ICMC 2005, Anne Deane Berman

Positive Thinking - Interactive Performance on AIDS