FutureMedia Fest: What Does Camouflage Sound Like?

Last week, I found it difficult to relate to the other attendees of Georgia Tech’s FutureMedia Fest 2010; in fact, I failed to connect with them rather spectacularly, on many levels. Moving back and forth between academia and a considerably more commercial environment is jarring. I failed to share the speakers’ interest in markets, business models, and monetization. I didn’t have an iPad, BlackBerry, laptop, Twitter feed, or cell phone–or even a wristwatch (they’re just a fad, I keep telling myself). And even if I had been plugged into the Intertoobies like everyone else around me, I wouldn’t have been able to communicate with them. They seemed to be speaking a sort of pidgin English that uses many of the words I am familiar with, but recombines them in ways that make the meaning go away. They coined words that shouldn’t be, like “megatrend”; they piled on unnecessary verbiage, as in “we provide download solutions” (no, you provide downloads); they abbreviated phrases that nobody in the history of language has ever thought needed abbreviating (“BC” allows the busy exec to downsize the superfluous letters in “business community”).

The last straw came when a VP for Coke told me (and a hundred others), without irony, “Coke stands for authenticity.” I sipped a Pepsi as he spoke.

For most of the conference, then, I felt slightly out of step with the rest of the crowd. At least until Thursday, when Georgia Tech’s researchers opened their labs to conference-goers. Almost two full floors of labs and workspace were available to tour. I spent a good deal of time talking to people in the Sonification Lab, an interdisciplinary lab that draws on the insights of both computer science and psychology. Many of the projects I saw in this lab translate the visual world into meaningful, navigable audioscapes. Take, for instance, the SWAN Project, which Ph.D. student Jeff Lindsay showed me. He gave me headphones and a videogame controller (and I apologize in advance to the researcher if I mangle my description). When you put on the headphones, you are listening to an audio space. You hear a guitar string being plucked, and when you turn your head, the “location” of the plucking string moves as well, as if you were listening to surround sound. As you “walk” toward the object using the controller, the plucking speeds up. Then you hear another object in the distance, turn your head to “face” it, and off you go again. Those sounds, he explained, can be used to represent objects or obstacles, doorways or fire hydrants, and can help a visually impaired user navigate a sidewalk. Jeff (I think it was Jeff) explained that the system, as it stood, would probably not replace a cane, but that it offered the user an enhanced environment.
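For the technically curious, here is a minimal sketch of the kind of distance-to-tempo and head-direction mapping I understood the demo to be doing. Everything here, from the function names to the linear interpolation, is my own guess at the idea, not SWAN’s actual code:

```python
import math

def pluck_interval(listener_pos, beacon_pos,
                   min_interval=0.15, max_interval=1.5, max_dist=20.0):
    """Seconds between plucks: the closer the listener gets to the
    audio beacon, the faster the plucking. All ranges are invented
    for illustration."""
    dist = min(math.dist(listener_pos, beacon_pos), max_dist)
    # Linear map: dist == 0 -> fastest plucking, dist == max_dist -> slowest.
    return min_interval + (max_interval - min_interval) * (dist / max_dist)

def beacon_azimuth(listener_pos, listener_heading, beacon_pos):
    """Angle of the beacon relative to where the listener is facing;
    a spatial-audio engine would use this to place the pluck in the
    headphones' soundstage."""
    dx = beacon_pos[0] - listener_pos[0]
    dy = beacon_pos[1] - listener_pos[1]
    return math.atan2(dy, dx) - listener_heading

# Ten steps toward a beacon at (0, 10): the pluck interval shrinks.
for y in range(10):
    print(round(pluck_interval((0, y), (0, 10)), 2))
```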

Another project in that lab is the Bonephones Project. This one was not a little surprising. Basically, these headphones conduct sound straight into your skull, not into your ears. The headphones that they gave me made contact near the ear but not in it, allowing me to simultaneously hear music coming through my skull and carry on a conversation. They pointed out that you could combine SWAN and Bonephones to offer a highly enhanced audio environment for the visually impaired.

At some point during my tour of the Sonification Lab, somebody mentioned that I should go see the fish. At first I thought it was some sort of code, but when I saw a brightly lit aquarium through a window in a lab, I took it to be the demonstration they were talking about. The Accessible Aquarium Project is an attempt to allow the visually impaired (that’s “V.I.” to the busy execs) to experience dynamic, complex visual motion. To achieve this, you need:

  1. four brightly colored fish
  2. high-speed video cameras
  3. a computer loaded with tons of custom software
  4. speakers

The high-speed video cameras film the fish from (at least) two different angles; the computer identifies the type of fish, tracks their speed (and other components of motion), and translates each fish’s movements into music. On the screens, little vectors appear in real time over the image of the fish, plotting their motion, and I assume those vectors are what get translated into music. Hard to imagine? Check out this .mov and track the little Nemo-looking clownfish. When he appears in the shot, you will hear a new tune begin; when he speeds up, you’ll hear that as well; and when he disappears, his song vanishes. It’s sort of like Peter and the Wolf in the Matrix.
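If you want a feel for the mapping, here is a toy version of the idea as I understood it. The instrument assignments, thresholds, and MIDI-style note values are all my own inventions, not the project’s actual software:

```python
# A toy motion-to-music mapping: each tracked fish gets its own
# instrument, its horizontal position picks the pitch, and its speed
# sets the note rate. All numbers here are invented for illustration.

def note_events(fish_id, positions, frame_rate=30.0):
    """positions: list of (x, y) pixel coordinates, one per video frame.
    Yields (instrument, pitch, notes_per_second) events."""
    instrument = ["flute", "marimba", "cello", "bells"][fish_id % 4]
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * frame_rate
        pitch = 48 + int(x1 / 640 * 24)        # left-to-right -> low-to-high
        rate = min(8.0, 0.5 + speed / 50.0)    # faster fish -> denser notes
        yield instrument, pitch, rate

# A fish that is off camera produces no positions and therefore no
# notes -- which is presumably why camouflage sounds like silence.
for event in note_events(0, [(100, 200), (110, 205), (140, 210)]):
    print(event)
```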

Oh, and to answer the question posed in the title, the sound of camouflage is silence.

The most gratifying aspect of the demonstrations was the chance to sit down and talk at length with the researchers. The best conversation I had was with Claudia Rebola, a designer and researcher who was demonstrating a small, user-friendly multimodal platform designed for older adults, one that seemed positively bulging with educational potential. What was most interesting about our conversation, however, and what left me wondering why Dr. Rebola was not giving talks to the Brittain Fellows, was that she had done her dissertation on non-verbal communication in electronic environments. I gave her the names of a few contacts in the Writing and Communication Program, and we exchanged business cards. We will, I assume, be in touch.

Making personal connections with enthusiastic researchers was, for me, by far the most important and fruitful part of the conference.
