CityGram – interactive map for our ears

24 April 2015

The Citygram Sound Project is a collaboration between NYU Steinhardt, NYU CUSP, and CalArts. Citygram is a large-scale project that began in 2011. It aims to deliver a real-time visualization/mapping system focusing on non-ocular energies through scale-accurate, non-intrusive, and data-driven interactive digital maps. The first iteration, Citygram One, explores spatio-acoustic energies to reveal meaningful information including spatial loudness, traffic patterns, noise pollution, and emotion/mood through audio signal processing and machine learning techniques. Citygram aims to create a model for visualizing and applying computational techniques to produce metrics and further our knowledge of cities such as NYC and LA; the city of Prague has recently been included as well.

The CityGram radio program has been produced in collaboration with New York University Manhattan, New York University Prague, and Czech Radio.
citygram.smusic.nyu.edu

Tae Hong Park: Sinescapes
Sinescapes is a composition that focuses on environmental sound as a metaphor to
embrace the idea of “earth time” as opposed to human-centric time. One could say that the
earth is “very old” and that things change very slowly compared to what happens during
the course of a human life. While the overarching trajectory of the earth’s biological clock
extends over long periods of time, subtle, small changes occur continuously,
yet are almost imperceptible to our senses. I try to capture some of these ideas in Sinescapes.
The sounds themselves are based on single sinusoids whose frequency, dynamics, and
energy changes are controlled at the micro and macro level via data captured from Citygram
sensors.
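The sensor-to-sinusoid mapping could be sketched as follows. This is a minimal illustration only: the normalized sensor value, base frequency, and modulation depth are assumptions for the example, not values from the piece.

```python
import math

def sine_sample(t, sensor_value, base_freq=220.0, depth=40.0):
    """One sample of a single sinusoid at time t (seconds).

    `sensor_value` is assumed to be a normalized (0..1) reading from a
    Citygram sensor; here it nudges both frequency (micro-level change)
    and amplitude (dynamics). base_freq and depth are illustrative.
    """
    freq = base_freq + depth * sensor_value   # frequency follows the sensor
    amp = 0.2 + 0.8 * sensor_value            # dynamics follow the sensor
    return amp * math.sin(2 * math.pi * freq * t)
```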

Evan Kent: Anthrophony I
I have been interested in speech and soundscapes as separate entities for
a while now, and through Anthrophony I, I explore speech as perhaps the
purest extraction from the human soundscape. The texts, which
thematically deal with sound, are driven by feature data from the recorded
soundscapes of both New York and Prague, expressing the speech-scape
quite literally. Navigating the space between documentary and composition
has been important for me lately, especially with recorded media. The
speech-scapes allowed me to work with language in a purely syntactic way,
as one might hear in instrumental or acousmatic music, but also brought
to my attention a sort of semantic Hauptstimme in the texture, in which the
meaning of the words, however abstracted, became relevant again. I would
like to thank recordist Darien Henshaw and performer Sarah Segner for
their contributions to the piece.

Andrew Telichan: Mikenko
Mikenko uses a system developed in the Max environment to detect and deconstruct
human-voice fragments from an ongoing soundscape recording, playing these
fragments back in random order and at varying speeds. For this particular piece,
the recorded soundscape comprises a series of recordings taken in Times Square,
New York. The more speech the system detects, the more individual syllables and
phonemes it catalogues and plays back, thereby creating feedback loops that
continuously intensify and diminish over time. During this process, the speech
fragments are sent through various levels of spectral and granular re-synthesis,
creating a diverse – and constantly (de)evolving – output, while envelope
followers read time and amplitude information from the vocal fragments, which is
then used to control parameters of external synthesis engines. Additional
parameters – including grain size, granular playback speed, and panning – are
controlled by real-time data streams related to the soundscape recording itself,
gathered and pushed via NYU’s CityGram network. The overall result is a mix of
textural sonorities composed of voices and synthetic sounds that grow and
develop over time.
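The randomized playback and envelope-following stages described above can be sketched outside of Max. This is a hedged illustration of the technique, not the actual patch: the function names, the 0.5x–2.0x speed range, and the one-pole follower coefficients are all assumptions for the example.

```python
import random

def schedule_fragments(catalogue, seed=None):
    """Return (fragment, speed) pairs: the catalogued syllables/phonemes
    in random order, each with a playback speed between 0.5x and 2.0x.
    The speed range is illustrative, not taken from the Max patch."""
    rng = random.Random(seed)
    order = list(catalogue)
    rng.shuffle(order)
    return [(frag, round(rng.uniform(0.5, 2.0), 2)) for frag in order]

def envelope_follower(samples, attack=0.5, release=0.05):
    """One-pole amplitude follower: rises quickly on louder input and
    decays slowly - the kind of time/amplitude data that could drive an
    external synthesis engine."""
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coef = attack if x > env else release
        env += coef * (x - env)
        out.append(env)
    return out
```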

Michael Music: Aggressive City Rhythms
Aggressive City Rhythms is a generative composition whose source materials, as
well as algorithmic control processes, are derived from synchronized urban
sound data collected in Prague and New York City. In the presentational
format that will be heard, the ten locations are randomly selected
every ten to thirty seconds. The sonic qualities that define these locations are
streamed from a database and mapped to the composition’s rhythmic generation,
dynamics, complexity, length, and so on. The rhythmic bursts that
become the primary musical material of the composition are also derived from
the soundscape recordings. As each location plays in the background, it is
passed through an acoustic event detector that extracts short samples and passes
these to the player module. Finally, each location was also compressed from a
thirty-minute recording to five-, ten-, and thirty-second representations. These
demonstrate the spectral similarities and differences between the locations and
cities. These recordings are sprinkled into the composition as a way of
conveying a direct sense of the unique sonic qualities of each location.
This piece is intended to evoke the excitement and controlled aggression that
the soundscapes of these big cities offer their citizens.
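The section-selection logic described above, a random location every ten to thirty seconds with streamed features mapped onto rhythmic parameters, could be sketched like this. Only the 10–30 s window comes from the text; the site names, the simulated loudness feature, and the density mapping are illustrative assumptions.

```python
import random

LOCATIONS = [f"site_{i:02d}" for i in range(10)]  # ten hypothetical recording sites

def next_section(rng=None):
    """Pick the next section of the generative form: a random location, a
    ten-to-thirty-second duration, and a rhythmic density mapped from a
    (here simulated) loudness feature streamed for that location."""
    rng = rng or random.Random()
    location = rng.choice(LOCATIONS)
    duration = rng.uniform(10.0, 30.0)   # "every ten to thirty seconds"
    loudness = rng.random()              # stand-in for a streamed feature
    density = 1 + int(loudness * 15)     # 1..15 rhythmic events per bar
    return location, duration, density
```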

author: Michal Rataj