dist/dans is a mobile app that generates music based on proximity to the people around you.
A work in progress in collaboration with Jia. It will hopefully be part of the ITP Show.
Just making some noise with OLoS. Inspired by Tone.js, an oscillator can now act as a clock.
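For the curious, the pattern is roughly this (a sketch of the idea with the plain Web Audio API, not OLoS's actual interface): run an oscillator at the tick rate and watch for upward zero crossings in a ScriptProcessorNode.

```javascript
// Sketch only (not OLoS's actual API): an oscillator as a clock source.
// The oscillator runs at the tick rate; a ScriptProcessorNode scans its
// output for upward zero crossings and fires a callback on each one.
var ctx = new (window.AudioContext || window.webkitAudioContext)();

function makeClock(ticksPerSecond, callback) {
  var osc = ctx.createOscillator();
  osc.frequency.value = ticksPerSecond;

  var proc = ctx.createScriptProcessor(256, 1, 1);
  var last = 0;
  proc.onaudioprocess = function (e) {
    var samples = e.inputBuffer.getChannelData(0);
    for (var i = 0; i < samples.length; i++) {
      if (last < 0 && samples[i] >= 0) callback(); // new cycle = one tick
      last = samples[i];
    }
  };

  osc.connect(proc);
  proc.connect(ctx.destination); // keeps the processor running (it outputs silence)
  osc.start();
  return osc; // retune osc.frequency to change the tempo on the fly
}

makeClock(2, function () { console.log('tick'); }); // two ticks per second
```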
When people look at their phones during an event, it's generally a sign of boredom. And when people bury their heads in their screens on the subway, it's an escape from the public space, a way to avoid making eye contact with strangers.
I do these things all the time. I know it’s rude. But it’s becoming a social norm as we come to terms with the myriad ways our devices augment our reality.
What if our devices could enhance, rather than diminish, our sense of presence? What if you could connect with people around you through shared participation in an augmented reality?
I'm working with Jia to develop an immersive audio-visual experience using smartphones, Bluetooth, and possibly Google Cardboard. It's a creative exploration of interactions mediated by technology.
We're homing in on a couple of variations on this concept, which Jia outlined here.
One idea is that every participant will emit a frequency that fellow participants can hear and modulate. The "carrier frequency" can have multiple parameters, like amplitude, detune, filter frequency, delay time, and delay feedback. The amount of modulation depends on the distance between the modulator and the carrier: as participants get closer, they'll modulate each other's frequencies with greater intensity. Some of the frequencies might be slow enough to create a rhythm, while others will be faster, creating a sort of drone.
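The signal routing might look something like this in the Web Audio API (a rough sketch; the distance value would come from the beacons, and the numbers here are placeholders):

```javascript
// Sketch of the routing: each participant is a carrier oscillator, and a
// nearby participant's frequency modulates it with a depth that grows as
// the distance between them shrinks.
var ctx = new (window.AudioContext || window.webkitAudioContext)();

var carrier = ctx.createOscillator();
carrier.frequency.value = 220;

var modulator = ctx.createOscillator();
modulator.frequency.value = 2; // slow enough to feel like a rhythm

var modDepth = ctx.createGain();
modDepth.gain.value = 0;          // no modulation until someone is nearby
modulator.connect(modDepth);
modDepth.connect(carrier.detune); // modulating detune here, but it could be
                                  // amplitude, filter frequency, delay time...
carrier.connect(ctx.destination);
carrier.start();
modulator.start();

// distance would come from the beacons; closer = more intense modulation
function onDistanceChange(meters) {
  var maxRange = 10; // assumed maximum range in meters
  var intensity = Math.max(0, 1 - meters / maxRange);
  modDepth.gain.value = intensity * 1200; // up to +/-1200 cents of detune
}
```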
Here’s a sketch of what this might be like using one phone and some Estimotes:
The frequencies can be mapped to visuals as well. Participants would wear their phones over their eyes with Cardboard and see a slightly augmented version of what's in front of them.
Initially we'd hoped to use three or four iBeacons to triangulate every participant's location in the room. With the phone on your head, we could use the compass to figure out which direction you're facing. From there, we could associate each frequency with specific coordinates in the room and run them through the Web Audio API's Head-Related Transfer Function (HRTF) panner to create a 3D/binaural audio effect. That's probably out of the scope of this project, but it would be pretty cool!
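If we ever get there, the spatialization piece might look something like this (a sketch with the Web Audio API's HRTF panner; the position and heading math are placeholders for what we'd derive from the beacons and the compass):

```javascript
// Sketch of spatializing one participant's frequency with the HRTF panner.
var ctx = new (window.AudioContext || window.webkitAudioContext)();

var panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.setPosition(2, 0, -1); // e.g. two meters to the right, slightly ahead

var osc = ctx.createOscillator();
osc.frequency.value = 330;
osc.connect(panner);
panner.connect(ctx.destination);
osc.start();

// As the listener turns their head (compass heading in degrees), rotate the
// listener so the sound stays anchored to its spot in the room.
function onHeadingChange(degrees) {
  var rad = degrees * Math.PI / 180;
  ctx.listener.setOrientation(Math.sin(rad), 0, -Math.cos(rad), 0, 1, 0);
}
```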
Inspiration:
I was inspired by “Drops,” one of several IRCAM CoSiMa projects, which was also performed at the Web Audio Conference
Janet Cardiff’s Walks, with binaural audio
Audience participation in performances by Dan Deacon and Plastikman, though that's not quite what we're going for.
Golan Levin's Informal Catalog of Mobile Phone Performance – lots of good inspiration here.
–> slides <–
Where I’m At is a sonic graffiti app for iPhone and Android.
The main user interface consists of a record button and a map.
When you open the app, or hit the refresh icon, a bunch of things happen (sketched in code below):
1. It uses the geolocation API to fetch your location.
2. It sends your latitude/longitude to the Foursquare API to fetch locations near you, and it changes the location listed below the record button to the one nearest you. When you click on that name, you'll see that the app has repopulated the list of available locations.
3. It uses your lat/long to center the map, which uses the Google Maps API.
4. It gets the most recent data from my database, populating the map with pins representing sonic graffiti tags.
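In code, that flow is roughly this (using jQuery for brevity; the Foursquare parameters are from memory, and the function names, FOURSQUARE_ID/FOURSQUARE_SECRET, and the tags.php endpoint are stand-ins for my actual code):

```javascript
// Rough sketch of the refresh flow: get position, ask Foursquare for nearby
// venues, recenter the Google Map, then pull the latest tags from my server.
function refresh(map) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    var ll = pos.coords.latitude + ',' + pos.coords.longitude;

    // Foursquare venue search (v2), keyed with my client id/secret
    $.getJSON('https://api.foursquare.com/v2/venues/search', {
      ll: ll,
      client_id: FOURSQUARE_ID,
      client_secret: FOURSQUARE_SECRET,
      v: '20150301'
    }, function (res) {
      // nearest venue goes under the record button; the rest fill the list
      updateLocationList(res.response.venues);
    });

    // recenter the Google Map on the user
    map.setCenter(new google.maps.LatLng(pos.coords.latitude, pos.coords.longitude));

    // fetch the latest tags from my database and drop a pin for each one
    $.getJSON('http://myserver.example/tags.php', function (tags) {
      tags.forEach(function (tag) {
        new google.maps.Marker({
          position: { lat: tag.lat, lng: tag.lng },
          map: map
        });
      });
    });
  });
}
```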
When there are multiple tags at a location, only the most recent tag is accessible through the map. It would be interesting to also provide access to the history of sounds, and to find a way of representing the date at which the sounds were recorded (which I am storing in the database).
It’s currently anonymous so there is no login. It would be interesting to allow sign in, so that you could follow friends and hear their tags. For now that’s not necessary because I’m already friends with all the current users, and there are only a few of us.
I set up a LAMP server on Digital Ocean following these directions. From there, I was able to use PHP scripts to handle file uploads, and to save JSON-structured data to a text file.
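On the client side, handing a recording to those scripts looks something like this (assuming Cordova's FileTransfer plugin; upload.php and the field names are placeholders for my actual setup):

```javascript
// Sketch of posting a recorded file plus its metadata to the PHP upload handler,
// which saves the audio and appends the JSON data to a text file.
function uploadTag(fileUri, venue, lat, lng) {
  var options = new FileUploadOptions();
  options.fileKey = 'file';
  options.fileName = fileUri.substr(fileUri.lastIndexOf('/') + 1);
  options.params = {
    venue: venue,
    lat: lat,
    lng: lng,
    time: Date.now()
  };

  var ft = new FileTransfer();
  ft.upload(fileUri, encodeURI('http://myserver.example/upload.php'),
    function (res) { console.log('uploaded', res.response); },
    function (err) { console.error('upload failed', err); },
    options);
}
```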
USER RESEARCH / FUTURE PLANS
One of my major questions is: how would people use audio? So far that's still a bit unclear. People are shy about using their voice in a way that they aren't when using photo and/or text apps. Some want to leave audio tips about locations; others want to leave notes for their friends.
I've found that the 10-second restriction is cool once people get the hang of it, but some wanted to make longer recordings. Should I allow for this, or keep the limitation? Maybe sounds should be limited to 16 seconds, or something just a bit longer.
I like how simple the interface is, because the recordings people make are raw: users have to be their own editor. All they have is a button to start recording, and when they let go, the recording is done. But people sometimes want to make a longer recording and then trim it down after, similar to Instagram. In the process, they could even add effects or samples, and/or mix and stitch multiple recordings together to create their audio graffiti. I think I could do it with something like EZAudio and an NSNotification (for iOS, at least). This has the potential to become a rabbit hole, especially because right now I'm not doing anything to access the audio buffer; I'm just using the Media plugin, which has very limited functionality.
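For reference, the current press-and-hold recording with the 10-second cap is roughly this (simplified; the filename and callbacks aren't my exact code):

```javascript
// Sketch of press-and-hold recording with the Cordova Media plugin.
var RECORD_LIMIT_MS = 10000;
var media = null;
var limitTimer = null;

function startRecording() {
  media = new Media('tag.wav',
    function () { console.log('recording saved'); },
    function (err) { console.error('recording error', err); });
  media.startRecord();
  // stop automatically at the limit, even if the button is still held
  limitTimer = setTimeout(stopRecording, RECORD_LIMIT_MS);
}

function stopRecording() {
  clearTimeout(limitTimer);
  if (media) {
    media.stopRecord();
    media.release();
    media = null;
  }
}
```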
Users were confused about whether the map shows locations that you can select in order to tag them, or whether it shows existing tags near you. Currently, it's the latter, but this would be clearer if the pins were more than just red dots – they could show the name of the location, an audio icon, or the date of the recording relative to the current time.
The map should show your current location, and update when you change locations or re-open the app.
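That should be a small change, something like this (a sketch with the Google Maps API and geolocation's watchPosition; trackCurrentLocation is a hypothetical helper):

```javascript
// Sketch: drop a marker for the user and keep it (and the map center)
// in sync as their position changes.
function trackCurrentLocation(map) {
  var marker = new google.maps.Marker({ map: map, title: 'You are here' });

  navigator.geolocation.watchPosition(function (pos) {
    var here = new google.maps.LatLng(pos.coords.latitude, pos.coords.longitude);
    marker.setPosition(here);
    map.setCenter(here);
  }, function (err) {
    console.error('geolocation error', err);
  }, { enableHighAccuracy: true });
}
```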
The app needs a more cohesive visual aesthetic. During class, I got some feedback encouraging me to go for more of a “graffiti” aesthetic.
I'd like to analyze the sounds that are recorded and provide some data about them. For example, I could send audio to the Echo Nest to detect what song is playing, and extract some data about the recording, such as loudness. I'd like to offer sonic profiles of spaces so that the app becomes usable even without listening. It's not always convenient to wear headphones or hold the phone up to your ear, let alone let some piece of unvetted sonic graffiti blast out of your speaker in public.
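The song identification would have to happen through a service like the Echo Nest, but a rough loudness number could be computed client-side instead. Here's a sketch that swaps that part for a simple RMS estimate with the Web Audio API (estimateLoudness is hypothetical, not something I've built):

```javascript
// Sketch: decode a recording and estimate its loudness as an RMS level.
var ctx = new (window.AudioContext || window.webkitAudioContext)();

function estimateLoudness(arrayBuffer, done) {
  ctx.decodeAudioData(arrayBuffer, function (buffer) {
    var data = buffer.getChannelData(0);
    var sum = 0;
    for (var i = 0; i < data.length; i++) sum += data[i] * data[i];
    var rms = Math.sqrt(sum / data.length);
    done(20 * Math.log10(rms)); // approximate level in dBFS
  });
}
```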
If somebody leaves a tag at Fresh & Co saying "the tuna sandwich is fine now" in response to somebody else's tag, it should maintain that thread. I'm not sure whether tags should allow the user to enter optional keywords, or whether I should add the ability to make a tag in response to another tag to create a discussion, similar to Twitter.
In the meantime, Kyle tipped me off to an ITP thesis project from 2012 that has some similarities: Dig.It by Amelia Hancock.
Where I'm At is a sonic graffiti app that tags locations with audio. I've been working on it, and I'm currently getting nearby locations from Foursquare and saving audio files to my remote server.
Every location has only the most recent audio file associated with it, so there is some competition to tag the place with your sound, like being a graffiti artist whose work gets painted over. Maybe there can be a way to access the history, but that will be an advanced feature.
To do list:
More importantly, how would people want to use this? I'm still researching location-based audio, and I'd like to focus on one aspect that will help me make decisions about what to do next, how to prioritize features, the visual design, etc.
1. This could be for sharing audio recordings with strangers and/or friends. I’ve been looking at Chit Chat for inspiration.
2. This could be for sharing location-based music – it tags music that's playing, like Shazam, or lets you search for a song and share it with your location. Example: Soundtracking.
3. If this were an app for creating and publishing site-specific Audio Tours, I could do both of those things, and I might have a clearer picture of who would use it. There would be tour guides and tourists. It could allow users to trigger songs or audio recordings, and it might also need to orient based on the compass. There are already some apps for this, like toursphere, locacious, and locatify.
Homework assignment #2 from Ken Perlin’s computer graphics class, source code here.
For Always On, Always Connected, I want to explore location-based audio and develop a mobile app around that idea.
First, I need to learn more about how people approach location + audio. What could make location-based audio interesting enough to create? To share privately, or publicly? Is there data about audio that might be interesting, or should I focus just on the audio itself?
I have a few ideas already. The first one is called Where I'm At. It's basically a mashup of Shazam (or, more likely, the Echo Nest's musical fingerprinting service) and Foursquare.
Shazam has a social component. But most people don’t use it. That’s because if you are shazamming something, the assumption is that you didn’t know what song was playing. You had to use Shazam to find out. That’s not a cool thing to share.
But the idea of listening to a specific song in a specific place at a specific time changes what it means to tag a song. For example, I could tag that I’m at Brad’s, and they’re playing “Freaks Come Out At Night” by Whodini, and then share that with friends. The geolocation isn’t interesting, the song isn’t that interesting, but maybe the combination is.
Tagging the song becomes an easy way to share the vibe of where I’m at. Where I’m At could be the name of the app.
The lessons of David Travis’ Fable of the User-Centered Designer seem obvious: Start by getting to know who you are designing for, test your assumptions, and iterate.
But the approach is not quite so obvious because, as the fable’s protagonist discovers, there is no manual for user-centered design. You can’t just go by the book. You have to engage with your users at every step of the process.
I wish I had known this back in 2008, when I was leading beta tests for a social music website. When I came to the project in February of that year, there was already a spec outlining all of the features the site was to offer. All that was left was to build the site (we hired a boutique company that gave us a non-profit rate) and fill it with great Creative Commons music (that was my job).
We had imagined that the site would appeal to many different types of users. But we had not done any user testing outside the world of our small nonprofit.
As the beta testing began, I discovered that the red routes were not as clear as we had imagined. I had to outline step-by-step instructions for key tasks like uploading a song or creating a playlist. My first round of beta tests was conducted remotely, without the ability to observe. I learned much more by actually watching people use the site and talking through the process.
Some of these lessons wouldn't really hit home until the site was opened to the public. At that point, I started to identify who our users really were. The categories we had initially envisioned didn't fit the reality of our userbase. I fielded help emails describing issues I had never imagined anyone might ask about, mostly because I'd been living with the site for so long that I had never thought to question them. Additionally, many of the features from the original spec went unused. The design looked nice, but in reality it was too cluttered. A user-centered design process could have helped prioritize the information.
At the time, I was more concerned with testing functionality than with testing the user experience. The site was developed with a grant, and we had a limited timeframe to do what we said we would do, and we said we would do a lot of things. In hindsight, we should have allocated resources to learn more about our potential users as part of a more iterative design process.
Instead, I learned more about our users after the site launched. Many of the users we'd imagined showed no interest in the site. For example, we thought remix artists would use the site to find legal samples. The original grant proposal even described an area of the site that would be dedicated to remix artists. But I vividly remember my first on-the-job conversation with an actual remix artist, because I found that she didn't care about "legal music." In fact, the idea was quite insulting to her: she believed that, as an artist, her sampling was protected under fair use. As it turned out, online video producers became the primary audience. With more user research, we might have been able to predict this and either focus the site's features or reprioritize the goals of the site to cater to the other users we had hoped to attract. As we began to focus more on content for video producers, this audience continued to grow, but it also raised new issues that would have been better served by an iterative, user-centered design process.
And that’s why I came to ITP!