The four chapters we read this week were very inspiring to reflect on in light of my experience running my digital music workshop.
:: In “The DJ Factor,” Mike Challis describes a curriculum for at-risk 14- to 16-year-olds in the UK who have no previous music experience. The key to his approach is to start from music the students are already interested in, which is novel for students who often experience a sharp divide between music in and out of school. In the UK, kids listen to electronic music, UK garage, and hip-hop. The first stage puts music into their hands as DJs, and through a four-stage process of gradually replacing borrowed elements with original ones, they produce a piece of original music.
In Stage 1, beatmatching is used as a way to learn about beats and bars. I’ve struggled a bit to teach this concept to the kids in my workshop, and even though they are much younger (some are as young as 7), I can’t think of a more hands-on approach than using two turntables and a mixer.
In Stage 2, Reason is introduced as a beat-making tool, and students create original beats to layer over their DJ mixes. The main lesson here for me is that the students aren’t given a comprehensive overview of how the program works; instead they dive in and figure it out on their own. Reason gives immediate feedback, so it may in fact be easier to learn by diving right in than from a general tutorial. The instructor’s role is to answer questions, monitor, intervene when necessary, and provide feedback.
In Stage 3, students add a bassline, either as a live performance using MIDI or by sequencing each step of the melody. And in Stage 4, structure is introduced by listening to the music that the students brought to the program. At this point, the original mix can even be removed to leave each student with a completely original composition (not to say that remixed elements couldn’t also be used in original ways). The article mentions that Dizzee Rascal had this type of opportunity as a teenager, which I found really inspiring.
:: The next chapter, “Composing & Graphical Technologies” by Kevin Jennings, gave me a lot to think about as I design my web-based audio editor, because the visual representation of sound will have a significant impact on how people use the tool.
Graphical interfaces can provide novel ways for young people to dive into composing without needing to learn notation or a sequencer. However, music is not inherently visual, so any attempt to represent it visually introduces biases and affords certain types of uses, whether it’s traditional notation, a sequencer, Finale, Logic, or the purpose-built programs Hyperscore and Drum-Steps discussed in this chapter.
I’m really inspired by the idea of Hyperscore, in which ‘motives’ can be brought into a ‘sketch.’ The author believes this particular interface is best suited to teaching rhythmic concepts, texture, and form, but despite his efforts, questions concerning pitch did not arise as naturally.
If you set a young person free to explore music composition in an environment with certain affordances, they will come to understand many different musical concepts on their own. After an initial stage of “bricolage,” or “just messing around” to become familiar with the interface, Jennings observes the way students settle on a clearer idea to explore. This bricolage is an important part of the process, and it reminds me of Mike Challis’ approach to teaching Reason. Questions come up as part of the exploration, and in the case of Hyperscore, concepts like inversion, repetition, and variation appear in students’ compositions without any mention of these concepts by the teacher. In short, musical concepts don’t need to be explained as long as the system affords the potential to discover them through exploration.
In contrast to Hyperscore, Drum-Steps is a graphical interface that students seem more likely to interpret in non-musical ways. Their motivation is more about making things that look cool than about musical inspiration. So graphical interfaces can have drawbacks: they inevitably incorporate non-musical elements, and there is a point where those might eclipse the musical ones. If the goal is music education, then it’s best to design an accessible path into musical understanding.
In the case of my web-based audio editor, there is a question of whether to follow the established ways of representing sound or to invent my own. There is also a question of how to describe the various effects that can be used. The message I take from Jennings’ chapter is that if an effect is offered, it will be used. However, in a “feature-bloated” program like Microsoft Word, only a small percentage of the features are actually used regularly by most users. That is the problem I’m trying to solve by creating a program that streamlines some of the audio editing features available in programs like Audacity, but I need to think very carefully about how those features are represented visually.
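To make this concrete for myself, here is a minimal sketch, in TypeScript against the standard Web Audio API, of how one streamlined effect (a fade-out) might be exposed as a single function. The function name and parameters are placeholders I invented for this note, not the editor’s actual design.

```typescript
// A minimal sketch (hypothetical names) of one streamlined editor effect:
// apply a fade-out to a clip and get back a new buffer to redraw.
async function applyFadeOut(
  buffer: AudioBuffer,
  fadeSeconds: number
): Promise<AudioBuffer> {
  const offline = new OfflineAudioContext(
    buffer.numberOfChannels,
    buffer.length,
    buffer.sampleRate
  );

  const source = offline.createBufferSource();
  source.buffer = buffer;

  // A gain node ramps the volume down over the last `fadeSeconds` of the clip.
  const gain = offline.createGain();
  const fadeStart = Math.max(0, buffer.duration - fadeSeconds);
  gain.gain.setValueAtTime(1, fadeStart);
  gain.gain.linearRampToValueAtTime(0, buffer.duration);

  source.connect(gain).connect(offline.destination);
  source.start();

  // Render offline so the result can be drawn back into the waveform view.
  return offline.startRendering();
}
```

Rendering offline like this keeps the interface simple: the user only sees “fade out,” and the waveform redraws from the returned buffer, which is the kind of streamlining I have in mind.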
:: Reading Steve Dillon and Andrew Brown’s 2005 chapter on networked collaborative music-making here in the year 2013, I’m struck by the way they approach “computer as instrument,” “cyberspace as venue,” and “network as ensemble” with such novelty. The trick that makes this type of program work is to send data (e.g. MIDI notes or OSC messages) instead of the actual audio, which is also data, but a kind that takes far more bandwidth to transmit.
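As a back-of-the-envelope illustration (my own sketch in TypeScript, not jam2jam’s actual code or protocol; the message format, server URL, and field names are all hypothetical), sending note data instead of audio might look like this:

```typescript
// A note event is a few dozen bytes of JSON, while raw CD-quality stereo
// audio would need on the order of 170 KB per second of network traffic.
interface NoteEvent {
  instrument: string; // e.g. "drums"
  pitch: number;      // MIDI note number, 0-127
  velocity: number;   // 0-127
  beat: number;       // position within the shared loop
}

const socket = new WebSocket("wss://example.org/jam"); // hypothetical server

function sendNote(event: NoteEvent): void {
  // Only a description of the musical gesture travels over the network.
  socket.send(JSON.stringify(event));
}

socket.onmessage = (msg) => {
  // Each participant's machine re-synthesizes the sound locally, so the
  // whole ensemble is heard without streaming any audio at all.
  const event: NoteEvent = JSON.parse(msg.data);
  playLocally(event);
};

function playLocally(event: NoteEvent): void {
  // Placeholder: trigger a local synth or sampler here.
  console.log("play", event);
}
```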
One benefit of networked jam2jam is that this type of experience invites listening and creating at the same time. Participants need to use musical terms to communicate their ideas, and jam2jam provides chat boxes to facilitate that conversation.
Dillon & Brown identify three ways that students collaborate without being in the same physical space (country, even), : Disputational involves individual decisions without agreement. Cumulative collaborators largely agree, but avoid confrontation. Exploratory is a give-and-take, modifying each others ideas to build something that is greater than the sum of its parts, and that seems like the goal here.