The Album Format

I like the album format and am a fan of tracks that flow seamlessly into one another; this might stem from the first album I bought on iTunes, Sgt. Pepper’s Lonely Hearts Club Band.

Track and album lengths are no longer constrained in the digital domain, with the medium itself allowing for effectively ‘unlimited’ storage.

The general consensus seems to be that streaming caters to ‘low’ attention spans and instant gratification, reflected in the trend of shorter pop songs, TikTok, and the incentives of streaming income.

I was talking to my flatmate about how we each listen to music:

I usually listen to albums as a whole, from specific artists, while she said that she usually listens to individual songs in genre playlists, curated both by herself and by others (AI curation?).

Types of Listening

Causal Listening: Listening for the purpose of gaining information about the sound’s source.

Semantic Listening: Listening for the purpose of gaining information about what is communicated in the sound.


Reduced Listening: Listening to the sound for its own sake, as a sound object, by removing its real or supposed source and the meaning it may convey.

Deep Listening: Listening to everything all the time, and reminding yourself when you’re not. Actively listening, both in the moment, and outside.


Performance/Improvisation

I’ve written before that I don’t consider myself a performer and am currently not very keen on the idea of performing in front of people. That said, performing and improvisation are strongly linked. I often enjoy improvising when experimenting with sound design and have been doing so as a natural process while using Max, as the flexibility of changing parameters means that every time you use a patch it can produce different results.

I aim for my submitted work to be an improvised single take, something I’ve not done before, and I can already foresee challenges such as structure, timing, direction and managing audio levels. I hope this approach will teach me a lot.

Using Max MSP

For this project, I have been learning to use Max 8. I decided to do this as there are many artists I like who have used Max before, such as Fennesz (who I recently saw perform), Tim Hecker, Jonny Greenwood, and Autechre.

Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling ’74.

So far I have been really enjoying using the software. The open, modular nature of patching makes it inspiring to create things in a non-linear way. This has been making me think differently about how to approach my work compared with using clips in Ableton Live.

Initially, I started by creating a basic drum machine (a kick, snare and hi-hat) with simple time-based triggers but found that I preferred to use samples and audio-in to process audio.
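As a rough illustration of what those simple time-based triggers boil down to (not the Max patch itself), below is a minimal Python sketch of a 16-step drum grid; the tempo, pattern and synthesised stand-in sounds are all assumptions for the example.

```python
# A tiny offline sketch of a step sequencer with time-based triggers:
# a 16-step grid that drops kick/snare/hat sounds onto a timeline.
# Synthesised clicks stand in for samples; tempo and pattern are arbitrary.
import numpy as np
import soundfile as sf

sr, bpm, steps = 44100, 120, 16
step_len = int(sr * 60 / bpm / 4)        # length of one 16th note in samples

t = np.arange(step_len) / sr
kick  = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 20)           # decaying low sine
snare = np.random.uniform(-1, 1, step_len) * np.exp(-t * 30)    # decaying noise burst
hat   = np.random.uniform(-1, 1, step_len) * np.exp(-t * 90)    # shorter noise burst

pattern = {
    "kick":  [0, 4, 8, 12],
    "snare": [4, 12],
    "hat":   list(range(0, 16, 2)),
}
sounds = {"kick": kick, "snare": snare, "hat": hat}

out = np.zeros(step_len * steps)
for name, hits in pattern.items():
    for step in hits:
        start = step * step_len
        out[start:start + step_len] += sounds[name] * 0.5       # place hit on the grid

sf.write("drum_loop.wav", out, sr)
```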

A drum sequencer/synthesiser and sample-based wavetable

I have mostly been experimenting with creating loop-based systems, with variables that change with a degree of indeterminacy.

Max 8.2 (and now 8.3) allows for multi-channel devices. The current patch I’m working on uses 64 channels of the same sample. Each channel plays the same audio source at a different random speed, creating a blended swarm of sound. Each channel is then run through an audio downsampler and randomly panned before being mixed down to stereo. As the patch runs, these variables (speed, downsampling, panning) change, creating an evolving, unpredictable soundscape. As the input continues, the sample buffer gradually gets shorter until there is no longer space to load sound and the sound stops.
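To make the idea concrete, here is a minimal offline sketch of the same process in Python rather than Max: many copies of one sample at random playback speeds, crudely downsampled and randomly panned, then summed to stereo. The input file name, parameter ranges and downsampling method are assumptions for illustration, not details of the actual patch.

```python
# Offline sketch of the 64-voice "swarm": one sample, many random-speed copies,
# each downsampled and panned at random, all mixed down to stereo.
import numpy as np
import soundfile as sf

sample, sr = sf.read("source.wav")           # hypothetical input file
if sample.ndim > 1:
    sample = sample.mean(axis=1)             # fold to mono

n_voices = 64
out_len = len(sample) * 2
mix = np.zeros((out_len, 2))
rng = np.random.default_rng()

for _ in range(n_voices):
    speed = rng.uniform(0.25, 4.0)           # random playback speed per voice
    idx = np.arange(0, len(sample), speed).astype(int)
    voice = sample[idx]

    factor = rng.integers(1, 16)             # crude downsampling: hold every Nth sample
    voice = np.repeat(voice[::factor], factor)[: len(voice)]

    pan = rng.uniform(0.0, 1.0)              # random pan position
    left, right = voice * (1 - pan), voice * pan

    end = min(len(voice), out_len)
    mix[:end, 0] += left[:end] / n_voices
    mix[:end, 1] += right[:end] / n_voices

sf.write("swarm.wav", mix, sr)
```

Rendering it offline like this obviously loses the real-time, gradually shrinking buffer of the patch, but it shows how little is needed for the blended, unpredictable texture to appear.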

Below is an improvised session with my bass and Max

part 1
part 2
small section
The patch used for the performance(s)

Avant-Garde / Underground / Outsider

I wouldn’t consider myself to be underground or avant-garde as I find these labels very scene and group-specific, whereas I don’t really identify as working in a certain group. The concept of ‘avant-garde’ sound art has become more a genre than a mentality in experimentation and ends up contradicting itself when it becomes its own sub-mainstream, much like the concept of an anti-culture or counterculture that gradually diminishes to nothing but a romantic aesthetic.

I think the concept of such movements will become less prevalent with the continued evolution of digital sharing.

I think it does bring up the interesting conversation of authenticity. When we looked at the websites and Bandcamp pages of specific labels, I found the chosen aesthetics interesting, with avant-garde labels going for a more minimalistic and utilitarian approach, while underground labels went for a more DIY approach with less of a rigid visual style. I found this interesting as minimal design is in many ways simpler to implement.

Graphic Scores/Notation II

I’m not a stranger to graphic scores (I have previously written a blog post here about them). I usually use them to create sketches for arrangement, specifically in the case of dynamics. For all the projects I have done so far I have used notation, though this is to visually collect pre-conceived ideas as opposed to coming up with something on the spot.

The session we had on graphic notation was interesting due to the open interpretation of many of the scores, which resulted in more improvisation than explicit reading. My notation was not performed; however, I did take part in performing Uinsean’s notation, which gave me a good grasp of what mine might have sounded like due to conceptual similarities such as polyrhythm and counterpoint.

Though I cannot read classical sheet music, I don’t feel I’m missing out, due to the prevalence of recordings in the modern age. In centuries past, the only way you would hear music (in the strict sense) would be to hear someone play it, so written notation was extremely important in preserving a piece of music (written vs oral).

I have recently been trying to teach myself to play the guitar (and bass guitar) and find that guitar tabs are a very intuitive system that can be understood very easily, though they are limited (much like staff notation) to the conventions of Western instruments and scales.

This got me thinking about something I read a while ago about signs designed for a nuclear waste disposal site. The signs were designed with the intent that they could be interpreted regardless of language (even going so far as to consider a future where humans might not be around).

https://en.m.wikipedia.org/wiki/Long-term_nuclear_waste_warning_messages

After the session, I experimented with uploading pre-existing scores to an AI that created new versions based on the image.


In my first project, I used an image, converted it into a raw format (TIFF) and imported it into Audacity, converting it to audio to use as an element in my piece. The process of creating music in modern DAWs is in many ways notation: the graphical process of MIDI on a piano roll is drawn before the sound is emitted.
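The same raw-data trick can be sketched outside Audacity in a few lines of Python; the file names, greyscale conversion and 8 kHz sample rate below are assumptions for illustration rather than the exact settings I used.

```python
# Sketch of the image-to-audio idea: read an image's raw pixel values
# and write them out directly as audio samples.
import numpy as np
from PIL import Image
import soundfile as sf

pixels = np.asarray(Image.open("grid.tiff").convert("L"))    # greyscale values 0-255
audio = pixels.flatten().astype(np.float32) / 127.5 - 1.0    # map bytes to -1..1 samples
sf.write("grid_raw.wav", audio, 8000)                        # low rate stretches the image out in time
```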

Below are some designs I created for the same process as above.

The sound above is the first image (the grid) as raw data.

I find it conceptually interesting to use images as scores that were created with different initial intentions, such as the topographical map below (which made me think of For Airports) and the tube map, which also made me think of the Situationist International.

Psychoacoustics – Hyperacusis

I have my own personal experience with psychoacoustics, as I had hyperacusis while growing up. My hearing is still sensitive; however, I no longer experience the extreme effects that I did when I was younger. My main memories of it begin with morning assembly at primary school, when, during the singing, it sounded as if an extremely resonant filter had kicked in on a certain frequency, which caused me to cry and panic from the pain. After this, I tried to avoid car and fire alarms and other loud noises.

I also tried to avoid loud, busy crowds, as my ears would be extremely sensitive and I’d find it overwhelming. I remember that many people having conversations at once, in particular, would overwhelm me, as I could hear every conversation and would try to follow along with them all. This is still the case today and I find myself very aware of background noises.

Having hyperacusis had a dramatic effect on me growing up, one of the main things being my dislike of loud public places, which led to me becoming quite agoraphobic. This has also led to me having a different social experience of music on the whole. I’ve only recently started going to see live music.

Since I was very young I’ve also struggled with tinnitus at night as, much like background noise, I can’t help but focus on it. Because of this, I grew up listening to music or audiobooks a lot at night, with the intent to fall asleep to them.

I also suffer from migraines which increase my sensitivity to sounds.

All of these experiences have led me to become naturally interested in ‘deep listening’, to the point that I was doing it before I even knew about it conceptually. It’s also why I’m more focused on the introspective ‘internal’ effect of sound than the externality of dancing (though my enjoyment of singing might not align with this?). I believe it also affects how I listen to music. When listening to albums for the first time, I often choose to listen to them at night in the dark. This is also the way I listen to more ambient and ‘experimental’ music, whereas in public places, such as on the tube, I listen to more rhythmical and structured music that I’m already familiar with, though this is as much due to the ability of the music to cut through the loud surroundings (I do not own active noise-cancelling headphones).

The voice

I think voice is an interesting subject to talk about. 

I enjoy singing, and do it every day but am not particularly good or confident in using my voice, even in a social context.

A while ago I was having a conversation with my Dad about music and how the voice of the singer is often a deal-breaker in whether people enjoy the music of a band. The example we were talking about was Morrissey in The Smiths and how his overall vocal style is a definite part of the band’s aesthetic.

The voice is very personal and conveys a lot of meaning, both in the context of words used and also in its sonic qualities.

Listening to music in unknown languages moves the voices from a position of lyrical content to pure sonic qualities.

I find listening to music in languages unknown to me to be an interesting experience. For example, I enjoy the music of Japanese artist Ichiko Aoba. I have listened to one of her albums many times, and find it beautiful, yet I have no idea what she is singing about. It lets me appreciate her voice for its purely sonic properties. If I knew what she was singing about would I like the music more? It might even be quite offensive to ignore the lyrics of the music.

Elizabeth Fraser (of Cocteau Twins and This Mortal Coil) is known for her creative vocal style and abstraction of lyrics, using a singing technique called puirt à beul. She often uses pre-existing words without attaching their meaning to them; I remember reading about how she would pick words from the dictionary at random. I’m a fan of Cocteau Twins and Elizabeth Fraser is an amazing vocalist, though I must admit that, as someone who likes singing along to music, it makes her songs more difficult to sing along to.

this is supposed to link to Pearly Dewdrops

A case where this works to a band’s advantage is Jónsi’s singing in Sigur Rós. On the album () all the lyrics are sung in a made-up language. This is interesting as their earlier albums were sung in Icelandic yet became popular outside of Iceland.

Less focused on extended technique, but once again treating the voice as pure sound, a large influence on me is the work of Liz Harris/Grouper. Her voice is often buried in reverb and treated as an instrument in the mix. I recently saw Grouper perform at the Barbican and enjoyed it a lot. Her use of low, filtered noise (either nature field recordings or portable tape-machine hum) fed through guitar pedals created a bed for her guitar and voice to sit on, and acted like a glue between songs, as long stretches of noise lulled your state of listening, only for it to be brought back by subtle changes.


I am currently deciding whether I want to use my voice in my piece or not, as it can be a versatile tool; however, I’m not sure I currently have the ability to use it as effectively as I might wish.




Hannah Kemp-Welch

Hannah Kemp-Welch is a sound artist with a socially-engaged practice. She produces audio works with community groups for installation and broadcast, using voices, field recordings and found sounds. She also delivers workshops, makes zines and builds basic radios, aiming to open out sonic practices and technologies for all. Hannah is a member of the feminist radio art group Shortwave Collective and arts cooperative Soundcamp. 

Hannah is currently a PhD student with CRiSAP, developing and testing methodologies for collective listening within socially-engaged art.

I found Hannah’s talk to be very moving, specifically her project The Right to Record. https://www.sound-art-hannah.com/right-to-record

It reminded me once again of a topic that came up when I attended the Ultra-Red sessions at LCC (where I met Hannah previously): where do we place ourselves in this work? It’s important not to come across as some kind of protagonist, and to give others the tools to express themselves instead of dictating the piece.

This project was successful in telling an empathetic story previously unknown to me, and has also been impactful in lobbying the government to act and change these corrupt laws.

I also found the topic of the Shortwave Collective interesting, and I’m happy it exists to explore shortwave radio outside of the current older, male-dominated scene and structure. I’m interested in shortwave radio myself, mostly thanks to recordings of scanners and numbers stations in works by Godspeed You! Black Emperor, Tim Hecker, William Basinski and The Conet Project.