Author: Benjamin Thorn
Guitar
I bought a cheap guitar in April and have been teaching myself to play since. Although it's not particularly avant-garde of me, it is always valuable to learn new things, and it has already given me a different perspective as a non-musician.
Playing an instrument takes time. Your fingers hurt, and it demands both mental and physical fortitude. So far I have found it rewarding.
The way chords and notes are approached is interesting. The six strings mean up to six notes per chord, introducing the concept of drone chords, while the lower-pitched strings allow for a structural bass note that can move down through a progression.
Standard and non-standard tunings, open tunings and Nick Drake. Using a chord as the basis for a scale.
I chose to try to learn the guitar for a number of reasons:
- I wanted a physical instrument as an audio source to process effects through (mostly midrange frequencies)
- I’m a fan of guitar music (particularly in the context of shoegaze and earlier post-rock)
- My dad plays the guitar, which means he can teach me things, and it makes for a good bonding activity 🙂
I am a fan of playing instruments unconventionally and have previously read about extended guitar techniques. Earlier in the year, I read about Keith Rowe and the techniques he used. I also read about Glenn Branca, who I did not know about before despite how influential he is. I sometimes like bringing my phone up to the pickups to get electrical interference, then using it as a slide.
The more ambient pieces below were made with the guitar flat on my bed, occasionally lightly tapping the strings, through a flatlined compressor and a bunch of effects in Ableton (Valhalla Delay/VintageVerb, the Pedal effect and the same Max effect I've been using a lot recently).
Here are some examples of a recent improvisation/experiment/practice session with just guitar and effects (it was recorded live in one take, so it's not mixed all that well):
Don’t feel obliged to listen to them all, I just had to break them up due to the file size limit.
More Ambient
More Guitar, a bit gloomy
An accidental mess: all of these files exported on top of each other
As is sometimes the way, these audio clips sounded better at the time of playing/recording than on listening back. I could partly blame the fact that they are compressed MP3s, which I had to make so that I could actually upload them under the 10 MB upload limit.
Generative music
Generative music – designing a system to create music
I watched an online interview/class with Mark Fell.
https://en.wikipedia.org/wiki/Snd_(band)
https://www.youtube.com/watch?v=EcTAAdDPhTk
https://www.thewire.co.uk/in-writing/essays/collateral-damage-mark-fell
He talks about Max patches as compositions in themselves. I've had similar thoughts recently on the format of digital audio software and DAWs. I was thinking about the idea of a music listening platform that only plays 'patches', allowing others to listen to music but also to see what's underneath it (how it was made and how it performs itself).
Laurie Spiegel
“I automate whatever can be automated to be freer to focus on those aspects of music that can’t be automated. The challenge is to figure out which is which.”
Brian Eno
Steve Reich
Markov chain
65daysofstatic – z03
65daysofstatic also made the score to No Man's Sky; games and generative music go hand in hand.
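To make the Markov chain idea above concrete: the next note is chosen by weighted chance based only on the current note. Here is a minimal sketch in Python (the pentatonic note set and the transition weights are invented for illustration, not taken from any of the artists mentioned):

```python
import random

# Toy first-order Markov chain over a pentatonic major scale.
# The transition weights below are made up for illustration.
TRANSITIONS = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.2, "A": 0.1},
    "D": {"C": 0.3, "E": 0.4, "G": 0.3},
    "E": {"C": 0.2, "D": 0.3, "G": 0.4, "A": 0.1},
    "G": {"C": 0.3, "E": 0.3, "A": 0.4},
    "A": {"C": 0.5, "G": 0.5},
}

def generate(start="C", length=16):
    note, melody = start, [start]
    for _ in range(length - 1):
        options = TRANSITIONS[note]
        # Weighted random choice of the next note, given only the current one
        note = random.choices(list(options), weights=list(options.values()))[0]
        melody.append(note)
    return melody

print(" ".join(generate()))
```

Running it a few times gives different melodies from the same system, which is the appeal: you design the weights, not the notes.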
Sound For Screen
Today we improvised over film clips using hardware synthesizers (Korg Volcas). I mostly used the Volca Kick and the Volca Keys.
Sound and music in film are very important: they can manipulate emotions and completely alter the meaning of a scene. I always think back to an example I was shown doing Media at school, of scenes from horror films with different sounds placed over the video.
Initially, I found the limitations of the Kick quite restricting, mostly because I was unfamiliar with the device; however, as I experimented further I found that the drive and envelope could be used to get more effective sounds, especially in the context of the Blade Runner clip we were scoring.
Many people in the class found that as they rewatched the clip, their improvisations became less sporadic and less effective. I found the opposite: after rewatching the clips I had a better understanding of the rhythm of the editing. It also allowed me to become more familiar with the instruments at the same time, seeing what worked and what didn't while experimenting.
I think one of the most important elements this introduced was the use of silence, and how, when improvising over a clip as a single performer, it's important to resist the urge to fill the entire timeframe with sound. It makes me think back to my previous comments on John Cage, as well as on Talk Talk's Spirit of Eden / Laughing Stock and post-rock in general. I think that if we had heard everyone's sounds at the same time it would have been an interesting exercise.
I mostly found myself creating musical soundscapes for the clips, as opposed to diegetic sounds. I enjoyed watching the clips with their original sound after our improvised attempts. I found Jonny Greenwood's score, and the use of audio in You Were Never Really Here, very effective, especially in how it interacted with the scenes, placing sound as a focus of the film's creation rather than a tacked-on feature added afterwards or for the sake of sensory continuity.
Something I found interesting is that, being familiar with the existing scores of the films shown, I found it somewhat distracting trying not to mimic them. This was less of a problem with Vangelis's score: as I was using the Kick, it wasn't really possible to copy it.
Here is the original improvisation with the Keys over You Were Never Really Here, as well as an unfinished piece using only that improvisation as material.
On a tangent, I personally have a large amount of respect for films with a very minimal soundtrack (if any at all). I've mentioned it consistently this year, but Chantal Akerman's 'D'Est' is one of my favourite films, as it is essentially field recordings with visuals. There is no story, just video and sound, yet to me each scene tells its own story.
Controllerism
- Using phones, tablets, and computers to manipulate recorded elements in DAWs
- Investigating the potential of iOS and Android hardware for sound manipulation, e.g. the accelerometer and gyroscope
- Apps, such as LK and TouchOSC, for controlling Ableton (iOS and Android)
- Mapping phones to computer DAWs; routing iOS keyboards into desktop synths (expressionPad MIDI/Synth – freeware)
- Creating custom layouts for iOS and Android (e.g. TouchOSC)
- Using an iPhone/Android gyroscope to generate MIDI data (a rough sketch of this follows below)
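As a hedged sketch of that last bullet (not a recipe from the session): assuming a phone app such as TouchOSC is sending gyroscope values over OSC, a small bridge script can turn them into MIDI CC for Ableton's MIDI Map mode. The OSC address /gyro/x, the port, and the incoming value range are all assumptions that depend on the app's layout:

```python
# pip install python-osc mido python-rtmidi
import mido
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

outport = mido.open_output()  # default system MIDI output

def gyro_to_cc(address, value):
    # Scale an incoming gyro value (assumed -1.0..1.0) onto MIDI CC 1 (mod wheel)
    cc = int((value + 1.0) / 2.0 * 127)
    outport.send(mido.Message("control_change", control=1,
                              value=max(0, min(127, cc))))

dispatcher = Dispatcher()
dispatcher.map("/gyro/x", gyro_to_cc)  # address depends on the phone app's layout

# Listen for OSC from the phone; in Ableton, map CC 1 to any parameter
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```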
Today we looked at controllerism, focusing on Ableton Live's MIDI mapping.
Since getting an Ableton Push 2, I seldom find myself using Live without it. I enjoy using Simpler for sampling, as well as the pad grid layout, which makes playing very accessible for someone like me who doesn't play keys and would otherwise get stuck in the same few scales.
The Push 2 has polyphonic aftertouch and pressure but not full MPE features such as slide. Ableton's Wavetable synth is the best for exploring this, as it fully supports MPE: you can change the wavetable position of each note you're playing depending on the pressure of each finger, allowing for more control over the sound.
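To make the MPE idea concrete, here is a hedged sketch of what happens on the wire, using Python's mido as an illustration (the note and channel assignments are invented): in MPE each sounding note is given its own MIDI channel, so per-note pressure messages affect only that note.

```python
import mido  # pip install mido python-rtmidi

outport = mido.open_output()  # default system MIDI output

# In MPE, each sounding note is assigned its own "member" channel,
# so channel pressure effectively becomes per-note pressure.
notes = {60: 1, 64: 2, 67: 3}  # note -> channel; assignments invented for illustration

for note, channel in notes.items():
    outport.send(mido.Message("note_on", note=note, velocity=100, channel=channel))

# Press harder on just the middle note of the chord: in a synth like Wavetable,
# this could move that one note's wavetable position while the others stay put.
outport.send(mido.Message("aftertouch", value=96, channel=2))
```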
Instead of using the MIDI mapping feature, I often just make device racks that automatically come up on the Push, allowing me to make my own instruments and presets.
Earlier in the year, I experimented with a VR headset and found it an exciting field to explore due to the spatialisation of sound and the possibilities of interactability and controllerism. Using Unity, I created objects that emitted sound and that I could physically interact with, moving them closer or further away. Some of these objects had physics and would fall to the ground, while others froze in space when let go. This is something I would like to explore in the future, looking at how techniques made accessible through VR could be used artistically rather than as a novelty.
My laptop also has a touchscreen, something I rarely use but have used once before while experimenting with a Max patch (as seen below). For my previous project, I mapped a foot pedal to Max so that I could control the length of the loops while interacting with a microphone, allowing me to be away from the computer itself.
https://photos.app.goo.gl/9GRgPLZzrnmLe1Er6
I have a small groovebox/sampler/MIDI sequencer/controller called the OP-Z, which has a built-in gyroscope for modulation, yet I seldom use the feature. I enjoy using the device as a sequencer and a tool for making music; however, I still have mixed feelings about it as a whole, as for the same price I could have bought a hardware synth that would work alongside Ableton and Max.
At the moment I find it unlikely that I'll use my phone as a controller.
For my hand-in, I aim to create a generative piece, exploring what can be made with no interaction on my part other than pressing play/record.
Homework:
1. Develop one of the pieces begun in today’s session to be played in next week’s session.
Samson Young
Multi-disciplinary artist Samson Young works in sound, performance, video, and installation. In 2017 he represented Hong Kong with a solo project titled Songs for Disaster Relief at the 57th Venice Biennale. He was the recipient of the BMW Art Journey Award and a Prix Ars Electronica Award of Distinction in Sound Art and Digital Music, and in 2020 he was awarded the inaugural Uli Sigg Prize.
He has exhibited at venues such as the Guggenheim Museum, New York; Gropius Bau, Berlin; Performa 19, New York; Biennale of Sydney; Shanghai Biennale; National Museum of Art, Osaka; National Museum of Modern and Contemporary Art, Seoul; Ars Electronica, Linz; and Documenta 14: documenta radio, among others. Recent solo projects include: De Appel, Amsterdam; Kunsthalle Düsseldorf; Talbot Rice Gallery, Edinburgh; SMART Museum, Chicago; Centre for Contemporary Chinese Art, Manchester; M+ Pavilion, Hong Kong; Mori Art Museum, Tokyo; Ryosoku-in at the Kenninji Temple, Kyoto; Monash University Museum of Art, Melbourne; and Jameel Arts Centre, Dubai, among others.
Samson Young studied music, philosophy and gender studies. He was Hong Kong Sinfonietta's Artist Associate in 2008, and graduated with a Ph.D. in Music Composition from Princeton University in 2013.
Although I appreciate the gallery's ability to recontextualise and present art in a 'neutral' environment, I am still not a big fan of gallery art, especially in the case of a lot of modern art.
I find the gallery space quite alienating, not only to me as an individual but also culturally, in its abstraction and sterility, as well as in its surrounding culture and industry.
Jitter: MIDI Interactive Visuals
I've been experimenting with Max's Jitter. I'm currently struggling to find a reliable way to record playback of the visuals. The visuals are mostly modulated by MIDI signals; I have tried using audio signals as the modulator, but it's not as effective.
Creative Synthesis
P1 – Software
We played around with subtractive software synths in different DAWs. The one in Logic (Retro Synth) sounded more like a hardware synth, with more analogue-sounding filters, whereas Ableton's Analog instrument sounds less thick and its interface is a bit of a mess. My personal favourite Ableton synths for mimicking analogue devices are the Poli and Bass Max for Live devices.
We also tried VCV Rack, which I have a small amount of prior experience with. It simulates a modular set-up very well and can produce very nice sounds. I find it appealing that it is free to use and that there are so many free modules. The main downside is that you don't get the physical experience of patching like you would with a real Eurorack setup; however, the fact that it is free more than makes up for this (especially considering the price of an actual modular system).
Here are some recordings I made with Poli and Bass in Ableton:
Advantages of software synths:
- Always MIDI-compatible, meaning you can rearrange your input after 'recording'
- Cost/accessibility/practicality – software synths are generally cheaper and use less power. As well as cost, they take up no physical space and don't need to be set up with cables and external systems that could otherwise cause complications (input latency etc.)
Disadvantages:
- Sound quality – quality varies between software synths, and they often don't sound as good as analogue
- Tactility (controller-dependent) – a big part of using a synth is physically interacting with it while you play
- Character – this is case-by-case (it can be a negative or a positive) and a matter of personal opinion, but due to the precise, reliable nature of a lot of software, it misses out on the quirks of individual devices, such as hardware errors
I created this basic synth in Max to try to get a better understanding of how FM works. Fair warning: the audio below is quite coarse and loud, so start with the volume low.
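For anyone unfamiliar, the core of FM is simply a sine wave whose phase is modulated by another sine wave, with the modulation index controlling how bright or harsh the result is. Below is a minimal Python sketch of that idea (not my Max patch; the carrier/modulator values are arbitrary):

```python
import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr * 2) / sr  # two seconds of samples

carrier_freq = 220.0  # the pitch we hear
mod_freq = 110.0      # how fast the frequency wobbles
mod_index = 5.0       # how far it wobbles; higher = brighter/harsher

# Classic FM: the modulator is added to the carrier's phase
signal = np.sin(2 * np.pi * carrier_freq * t
                + mod_index * np.sin(2 * np.pi * mod_freq * t))

# Scale down and write out as 16-bit audio
wavfile.write("fm_test.wav", sr, (signal * 0.3 * 32767).astype(np.int16))
```

Sweeping mod_index over time is where FM gets interesting, as the harmonic content changes continuously rather than just the volume or filter.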
I currently use software synths as my main source of synthesis, finding them accessible and easy to understand. I hope to learn how to make my own complex synths in Max/MSP. I would like to get into hardware synths, but my budget is holding me back, as is the overwhelming choice of different models. I was looking at the following on Thomann a while back: the ASM Hydrasynth Explorer, Dreadbox Nymphes and Korg Wavestate.
I created the soundscape below in Ableton. One track was an Operator instrument (the filtered wind sound), the second was processed recordings of a radio station (one of those beeping shortwave stations, like The Pip – with audible parallels to Bill Fontana's Fog Horns), and the third was electric guitar filtered very low. The file is too large to attach, so here is a link that should hopefully work:
https://drive.google.com/file/d/1zUuue2_Ad8OzKmlTHOwv0v3RnikfF-xS/view?usp=sharing
P2 – Hardware
Today we used the hardware synths available at LCC.
My personal favourite of these was the Moog Matriarch, as it sounded the best and I enjoyed switching between its different voicing settings. Of all the synths available at the time, it was probably the only one that appealed to me more than just using familiar software.
When I first arrived on the course, my knowledge of synthesis was self-taught, based on using software synths in DAWs and memories of the very cheap Casio keyboard at my Nan's. Although I knew how ADSR envelopes and filters functioned, it was initially quite daunting trying to see how they were arranged on specific synths, like the MS-20. Thankfully, I'm now a lot more familiar with hardware design conventions.
Do you agree with all the points made here? What has been left out?
Joseph Kamaru
Currently studying sonic arts in Berlin, Joseph Kamaru, aka KMRU, is a Nairobi-born, Berlin-based sound artist whose work is grounded in the discourse of field recording, noise, and sound art. His work posits expanded listening cultures of sonic thought and sound practice: a proposition to consider and reflect on auditory cultures beyond the norms, and an awareness of surroundings through creative compositions, installations and performances.
It was very interesting to listen to Kamaru’s talk. I had previously listened to his music and watched the short documentary KMRU: Spaces, made in partnership with Ableton.
I found the relationship between environmental noise and his compositions interesting.
I own a field recorder and have previously used it to record environmental sounds; however, I find it a difficult practice for myself. I have agoraphobia and find it quite challenging to build up the commitment and courage to leave the house for recording excursions. I have previously been pleased with material collected this way; I find the textures of recordings very inspiring, especially when warped and processed.
Musique Concrète
History of musique concrète – notable figures etc.
Pierre Henry, Pierre Schaeffer, Daphne Oram, Delia Derbyshire
In today's session, we used Ableton to manipulate samples that we recorded around the uni building. I would say that I'm very familiar with sampling in Ableton, as I often resample sounds that I have previously made and recorded. I often use the Simpler device with my Push 2 due to its intuitive visual interface and flexibility in playback (with the option to play back loops polyphonically, as well as to chop the sample into 64 smaller slices). The Sampler device is better for multisampling; however, its interface is less intuitive on the Push.
I've read about musique concrète before, as I'm very interested in sampling, plunderphonics and the concept of sampling in relation to soundscapes, such as The KLF's Chill Out, the music of Boards of Canada and some of Lawrence English's work. This is something I hope to explore further in the future, likely in the context of digital worlds due to my experience with game engine environments.
Inspired by musique concrète, I made a piece. The sound sources are a paper booklet, me cracking my knuckles, an old radio drama recording, some cello samples, and feedback from playing headphones over a microphone.
I used Max to speed sounds up beyond recognition, and Paulstretch to slow more percussive sounds down to completely remove the transients. I also used Max to create a looping effect that randomly changes the loop length (and the playback rate of the sound). I arranged it all in Ableton Live 11.
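I can't embed the Max patch here, but as a rough, hedged recreation of that random-loop idea in Python (the file names, loop-length range and rate range are all invented; it assumes a mono 16-bit source at least a second long):

```python
import random
import numpy as np
from scipy.io import wavfile

sr, audio = wavfile.read("source.wav")  # assumed mono 16-bit source file
audio = audio.astype(np.float32) / 32768.0

chunks, total = [], 0
while total < sr * 20:  # build roughly 20 seconds of output
    # Pick a random loop: random start, random length, random playback rate
    length = random.randint(sr // 20, sr)            # 50 ms to 1 s
    start = random.randint(0, len(audio) - length)
    rate = random.uniform(0.25, 2.0)                 # slower or faster playback

    loop = audio[start:start + length]
    # Change the playback rate by resampling with linear interpolation
    idx = np.arange(0, len(loop) - 1, rate)
    resampled = np.interp(idx, np.arange(len(loop)), loop)

    repeats = random.randint(2, 8)                   # repeat the loop a few times
    chunk = np.tile(resampled, repeats)
    chunks.append(chunk)
    total += len(chunk)

out = np.concatenate(chunks)
wavfile.write("looped.wav", sr, (out * 0.8 * 32767).astype(np.int16))
```

Because the length and rate change on every pass, the loops never settle into a fixed rhythm, which is what made the Max version useful as raw material.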
I think I should have attempted to create more percussive, attack-focused sounds. That juxtaposition would have created more variety in the piece and aided the compositional structure by giving a sense of tension.
The concept of sound objects makes me think of Matmos and their use of sampling to abstract physical materials.