
Saturday, 6 June 2020

The great escapes... of the music-making process


From the first rhythms of hitting two stones together in a cave to a music workstation packed in our luggage, music and its creative process staged four great escapes from their confines. 

The hugely disruptive inventions that caused those great escapes are now taken for granted - because their results changed our world so significantly.

Arguably, the first musical scales and their reproducible definitions, which were used to tune musical instruments, essentially allowed the creation and playing of music to travel from one person to another. Pythagoras certainly has a big claim in that department, and his scale has shaped even metaphysical musings on music and its significance for several millennia.
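For a flavour of what such a reproducible definition looks like in practice, here is a small illustrative Python sketch (an approximation of the idea, not a historical recipe) that derives scale degrees by stacking pure 3:2 fifths and folding them back into a single octave:

    # Illustrative sketch: build scale degrees from stacked pure 3:2 fifths,
    # folding each result back into the octave above the base frequency.
    def pythagorean_degrees(base_freq=440.0, steps=7):
        """Return `steps` frequencies built from stacked perfect fifths."""
        freqs = []
        f = base_freq
        for _ in range(steps):
            freqs.append(f)
            f *= 3 / 2                      # up a pure fifth
            while f >= base_freq * 2:
                f /= 2                      # fold back into one octave
        return sorted(freqs)

    print([round(f, 1) for f in pythagorean_degrees()])
    # -> [440.0, 495.0, 556.9, 626.5, 660.0, 742.5, 835.3]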

However, learning and reproducing music remained a superbly tedious process. It could take up to ten years to become an ecclesiastical singer in the early Middle Ages, as the only rudimentary musical notation available to the monks merely captured a vague outline of the musical piece.

The so-called neumatic notation merely indicated, for example, whether the melodic line was going up or down. Singers learned compositions by listening to, and repeating, others.


The first truly great escape of music came from the colossal idea of a Benedictine monk in the early 11th century. 


Guido of Arezzo had the phenomenal idea of working out a musical notation that allowed musicians to reproduce a piece of music on sight. He also invented a method of teaching, and even to the Pope's great surprise, a score could be instantly performed by boys who had never seen or heard the musical piece before.

This was absolutely unheard of until Guido's invention of modern staff notation. A score could be sent to singers somewhere else, and they could instantly reproduce the chant...

More than a thousand years later we take it for granted that someone can produce a score, send it or publish it to others, and it is a truly ordinary concept for us that anybody who understands the notation can sing or play the composition on sight, anywhere else in the world.

It is somewhat amusing to think that many centuries after Guido, some tried to lock music up behind certain walls. 

The most famous example is that of Allegri's Miserere, which was considered so divine that its score was not allowed to 'escape' the walls of the Vatican... Only three authorised persons were given transcriptions of this indeed sublime work.

Imagine the pleasure that Guido would have felt if he had seen a young musical genius called Mozart listening to the piece during a visit to Rome, and then transcribing it from memory... 

Thus, Allegri's masterpiece had literally escaped the mighty walls of the Vatican.

Still, music remained an ephemeral wonder. One had to be physically present at a performance, and once having listened to it, one could only rely on one's memory to evoke the sounds and emotions of the work. 

One may not have had the means to attend performances, and one's access to certain types of music performed in certain settings may have been limited, or made completely impossible, for one's entire life.


Thomas Edison's humble wax cylinder changed everything in 1877. 


True, it was shockingly rudimentary by today's standards, but suddenly, any musical performance could be recorded and reproduced elsewhere, any number of times, by practically anybody.

For us, it seems absolutely banal that ephemeral musical performances could be preserved for posterity - or that one could repeatedly listen to performances by musicians one could not meet, from venues one could not access.

This initial, and later immensely developed, recording technology also enabled radio and all other broadcasts in the years and decades that followed.

It may not seem like an invention that had a direct and vast impact on music creation, but... composers were no longer creating pieces of music that lay around on pages of scores usable only by trained musicians, and audible only to people who could attend performances by such musicians. 

Composers could create musical scores that were recorded once in a recording studio, and then their creations could reach millions of people scattered around the globe, who could listen any number of times to their beloved musical favourites. 

This even had an impact on the format and content of what they composed, e.g. in popular genres some songs 'had' to fit onto certain media in terms of duration.


Dave Smith and Chet Wood's invention of MIDI in 1981 brought us the next great escape of the music creation process.


Imagine if Bach had had a MIDI keyboard and the means to record MIDI information... His ephemeral (and reportedly stunning) improvisations could have been captured for posterity, and reproduced instantly as if he had been sitting at the keyboard. 

MIDI, or the Musical Instrument Digital Interface, became perhaps the most stable standard that could carry not the sound itself, but information about the actual musical events in a performance. 

It encoded, in a form universally understood by any MIDI-capable instrument and software, the musical notes, the way in which the musicians played them expressively, and heaps of extra information about that very performance. 
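To get a sense of how compact this event information is, here is a small illustrative Python sketch (raw bytes as defined by the MIDI standard, not tied to any particular library) of a Note On and Note Off message:

    # A MIDI Note On is just three bytes: status (0x9n, n = channel),
    # note number (0-127) and velocity (0-127); Note Off uses status 0x8n.
    def note_on(channel, note, velocity):
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel, note):
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x00])

    # Middle C (note 60) played fairly hard (velocity 100) on channel 1:
    print(note_on(0, 60, 100).hex(" "))    # -> 90 3c 64
    print(note_off(0, 60).hex(" "))        # -> 80 3c 00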

Musical compositions created on digital instruments and computers could be instantly transformed into a musical score, passed to entire orchestras as a finished piece of music noted down in traditional form. Guido would have loved to see this...

We may take it for granted, but for the first time in mankind's history, musical notes and their performance details could be instantly captured, reproduced and developed further, sent to someone else to collaborate on quasi-instantly... The actual musical composition process suddenly escaped any physical confines of locality and time. 

One could return to a complex composition weeks later and continue where one had left off... One could instantly recall elements of a work, could change it, elaborate on it... 

It also brought another type of escape: a break from human limitations.

Imaginative and revolutionary composers could now develop pieces of music that were literally impossible to perform by humans, no matter how technically gifted they may have been as players. In terms of complexity and tempo, MIDI allowed the creation and reproduction of compositions that could never have been born without it and the instruments that could turn MIDI information into sounds. 

Only a few years later, the next great escape of music creation & production occurred.


The 1980s brought us affordable and portable music workstations that eventually made the entire music creation process, from composition to mixing to mastering, fully portable...



Ensoniq and Korg were at the forefront of this revolution - if we don't count the Synclavier of the late 1970s or the Fairlight CMI, which were pricey inventions in their initial incarnations. These were for quite some time confined to high-end studios, or to the hands of established, successful musicians who could afford them. In terms of features, they were also not yet the end-to-end music production tools that later workstations, at a fraction of the price, would become.

However, workstations like the Ensoniq ESQ-1, the Korg M1 and their vastly more powerful successors changed everything. Their immense sonic range, on-board effects, MIDI recording and editing, even multi-track digital recording and mastering, allowed one to pack the studio into a bag... and take it anywhere. 

Later the arrival of purely software workstations running on personal computers, especially laptops, truly made the music studio portable. 

Not just the composition, but the entire music production process has become something that one could pack into a bag, travel with, unpack during travel or on arrival, pour the fruits of one's labour into other equipment... or even make a CD master without using any other tool. 


Where would the next great escape come from? What could it be?

Perhaps we will lose our dependence on the instruments and studio production tools packed into a mighty software or hardware workstation? Maybe the next great escape comes from outside music technology, in the form of wearable and implanted tech... 

We might see the 'escape' of the very early stages of the musical creation process, i.e. turning thoughts directly into compositions that can be downloaded to anything or anyone else, without relying on an external musical instrument to first play them on...

Perhaps we could think up musical pieces, sounds and soundscapes, translate them directly in our heads into audible and reproducible works, which could then be transferred to others... without carrying a laptop or a bulky synth workstation around with us. 

Whatever that next disruptive and world-changing step may be, for now we can simply reflect on how far we have come since a humble monk in an Italian monastery wanted to write down music that could be instantly understood and reproduced by others...


Friday, 24 August 2018

From oxygen to outer space - Jean-Michel Jarre at 70

Photo: AFP

Jean-Michel Jarre, perhaps the most prominent post-avant-garde name of the French School of electronic music, turned 70 today.

Whilst he was already a prolific experimental and soundtrack composer before the 1976 release of his landmark album Oxygène, it was the latter that really projected his name onto the firmament of both popular and critically acclaimed electronic music.

Even in 2018, the album sounds futuristic, timeless and perfectly at home alongside state-of-the-art space rock and ambient electronic albums - a fluid, bubbling and seamlessly flowing electronic symphony that still holds many lessons for budding electronic musicians who set out to compose descriptive and emotionally involving electronica.

As they say, the rest is history...

Whilst Jarre has become perhaps even better known for his record-breaking, gigantic concerts, where audiences numbered in the millions (the absolute record being 3.5 million people) and the stage could often be an entire city, his imaginative musical creations cannot be ignored.

His music was seen by some regimes as ideologically clean and "safe", the music of a technological future - hence it is no accident that he was the first Western musician officially invited to give live performances in post-Mao China.

While Jarre established himself as an unparalleled visionary when it came to live performances, with hugely innovative multimedia technology at work alongside his futuristic electronica, his use of innovative new musical instruments was also remarkable.

Cities in Concert - Live in Houston, TX

The Fairlight, the pioneering sampler that completely changed music across countless genres, was used even by luminaries like Herbie Hancock, Peter Gabriel, Art of Noise and Kate Bush mostly as a digital instrument for playing back sound samples.

Then Jarre released the still-astonishing album Zoolook, where he took the Fairlight to an unprecedented level, projecting us into a never-before-heard sonic universe.

His use of sound processing and alteration via the new instrument sounds simply stunning even today - and none of it was done in a purely academic manner, which makes Zoolook genuinely enjoyable for the masses.

Whilst he ventured happily into the realm of chirpy, dancey, highly trendy electronica too, we cannot forget that he also composed vast, almost cosmic, requiem-like suites such as Rendez-Vous, and explored "pure" electronic ambient music as well (the epic-length title track of Waiting for Cousteau).

Even under the surface of sometimes very pop-sounding electronica, he often managed to hide complex musical ideas. A simple example would be Equinoxe, his second album, where the most popular track employed time signatures that one would be challenged to find in any chart-topping creation...

Even in 2018, even at 70, he is not only keeping up with the very latest technological advances in sound synthesis, processing and music production, but he remains an influencer and a shaper of sound technology.

His latest studio double opus, Electronica Vol. I and II, shows how he can collaborate with numerous electronic musicians who come from vastly different musical and technological backgrounds.

The tracks composed with the biggest names, ranging from Vince Clarke to Hans Zimmer to the late Edgar Froese (founder of the veritable Berlin School institution that is Tangerine Dream), show that Jarre's artistic range and sensitivity can integrate myriad musical ideas and sources into a coherent concept.

In ways that transcend particular subjective tastes and electronic music preferences, Jarre's trailblazing efforts in the field have left their mark on countless facets of music technology, including creative tools and approaches to the vast world of synthesizers.

His music is also testament to the fact that the most high-tech instruments are mere instruments, and that the human using them remains the key factor in the creative process... sometimes making the resulting music unashamedly romantic, even though it is created with electronics (still often misperceived as "cold").





Saturday, 3 March 2018

Converging worlds, stable antagonisms

Famously, and somewhat infamously, Klaus Schulze's first fully digital recording Dig It proclaimed the "death of an analogue" in one of its tracks.

Although the lyrics were ironic, and digital was understandably called an "automat" at that time, 1980 was not quite the best moment for heralding a tectonic shift toward an exclusive relationship with the emerging digital instruments.

Eminently digital synths (from early samplers to the later FM synths and beyond) expanded the sonic palette to previously unimaginable dimensions - but when it came to an "analogue" sound, they operated with a couple of crucial limitations.


In terms of sound synthesis and processing, the available computational power was one of the factors limiting the bit resolution and sample rate of the digitally represented signals that a synth operated with. This introduced sonic artifacts - most notoriously aliasing - alongside the puny performance of early digital filters. Many of the still-surviving, and largely outdated, stereotypes about "digital sound" artifacts originated in this era.
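As a rough illustration of why aliasing was such a nuisance, here is a simplified Python sketch (the deliberately low sample rate is an assumption for the example, not any particular machine's specification):

    # Any harmonic above half the sample rate (the Nyquist frequency) folds
    # back into the audible band at a "wrong", usually inharmonic frequency.
    SAMPLE_RATE = 22050            # deliberately low, as on much early digital gear

    def alias_of(harmonic_hz):
        """Frequency actually heard for a harmonic between Nyquist and the sample rate."""
        nyquist = SAMPLE_RATE / 2
        return harmonic_hz if harmonic_hz <= nyquist else SAMPLE_RATE - harmonic_hz

    # The 5th harmonic of a naively generated 3 kHz sawtooth sits at 15 kHz,
    # above Nyquist, so it is heard folded down instead:
    print(alias_of(5 * 3000))      # -> 7050 Hz, unrelated to the 3 kHz fundamental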

The revolutionary Fairlight CMI
Both sample size and sampling rate also had a memory impact, especially for samplers. The characteristic sound of a Fairlight was partly due to its humble 8-bit sampling. 

Early Emulators used nonlinear compression tricks borrowed from telecommunications standards, which exploited the way we hear. They could therefore store surprisingly decent audio with fewer bits, hence with less of the (then ludicrously expensive) memory.
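The principle can be sketched with the classic telephone-style mu-law curve (a generic illustration of companding, not the exact scheme inside any particular sampler): quiet samples are stored with fine resolution, loud ones coarsely, so 8 stored bits behave more like a considerably higher linear bit depth.

    import math

    MU = 255.0    # the classic mu-law constant

    def mu_law_encode(x):
        """Map a sample in [-1.0, 1.0] to an 8-bit code (0-255)."""
        y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
        return int(round((y + 1.0) / 2.0 * 255.0))

    def mu_law_decode(code):
        """Map an 8-bit code back to an approximate sample value."""
        y = code / 255.0 * 2.0 - 1.0
        return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

    quiet = 0.01
    print(mu_law_decode(mu_law_encode(quiet)))   # ~0.0102: close, despite only 8 bits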

In an FM synth like the revolutionary Yamaha DX7, the processing power limited the signal representation, the precision of the mathematical operations, and how many of those operations it could perform in real time. 
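The core of FM synthesis is mathematically tiny, which is exactly why precision and speed mattered so much; a minimal two-operator Python sketch (far simpler than the DX7's six-operator, lookup-table-based engine) looks like this:

    import math

    SAMPLE_RATE = 44100

    def fm_sample(t, carrier_hz=440.0, ratio=2.0, index=3.0):
        """One sample of two-operator FM: a modulator sine bends the carrier's phase."""
        modulator = math.sin(2 * math.pi * carrier_hz * ratio * t)
        return math.sin(2 * math.pi * carrier_hz * t + index * modulator)

    # One second of audio as floats in [-1.0, 1.0]; tiny changes to `ratio`
    # or `index` reshape the entire spectrum of the result.
    audio = [fm_sample(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]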

The maths involved in even much later synths, like the E-mu Morpheus with its mind-bending morphing filters, limited how much control and changeability they allowed the user to have in real time. 

This was another key issue: how much we, the users, could meddle with the inner digital processes, and how much instantaneous control we had over the parameters that shaped our sounds.

The user interfaces on these digital synths were notoriously minimal, compared to the analogue synth user's joy of immediate, continuous and direct control over myriad parameters via many lovely knobs.

Even if some "programmer" kits helped one get a little way inside the digital beasts, the processing power still meant that one could not expect major real-time control over a large number of key synthesis parameters. A notorious example, alas, is the aforementioned DX7, but even something like the Roland D-50 engine was no dream to deal with, even with the programmers manufactured for it.

However, the analogue and digital worlds began to converge, with huge steps in more recent times. NB convergence does not mean that the two (may) end up absolutely indistinguishable from each other, nor that someone would have the sheer audacity to claim so. The latter would immediately assemble the purely digital, or purely analogue (never hybrid), execution squads in many internet forums...

The age-old debate about how much, under what conditions and in what way one can (or cannot) hear the differences between real analogue synths and their digital emulations has never been more heated.

It may be obvious, but it is most often missed: the very factor that makes such debates on increasingly subtle aspects possible at all is the huge strides achieved in digital/analogue convergence. At the time of the release of Dig It, the topic would have been hilariously absurd.

With current sampling frequencies and bit precisions achievable in internal computations and sample representations, with current volatile and non-volatile memory amounts, and considering the sheer processing power of multi-core engines, the ability to emulate analogue circuit behaviour has increased exponentially. So has our ability to control the processes - think of the user interface of a Roland System-8 or a Korg Radias, for example.

VA, or virtual analogue, synths have an increasing ability to handle many tweaks to many beloved knobs, altering the synthesized and processed signal in real time. We take this for granted now, but not so long ago it came at huge expense, if it was possible at all. Also, the level of detail at which characteristic irregularities in analogue circuitry can be modelled has vastly increased.

The Roland JV-1080 (and its successors) tried, for example, to imitate some crucial irregularities and instabilities with what they called "1/f modulation". Fast forward, and certain Roland VA gear, like the Boutique series, now has detailed circuit modelling with even a control to adjust the age of the instrument - in order to simulate the components' sound-altering decay over time.
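Roland's exact algorithms are proprietary, but the general flavour of such instability modelling can be sketched in a few lines of Python (an assumption-laden illustration, not the JV-1080's "1/f modulation" nor the Boutique series' circuit model): slowly filtered noise nudges the pitch by a few cents, much as drifting analogue components would.

    import math
    import random

    SAMPLE_RATE = 44100

    def drifting_sine(freq_hz=440.0, seconds=1.0, depth_cents=4.0, smoothing=0.0005):
        """A sine whose pitch wanders slowly, driven by low-pass filtered noise."""
        out, phase, drift = [], 0.0, 0.0
        for _ in range(int(seconds * SAMPLE_RATE)):
            # one-pole low-pass on white noise -> a slow, wandering control signal
            drift += smoothing * (random.uniform(-1.0, 1.0) - drift)
            detuned = freq_hz * 2 ** (drift * depth_cents / 1200.0)
            phase += 2 * math.pi * detuned / SAMPLE_RATE
            out.append(math.sin(phase))
        return out

    samples = drifting_sine()    # sounds less clinical than a mathematically perfect sine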

Which then lands one in the everlasting debates about how "good" they sound or whether analogue reigns supreme, full stop.

Roland Boutique series JP-08 VA synth
The answer to the latter, looking at some forums, is typically a resounding, quick "yes" or a similarly emphatic "no". 

However, both such irrational extremes disregard a core contextual element.

"Analogue sounds best" is still very true, for...  the sphere of analogue synth sounds, especially within the confines of substractive synthesis. 

There is a very obvious reason why even decades ago creative minds embraced all other synthesis methods, too, including eminently digital gear... but even in current times some lock themselves into an exclusive, hence by definition self-limiting relationship with just one specific corner of the sonic Universe.

The latter is possible within the confines of certain sub-genres of electronic music, where the exclusion of vast other sonic possibilities is not an issue. 

There is psychology at work, too, especially if one defines oneself by the tools one uses - instead of treating them as just tools. Musicians fall into the very same trap as, for example, photographers have for ages; we really are not as different or as special as we sometimes would like to believe.

While many photographers were caught up in film vs. digital debates, the creative bunch embraced both technologies and used what was best for a certain purpose - same goes for synth artists of recent past and present.

Taking such a shamelessly utilitarian approach, it boils down to something eminently simple, yet missed completely on a daily basis in many forums: is the tool in question the best one to use for the task?

Questions like "how can I create a realistic piano with my XY analogue gear" or "how can I do multi-operator FM synthesis via analogue means" (to quote two concrete examples) show how the use of the right tools for the job is entirely ignored in favour of a bordering-on-fetish approach. 

In both examples, the approach itself is a failure by definition from the start. If one considers, for example, the vast sonic changes introduced in multi-operator FM synthesis by minute alterations of some parameters, the lack of precise and exactly reproducible control in a purely analogue approach makes the task eminently pointless.

Also, the task in question may well involve not just parameters like music genre, musical or sonic style, technical range and so on, but also crucial factors that define personal workflow.

If one needs instant recall and stability, then one goes for a hybrid or a fully digital tool, in order to be able to focus on reproducing the needed sounds as quickly and precisely as possible.

If one puts the sound source through (no pun intended) convoluted chains of processing, the "I can hear the difference immediately" between an analogue or digital source may no longer actually mean or matter much - especially not in the final mix. Internet forum rhetoric is superb, until one plays games with an audience and subjects them to creatively processed sounds from a plethora of vastly different origins.

There is also the effort element in the workflow. It is often left out of the sizzling debates, precisely because it is highly personal and goes to the heart of an individual's creative process.

Ansel Adams's superb prints can be appreciated not just for their visuals, but also for the darkroom efforts they involved - efforts that can nowadays be reduced by an order of magnitude in a digital darkroom. Just think of his elaborate multi-masked dodging and burning, which often required a dozen paper cut-out masks to adjust the tones precisely and locally... However, he and many others used the best possible tools available to achieve what they set out to visualise.


Ansel Adams in his darkroom
As fundamental as it sounds, it is remarkably absent in many debates: as much as one may subjectively appreciate the mechanics of translating ideas into images or sounds, those are just the mechanics of the process - and some actually distance one from the end goal. It is admirable to suffer through a certain workflow for the sheer heroics involved, but...

Even seasoned judges in photography competitions have fallen into the trap of trying to guess, when separate categories were not defined, whether a photograph emerged from a digital or a traditional darkroom. Watching them agonise over the prints was, in a way, entertaining. Did the origins of, and the workflow leading to, the image really matter? In some cases, perhaps, but trying to reach a judgement centred on content and message while mixing in considerations of medium, process and tool-related aspects was, and is, symptomatic of the subjective traps.

There is a marketing and financial side, too. Clearly, having spent vast amounts on a certain piece of kit, it takes a huge degree of objectivity and honesty for the owner to admit that some kit at a fraction of the cost is "close enough" for what the end result needs to be. It is no different from the debates about whether an Alien Skin plugin's reproduction of the special je-ne-sais-quoi feel of a certain film stock is good enough compared to shooting on that very film, then scanning and post-processing it...

Even digital relics have been brought into the present, with extra oomph... The Synclavier monsters' computational power can nowadays fit multiple times into an ordinary laptop, and a Fairlight dinosaur can come to life in a cheap plugin. A legendary monster like the PPG Waveterm is nowadays wonderfully reproduced by apps like Audioterm, coupled with a super-affordable Waldorf microWave or Blofeld that emulates the PPG Wave's characteristic analogue filters.

Synclavier
A Roland Boutique VA synth can reproduce the analogue originals "well enough" at a fraction of the cost.

It is a cliche by now that the compromise between "good enough" and cost & effort is an eminently personal one.

Perhaps, in some studios, warranty periods and obtainable state-of-the-art (and affordable) components outweigh subtle differences in sound.

However, putting to one side psychology, ego, preferences in workflow, personal finances and priorities (all feeding into the subjective), the brutal technological fact is that if something nowadays sets out to be a good VA instrument, it has an unprecedented chance of coming "close enough".

In some debates on "close enough", arguments centred on aliasing, converter bit precision, computational precision and complexity are rather anachronistic nowadays, unless the gear is really badly made. The subjectivity of such arguments is betrayed by how much they are in denial of the signal processing realities lurking under the bonnet.

When it does go wrong, it may actually add character... The Waldorf Blofeld's surprisingly bad metallic reverb is horrid to some ears, but perhaps in someone else's studio it adds a characteristic something that is missing from the other, superb-quality digital effects... In certain patches it actually becomes essential to the final sound, and pumping it through a good-quality reverb instead loses that certain something...

So while the galaxies of personal motivations and attempts at self-definition via one's tools will continue to swirl on and on, the convergence, up to a point, of the two (in some minds still) antagonistic worlds is also unstoppable.

Does true analogue sound best? Yes, for true analogue sounds, if that is all one needs... and when the target audience can hear the current VA vs. analogue differences... and when they self-consciously care.

Do the audible differences in analogue-wannabe digital imitations matter? Yes, if in our sonic creation we make that authenticity a priority over myriad other artistic elements. Even eminently analogue legends nowadays perform with their vintage pieces emerging live from digital and hybrid gear, while internet forums of home musicians spiral into a frenzy for months and years debating VCO vs. DCO sonic differences.

Thorsten Quaeschning of Tangerine Dream
While Daft Punk famously replied "Daft Punk" to the question "who will hear the difference between the three different microphones" on the track Giorgio by Moroder (from the album Random Access Memories), they embraced all technology at their disposal for achieving the creative goal.

How many listeners of Tangerine Dream's expansive improvised live sets on the recent Sessions I and II albums spend sleepless nights trying to identify where the Doepfer modular ends and the JD-XA's digital engine begins?

As simple and obvious as it may be, countless such electronic artists, who do not cramp themselves with self-defeating purism of one sort or another, have demonstrated that even obsessive attention to detail is no obstacle to pursuing the main goal that matters to them: putting every available tool in the service of creativity.