Deep learning in music?

Musical expression lives in the minute, fractal deviations from regularity: micro-rhythms, micro-tones, and micro-sounds. It flows from fingers and lips, carrying the influence of pulse and breathing, and the awareness of the people (ensemble and audience) and events surrounding the musicians at a point in historical time.

Pop music based on MIDI is fake. The quantized irrational pitch (2^(1/12)), metronomic time, over-regularized waveforms of synthesis, and computerized sequencing are suitable for background Muzak and soundtracks, but they do not pass muster as music to trained musicians, unless those musicians over-intellectualize the music and hear only the notation.
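To put a number on that pitch quantization: every equal-tempered semitone is the irrational ratio 2^(1/12), while the just-intonation intervals that ears and acoustic instruments gravitate toward sit measurably off that grid. A rough Python sketch (the choice of intervals here is just illustrative) shows the deviations in cents:

```python
# A minimal sketch comparing 12-tone equal temperament, where every
# semitone is the irrational ratio 2^(1/12), against a few just-intonation
# intervals. The cent deviations are the "micro-tones" that equal-tempered
# MIDI pitch quantizes away.
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (100 cents = 1 ET semitone)."""
    return 1200 * math.log2(ratio)

# interval name -> (just-intonation ratio, equal-tempered semitone count)
JUST_INTERVALS = {
    "major third": (5 / 4, 4),
    "perfect fifth": (3 / 2, 7),
    "minor seventh": (7 / 4, 10),  # the harmonic (7-limit) seventh
}

for name, (just, semis) in JUST_INTERVALS.items():
    deviation = cents(just) - 100 * semis
    print(f"{name:>14}: just {cents(just):7.2f} cents, "
          f"ET {100 * semis:4d} cents, deviation {deviation:+6.2f} cents")
```

A trained ear resolves differences of just a few cents in sustained tones, so the roughly 14-cent error in the equal-tempered major third is not subtle.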

I was reminded of this by contrasts in the great TIME:SPANS concert series. The opening live concert was six conservatory-trained pianists generating MIDI control information that fed six instances of Pianoteq running physical models of micro-tuned pianos, for a brilliant piece.

The MIDI timing had milliseconds of uncertainty, passing through an asynchronous interface in a computer architecture and over communications protocols that are not designed for real-time information. I am familiar with the pianists, having recorded five of them, and their timing on acoustic pianos is considerably better than what I heard. (Three of them played ensemble with two pianos.)
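For anyone who wants the arithmetic, a back-of-the-envelope sketch (my own illustration, using the published MIDI 1.0 serial rate) shows where the milliseconds come from, before operating-system scheduling and USB polling add their own jitter on top:

```python
# A back-of-the-envelope sketch of why MIDI note timing carries
# millisecond-scale uncertainty. MIDI 1.0 serial runs at 31250 baud with
# 10 bits per byte (1 start + 8 data + 1 stop), and a full Note On message
# is 3 bytes (status + key number + velocity).
MIDI_BAUD = 31250          # MIDI 1.0 serial rate, bits per second
BITS_PER_BYTE = 10         # 1 start + 8 data + 1 stop bit
NOTE_ON_BYTES = 3          # status + key number + velocity

byte_time_ms = BITS_PER_BYTE / MIDI_BAUD * 1000
note_on_ms = NOTE_ON_BYTES * byte_time_ms
print(f"one byte on the wire:  {byte_time_ms:.3f} ms")   # ~0.320 ms
print(f"one Note On message:   {note_on_ms:.3f} ms")     # ~0.960 ms

# Messages serialize on the bus, so notes the players placed together
# cannot arrive together:
chord_notes = 6
print(f"{chord_notes} simultaneous notes: {chord_notes * note_on_ms:.3f} ms spread")
```

And none of that traffic is synchronized to the audio clock, so six pianists' worth of messages converging on one computer smears chords that the players themselves placed together.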

The converse was a concert for solo cello and electronics, with assistance from an electronicist. In one piece, a complex of loops with processing was triggered by the cello player's feet. He had practiced for years to play this piece in that modality, like a drummer or organist with independent limb rhythms, and the musico-emotional effect was powerful.

https://towardsdatascience.com/generating-music-using-deep-learning-cb5843a9d55e

3 comments

Really? And what about the AI-generated compositions that are played by real musicians, like a few years ago at the Royal Albert Hall in London? The result of that experiment: the audience was as moved by the performance (and composition) as they would be by human-created compositions.

PS: Your last comment was 11 months ago?


Remember, we're trying to discern the difference in expression. Any kind of music, whether written by a human or by an AI, sounds vastly different when it is played by biological fingers; you can't escape the rigid modality of studio-produced timings and sounds. This is but one of the reasons people are perhaps unaware of their attachment to live performances over studio "recordings": the performance embodies the entire essence of the musician, including the flaws that make it unique in a way that MIDI and studio processing would wipe clean.


I do understand what you are saying. Back in the day when CDs came to market, I always wanted to buy the AAD versions, not the DDD ones. The latter sounded way too clean. But people adjust over time.

Also, AI generates music, and we'll be seeing the humanification of AI-generated music. That is, we will start post-processing AI-generated music so it sounds like humans played it. Do I like seeing that happen? Not at all. I have too many friends who are musicians and music producers. But does that prevent technology from creating music the masses are perfectly OK with? I am pretty sure even the music-phile will eventually not be able to differentiate music created and played by humans from music created and played by AI. I wish this would take many more decades, but I'm afraid it is much closer around the corner than we can imagine.

I still need to read the article; I bookmarked it, since I'm very interested in music and what directions it can and will take in terms of technology and complex AI algorithms.
