
AI-Assisted Compositions to be Featured at China Now Music Festival / Composing the Future: A Concert with The Orchestra Now

Updated: Oct 16






China Now Music Festival and The Orchestra Now (TŌN)

Saturday, October 12, 2024 @ 7:30pm – 9:30pm (EDT)

Carnegie Hall (Stern Auditorium / Perelman Stage), New York, NY, United States


$29-$70 (student/senior discounts available at the box office)


Li Xiaobing: AI Suite




Du Yun: Hundred Heads

The opening concert of the 7th annual China Now Music Festival features future-focused new symphonic works by contemporary Chinese composers, and an experiment in AI composition from the Artificial Intelligence department at China's Central Conservatory of Music. The China Now Music Festival is an annual series of concerts presented by the U.S.-China Music Institute of the Bard College Conservatory of Music, in collaboration with the Central Conservatory of Music, China.


About Jindong Cai, conductor


881 7th Ave, New York, NY 10019, United States



The Violin Channel sat down with two faculty members of the Central Conservatory of Music to learn more about how AI can be used in composing.

 

The 7th annual China Now Music Festival is entitled “Composing The Future” and will feature concerts from October 12–19, 2024, at Carnegie Hall.

In keeping with the future-focused theme of this year’s festival, China Now asked China's Central Conservatory of Music's Department of Music Artificial Intelligence to contribute an ‘AI Suite’ to the concert program, composed entirely by AI, opening an innovative dialogue between composer and orchestra, human and machine. Additionally, China Now asked for works that experimentally incorporate AI technology in live performance.

Audiences will hear AI Suite, a piece generated by the Central Conservatory of Music's Artificial Intelligence Composition System that incorporates a “Cloud Chorus” of 1,000 voices gathered from around the world, as well as a piece by Sun Yuming in which a traditional guzheng zither is played on stage without the performer touching the instrument.

We got the chance to talk with composer Li Xiaobing — Professor and Director of the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music, China — to learn more about this innovative new approach to composing.

 

Could you describe the process of using artificial intelligence to compose music?

Three key elements are essential in AI composition: data, algorithms, and computing power. Let’s start with data. Imagine we have a large warehouse filled with various types of music—classical, pop, rock, and many different styles. This “warehouse” is essentially the “big data” we use to train AI. The AI first listens to this music, learning its melodies, rhythms, and styles.

Next, the AI translates this music into a special language we call “symbolic language.” Through this process, the vast amount of audio data in our warehouse turns into a “symbolic database” specifically designed for AI models to learn from. By learning from this large symbolic database, AI can understand the internal logic and structure of music.
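The "symbolic language" idea described above can be sketched in a few lines of code: note events become discrete tokens that a model can consume. The token scheme below (PITCH_/DUR_ strings, BOS/EOS markers) is a hypothetical illustration, not the Conservatory's actual encoding.

```python
# A minimal sketch of encoding note events (pitch, duration) as symbolic
# tokens. The token vocabulary here is invented for illustration only.

def encode_note(pitch: int, duration_beats: float) -> list:
    """Turn one note event into tokens; pitch uses MIDI numbering (60 = C4)."""
    return [f"PITCH_{pitch}", f"DUR_{duration_beats}"]

def encode_melody(notes):
    """Encode a list of (pitch, duration) pairs as one token sequence."""
    tokens = ["BOS"]  # beginning-of-sequence marker
    for pitch, dur in notes:
        tokens += encode_note(pitch, dur)
    tokens.append("EOS")
    return tokens

# A fragment of "Twinkle Twinkle" as (MIDI pitch, beats) pairs
melody = [(60, 1.0), (60, 1.0), (67, 1.0), (67, 1.0)]
print(encode_melody(melody))
```

Run over an entire audio archive, an encoder like this is what turns the "warehouse" of recordings into the symbolic database the interview describes.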

As for algorithms, our AI uses a complex music model. The model’s learning process involves predicting what the next “symbol” will be. It’s similar to how, when writing an article, we predict what the next word might be based on the previous content. In the case of music AI, it’s predicting the next note or melody.

In essence, the automatic composition model allows the machine to learn and create music similarly to humans. Although the process sounds complex, its core is about learning and predicting. AI can efficiently discover deep patterns in notes, melodies, and harmonies through self-supervised learning and can even create entirely new artistic works, akin to human creativity. So, AI composition is essentially a process of learning from data and generating new music.
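The prediction objective described above can be shown with a deliberately tiny stand-in: a bigram model that counts, for each symbol, which symbol most often follows it. Real music AI uses large neural models, but the core task, predicting the next symbol from what came before, is the same. The pitch-letter data below is made up for illustration.

```python
# Toy next-symbol prediction: count symbol-to-symbol transitions, then
# predict the most frequent continuation. Illustrative only.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each symbol, how often each other symbol follows it."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, symbol):
    """Return the symbol most often seen after `symbol` in the data."""
    return follows[symbol].most_common(1)[0][0]

# Tiny "symbolic database": two melodies as pitch-token sequences
data = [["C", "C", "G", "G", "A", "A", "G"],
        ["C", "D", "E", "C", "D", "E", "G"]]
model = train_bigram(data)
print(predict_next(model, "C"))  # "C" is followed by "D" most often here
```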

 

What does the final work look like?

Different models produce different types of works. For example, a song composition model would generate songs, a twelve-tone composition system would create modern music, and an orchestral model could produce works for an orchestra.

 

How do you give AI instructions to achieve the desired outcome?

We can input lyrics, style, instruments, or even provide a theme or style for each type of compositional model, allowing the AI to compute and generate a new piece of music employing these inputs.
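One common way to realize this kind of conditioning, sketched here under assumption since the actual system is not described in detail, is to prepend control tokens (style, instrument) to the sequence and let the model generate a continuation. The transition table and token names below are invented for illustration.

```python
import random

# Hypothetical sketch: condition generation by prepending control tokens,
# then sample continuations from a (made-up) transition table.
random.seed(0)

TRANSITIONS = {
    "STYLE_folk": ["C"], "INST_guzheng": ["C"],
    "C": ["D", "E"], "D": ["E"], "E": ["G", "C"], "G": ["C"],
}

def generate(control_tokens, length=6):
    """Start from the control tokens and sample `length` more symbols."""
    seq = list(control_tokens)
    for _ in range(length):
        seq.append(random.choice(TRANSITIONS[seq[-1]]))
    return seq

piece = generate(["STYLE_folk", "INST_guzheng"])
print(piece)
```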

 

Do you think AI could replace the work of existing composers? Is this idea dangerous?

From a humanistic perspective, this notion does seem dangerous. With the rapid development of AI worldwide, many jobs in the future could potentially be replaced by AI—not just in music, but in many other industries as well. However, I believe there’s no need to worry because history shows us that technological revolutions can lead to disruption and change, but progress continues. What we need now is to be fully prepared for the future.

Personally, I believe that in the near future, AI will not replace top composers. However, AI can assist composers in quickly and effectively achieving their goals, especially for foundational tasks. Therefore, I think composers should embrace AI and treat it as a friend. In my opinion, the future development of music will unfold in three directions:

First, traditional music (including modern music) will continue to develop.

Second, AI will empower innovation in traditional music.

Third, AI will generate new forms of music that will develop independently.

In short, AI in music is meant to assist and empower humanity, not replace it. With the advent of music AI, human art will become even richer and more valuable. I hope the composers of our time will join hands to explore the future of music.

 

Can AI composition reach the same level as human composers?

It’s difficult to compare. As of now, AI can do well with certain commercial music that doesn’t require much depth, such as canned music or background music. In these areas, AI may take over a large portion of the market. However, in the future, if AI achieves a breakthrough in self-awareness and fully understands human emotions, we may need to reconsider several issues, such as how to coexist with AI, the ethical implications, and new legal and risk-related questions.

As I mentioned earlier, AI will empower innovation in traditional music. This requires composers to master AI. I believe that with the rapid development of AI, the future world will inevitably bring new philosophy and new art. Human art in the future will stand on the shoulders of AI giants and move forward.

 

 

The concert will also feature a piece by Sun Yuming — Lecturer on Electronic Music Composition at the Central Conservatory of Music, China — in which a traditional guzheng zither is played on stage without the performer touching the instrument. He enlisted AI engineer Zhang Xinran — from the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music — to help. We had the pleasure of getting their thoughts as well.

 

How did you use artificial intelligence for this work?

Sun Yuming: “Starry Night” is a concerto for guzheng and orchestra. In order to allow the guzheng to break through its own limitations and play in multiple keys, I tried a few things: I used two guzhengs for the performance and included the action of moving the bridge during the piece. However, in this process, I wanted the guzheng’s playing technique to vary, so I went to an AI engineer for help.

Zhang Xinran: To provide composers and performers with richer performance options, we designed a virtual instrument using computer vision technology. By capturing specific movements through optical motion capture, this system triggers certain musical materials. This approach gives composers more creative freedom and offers performers new ways to interact with and play their instruments.
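The gesture-to-sound idea Zhang describes can be sketched as a mapping from captured hand positions to prepared musical materials. Everything below — gesture names, trigger zones, and the audio-engine stub — is a hypothetical illustration, not the actual system.

```python
# Hedged sketch: a motion-capture frame (normalized hand coordinates)
# crosses a trigger zone and fires a prepared musical sample.

TRIGGER_ZONES = {
    # gesture name: (sample to fire, predicate on hand position)
    "raise_hand":  ("guzheng_glissando_fx", lambda x, y: y > 0.8),
    "sweep_right": ("electronic_pad",       lambda x, y: x > 0.9),
}

def play_sample(name):
    """Stand-in for a real audio engine call."""
    return f"playing {name}"

def process_frame(x, y):
    """Check one capture frame against every trigger zone."""
    events = []
    for gesture, (sample, test) in TRIGGER_ZONES.items():
        if test(x, y):
            events.append(play_sample(sample))
    return events

print(process_frame(0.95, 0.5))  # hand far right -> electronic pad fires
```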

 

What is the end result?

Sun Yuming: During the performance, the guzheng player plays two guzhengs: one with a pentatonic scale and one with a heptatonic scale. Traditional sounds are produced by playing the instruments, while non-traditional sounds, such as guzheng sounds with added effects and electronic music elements, are triggered by AI motion capture technology.

Zhang Xinran: During the live performance, we capture the player’s hand movements using a computer to trigger these materials. Different trigger logics are set for different sections of the music, allowing for real-time interaction between the performer and the system.
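The "different trigger logics for different sections" idea can be sketched as a lookup keyed by score position: the same gesture fires different materials depending on where the piece is. Section names and sample names below are invented for illustration.

```python
# Hypothetical per-section trigger logic: the same gesture maps to
# different musical materials depending on the current section.
SECTION_RULES = {
    "intro":  {"pluck": "dry_guzheng"},
    "climax": {"pluck": "guzheng_with_delay", "wave": "electronic_texture_swell"},
}

def trigger(section, gesture):
    """Return the material for this gesture in this section, if any."""
    return SECTION_RULES.get(section, {}).get(gesture, None)

print(trigger("climax", "wave"))  # -> electronic_texture_swell
```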

 

How did the process actually work?

Zhang Xinran: At the performance site, we capture the player’s hand movements with a computer to trigger the musical materials. By setting different triggering events for different sections, we achieve live interaction with the performer.

 

Do you think AI might replace the work of composers, and is this idea dangerous?

Sun Yuming: AI is an inevitable product of technological development. We shouldn’t fear its arrival but rather embrace it. It should serve as “wings” for humanity, helping to accelerate the development of human civilization. In music, the development of AI technology should first provide composers with more ideas and possibilities, becoming their partner. As AI technology develops, new creative methods, forms of works, and aesthetic standards will emerge. For composers, it is almost inevitable that AI will strengthen their professional skills, enhance their aesthetic sensibilities, and boost their creativity so they can keep pace with the times. In this process, we must learn to coexist with AI and draw nourishment from it.

 

On October 22, a pre-concert event at Carnegie Hall will convene a panel of composers and music researchers for the second annual US-China Music Forum, exploring how technology and music can intersect in new music composition.

