In September, a crowd gathered at the MIT Media Lab for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and singer Camilla Bäckman, has performed with Rudess before. The other, an artificial intelligence model informally called “jam_bot” that Rudess has been developing with an MIT team over the past several months, made its public debut as a work in progress.
Throughout the show, Rudess and Bäckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess’ interactions with the jam_bot suggested a different, less familiar kind of exchange. During a Bach-inspired duet, Rudess alternated between playing a few bars and letting the AI continue the music in a similar baroque style. Each time the model took its turn, a range of expressions crossed Rudess’ face: amusement, concentration, curiosity. At the end of the piece, Rudess confessed to the audience, “This is a mix of a lot of fun and really, really difficult.”
Rudess is an acclaimed keyboardist – one of the best of all time, according to a Music Radar magazine poll – known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which is on tour this fall to celebrate its 40th anniversary. He is also a solo artist whose latest album, “Permission to Fly,” was released on September 6; an educator who shares his skills through detailed online tutorials; and the founder of the software company Wizdom Music. His work combines a rigorous classical foundation (he began studying piano at The Juilliard School at age 9) with a genius for improvisation and a love of experimentation.
Last spring, Rudess was a visiting artist at the MIT Center for Art, Science and Technology (CAST), working with the MIT Media Lab’s Responsive Environments research group to develop new AI-powered music technology. Rudess’ main collaborators in the endeavor are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (drawing on his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light-based, and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a background in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of probing musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.
The researchers set out to develop a machine learning model that channels Rudess’ distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with Eran Egozy, professor of music technology at MIT, they articulate their vision for what they call “symbiotic virtuosity”: for human and computer to duet in real time, learn from each duet they play together, and make performance-worthy new music in front of a live audience.
Rudess contributed the data that Blanchard used to train the AI model. Rudess also provided ongoing testing and feedback, while Naseck experimented with ways to visualize the technology for audiences.
“Audiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform that lets the AI build its own relationship with the audience,” says Naseck. In early demos, this took the form of a sculptural installation with lighting that shifted each time the AI changed chords. During the September 21 concert, a grid of petal-shaped panels mounted behind Rudess came to life through choreography driven by the AI model’s activity and upcoming generations.
“When you see jazz musicians make eye contact and nod at each other, it builds anticipation in the audience for what’s about to happen,” Naseck says. “The AI is effectively generating notes and then playing them. How do we show what’s next and communicate that?”
Naseck designed and programmed the structure from scratch at the Media Lab with help from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model, developed by visiting student Madhav Lavakare, that maps music to points moving in space. Able to rotate and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers while conveying the emotion and energy of its output: swaying gently as the vocals took the lead, for example, or curling and unfolding like a flower as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the show.
“At the end, Jordan and Camilla left the stage and let the AI fully explore its own direction,” he recalls. “The sculpture made that moment very powerful – it kept the stage alive and amplified the grandiose nature of the chords the AI was playing. The audience was visibly captivated by that part, sitting on the edge of their seats.”
“The goal is to create a musical visual experience,” says Rudess, “to show what’s possible and to up the game.”
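How the installation translates the model’s output into motion isn’t spelled out, but the kind of mapping Naseck describes (model activity and upcoming notes driving petal movement) might look roughly like the sketch below. Every constant and function name here is an invented illustration, not the installation’s actual control code.

```python
# Speculative illustration of mapping a generative model's recent output to
# kinetic-sculpture petal angles; all constants and names are invented.
import math


def petal_angles(note_velocities, n_petals=12, max_tilt_deg=75.0):
    """Map recent MIDI velocities (0-127) from the model to per-petal tilt angles."""
    if not note_velocities:
        return [0.0] * n_petals  # nothing playing: petals at rest
    # Loudness proxy in [0, 1]: louder, denser playing opens the "flower" wider.
    energy = sum(note_velocities) / (127.0 * len(note_velocities))
    spread = energy * max_tilt_deg
    # Stagger petals around the ring so the motion reads as a wave, not a jolt.
    return [spread * (0.5 + 0.5 * math.sin(2 * math.pi * i / n_petals)) for i in range(n_petals)]


print(petal_angles([40, 45, 38]))         # quiet adagio chords: gentle sway
print(petal_angles([110, 120, 115, 98]))  # energetic run: wide, dramatic tilt
```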
Musical future
As a starting point for his model, Blanchard used a Music Transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM ’08, who joined the MIT faculty in September.
“Music transformers work similarly to large language models,” explains Blanchard. “Just as ChatGPT would generate the most likely next word, the model we have would predict the most likely next notes.”
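Blanchard’s actual system isn’t reproduced here, but the mechanism he describes, predicting a distribution over possible next notes and sampling from it repeatedly, can be sketched in a few lines of Python. In the toy example below, a bigram count table stands in for the transformer’s learned scoring function; the melody, pitch range, and function names are all illustrative assumptions rather than details of the jam_bot.

```python
# Toy sketch of next-note prediction (not the actual jam_bot code).
# A real music transformer scores every candidate next token with a neural
# network; here a bigram count table stands in for that network so the
# autoregressive loop is runnable on its own.
import math
import random
from collections import defaultdict

# Illustrative "training" melody: a few bars of MIDI pitches in C major.
training_melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Count which pitch tends to follow which (stand-in for learned weights).
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_melody, training_melody[1:]):
    bigram_counts[prev][nxt] += 1


def next_note_distribution(history, temperature=1.0):
    """Return a probability distribution over candidate next pitches."""
    last = history[-1]
    counts = bigram_counts.get(last) or {p: 1 for p in range(55, 80)}  # unseen pitch: fall back to uniform
    logits = {pitch: math.log(c) / max(temperature, 1e-6) for pitch, c in counts.items()}
    z = sum(math.exp(v) for v in logits.values())
    return {pitch: math.exp(v) / z for pitch, v in logits.items()}


def generate(seed, n_notes=8, temperature=0.8):
    """Autoregressive loop: predict a distribution, sample a note, append, repeat."""
    notes = list(seed)
    for _ in range(n_notes):
        dist = next_note_distribution(notes, temperature)
        pitches, probs = zip(*dist.items())
        notes.append(random.choices(pitches, weights=probs)[0])
    return notes


print(generate([60, 64]))  # e.g. [60, 64, 65, 67, 65, 64, 62, 60, 64, 67]
```

A model like the Music Transformer typically operates over richer event tokens that encode timing and dynamics as well as pitch, but the predict-sample-append loop is the same idea Blanchard describes.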
Blanchard fine-tuned the model using Rudess’ own playing of elements ranging from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard made sure the AI would be nimble enough to respond in real time to Rudess’ improvisations.
“We reframed the project,” says Blanchard, “in terms of musical futures that were hypothesized by the model and that were only realized in the moment based on what Jordan decided.”
As Rudess puts it: “How can the AI respond – how can I have a dialogue with it? That’s the revolutionary part of our work.”
Another focus emerged: “In the realm of generative AI and music, you hear about startups like Suno or Udio that can generate music based on text input. Those are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI making a decision he didn’t want, he could restart the generation or use a kill switch to take control again.”
In addition to giving Rudess a screen that previewed the model’s musical decisions, Blanchard built in various modes the musician could activate while playing – for example, prompting the AI to generate chords or lead melodies, or to initiate a call-and-response pattern.
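The internals of that control surface aren’t public, so the sketch below is only a guess at the shape of the logic Blanchard describes: the model proposes a preview of the next few notes, and the performer can switch modes, regenerate a future he doesn’t like, or hit a kill switch to take back control. All class, mode, and method names are hypothetical.

```python
# Hypothetical sketch of the performer-facing controls described above;
# names and structure are invented for illustration, not taken from jam_bot.
import random
from dataclasses import dataclass, field
from enum import Enum, auto


class Mode(Enum):
    CHORDS = auto()             # AI supplies harmonic backing
    LEAD = auto()               # AI supplies lead melodies
    CALL_AND_RESPONSE = auto()  # AI answers the performer's phrases


@dataclass
class JamController:
    mode: Mode = Mode.CHORDS
    killed: bool = False                         # kill switch: AI falls silent
    preview: list = field(default_factory=list)  # hypothesized future shown on screen

    def hypothesize_future(self, history, n=4):
        """Stand-in for the model: propose the next few pitches without committing them."""
        base = history[-1] if history else 60
        intervals = {Mode.CHORDS: [0, 4, 7], Mode.LEAD: [2, 4, 5, 7, 9], Mode.CALL_AND_RESPONSE: [0, 2]}
        self.preview = [base + random.choice(intervals[self.mode]) for _ in range(n)]
        return self.preview

    def regenerate(self, history):
        """Performer rejects the previewed future and asks for a new one."""
        return self.hypothesize_future(history)

    def kill(self):
        """Performer takes back full control; nothing further is committed."""
        self.killed = True
        self.preview.clear()

    def commit_next(self):
        """Play one previewed note per tick, unless the kill switch has been hit."""
        if self.killed or not self.preview:
            return None
        return self.preview.pop(0)


controller = JamController(mode=Mode.LEAD)
history = [60, 62, 64]
print("preview:", controller.hypothesize_future(history))
controller.regenerate(history)          # performer didn't like the proposal
print("played:", controller.commit_next(), controller.commit_next())
controller.kill()                       # human takes over
print("after kill:", controller.commit_next())
```

The design point is the one Blanchard emphasizes: generation is previewed and revocable, so the human always has the final say.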
“Jordan is the mastermind of everything that happens,” he says.
What would Jordan do?
Although the residency has concluded, the team sees many opportunities to continue the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features such as capacitive sensing. “We hope to be able to work with even more of his subtle movements and postures in the future,” says Naseck.
While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, other applications are easy to imagine. Paradiso recalls an early encounter with the technology: “I was playing a chord sequence and Jordan’s model was generating the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something Jordan would do, but based on the simple progression I was playing,” he recalls, his face reflecting the delight he felt at the time. “You’re going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs to control the details,” he posits. “That’s exactly the world we’re opening up.”
Rudess is also interested in exploring educational uses. Because the samples he recorded to train the model were similar to the listening exercises he conducts with students, he believes the model itself could someday be used for teaching. “This work has more than just entertainment value,” he says.
The foray into artificial intelligence is a natural progression of Rudess’ interest in music technology. “This is the next step,” he believes. When he talks to fellow musicians about the work, though, his enthusiasm for AI is often met with resistance. “I can have sympathy or compassion for a musician who feels threatened; I completely understand that,” he admits. “But my mission is to be one of the people who move this technology toward positive things.”
“At the Media Lab, it’s so important to think about how AI and people come together for the benefit of all,” says Paradiso. “How will AI advance us all? Ideally, it will do what so many technologies have done – put us in a different perspective where we’re more powerful.”
“Jordan is ahead,” adds Paradiso. “Once it’s established with him, people will follow.”
Jamming with MIT
The Media Lab first landed on Rudess’ radar before his residency, when he wanted to try out the knitted keyboard developed by another member of Responsive Environments, textile researcher Irmandy Wicaksono PhD ’24. From that moment on, “it was a discovery for me to learn about the cool things happening in the music world at MIT,” Rudess says.
During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess discussed final projects in Paradiso’s course on electronic music controllers, whose syllabus included videos of his own past performances. He brought a new gesture-controlled synthesizer called Osmose to a course on interactive music systems taught by Egozy, whose credits include co-developing the video game Guitar Hero. Rudess also gave tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-developed with researchers at Stanford University, alongside student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent campus visit in September, he led a master class for pianists in MIT’s Emerson/Harris program, which provides conservatory-level musical instruction to a total of 67 scholars and fellows.
“Every time I come to campus, I feel a rush,” Rudess says. “I feel like, wow, all of my musical ideas, inspirations, and interests have come together in this really cool way.”