The Goshen College physics room is home to some of the most unusual sounds on campus, all made on a computer.

In 2009, John Buschert, a professor of physics, and a group of students built a system of connected instruments for improvising music together. Although the project was a success, it had its limits. Buschert and his current students are now seeking to refine and advance that earlier model.

Buschert, in effect, is a physicist with a sideline in music-making. His work combines the Bela embedded computing platform with physical controllers and custom software to create new sounds.
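On Bela, sound is generated by a small program, typically written in C++, whose render() function runs for every block of audio samples and writes directly to the outputs. The sketch below is only a minimal illustration of that style of programming, not Buschert's actual code, and it assumes the board's analog inputs are enabled: one sensor value steers the pitch of a simple tone.

#include <Bela.h>
#include <cmath>

// Minimal Bela-style sketch: one analog sensor controls the pitch of a sine tone.
float gPhase = 0.0f;

bool setup(BelaContext *context, void *userData)
{
    return true;
}

void render(BelaContext *context, void *userData)
{
    for(unsigned int n = 0; n < context->audioFrames; n++) {
        // Analog inputs run at half the audio rate in Bela's default setup,
        // so audio frame n corresponds to analog frame n/2.
        float control = analogRead(context, n / 2, 0);       // roughly 0.0 to 1.0
        float frequency = 110.0f + control * 880.0f;          // map sensor to pitch

        float out = 0.2f * sinf(gPhase);
        gPhase += 2.0f * (float)M_PI * frequency / context->audioSampleRate;
        if(gPhase > 2.0f * (float)M_PI)
            gPhase -= 2.0f * (float)M_PI;

        // Send the same signal to every output channel.
        for(unsigned int ch = 0; ch < context->audioOutChannels; ch++)
            audioWrite(context, n, ch, out);
    }
}

void cleanup(BelaContext *context, void *userData)
{
}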

“The idea is that it makes anyone into a musician,” he said.

Buschert and the students working on the research a decade ago called the earlier model “The Musician Maker.” He and the current team of students were planning to call the new model “Chimes.” Then Buschert thought of a better alternative: JEENI, which stands for “John’s Electronic Ensemble of Novel Instruments.”

He said that over his many years of teaching at Goshen College, he has experimented with electronic instruments and computer programming and developed a passion for interface design.

“I’ve played a bit with electronic instruments, and I’ve always been interested in interface design,” he said, “and so with this project, I was thinking of the possibility of having easier interfaces to control musical instruments of some kind.”

Buschert described what “The Musician Maker” was really about: “Imagine several people on stage each holding a wand-like controller of some type similar to the Wii controller. They are gesturing with their wands as music is playing from a computer system. Though they are not trained or especially skilled, each one is controlling some aspect of the music.”

Buschert then described an example of interaction with JEENI: “One of them waves their wand in a way that seems to steer a melodic line around. They don’t select each note, just the general contour of the melody line and whether it is staccato or legato. Another is controlling a bass line and punching the air to choose when notes happen and whether it’s a high or a low note. But again, they don’t seem to need to be choosing the exact pitch, just the general position, and all the notes seem to work together harmonically no matter what the performers do.”

“Another person is waving a hand as if conducting and this person is able to speed up or slow down the tempo and also to make everything louder and softer.”

Buschert said the research with JEENI requires three key elements: 1) hardware and interfacing to connect the motions of some kind of controllers to a computer program; 2) a musical score with structure and constraints but also freedom in certain parameters; 3) software that uses the motions to control various free parameters and generates the ongoing musical output.

All three of these elements are open to a variety of approaches at this point. As the completely new and improved JEENI takes shape, Buschert's hope is that the team will continue to implement new ideas and make progress.
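One way to picture the third element, the software that turns motions into music while keeping everything in harmony, is a mapping that snaps a continuous controller position onto the notes of a scale. The function below is only an illustrative sketch of that idea; the pentatonic scale, the two-octave range and the MIDI note numbers are assumptions made for the example, not details of JEENI.

#include <array>
#include <cmath>

// Quantize a controller position (0.0 to 1.0) onto a pentatonic scale so
// that any gesture produces a note that fits the harmony. Illustrative only.
int positionToMidiNote(float position)
{
    static const std::array<int, 5> pentatonic = {0, 2, 4, 7, 9}; // scale degrees in semitones
    const int baseNote = 60;  // middle C
    const int steps = 10;     // two octaves of the five-note scale

    int index  = (int)std::round(position * (steps - 1));
    int octave = index / (int)pentatonic.size();
    int degree = index % (int)pentatonic.size();
    return baseNote + 12 * octave + pentatonic[degree];
}

However a performer waves or slides the controller, the result lands on a pitch that works with the accompaniment, which is the kind of constraint Buschert describes.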

“A lot changed once we tried to implement the ideas,” he said. “But much of the core dream is still on target. One big change is that I left behind waving wands in the air and chose physical interfaces that naturally come with haptic feedback. Gestures in my opinion are not the way to go.”

The original system had room for improvement.

“The old system connected every instrument through some interfaces to a laptop; then messages were sent through an external sound module, which was connected to one set of speakers and not the others,” he said. “First, the latency between an action and the sound was about 20 milliseconds. Second, the sounds could not be altered in tone (or timbre); one could only control pitch and volume, and there was no way to add that capability. Third, there were only two speaker channels. And finally, the accompaniment track was coded in a very simple way that restricted it to certain regular repeating patterns.”
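The article does not name the protocol the old laptop used, but the description of note messages going through an external sound module matches a MIDI-style setup, in which each note message carries only a pitch and a velocity (loudness). The snippet below is a hedged illustration of that limitation, not a reconstruction of the old code.

#include <cstdint>
#include <vector>

// A standard MIDI "note on" message: status byte, pitch, velocity.
// There is no field for timbre, which is consistent with the old system's
// limitation of controlling only pitch and volume.
std::vector<uint8_t> noteOnMessage(uint8_t channel, uint8_t pitch, uint8_t velocity)
{
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(pitch & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}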

Elise Jantz, a sophomore physics major from North Newton, Kansas, worked alongside Buschert as a Maple Scholar this past summer on the revival of the music project. Jantz recognized that she and Buschert share a passion for music.

“I grew up with a lot of music, and playing violin and piano and singing in church,” she said. “Whether it’s playing with others or playing for others, it’s important to me to make music. I think an essential part of music is connection with other people.”

The pair sought to improve on the original model by building a new system that was easier to learn and gave the player more control over expression.

“John is my advisor and I had classes with him, so we knew each other already and it wasn’t hard to talk to him about designs,” she said. “He had a system that was built by previous students that had several issues. We talked about which ideas would work. Usually it was just thinking out loud about pros and cons. So, we built a new system that was much faster and easier to write music for.”

Buschert continues to think about how the project may evolve.

“What I would like is to somehow codify more complex constraints that allow the new player more freedom but that still work musically,” he said. “Another dream of mine is that other people could write music for the system that anyone could then perform. We did this in a sense with chord progressions on the old system, but in the new one there is much greater flexibility in how the accompaniment track is written. So, it will be time for a different type of student to contribute to the system.”