The use of AI/ML models to generate music and sound has expanded dramatically in recent years. However, despite media attention to musical AI research, music involving AI is rarely heard in concerts apart from a few special research events. This is partly due to a lack of musical ML systems designed for music performers.
In this project, you will help to change this by developing a new ML model that can interact with a human in live performance. This model could connect directly to existing music technology such as digital audio workstation (DAW) software, or it could be a self-contained computer music instrument, touchscreen app, or custom sensor-based device for musical expression.
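As a toy illustration of the kind of call-and-response interaction loop such a model might support, here is a minimal sketch using a first-order Markov model in place of a trained ML model. All names (`CallResponseModel`, `listen`, `respond`) are hypothetical, and a real system would operate on live MIDI or audio input rather than note lists:

```python
import random

class CallResponseModel:
    """Toy stand-in for an interactive music model: learns note
    transitions from phrases a performer plays, then generates a
    short 'response' phrase by sampling those transitions."""

    def __init__(self, seed=0):
        self.transitions = {}          # note -> list of observed next notes
        self.rng = random.Random(seed)

    def listen(self, phrase):
        """Update transition counts from a performed phrase
        (a list of MIDI note numbers)."""
        for a, b in zip(phrase, phrase[1:]):
            self.transitions.setdefault(a, []).append(b)

    def respond(self, start_note, length=8):
        """Generate a response phrase by walking the learned transitions,
        stopping early if a note has no known successor."""
        out = [start_note]
        for _ in range(length - 1):
            choices = self.transitions.get(out[-1])
            if not choices:
                break
            out.append(self.rng.choice(choices))
        return out

model = CallResponseModel()
model.listen([60, 62, 64, 65, 67])   # performer plays a rising C-major fragment
reply = model.respond(60, length=5)  # model answers with its own phrase
```

In a performance setting, `listen` would be driven by incoming events from a controller or DAW, and `respond` would be triggered when the performer pauses, with the output sent back out as MIDI or synthesized audio.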