In a world where technology increasingly permeates creativity and well-being, there is a growing need for systems that can generate art adapted to a person’s internal state. BREATH, presented in the paper BREATH: A Bio-Radar Embodied Agent for Tonal and Human-Aware Diffusion Music Generation, is a multimodal system for personalized music generation that combines physiological sensing, reasoning via large language models (LLMs), and guided audio synthesis.

This study, accepted at the LLM4Music @ ISMIR 2025 conference, establishes a new biomusic feedback loop, linking radar sensing, reasoning-based prompts, and generative audio modeling.

Technological Foundation: Bio-Radar Sensing

A key component of the BREATH system is the non-invasive acquisition of vital biosignals. The system uses a millimeter-wave radar sensor to capture both heart rate and respiration rate.

This technology represents the forefront of contactless monitoring. For example, related research such as Respiro has already demonstrated the potential of ultra-wideband (UWB) radar for effective respiration monitoring in wearable devices. Using a chest-worn radar, Respiro reliably extracted breathing signals with an error below one breath per minute in 71% of measurements and an average deviation of 1.11 breaths per minute. This highlights the viability of radar technology for unobtrusive yet effective physiological monitoring, which forms the foundation for interactive systems like BREATH.
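To make the sensing stage concrete, here is a minimal sketch of one standard way to recover a respiration rate from a radar chest-displacement signal: band-pass the signal around typical breathing frequencies and pick the dominant spectral peak. The 0.1–0.5 Hz band, the sampling rate, and the synthetic test signal are illustrative assumptions; this is not the published pipeline of BREATH or Respiro.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_respiration_rate(displacement, fs):
    """Estimate respiration rate (breaths/min) from a radar
    chest-displacement signal sampled at fs Hz."""
    # Keep the typical respiration band (~0.1-0.5 Hz, i.e. 6-30 breaths/min).
    b, a = butter(4, [0.1, 0.5], btype="band", fs=fs)
    filtered = filtfilt(b, a, displacement)

    # Locate the dominant spectral peak within the band.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60  # Hz -> breaths per minute

# Example: a synthetic 15-breaths/min signal (0.25 Hz) sampled at 20 Hz.
fs = 20
t = np.arange(0, 60, 1 / fs)
chest = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(len(t))
print(round(estimate_respiration_rate(chest, fs)))  # ~15
```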

The BREATH Biomusic Feedback Loop

BREATH transforms physiological data into musical structures through several stages:

  1. Sensing and Interpretation: The radar non-invasively captures heart and respiration rates. These physiological signals, combined with environmental data, are interpreted by a reasoning agent based on LLMs.
  2. Deriving Musical Descriptors: The reasoning agent generates symbolic musical descriptors, including parameters such as tempo, mood intensity, and, notably, traditional Chinese pentatonic modes.
  3. Music Generation: These descriptors are expressed as structured prompts that guide a diffusion-based audio model to synthesize expressive melodies; a toy version of these two stages is sketched after this list. BREATH emphasizes cultural grounding through tonal embeddings, ensuring that the generated music resonates with tonal traditions.
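As a rough illustration of stages 2 and 3, the hypothetical sketch below maps heart and respiration rates to symbolic descriptors and assembles them into a structured text prompt. The heuristic mappings, the mode choices, and the prompt format are all assumptions made for illustration; in BREATH these decisions are made by the LLM-based reasoning agent, and the resulting prompt conditions a diffusion audio model.

```python
from dataclasses import dataclass

# The five traditional Chinese pentatonic modes.
MODES = ["Gong", "Shang", "Jue", "Zhi", "Yu"]

@dataclass
class MusicDescriptors:
    tempo_bpm: int
    mood_intensity: float  # 0.0 (calm) to 1.0 (aroused)
    mode: str

def derive_descriptors(heart_rate: float, respiration_rate: float) -> MusicDescriptors:
    # Toy heuristic: tempo tracks heart rate; intensity rises with arousal.
    tempo = int(max(60, min(140, heart_rate)))
    intensity = min(1.0, max(0.0, (heart_rate - 60) / 60))
    # Toy choice: a calmer Yu mode for slow breathing, brighter Gong otherwise.
    mode = MODES[4] if respiration_rate < 12 else MODES[0]
    return MusicDescriptors(tempo, intensity, mode)

def build_prompt(d: MusicDescriptors) -> str:
    # Structured text prompt to condition a text-to-audio diffusion model.
    return (f"Chinese pentatonic {d.mode} mode melody, "
            f"{d.tempo_bpm} BPM, mood intensity {d.mood_intensity:.1f}, "
            f"expressive solo instrument")

print(build_prompt(derive_descriptors(heart_rate=72, respiration_rate=10)))
```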

Evaluation and Potential

The BREATH system was evaluated using a design-based research methodology, including targeted control experiments, thematic studies, and expert feedback.

Key findings include:

  • Physiological variations can meaningfully modulate musical characteristics.
  • Tonal conditioning improves alignment with the predicted modal characteristics (a toy version of such an alignment check is sketched below).
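As an illustration of what alignment with predicted modal characteristics could mean in practice, here is a hypothetical check: score a generated melody by the fraction of its pitch classes that fall inside the target pentatonic scale. The metric, the scale template, and the example melody are assumptions for illustration, not the paper's published evaluation protocol.

```python
import numpy as np

# Pitch classes of the Gong (do-re-mi-sol-la) pentatonic scale built on C.
GONG_ON_C = {0, 2, 4, 7, 9}

def modal_alignment(midi_pitches, scale=GONG_ON_C):
    """Fraction of notes whose pitch class falls inside the target scale."""
    classes = np.asarray(midi_pitches) % 12
    return float(np.isin(classes, list(scale)).mean())

# Example: a melody mostly inside the scale, with one out-of-scale tone (F=65).
print(modal_alignment([60, 62, 64, 67, 69, 72, 65]))  # ~0.86
```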

Experts using the system reported that it provides intuitive and culturally resonant musical responses. This work demonstrates BREATH’s potential for therapeutic and interactive applications.

In conclusion, BREATH represents a significant advancement in creating adaptive, embodied musical interactions, where music is not merely played back, but actively responds to the listener’s internal physiological state.


P.S. For readers interested in other cutting-edge music technologies, check out how will.i.am and Mercedes-Benz turned driving into an interactive musical experience: Mercedes Sound Drive.
