A Framework for Synthetic Audio Conversations Generation using Large Language Models

Innovative Cognitive Computing (IC2) Research Center
School of Information Technology, King Mongkut's University of Technology Thonburi

Abstract

In this paper, we introduce ConversaSynth, a framework designed to generate synthetic conversation audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse and coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems. Our experiments demonstrate that ConversaSynth effectively generates high-quality synthetic audio datasets, which can significantly enhance the training and evaluation of models for audio tagging, audio classification, and multi-speaker speech recognition. The results indicate that the synthetic datasets generated by ConversaSynth exhibit substantial diversity and realism, making them suitable for developing robust, adaptable audio-based AI systems.

Methodology


ConversaSynth Framework

Our approach is structured around five key stages: selection of a suitable large language model (LLM), design of distinct conversational personas, generation of the conversations, conversion of text to speech, and concatenation of the audio dialogues, as displayed in the figure above. Each stage is designed to ensure the creation of coherent, contextually relevant, and diverse audio conversations. By combining advanced models with carefully tuned generation settings, we aim to produce high-quality synthetic dialogues that maintain consistency in character voices and offer a realistic conversational experience. The following sections detail the methodology applied at each step, from LLM selection to audio post-processing.
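To make the pipeline concrete, below is a minimal sketch of these stages in Python. It assumes an Ollama-served LLM for dialogue generation and the open-source Coqui TTS library (VCTK multi-speaker voices) for synthesis; the persona descriptions, model name, and voice IDs are illustrative placeholders rather than the paper's exact configuration.

# Minimal sketch of the ConversaSynth stages described above.
# The model name, persona text, and voice IDs are assumptions for
# illustration, not the paper's exact settings.
import ollama                      # local LLM client (assumed backend)
from TTS.api import TTS            # Coqui TTS
from pydub import AudioSegment

PERSONAS = {
    "Alice": "a curious student who asks short, direct questions",
    "Bob":   "a patient teacher who answers with concrete examples",
}

def generate_dialogue(topic: str, turns: int = 6) -> list[tuple[str, str]]:
    """Ask the LLM for a persona-conditioned dialogue; returns (speaker, line) pairs."""
    prompt = (
        f"Write a {turns}-turn conversation about {topic} between "
        + " and ".join(f"{name} ({desc})" for name, desc in PERSONAS.items())
        + ". Format each line as 'Name: utterance'."
    )
    reply = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    dialogue = []
    for line in reply["message"]["content"].splitlines():
        if ":" in line:
            name, text = line.split(":", 1)
            if name.strip() in PERSONAS:
                dialogue.append((name.strip(), text.strip()))
    return dialogue

def synthesize(dialogue, out_path="conversation.wav", pause_ms=300):
    """Render each turn with a fixed per-speaker voice, then concatenate the clips."""
    tts = TTS("tts_models/en/vctk/vits")               # multi-speaker English model
    voices = {"Alice": "p225", "Bob": "p226"}          # assumed VCTK speaker IDs
    audio = AudioSegment.silent(duration=pause_ms)
    for speaker, text in dialogue:
        tts.tts_to_file(text=text, speaker=voices[speaker], file_path="_turn.wav")
        audio += AudioSegment.from_wav("_turn.wav") + AudioSegment.silent(pause_ms)
    audio.export(out_path, format="wav")

synthesize(generate_dialogue("renewable energy"))

Keeping a fixed speaker-to-voice mapping across turns is what preserves character-voice consistency when the per-turn clips are concatenated into a single conversation.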

Sample Audio

To demonstrate the capabilities of ConversaSynth, we generated two versions of each conversation using different background-noise settings. The first audio sample of each pair presents the conversation without any background noise, showcasing the clarity and coherence of the dialogue generated by the framework. The second includes simulated background noise, mimicking the real-world environments in which conversations often occur. The noisy versions help assess how audio models trained on synthetic data perform under varying acoustic conditions; a sketch of this noise-mixing step follows the sample list below.


Audio_1.wav (without background noise)
Audio_1.wav (with background noise)

Audio_2.wav (without background noise)
Audio_2.wav (with background noise)

Audio_3.wav (without background noise)
Audio_3.wav (with background noise)
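As a rough illustration of how a noisy variant can be produced from a clean sample, the sketch below overlays looped background noise onto a conversation at a target signal-to-noise ratio. The noise file name and the 10 dB SNR are assumptions for the example; the framework's actual noise sources and levels may differ.

# Hedged sketch of adding background noise at a target SNR.
# Mono audio is assumed; resample the noise to the speech rate first.
import numpy as np
import soundfile as sf

def mix_background(speech_path, noise_path, out_path, snr_db=10.0):
    """Overlay looped background noise onto clean speech at a given SNR."""
    speech, sr = sf.read(speech_path)
    noise, noise_sr = sf.read(noise_path)
    assert sr == noise_sr, "sample rates must match"
    # Loop or trim the noise to match the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    # Scale the noise so the speech-to-noise power ratio hits the target SNR:
    # SNR_dB = 10 * log10(P_speech / (gain^2 * P_noise)).
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    mixed = speech + gain * noise
    # Normalize only if the mix would clip.
    mixed /= max(1.0, np.abs(mixed).max())
    sf.write(out_path, mixed, sr)

mix_background("Audio_1.wav", "cafe_noise.wav", "Audio_1_noisy.wav", snr_db=10)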

Poster


BibTeX

@article{kyaw2024framework,
  title={A Framework for Synthetic Audio Conversations Generation using Large Language Models},
  author={Kyaw, Kaung Myat and Chan, Jonathan Hoyin},
  journal={arXiv preprint arXiv:2409.00946},
  year={2024}
}