Kaung Myat Kyaw
Undergraduate Student
King Mongkut's University of Technology Thonburi (KMUTT)
About Me

I am a final-year undergraduate student at KMUTT, Thailand, majoring in Computer Science. I am also a student researcher at the IC2 Research Center under the supervision of Assoc. Prof. Dr. Jonathan Hoyin Chan.

I am interested in Generative AI, with a particular focus on Diffusion Large Language Models (dLLMs) and reasoning models. I am looking for internship opportunities and research collaborations in these areas.

Education
  • King Mongkut's University of Technology Thonburi
    Department of Computer Science
    Undergraduate Student
    Aug. 2022 - present
Honors & Awards
  • First Runner-up Award at the 16th International Cybersecurity and Generative AI Competition
    2025
  • First Runner-up Award at ASEAN Data Science Explorers National Final
    2024
  • First Prize at Future of Food Hackathon by Reactor School and Singapore Global Network
    2024
News
2026
I am starting my experiential learning course at IC2, supervised by Assoc. Prof. Dr. Jonathan Hoyin Chan.
Jan 11
2025
Our team won the First Runner-up Award at the 16th International Cybersecurity and Generative AI Competition 🎉
Nov 27
A paper has been accepted to SOICT 2025.
Oct 20
2024
Our team won First Prize at the Future of Food Hackathon by Reactor School and Singapore Global Network 🎉
Oct 14
A paper has been accepted to WI-IAT 2024.
Sep 15
Our team won the First Runner-up Award at the ASEAN Data Science Explorers National Final 🎉
Sep 05
2023
I received the Academic Excellence Award from the School of Information Technology 🎉
Aug 17
Selected Publications
CandleGen: Generating Synthetic OHLC Data for Different Market Trends using GANs

Kaung Myat Kyaw, Jonathan Chan, Udom Silparcha

International Symposium on Information and Communication Technology (SOICT) 2025 In Press

CandleGen is a GAN-based system for generating synthetic OHLC data tailored to various market conditions. By training separate GANs for distinct market states, we capture the unique characteristics of each condition, resulting in synthetic data that mirrors real market behavior. Our evaluations demonstrate that CandleGen preserves the statistical properties of real data and produces realistic samples, making it a valuable tool for applications in algorithmic trading and risk management.

A Framework for Synthetic Audio Conversations Generation using Large Language Models

Kaung Myat Kyaw, Jonathan Chan

International Conference on Web Intelligence and Intelligent Agent Technology 2024

ConversaSynth is a framework designed to generate synthetic conversational audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse and coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems.
