IWDS 2019: The 2nd International Workshop on Dialog Systems
International Science Innovation Building at Kyoto University, Kyoto, Japan, February 27, 2019

- Conference website: http://sigai.or.kr/workshop/bigcomp/2019/iwds
- Submission link: https://easychair.org/conferences/?conf=iwds2019
- Submission deadline: December 14, 2018
- Notification of Paper Acceptance: December 31, 2018
- Author Registration: January 7, 2019
- Camera Ready Submission: January 7, 2019
In conjunction with IEEE BigComp 2019, the 6th IEEE International Conference on Big Data and Smart Computing
Motivation
When people look for information or particular services, they have traditionally typed queries into search engines and chosen the desired result from a list of candidates. Although this mode of human-computer interaction (HCI) lets them find what they want far more efficiently than before, people now want a more convenient way. A dialog system offers one: it allows people to communicate with computers through natural language text or voice. Thanks to great advances in machine learning techniques, dialog systems have been successfully applied to various applications, such as intelligent speakers (e.g., Amazon Echo, Google Home) and intelligent counselors. A dialog system usually consists of several cascaded steps (e.g., speech-to-text, natural language understanding), so it is necessary to find ways of improving each step and of integrating them effectively. In this workshop, we want to discuss these issues and share knowledge about how to solve them.
Theme, Purpose, and Scope
This workshop aims to create opportunities to discuss state-of-the-art studies and to share ongoing work, and we hope it will foster collaboration among researchers working on dialog systems. The field faces many challenging issues, such as out-of-domain detection, distant voice recognition, and end-to-end systems. We want to discuss how to solve these issues and to share experiences of applying dialog systems to real-world applications.
List of Topics
- Intelligent dialog systems
- Chatbot systems
- Speech recognition
- Speech synthesis
- Natural language understanding
- Information extraction
- Dialog management
- Language resources and representation scheme for dialog systems
Submission Guidelines
All papers must be original and must not be simultaneously submitted to another journal or conference. Prospective authors are invited to submit 4-page papers in English, formatted according to the IEEE two-column format for conference proceedings. The author list may appear in the paper, or may be omitted if the authors prefer. The direct link for paper submission is https://easychair.org/conferences/?conf=iwds2019. All submissions will be peer-reviewed by the Program Committee of the workshop. All accepted workshop papers will be published in the IEEE Xplore Digital Library as conference proceedings.
Important Dates
- Submission of Workshop Papers: December 14, 2018 (UTC-12; extended from November 30, 2018)
- Notification of Paper Acceptance: December 31, 2018 (extended from December 21, 2018)
- Camera Ready Submission: January 7, 2019 (extended from December 28, 2018)
- Author Registration: January 7, 2019 (extended from December 28, 2018)
- Workshop: February 27, 2019
Organizational Committee
- Program Committee
- Byungsoo Ko, Researcher, Naver
- Dongkeon Lee, Researcher, KAIST
- Hee-Cheol Seo, Researcher, Naver
- Hyounggyu Lee, Researcher, Naver
- Joonghwi Shin, Researcher, Naver
- Kyoung-Soo Han, Researcher, Naver
- Sa-Kwang Song, Researcher, KISTI
- Seung-Ho Han, Researcher, KAIST
- Yoonjae Jeong, Researcher, NCSOFT
- Zae Myung Kim, Researcher, Naver
- Organizing Committee
- Young-Seob Jeong, Professor, SoonChunHyang Univ.
- Jonghwan Hyeon, Ph.D. Candidate, KAIST
- Ho-Jin Choi, Professor, KAIST
Invited Talk
Monaural Speech Segregation Using Pitch Classification Based on Bidirectional LSTM with Probabilistic Attention
Han-Gyu Kim (Researcher, Naver)
Abstract
Speech recognition has become unprecedentedly important with the popularization of intelligent agents. Its performance is greatly degraded by noise interference, which is unavoidable in practical situations. Humans, however, can concentrate on a speech signal even in noisy environments; this ability is enabled by auditory cues in the human ear, which analyzes acoustic signals continuously in the time and frequency domains. Speech segregation is an algorithmic attempt to mimic this human ability, and it helps improve speech recognition performance in noisy conditions. In this talk, recent research on speech segregation will be introduced, including approaches based on non-negative matrix factorization and deep clustering. In particular, my recent work on speech/music pitch classification based on a bidirectional LSTM with probabilistic attention will be explained in detail.
Bio
Han-Gyu Kim received the B.S. degree in electronic engineering from Tsinghua University, Beijing, China, in 2009, and the M.S. and Ph.D. degrees from the School of Computing, KAIST, Daejeon, South Korea, in 2011 and 2018, respectively. He is currently a researcher at NAVER Corp., Gyeonggi-do, South Korea. His research interests include speech recognition, source separation, machine learning, and artificial intelligence.
Contact
All questions about submissions should be emailed to chairs Jonghwan Hyeon (jonghwanhyeon@kaist.ac.kr) or Young-Seob Jeong (bytecell@sch.ac.kr).