WCRML 2019: Workshop on Crossmodal Learning and Application

Website: https://crossmodallearning.github.io/
Submission link: https://easychair.org/conferences/?conf=wcrml2019
Submission deadline: April 5, 2019
The WCRML workshop emphasizes how different modalities semantically interact with each other, rather than simply learning from integrated multimodal information and retrieving it.
The goal of this workshop is to address questions such as:
- How to handle noisy or imbalanced data and small numbers of labelled samples in cross-modal data?
- How to efficiently transfer knowledge from one modality with abundant supervision information to another modality with less or even no knowledge?
- How to translate data across different modalities, e.g. the generation of motion-sensor data from visual input or visually indicated sound?
- How to align cross-modal data by using appropriate alignment functions and similarity measurements?
- How to utilise different modalities in an optimal way to satisfy requirements that sometimes even contradict each other, such as business demands, cost constraints, and user satisfaction?
The sources of multimodal data are not restricted in any way; they may come from users, devices, machines, systems, and distributed environments.
This workshop not only attempts to leverage knowledge across modalities but also aims to motivate its application in industry and society.
Submission Guidelines
To contribute to the understanding of cross-modal technologies, we invite original articles on relevant topics, including but not limited to:
- Multimodal representation/feature learning
- Cross-modal retrieval
- Data alignment across modalities, e.g., synchronising motion-sensor data with video
- Data translation, e.g., visually indicated sound
- Learning using side information, e.g., modality hallucination
- Knowledge transfer across modalities, e.g., zero-shot/few-shot learning
- Applications with cross-modal data:
  - IoT (Internet of Things)
  - operation and maintenance
  - surveillance
  - public transportation
  - logistics
  - health care
  - task-oriented dialog
  - human-robot interaction with vision and audio
  - user/product/job search and recommendation
  - social media retrieval and analysis
  - others
More detailed submission guidelines can be found at https://crossmodallearning.github.io
Committees
Program Committee
- Martin Klinkigt
- Manikandan R
Organizing Committee
- Martin Klinkigt, Hitachi Ltd
- Bin Tong, Hitachi Ltd
- Sheraz Ahmed, DFKI Germany
- Jorn Hees, DFKI Germany
- Manikandan R, Hitachi India Pvt Ltd
Invited Speakers
- Dr. Andreas Dengel, DFKI Germany (https://agd.informatik.uni-kl.de/team/lehre/prof-dr-prof-hc-andreas-dengel/)
Publication
The WCRML 2019 proceedings will be published as part of ACM ICMR 2019 (http://icmr2019.org).
Venue
The workshop will be held in Ottawa, Canada, on June 10, 2019.
Contact
All questions about submissions should be emailed to:
- Martin Klinkigt (martin.klinkigt.ut@hitachi.com)
- Manikandan R (manikandan@hitachi.co.in)