EMTCIR 2024: The First Workshop on Evaluation Methodologies, Testbeds and Community for Information Access Research
Waseda University, Tokyo, Japan, December 12, 2024

Conference website: https://emtcir2024.github.io/
Submission link: https://easychair.org/conferences/?conf=emtcir2024
Submission deadline: October 3, 2024
Evaluation campaigns, where researchers share important tasks, collaboratively develop test collections, and discuss how to advance technologies, remain important venues for strategically addressing core challenges in information access research. The goal of this workshop is to discuss information access tasks that are worth addressing as a community, to share new resources and evaluation methodologies, and to encourage researchers to ultimately propose new evaluation campaigns at NTCIR, TREC, CLEF, FIRE, etc. The workshop accepts four types of contributions, namely, emerging task, ongoing task, resource, and evaluation papers.
Important Dates
- July 19, 2024: First call for papers
- September 23, 2024: Second call for papers
- October 3, 2024: Paper submission due
- October 18, 2024: Paper acceptance notification
- November 1, 2024: Camera-ready submission due
- December 12, 2024: EMTCIR 2024
All deadlines are at the end of the day, Anywhere on Earth (AoE).
Submission Guidelines
Submissions
Each paper must be two to six pages long (including figures, tables, proofs, appendices, acknowledgments, and all other content except references), with unlimited pages for references. Manuscripts must be written in English and submitted in PDF format. Papers must follow the current ACM guidelines (e.g., using the ACM LaTeX template on Overleaf) with the “sigconf” option. PDF files must have all non-standard fonts embedded.
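For illustration, a submission skeleton using the ACM template with the “sigconf” option might look like the following minimal sketch; the title, author details, and bibliography file name are placeholders, not workshop requirements:

```latex
% Minimal sketch of an ACM "sigconf" submission skeleton.
% All names below are placeholders for illustration only.
\documentclass[sigconf]{acmart}

\title{Your Paper Title}

\author{First Author}
\affiliation{%
  \institution{Example University}
  \city{Tokyo}
  \country{Japan}}
\email{author@example.org}

\begin{document}

% In acmart, the abstract must appear before \maketitle.
\begin{abstract}
A short abstract of the contribution.
\end{abstract}

\maketitle

\section{Introduction}
Body text (two to six pages, excluding references).

% References do not count toward the page limit.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```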
All papers will be peer-reviewed by the program committee. The review process is single-anonymous: authors should include author information (i.e., names and affiliations) in submitted papers, while reviewers remain anonymous to authors.
At least one author of each accepted paper must attend the workshop on-site and present the work. Papers must be submitted electronically by the deadline through EasyChair: https://easychair.org/my/conference?conf=emtcir2024.
Types of Papers
Each paper must fall into one of the following contribution types:
Emerging Task Papers
Papers in this category are expected to introduce new or emerging information access tasks. The authors should explain why the task is important and identify the technical challenges to be solved. Possible approaches to the emerging task should be discussed, but they need not be fully mature. A preliminary experiment on the task would help further discussion of potential challenges.
Evaluation criteria:
- Originality of the proposed task
- Importance of the proposed task
- Timeliness of the proposed task
- Clarity of the identified challenges
- Readiness (e.g., resources are available or a preliminary experiment has been conducted)
- Quality of presentation
Ongoing Task Papers
Ongoing task papers describe tasks that have already been accepted by an evaluation campaign or that have recently concluded. The authors are expected to describe the motivation for the ongoing task, highlight its technical challenges, and explain the task design and evaluation methodology. We highly encourage task/track/lab organizers to take this opportunity to further discuss the task design and attract more participants.
Evaluation criteria:
- Originality of the proposed task
- Importance of the proposed task
- Timeliness of the proposed task
- Clarity of the identified challenges
- Readiness (e.g., resources are available or a preliminary experiment has been conducted)
- Quality of presentation
Resource Papers
Resource papers are similar to those expected at SIGIR and CIKM: papers describing a new test collection. We especially welcome authors planning a new shared task based on the new test collection. Resource papers should include the motivation for the new test collection, its potential applications, the details of the dataset development, and test collection statistics. Some example applications (e.g., a comparison of existing methods) are also expected.
Evaluation criteria:
- Originality of the resource
- Predicted impact of the resource
- Availability of the resource
- Soundness of the resource development methodology
- Quality of presentation
Evaluation Papers
A new task may require a new evaluation methodology, and new technologies may give rise to approaches that replace traditional evaluation methods. An existing evaluation methodology may also require further discussion because of technical problems. Evaluation papers are expected to contribute to the evaluation of information access technologies. We particularly expect authors who are interested in EVIA (the International Workshop on Evaluating Information Access), co-located with NTCIR.
Evaluation criteria:
- Originality
- Technical soundness
- Potential impact
- Related work
- Quality of presentation
Topics of Interest
The topics of this workshop include those of SIGIR-AP 2024 and those expected in IR evaluation campaigns such as NTCIR, TREC, CLEF, and FIRE.
Workshop Format
The first half of the workshop focuses mainly on presentations of the accepted contributions (except for ongoing task papers), while the second half focuses on discussion aimed at exploring new tasks. Ongoing tasks are introduced as successful examples by authors of ongoing task papers or by speakers invited by the organizers. We then hold round-table discussions in which new tasks are explored. Authors of emerging task or resource papers are assigned to the tables and serve as table leaders. A leader may invite authors of other contributions (e.g., evaluation or resource papers) to their table if the evaluation methodologies or resources presented at the workshop seem useful for designing the task under discussion. After the round-table discussion, each table is expected to give a short presentation on a new task.
Committees
- Makoto P. Kato (University of Tsukuba)
- Noriko Kando (National Institute of Informatics)
- Charles L. A. Clarke (University of Waterloo)
- Yiqun Liu (Tsinghua University)