TTC'19: 12th Transformation Tool Contest
Eindhoven University of Technology, Eindhoven, Netherlands, July 19, 2019

Conference website: http://www.transformation-tool-contest.eu
Submission link: https://easychair.org/conferences/?conf=ttc19

Important dates:
- Case submission deadline: March 25, 2019
- Case notifications: April 8, 2019
- Call for solutions: April 15, 2019
- Solution submission deadline: June 10, 2019
- Solution notifications: June 15, 2019
- Open peer feedback deadline: July 1, 2019
- Live contest announcement: July 15, 2019
2019 Transformation Tool Contest (part of STAF 2019) - Call for Cases (deadline: March 25)
What is this about?
Transformations of structured data such as relational data, abstract syntax trees, graphs, and high-level software models are at the heart of a wide range of applications. Their success depends heavily on the availability of powerful and easy-to-use tools. The number of transformation tools, following many different approaches, keeps growing, and this creates challenges for the community at large. Users and tool experts may have missed a recent development in the area and may therefore not be using the best tool for the job. Tool developers may wish to compare their tool against others, but face a threat to validity: they may not be using the other tools to their full extent.
The Transformation Tool Contest aims to help users, experts, and tool developers learn about the state of the art through practical case studies. While some of these case studies may revisit well-known transformations, we are always looking for new case studies from the community that explore the bleeding edge of the field or challenge current tools in some way. If you have an interesting transformation problem at hand, or know of one, we would like to hear about it!
Case submission
Please submit your case description in PDF format through EasyChair. The case description should include a URL to a source code repository (e.g. GitHub, Bitbucket, GitLab) that contains a reference solution and an evaluation methodology, and a basic issue tracker that solution authors may use to ask questions about the case study. For the evaluation methodology, you are welcome to draw from past case studies. If you have an idea for a case study but do not know where to start or which previous case to base it on, feel free to start a discussion with us at ttc19 AT easychair DOT org.
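For orientation, a case repository might be laid out along the following lines. This is a hypothetical structure; the names are illustrative, not prescribed:

    my-ttc19-case/
      README.md             -- case overview, build and run instructions
      models/               -- reference input/output documents (test suite)
      solution/             -- the reference solution
      scripts/evaluate.py   -- test driver implementing the evaluation methodology
      docs/evaluation.md    -- evaluation criteria and measurement procedure

Whatever layout you choose, the README should make clear how to run the reference solution and the test driver from a fresh checkout.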
The case description should use the ACM acmart LaTeX document class with the "sigconf" style and the "review" option enabled, and should not exceed 10 pages (excluding references and appendices).
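For example, a preamble along the following lines satisfies these requirements:

    \documentclass[sigconf,review]{acmart}
    % "review" enables reviewing mode (e.g., line numbers for reviewers)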
The description should answer these questions:
- What is the context of the case?
- What is the subject to be modeled?
- What is the purpose of the subject?
- What are the variation points in the case?
- Are there any specific research questions you would like to answer through the participation of the community? This may be useful if you have a publication path in mind for the results of the contest.
- What are the criteria for evaluating the submitted solutions to the case?
- What should the prizes for the case be? Typically there are one or more "Best X" prizes (where X is a desired attribute), plus a "Best Overall" prize for the solution with the best balance across all attributes.
- Correctness test: what are the reference input/output documents and how should they be used? Ideally, a case description includes a test suite as well as a test driver (see the sketch after this list). The test driver can be an online web service, a local script that can be deployed in SHARE, or a Docker image on Docker Hub. You can reuse frameworks from past case studies - feel free to ask us!
- Which transformation tool-related features are important and how can they be classified? (e.g., formal analysis of the transformation program, rule debugging support, ...)
- What transformation language-related challenges are important and how can they be classified? (e.g., declarative bidirectionality, declarative change propagation, declarative subgraph copying, cyclic graph support, typing issues, ...)
- How should the quality of submitted solutions be measured at the design level? (e.g., the number of rules, the conciseness of rules, ...)
- How can the solutions be evaluated (ranked) systematically?
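As an illustration of the correctness-test point above, a minimal test driver might look like the following sketch in Python. The directory layout, file names, and the command-line contract of the solution under test (input path, output path) are assumptions of this sketch; a real case would substitute its own invocation and, where byte-for-byte equality is too strict, a model-aware comparison:

    #!/usr/bin/env python3
    """Minimal test driver sketch for a TTC case (hypothetical layout)."""
    import subprocess
    import sys
    from pathlib import Path

    MODELS = Path("models")  # assumed directory of reference input/output documents

    def run_case(solution_cmd: str) -> int:
        """Run the solution on each input and diff against the expected output."""
        failures = 0
        for input_file in sorted(MODELS.glob("input*.xmi")):
            expected = MODELS / input_file.name.replace("input", "expected")
            actual = Path("out") / expected.name
            actual.parent.mkdir(exist_ok=True)
            # Invoke the solution under test on one input document.
            subprocess.run([solution_cmd, str(input_file), str(actual)], check=True)
            # Byte-for-byte equality is the simplest possible check.
            if actual.read_bytes() == expected.read_bytes():
                print(f"PASS {input_file.name}")
            else:
                print(f"FAIL {input_file.name}")
                failures += 1
        return failures

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: evaluate.py <solution-command>")
        sys.exit(run_case(sys.argv[1]))  # exit code 0 means all tests passed

A script like this can then be wrapped in a Docker image or exposed as a web service, as suggested in the correctness-test item above.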