TestingDCAI2025: The First International Workshop on Integrated Approaches to Testing Data-Centric AI Systems: Methods, Metrics, and Benchmarks
The Hills Hotel, Laguna Hills, CA, United States, October 1-3, 2025
Conference website: https://aitest-aixse.github.io/TestingDCAI2025/
Submission link: https://easychair.org/conferences/?conf=testingdcai2025
TestingDCAI2025 aims to explore the integration of data quality evaluation with traditional software testing methodologies to improve the reliability, robustness, and trustworthiness of data-centric AI systems. As AI models become central to decision-making in high-stakes domains, conventional testing frameworks must be adapted and extended to account for data-driven behaviors, non-determinism, and continuous learning processes. Key topics include test adequacy criteria for AI systems, evaluation of input data quality, robustness testing, explainability under test, benchmark datasets and tools for AI testing, and the emerging area of self-testing AI systems, i.e., systems capable of evaluating and improving their own reliability.
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Full papers describe original and significant work, or report on case studies and empirical research.
- Posters describe late-breaking research results or work in progress presenting timely and innovative ideas.
- The tool track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets for software testing of data-centric AI systems.
List of Topics
- Data quality and test adequacy criteria for AI models.
- Robustness and reliability testing for AI systems.
- Evaluating prompt effectiveness and sensitivity in large language models (LLMs).
- Tools and frameworks for AI validation and debugging.
- Benchmarks and datasets for AI testing.
- Explainability and interpretability under test conditions.
- Testing LLMs and generative AI applications.
- Self-testing or self-evaluating AI systems.
- Experimental and empirical studies on AI assurance.
- Case studies and best practices in industry settings.
- Other relevant topics.
Committees
Program Committee
- To be updated
Organizing Committee
- Junhua Ding, Reinburg Endowed Professor, University of North Texas, USA
- Haihua Chen, Assistant Professor, University of North Texas, USA
- Yang Zhang, Clinical Assistant Professor, University of North Texas, USA
- Sharad Sharma, Professor, University of North Texas, USA
Venue
The conference will be held at The Hills Hotel, Laguna Hills, California.
Contact
All questions about submissions should be emailed to Haihua Chen (haihua.chen@unt.edu).