Tags: computer adaptive test, diagnostic test and validation
Abstract:
Before the COVID-19 pandemic, the Mathematics Education Support Hub (MESH) at Western Sydney University ran a series of face-to-face refresher workshops for incoming undergraduate students who wanted to improve the mathematics and statistics skills required for university study. With the onset of the pandemic, it became necessary to redesign these workshops as online modules that students could study in their own time. One aspect of the face-to-face delivery that was lost was the ability of MESH tutors to direct students to the parts of each workshop where they needed to focus their studies.
To overcome this deficiency, MESH developed a series of diagnostic tools to help students determine which sections of each module they needed to study. To make the diagnosis as efficient as possible, these tools were developed as computer adaptive tests: the questions asked depend on a student's responses to previous questions, so students are not asked about concepts they can be assumed to have mastered, or about concepts with which they are evidently not yet conversant.
To develop the tools, we first needed to construct a “knowledge map” of the concepts covered in each module, with logical linkages between them. The tools were then built using the Numbas testing system’s Diagnostic test algorithm. Since their deployment in February 2023, the diagnostic tools have been attempted over 800 times.
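As an illustration of the underlying idea, the knowledge map can be thought of as a directed graph of prerequisite links: a correct answer on a topic is taken to imply mastery of its prerequisites, while an incorrect answer flags its dependent topics as needing study. The following is a minimal sketch only, not the Numbas implementation, and the topic names and prerequisite links are hypothetical.

    # A minimal sketch (not the Numbas implementation) of adaptive diagnosis
    # over a knowledge map. Topic names and prerequisite links are hypothetical.

    prerequisites = {
        "fractions": [],
        "algebraic manipulation": ["fractions"],
        "linear equations": ["algebraic manipulation"],
        "quadratic equations": ["linear equations"],
    }

    def ancestors(topic):
        """All prerequisites of `topic`, direct and indirect."""
        seen, stack = set(), list(prerequisites[topic])
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(prerequisites[t])
        return seen

    def descendants(topic):
        """All topics that depend on `topic`, directly or indirectly."""
        return {t for t in prerequisites if topic in ancestors(t)}

    def run_diagnostic(answers_correctly):
        """Ask only about topics still in doubt; infer the rest from the map."""
        mastered, needs_study = set(), set()
        undecided = set(prerequisites)
        while undecided:
            # heuristic: ask about the topic with the most prerequisites first
            topic = max(undecided, key=lambda t: len(ancestors(t)))
            if answers_correctly(topic):
                # correct: credit the topic and any undecided prerequisites
                mastered |= {topic} | (ancestors(topic) & undecided)
            else:
                # incorrect: flag the topic and any undecided dependents
                needs_study |= {topic} | (descendants(topic) & undecided)
            undecided -= mastered | needs_study
        return mastered, needs_study

    # Example: simulate a student who is secure only on fractions and algebraic
    # manipulation; "fractions" is never asked, since it is inferred from the
    # correct answer on "algebraic manipulation".
    mastered, needs_study = run_diagnostic(
        lambda t: t in {"fractions", "algebraic manipulation"})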
Whilst we feel that the tools are serving their intended purpose, we also felt it was important to validate their efficacy. To do this, we have accessed the full details of all attempts and used item response theory to rank questions using both direct and implied scoring. In this talk we will discuss the development of the diagnostic tools and the results of the analysis to date.
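For context on the analysis, a common starting point in item response theory is the two-parameter logistic model, under which the probability that a student of ability \(\theta\) answers item \(j\) correctly is

\[
P(X_j = 1 \mid \theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}},
\]

where \(b_j\) is the item's difficulty and \(a_j\) its discrimination. Fitting a model of this kind to the attempt data under each scoring scheme (broadly, direct scoring uses only the responses a student actually gave, while implied scoring also counts mastery inferred from the knowledge map) yields difficulty and discrimination estimates from which the questions can be ranked.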