P.O. Box 7001
6710 CB EDE
Phone: +31 318 64 8531 / 6984 (secretary)
E-mail: Marianne Hubregtse
Competence-based assessment in vocational education in the Netherlands
In the past five years, competence-based assessment has become the predominant method of examination in vocational education in the Netherlands. The majority of the exams are practical, authentic competence-based assessments. This research proposes to look into several unresolved issues regarding practical and performance assessments.
To assess the quality of the exams, the classification accuracy of a competence-based exam is evaluated. This classification accuracy is measured as the total percentage of misclassifications (“should have failed the exam but passed” and “should have passed but failed”). Furthermore, the influence of decision rules, the cut-off score, and the distribution of ability on the classification accuracy is investigated.
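As a minimal sketch of the misclassification measure described above, the example below simulates a latent ability, an error-prone exam score, and a cut-off, then counts both types of misclassification. The normal ability distribution, the error variance, and the cut-off value are all assumptions chosen purely for illustration, not values from this research.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: latent ability and an exam score with measurement error.
n = 10_000
true_ability = rng.normal(0.0, 1.0, n)               # latent ability (assumed N(0, 1))
exam_score = true_ability + rng.normal(0.0, 0.5, n)  # observed score = ability + error

cutoff = 0.0  # assumed cut-off score on both scales

should_pass = true_ability >= cutoff
did_pass = exam_score >= cutoff

false_pass = np.mean(~should_pass & did_pass)  # "should have failed but passed"
false_fail = np.mean(should_pass & ~did_pass)  # "should have passed but failed"
misclassification = false_pass + false_fail    # total percentage of misclassifications

print(f"false pass:        {false_pass:.1%}")
print(f"false fail:        {false_fail:.1%}")
print(f"misclassification: {misclassification:.1%}")
```

Varying `cutoff`, the error spread, or the shape of `true_ability` in such a simulation shows directly how decision rules, cut-off score, and ability distribution drive the classification accuracy.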
It is not always necessary to measure all supposed constructs with an equal number of dimensions in a multidimensional IRT model (Reckase, 2009). In the case of competence-based assessment, it is not clear whether the competences, as they are used in exams, overlap in such a way that they should be seen as parts of one dimension, or even as a combination of two dimensions. This research proposes to use multidimensional IRT modeling (Reckase, 2009) in an exploratory fashion to investigate the structure of the competences.
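To make the modeling idea concrete, the sketch below implements the compensatory multidimensional 2PL response function in Reckase's notation, P(correct) = 1 / (1 + exp(−(a·θ + d))), where θ is the ability vector, a the discrimination vector, and d the intercept. The item parameters are invented for illustration; an item loading on both dimensions (here 1.2 and 0.3) is exactly the kind of overlap between competences the exploratory analysis would look for.

```python
import numpy as np

def mirt_2pl_prob(theta, a, d):
    """Compensatory multidimensional 2PL (Reckase-style):
    P(correct) = 1 / (1 + exp(-(a . theta + d)))."""
    theta = np.asarray(theta, dtype=float)
    a = np.asarray(a, dtype=float)
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

# Hypothetical item loading mostly on dimension 1 (parameters assumed):
a = [1.2, 0.3]  # discrimination per dimension
d = -0.5        # intercept (item easiness)

print(mirt_2pl_prob([0.0, 0.0], a, d))  # probability for an average student
print(mirt_2pl_prob([1.0, 1.0], a, d))  # student strong on both dimensions
```

Because the model is compensatory, a high value on one dimension can offset a low value on the other; comparing estimated `a` vectors across items is one way to judge whether two competences behave as one dimension or two.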
For competence-based assessment, it is important to work with authentic test situations in which the student’s performance on different competences is assessed (Gulikers, 2006). However, the authentic situations tend to differ for each student. This may also yield a different assessment difficulty per student, since a person’s decisions, and thus actions, are always embedded within a specific context (Roelofs & Sanders, 2007). Does the lack of standardization of the context in fact impact the validity and reliability of the inferences from the performance assessment, or not?
Often, performance assessments conclude with a criterion-based interview or an interview in which the student is asked to reflect on the exam. This research proposes to find out how well students (and assessors) are prepared for this cognitively complex task.
In general, assessment by more than one person tends to be more reliable than assessment by a single person. Furthermore, independent, or objective, assessors tend to be less sensitive to adverse effects, such as halo or horn effects. However, a single assessor who has observed the student over an extended period during the internship has more data available on which to base a decision. Besides, assessing a student with two independent observers is very cost-ineffective and a logistical nightmare. How much, if at all, does the quality of the assessment suffer if only one dependent observer is used?
Prof. dr. T.J.H.M. Eggen (University of Twente)
1 September 2009 – 1 September 2013