A computerized adaptive knowledge test as an assessment tool in general practice: a pilot study

Ann Roex, Jan Degryse

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Although computerized adaptive testing (CAT) has proved advantageous to assessment in many fields, its use in general practice has been scarce. In adapting CAT to general practice, the basic assumptions of item response theory and case specificity must be taken into account. In this context, this study first evaluated the feasibility of converting written extended matching tests into CAT. Second, it examined the content validity of CAT. A stratified sample of students was invited to participate in the pilot study. The items used in this test, together with their parameters, originated from the written test. The detailed test paths of the students were retained and analysed thoroughly. Using the predefined pass-fail standard, one student failed the test. There was a positive correlation between the number of items presented and the candidate's ability level. The majority of students were presented with questions in seven of the 10 existing domains. Although it proved to be a feasible test format, CAT cannot substitute for the existing high-stakes, large-scale written test. It may, however, provide a reliable instrument for identifying candidates who are at risk of failing the written test.

Original language: English
Pages (from-to): 178-83
Number of pages: 6
Journal: Medical Teacher
Volume: 26
Issue number: 2
DOIs
Publication status: Published - 2004

Keywords

  • Algorithms
  • Computer Systems
  • Educational Measurement/methods
  • Family Practice/education
  • Humans
  • Pilot Projects

