Optimizely Certification Exam Scoring
Standardized testing programs often develop multiple versions of the same exam. Each version, or form, is built to predetermined content and statistical specifications meant to ensure that every form measures the same constructs.
Using multiple forms increases test security and helps ensure that exam scores are valid for all test takers.
Because scores must support fair and consistent inferences across different test forms, a score must carry the same meaning no matter which version of the exam a candidate takes. Although several metrics can be reported for an assessment, reporting scaled scores is a certification industry best practice.
Types of Scores
A raw score is the number of questions a candidate answered correctly. A percent correct score expresses that number as a percentage of the total number of questions. For example, a candidate who answered 50 of 60 questions correctly would have a raw score of 50 and a percent correct score of roughly 83%.
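As a quick illustration, here is a minimal Python sketch of that arithmetic (the question counts are the example's, not any particular exam's):

```python
# Raw score and percent correct for the example above.
answered_correctly = 50   # questions the candidate got right
total_questions = 60      # questions on the exam form

raw_score = answered_correctly
percent_correct = round(100 * raw_score / total_questions)

print(f"Raw score: {raw_score}")               # Raw score: 50
print(f"Percent correct: {percent_correct}%")  # Percent correct: 83%
```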
To keep scores consistent and fair across different versions of an exam, testing programs can transform raw scores into scaled scores. A scaled score takes a candidate's raw score and converts it to a standardized scale that is consistent and comparable across different versions of a test.
This transformation accounts for potential differences in difficulty across exam forms, so scores carry a consistent meaning for all test takers, regardless of which version of the test they received.
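Optimizely does not publish its scaling function, but as a minimal sketch, assume a linear transformation chosen per form so that the form's raw cut score always maps to the same scaled passing score (all scale values below are hypothetical):

```python
def linear_scale(raw_score: float, raw_cut: float, max_raw: float,
                 scaled_cut: float = 700, scaled_max: float = 900) -> float:
    """Map a raw score to a scaled score so that this form's raw cut
    score always lands on the same scaled passing score.
    The 700/900 scale values are hypothetical, not Optimizely's."""
    slope = (scaled_max - scaled_cut) / (max_raw - raw_cut)
    return scaled_cut + slope * (raw_score - raw_cut)

# Suppose Form A is harder, so its raw cut is 42/60, while Form B's is 45/60.
# A raw score exactly at the cut maps to the same scaled score on both forms:
print(linear_scale(42, raw_cut=42, max_raw=60))  # 700.0 on Form A
print(linear_scale(45, raw_cut=45, max_raw=60))  # 700.0 on Form B
```

The key property is that the same scaled passing score can correspond to different raw passing scores on forms of different difficulty, which is what makes the reported scores comparable.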
Exam Cut Scores
The passing score, or cut score, for each Optimizely certification exam was established by a group of Optimizely Subject Matter Experts (SMEs) with knowledge of the corresponding content area. A psychometrically valid standard setting process was followed to ensure that each cut score was appropriate for the target population and the difficulty of the form. During this process, the SMEs discussed the minimal level of competence required for a candidate to pass the exam and evaluated the difficulty of each question on each exam form. This information was used to set the passing score, which serves as the standard for the exam.
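One common way to turn such SME judgments into a cut score is a modified Angoff procedure (the specific method Optimizely used is not stated here): each SME estimates, per question, the probability that a minimally competent candidate answers it correctly, and those estimates are averaged and summed. A purely hypothetical sketch:

```python
# Hypothetical Angoff-style aggregation; the ratings below are invented.
# Each row holds one SME's per-question probability estimates for a
# minimally competent candidate.
sme_ratings = [
    [0.60, 0.75, 0.50, 0.90],  # SME 1's estimates for 4 questions
    [0.65, 0.70, 0.55, 0.85],  # SME 2
    [0.55, 0.80, 0.45, 0.95],  # SME 3
]

# Average across SMEs per question, then sum: the expected raw score of a
# minimally competent candidate, used as the recommended cut score.
num_smes = len(sme_ratings)
per_question = [sum(col) / num_smes for col in zip(*sme_ratings)]
cut_score = sum(per_question)
print(f"Recommended raw cut score: {cut_score:.1f} of {len(per_question)}")
# Recommended raw cut score: 2.8 of 4
```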
Equating
Every exam has an original form, which was used to set the initial cut score. To ensure comparable scaled scores across different versions of the exam, any new set of questions is adjusted to the original scale through a statistical procedure called equating. In the equating process, the difficulty of a new test version is compared to that of the original form. Adjustments are made to account for any differences in difficulty, so that passing each form of the test requires the same level of candidate performance. Equating allows for equivalent and comparable passing standards across forms.
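The specific equating design is not described here; as one minimal sketch, a linear (mean-sigma) equating maps scores on a new form onto the original form's scale by matching the means and standard deviations of the two score distributions (all numbers below are hypothetical):

```python
import statistics

def linear_equate(x: float, new_form: list[float],
                  base_form: list[float]) -> float:
    """Mean-sigma linear equating (one common design; not necessarily
    Optimizely's): map score x on the new form onto the base form's
    scale by matching the two distributions' means and SDs."""
    mu_x, sd_x = statistics.mean(new_form), statistics.stdev(new_form)
    mu_y, sd_y = statistics.mean(base_form), statistics.stdev(base_form)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Invented raw-score distributions from comparable candidate groups:
base_scores = [38, 42, 45, 47, 50, 52, 55]   # original form
new_scores  = [35, 39, 42, 44, 47, 49, 52]   # harder new form (lower scores)

# A raw 42 on the harder new form corresponds to 45 on the original scale,
# so the adjusted cut demands the same performance level on both forms.
print(round(linear_equate(42, new_scores, base_scores), 1))  # 45.0
```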