Date of Graduation

8-2013

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Educational Statistics and Research Methods (PhD)

Degree Level

Graduate

Department

Rehabilitation, Human Resources and Communication Disorders

Advisor/Mentor

Ronna Turner

Committee Member

Wen-Juo Lo

Second Committee Member

George Denny

Third Committee Member

Giovanni Petris

Keywords

Pure sciences, Education, Ability distributions, Item response theory, Multidimensional data, Psychometrics

Abstract

When test forms that have equal total test difficulty and number of items vary in difficulty and length within sub-content areas, an examinee's estimated score may vary across equivalent forms, depending on how well his or her true ability in each sub-content area aligns with the difficulty and number of items within those areas. Estimating ability using unidimensional methods for multidimensional data has been studied for decades, focusing primarily on subgroups of the population based on the estimated ability for a single set of data (Ackerman, 1987a, 1989; Ansley & Forsyth, 1985; Kroopnick, 2010; Reckase, Ackerman, & Spray, 1988; Reckase, Carlson, Ackerman, & Spray, 1986; Song, 2010). This study extends that research by investigating the effects of inconsistent item characteristics across multiple forms on the unidimensional ability estimates for subgroups of the population with differing true ability distributions.

Multiple forms were simulated to have equal overall difficulty and number of items, but different levels of difficulty and numbers of items within each sub-content area. Subgroups with equal ability across dimensions had similar estimated scores across forms. Groups with unequal ability across dimensions had scores that varied across the multiple forms. On balanced 2PL forms, estimated ability was most affected by the estimated item discrimination, and was closer to the true ability on the dimension whose items had the highest discrimination level. On balanced 3PL forms, the theta estimate was most dependent on the estimated difficulty level of each set of items, and was higher when true ability was above the difficulty level on at least one set of items primarily measuring that dimension. On unbalanced forms, the ability estimate was heavily weighted toward the true ability on the dimension having more items.

These findings underscore the importance of test developers maintaining consistency within sub-content areas as well as across multiple test forms overall.
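The balanced-2PL finding described above can be illustrated with a small numerical sketch. All item parameters, ability values, and the two-dimensional structure below are hypothetical choices for illustration, not values from the dissertation: a form with two sets of items, each primarily measuring one dimension, where the dimension-1 items discriminate more highly. Fitting a single unidimensional theta to the expected responses shows the estimate pulled toward the true ability on the more discriminating dimension.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical balanced form: 20 items primarily measuring each of two
# dimensions, with the same difficulty spread in both sub-content areas.
n = 20
b = np.linspace(-2.0, 2.0, n)
a_dim1 = np.full(n, 1.8)   # highly discriminating dimension-1 items (assumed)
a_dim2 = np.full(n, 0.8)   # less discriminating dimension-2 items (assumed)

# An examinee with unequal true abilities on the two dimensions.
theta1_true, theta2_true = 1.0, -1.0

# Expected response probabilities under the true two-dimensional structure.
p_true = np.concatenate([p_2pl(theta1_true, a_dim1, b),
                         p_2pl(theta2_true, a_dim2, b)])
a_all = np.concatenate([a_dim1, a_dim2])
b_all = np.concatenate([b, b])

def expected_neg_loglik(theta):
    # Cross-entropy between the true probabilities and a unidimensional
    # 2PL model; its minimizer is the large-sample unidimensional estimate.
    p = np.clip(p_2pl(theta, a_all, b_all), 1e-9, 1 - 1e-9)
    return -np.sum(p_true * np.log(p) + (1.0 - p_true) * np.log(1.0 - p))

# Grid-search the unidimensional theta estimate (no optimizer dependency).
grid = np.linspace(-4.0, 4.0, 801)
theta_hat = grid[np.argmin([expected_neg_loglik(t) for t in grid])]

print(f"unidimensional theta estimate: {theta_hat:.2f}")
```

Because the dimension-1 items carry higher discrimination, the single estimate lands between the two true abilities but closer to 1.0 than to -1.0, mirroring the pattern the study reports for balanced 2PL forms.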
