EOS-SEI: Questions database for service (and other) courses
Questions or Goals
- Building effective quizzes, self-tests, and exams based mainly on multiple choice questions is difficult and takes practice. Ideally, instructors should be able to select questions from a large collection which both (i) reflect the learning goals of their course and (ii) have well-characterized testing characteristics. This database aims to make this possible.
- Once question metrics are in place, several interesting questions can be posed.
- Do experienced instructors build "better" questions than one-time sessionals or "beginners" at teaching the course?
- Are some topics or learning goals harder than others to assess using multiple choice questions?
- What are common misconceptions about the concepts being tested?
- Which questions are not very effective?
- Ultimately we hope to (a) raise faculty expertise at writing good multiple choice questions by providing information about which questions work well and which do not, and (b) enable more efficient production of frequent quizzing and testing opportunities for students.
Implementation
We aim to build a database of questions (initially multiple choice) used as assessments in service courses, starting with eosc114, "Natural Disasters". The database will serve both as a way of accumulating questions used, and as a means of measuring or calibrating how well they serve their purpose. Initially the DB has been built using MS Access and analysis is being carried out using spreadsheets, but in the long run, if the process can be shown to be useful and practical, an online version that performs analysis automatically will be developed for internal Departmental use.
The measurement process is being implemented initially using basic item analysis in the tradition of classical test theory (see for example Introduction to Multiple Choice Question Writing, or http://en.wikipedia.org/wiki/Classical_test_theory ), and subsequently (we hope) incorporating the more sophisticated statistical metrics that Item Response Theory can provide.
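As a sketch of what "basic item analysis" involves, the fragment below computes the two standard classical-test-theory metrics: item difficulty (the proportion of students answering correctly) and an upper-lower discrimination index using the conventional top/bottom 27% split. The response matrix and function names are illustrative only, not part of the actual database.

```python
# Illustrative basic item analysis (classical test theory).
# Rows are students, columns are questions; 1 = correct, 0 = incorrect.

def item_difficulty(responses, item):
    """Proportion of students answering the item correctly (the p-value)."""
    scores = [row[item] for row in responses]
    return sum(scores) / len(scores)

def item_discrimination(responses, item, fraction=0.27):
    """Upper-lower discrimination index: p(top group) - p(bottom group),
    where groups are the top and bottom `fraction` of students by total score."""
    ranked = sorted(responses, key=sum, reverse=True)
    n = max(1, round(len(ranked) * fraction))
    top = sum(row[item] for row in ranked[:n]) / n
    bottom = sum(row[item] for row in ranked[-n:]) / n
    return top - bottom

# Toy example: four students, three questions.
responses = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
print([round(item_difficulty(responses, q), 2) for q in range(3)])
print([round(item_discrimination(responses, q), 2) for q in range(3)])
```

A difficulty near 0 or 1 flags a question that is too hard or too easy, and a low or negative discrimination flags a question that does not distinguish strong from weak students, which is the kind of "effectiveness" information the database is meant to accumulate.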
People
- Francis Jones is P.I. on this project
- Brett Gilley is lead instructor in EOSC114
- Eva Schaffer (summer student in 2008) built the first version of the DB and carried out initial research.
- Sarah Henderson (summer student in 2009) contributed time towards data collection and analysis.
Progress
- May - Aug 2008: one full-time 4th-year undergraduate summer student, Eva Schaffer, developed this database, populated it with 1500 questions, and carried out initial background research into ways that Item Response Theory might be used. For details see the report for Phase I of the project.
- July-Aug 2009: Questions from 12 midterm and final exams are being entered and analyzed using basic item analysis. Results will include correlation of metrics with the topics and learning goals that are targeted by each question. Whether it is appropriate to progress to Item Response Theory for question characterization will be considered once the benefits and limitations of basic item analysis have been presented and considered.
- September 2009: planned implementation of PeerWise into several courses in EOS.
Products (papers, presentations, etc)
- Phase one report (please ask F. Jones if interested).
Intentions
These are necessarily speculative: ideas we would ideally like to pursue.
- Gather questions for all EOS service (and other) courses into a consistent format.
- Characterize all questions in terms of department- and course-level "key concepts", learning goals, and answering history.
- Consider use of Item Response Theory (IRT) to build adaptive quizzes that do a better job of meeting the needs of the very wide range of students we see in service courses.
- Enable students to contribute to the database, initially using a third party system such as PeerWise, which allows questions to be built, tested, and reviewed by peers and instructors.
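As an illustration of how IRT could support the adaptive quizzing mentioned above, the sketch below uses the one-parameter (Rasch) model to choose the next question with maximum information at a student's current ability estimate. The difficulty values and function names are hypothetical, and the sketch assumes item difficulties have already been calibrated from answering history.

```python
import math

# Sketch of IRT-driven adaptive item selection using the Rasch (1PL) model.
# Abilities and difficulties are on the same logit scale; values are made up.

def rasch_p(ability, difficulty):
    """Probability of a correct answer under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def item_information(ability, difficulty):
    """Fisher information of a 1PL item at a given ability: p * (1 - p).
    It peaks when the item difficulty matches the student's ability."""
    p = rasch_p(ability, difficulty)
    return p * (1.0 - p)

def next_item(ability, difficulties, asked):
    """Choose the unasked item with the highest information at this ability."""
    candidates = [i for i in range(len(difficulties)) if i not in asked]
    return max(candidates, key=lambda i: item_information(ability, difficulties[i]))

# Toy example: a student of average ability (0.0) and three calibrated items.
print(next_item(0.0, [-2.0, 0.1, 3.0], asked=set()))
```

Because the most informative item is the one matched to the student's current ability, a quiz built this way automatically serves easier questions to struggling students and harder ones to strong students, which is the behaviour we would want across the very wide range of students in service courses.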
Anticipated benefits to undergraduate learning
It is well known that repeated “retrieval” of newly learned knowledge or concepts, together with timely, meaningful feedback, is an important aspect of effective learning. This database, and the use of IRT for classification, will provide a wide range of questions, enable a wide array of deployment options, and facilitate an efficient growth and maintenance model that involves all teaching faculty in the Department. Examples of potential deployment scenarios include (i) study aids, (ii) starting points for bulletin board discussions, (iii) pre-tests to help students (and instructors) assess their level of preparedness for course modules, (iv) regular online quizzes and tests, (v) practice exams, (vi) midterms and finals, and others that will likely arise as the system matures. In short, the proposed questions database will enable many varied yet consistent opportunities for improving the learning of thousands of students per year, and will help improve the effectiveness and efficiency of instructors.