EOS-SEI: Questions database for service (and other) courses

Questions or Goals

  1. Building effective quizzes, self-tests, and exams based mainly on multiple-choice questions is difficult and takes practice. Ideally, instructors should be able to select questions from a large collection that both (i) reflect the learning goals of their course and (ii) have well-characterized testing properties. This database aims to make that possible.
  2. Once question metrics are in place, several interesting questions can be posed.
    1. Do experienced instructors build "better" questions than one-time sessionals or "beginners" at teaching the course?
    2. Are some topics or learning goals harder than others to assess using multiple choice questions?
    3. What are common misconceptions about the concepts being tested?
    4. Which questions are not very effective?
  3. Ultimately we hope to (a) raise faculty expertise at writing good multiple-choice questions by providing information about which questions work well and which do not, and (b) enable more efficient production of frequent quizzing and testing opportunities for students.

Implementation

We aim to build a database of questions (initially multiple choice) used as assessments in service courses, starting with eosc114, "Natural Disasters". The database will serve both as a way of accumulating the questions used and as a means of measuring, or calibrating, how well they serve their purpose. Initially the database has been built using MS Access and analysis is carried out using spreadsheets, but in the long run, if the process proves useful and practical, an online version that automates the analysis will be developed for internal Departmental use.

The measurement process is being implemented initially using basic item analysis from classical test theory (see, for example, Introduction to Multiple Choice Question Writing, or http://en.wikipedia.org/wiki/Classical_test_theory ), and subsequently (we hope) incorporating the more sophisticated statistical metrics that Item Response Theory can provide.
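The two basic item-analysis metrics can be sketched as follows. This is a minimal illustration, not the project's actual schema: the function names and the sample data are assumptions, and real response data would come from the MS Access database or spreadsheets described above.

```python
# Basic item analysis (classical test theory), illustrative sketch.
# Responses for one question are a list of per-student scores
# (1 = correct, 0 = incorrect); `totals` are each student's total test score.

def difficulty_index(scores):
    """Proportion of students answering correctly (higher = easier item)."""
    return sum(scores) / len(scores)

def discrimination_index(scores, totals, fraction=0.27):
    """Upper-minus-lower discrimination: compare how the top and bottom
    groups of students (ranked by total test score) did on this one item."""
    n = max(1, round(len(scores) * fraction))
    # Re-order this item's scores by each student's total test score.
    ranked = [s for _, s in sorted(zip(totals, scores), key=lambda p: p[0])]
    lower, upper = ranked[:n], ranked[-n:]
    return sum(upper) / n - sum(lower) / n

# Hypothetical data for one question and ten students.
item_scores = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
total_scores = [48, 22, 40, 45, 30, 38, 25, 42, 50, 28]

print(difficulty_index(item_scores))                     # 0.6
print(discrimination_index(item_scores, total_scores))   # 1.0
```

A difficulty index near 0.6 suggests a moderately easy item; a high discrimination index means strong students got the item right far more often than weak students, which is usually what one wants from a question.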

People (contacts)

Progress

Products (papers, presentations, etc)

Intentions

These are always speculative: ideas we would ideally like to pursue.

  1. Gather questions for all EOS service (and other) courses into a consistent format.
  2. Characterize all questions in terms of department- and course-level "key concepts", learning goals, and answering history.
  3. Consider use of Item Response Theory (IRT) to build adaptive quizzes that do a better job of meeting the needs of the very wide range of students we see in service courses.
  4. Enable students to contribute to the database, initially using a third party system such as PeerWise, which allows questions to be built, tested, and reviewed by peers and instructors.
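The adaptive-quiz idea in item 3 can be sketched with the two-parameter logistic (2PL) model from Item Response Theory: each item has a discrimination parameter a and a difficulty parameter b, and an adaptive quiz picks the next item that is most informative at the student's current ability estimate. The parameter values below are purely illustrative assumptions, not calibrated values from any EOS course.

```python
import math

def p_correct(theta, a, b):
    """2PL model: probability that a student of ability `theta` answers
    correctly an item with discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, items):
    """Adaptive selection: choose the item with maximum Fisher information
    at the current ability estimate (the core step of an adaptive quiz)."""
    def info(item):
        a, b = item
        p = p_correct(theta, a, b)
        return a * a * p * (1.0 - p)
    return max(items, key=info)

# Hypothetical item bank: (a, b) pairs for four calibrated questions.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 2.0)]

print(next_item(0.4, items))  # (1.5, 0.5): difficulty closest to ability 0.4
```

In a full adaptive quiz the ability estimate theta would be updated after each response and the loop repeated; this sketch shows only the item-selection step that makes such quizzes match the very wide range of students in service courses.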

Anticipated benefits to undergraduate learning

It is well known that repeated “retrieval” of newly learned knowledge or concepts, together with timely, meaningful feedback, is an important aspect of effective learning. This database, and the use of IRT for classification, will provide a wide range of questions, enable a wide array of deployment options, and support an efficient growth and maintenance model that involves all teaching faculty in the Department. Examples of potential deployment scenarios include (i) study aids, (ii) starting points for bulletin-board discussions, (iii) pre-tests to help students (and instructors) assess their preparedness for course modules, (iv) regular online quizzes and tests, (v) practice exams, and (vi) midterms and finals, with others likely to arise as the system matures. In short, the proposed questions database will enable many varied yet consistent opportunities for improving the learning of thousands of students per year, and will help improve the effectiveness and efficiency of instructors.