Learning Object Evaluation

Evaluation


Three main strategies were used to assess the quality of the pharmacology learning object and to collect formative data for improving the resource. First, early in the design stage, the instructional designer conducted usability testing with a third-year pharmacy student to obtain feedback on design and navigation issues. This informal meeting was conducted as a 'think-aloud' session in which the instructional designer recorded the student's reflections as she interacted with the learning object. Second, peer reviewers were asked to evaluate the quality of the learning object using an established rating instrument and to provide feedback for improvement through an instructor survey. Third, questionnaires were distributed to students in order to carry out a learning impact study of their use of the learning object.

Participants

The reviewer for the think-aloud session was selected because she could provide feedback from the perspectives of both a former pharmacology student and a current pharmacy student (both were identified as target audiences for the learning object). Health science faculty and students from three separate institutions participated in the formal evaluation of the learning object. The institutions were members of the CLOE consortium and were committed to collaborating in such activities. Participants were not required to have experience with learning objects or online resources; the only criterion for selection was an indicated interest in potentially using the learning object for instruction or studying.

Instruments

The Learning Object Review Instrument (LORI), version 1.4, developed by Belfer et al. (2002), was used to collect faculty members' individual assessments of the quality of the pharmacology learning object and to ensure that consistent evaluation criteria were applied by all participants. Faculty were asked to rate the learning object in the following areas on a five-point scale ranging from low to high and to provide a rationale for each score.

1) Content Quality: Veracity, accuracy, balanced presentation of ideas, and appropriate level of detail

2) Learning Goal Alignment: Alignment among learning goals, activities, assessments, and learner characteristics

3) Feedback and Adaptation: Adaptive content or feedback driven by differential learner input or learner modeling

4) Motivation: Ability to motivate, and stimulate the interest or curiosity of, an identified population of learners

5) Presentation Design: Design of visual and auditory information for enhanced learning and efficient mental processing

6) Interaction Usability: Ease of navigation, predictability of the user interface, and the quality of UI help features

7) Reusability: Ability to port between different courses or learning contexts without modification

*8) Value of Accompanying Instructor Guide: Ability of the resource to enhance instructional methodology

Source: Adapted from Belfer et al. (2002)

*(The original LORI items dealing with accessibility and standards compliance were replaced with the category shown in item 8 because the participants lacked the knowledge needed to provide an assessment in either of those areas.)

Slight modifications were made to the instructor and student surveys developed by Dawn Leeder (2003) of the Universities' Collaboration in eLearning (retrieved from http://www.ucel.ac.uk/resources/dev_pack.html, November 2003). These tools gave faculty an opportunity to suggest modifications for improvement and were used to gather evidence on whether use of the learning object positively affected student learning of pharmacokinetics.

Process

The evaluation of the pharmacology learning object and the learning impact study took place over a period of two weeks, with each participant working individually. The LORI was not used in combination with the convergent participation model for the evaluation of learning objects proposed by Nesbit et al. (2002): the goal of using the instrument was not to increase inter-rater reliability, and a collaborative evaluation would have been difficult to manage and would have demanded more of the evaluators' time.

The web address of the learning object, along with the following electronic documents, was sent via email to aid participants in the evaluation process.

1) A description of the learning object review instrument developed by Belfer et al. (2002). This document laid out the criteria and scale to be used for evaluating the learning object, and included examples of the characteristics of low, average, and high ratings for each category.

2) The learning object rating sheet, on which the results of the evaluation were recorded. Participants were encouraged to include a rationale for their rating on each of the specified criteria.

3) The learning object instructor survey, which allowed the faculty evaluator to propose modifications.

4) The instructor guide, offered as a complement to the pharmacology learning object and intended to provide information that could enhance instruction. The strategies included were not prescriptive, and instructors were not required to employ them when using the learning object.

5) The student evaluation survey, containing 25 questions based on the students' use of the learning object and the student guide. This survey formed the basis of the 'lite' learning impact study.

6) The student guide: a resource for instructors wishing to provide training in specific strategies for using the learning object effectively.

The responses to the questions were entered into the electronic documents, re-saved, and returned to me via email.

Future postings will reflect on the findings of this evaluation and the process itself. For more information about the learning object review instrument that was used and the research behind its development, see the E-Learning Research and Assessment Network (eLera).

References
Belfer, K., Nesbit, J. C., Archambault, A., & Vargo, J. (2002). Learning object review instrument (LORI), version 1.4.

Nesbit, J., Belfer, K., & Vargo, J. (2002). A convergent participation model for evaluation of learning objects. Canadian Journal of Learning and Technology, 28(3). Available online: http://www.cjlt.ca/content/vol28.3/nesbit_etal.html

Vargo, J., Nesbit, J., Belfer, K., & Archambault, A. (2003). Learning object evaluation: Computer mediated collaboration and inter-rater reliability. International Journal of Computers and Applications, 25(3).
