Assessment in Online Practicals

Introduction
A set of exemplar simulation resources covering online practical experiments and investigative/exploratory activities has been produced to demonstrate the potential of simulations in online education. This document examines the possibility of using such simulation resources in computer-aided assessment, and the potential for more complex question types that test deeper understanding and can be used for authentic assessment.

In discussing assessment, we consider both formative assessment, the diagnostic use of assessment to provide feedback to teachers and students, and summative assessment, which is used to make a judgment about the learning that has or has not taken place. In teaching, an important component of formative assessment is the feedback given to help learners become aware of any gaps in their knowledge, understanding or skill, and to guide them in overcoming difficulties. Accordingly, we do not see assessment within simulations simply as a matter of awarding a mark, but as a method of providing the student and teacher with valuable feedback.

Feedback in simulations
By their very nature, simulations are interactive, providing intrinsic feedback to learners: if the student makes a change to the simulated system, the effect of that change is visible. However, it is not sufficient simply to give a student a simulation and expect them to learn from it; knowing that if variable X goes up then Y goes down does not mean that the student understands why the system behaved as it did, or what the implications of such behaviour might be. Thus, the necessity for additional guidance, feedback and scaffolding as a student uses simulations has been recognised for some time (e.g. Thomas and Neilson, 1995; Pilkington and Grierson, 1996).

Some support and guidance can be provided by linking the simulation with other multimedia resources that set the scene or provide answers to frequently asked questions. It is possible to hard-code feedback and assessment into a simulation, but for a number of reasons, feedback related to the student's needs is usually limited to the provision of generic error and warning messages. Two factors are:

  1. A single simulation can be used in many different teaching contexts and at many levels; hard-coded feedback and assessment would not have universal relevance.
  2. Full provision of feedback relies on the programmer anticipating all the needs of the teacher and student in advance, or on the programmer being present to revise and maintain the feedback.

Given these factors, together with the fact that feedback and assessment should be under the control of the teacher, not the programmer, it makes sense to provide feedback via a mechanism external to the simulation which can be tailored by teachers to suit a given teaching context. These considerations suggest that linking the simulations with an assessment engine is a reasonable starting point for looking at the feasibility of providing tailored feedback for formative assessment.

Integrating simulations and assessment software
As well as providing a mechanism for simple feedback, the integration of computer-based simulation and assessment technologies promises opportunities for many new question types that begin to test higher-order skills. Authentic assessment, in which students perform a task in the environment in which they learnt, becomes a real possibility. By reducing the level of feedback to the student, the same mechanism that allows formative assessment could be used for summative purposes too.

Varying degrees of integration are possible.

  • Minimal integration
    Minimal integration could be accomplished if the assessment engine can incorporate or link to a simulation. Under such circumstances, the simulation can be used to pose a question, or as a tool for the learner to use to answer a question. In this case the student will obtain an answer using the simulation and type it into the assessment engine for marking. Such a system could increase the range of question types that could be set by an assessment engine; fuller integration of the systems is required to open up the potential of techniques such as authentic assessment.
  • Full Integration
    Full integration would involve the ability to pass information about simulation variables and their values between the simulation and the assessment engine. This would allow the assessment engine to define the value of any of the simulation variables when the simulation is started, and to receive information not only when the student completes the task, but also as the student works on it. The simulation engine would be modified so that it can send information to the assessment engine about any simulation variable:
    • On the press of a button.
    • When the student leaves a page within an interface.
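The shape of the information passed on such triggers can be sketched as a simple message format. This is an illustrative assumption only: the real JeLSIM tools are Java applets, and the function name, trigger names and JSON encoding below are invented for the sketch, not part of any actual JeLSIM or assessment-engine API.

```python
import json

def make_state_message(trigger, variables):
    """Package current simulation variable values for an assessment engine.

    trigger   -- what caused the report ("button_press" or "page_exit",
                 matching the two cases described above)
    variables -- mapping of simulation variable names to their values
    """
    allowed = {"button_press", "page_exit"}
    if trigger not in allowed:
        raise ValueError("unknown trigger: " + trigger)
    # A text encoding such as JSON keeps the message independent of either
    # engine's internal representation.
    return json.dumps({"trigger": trigger, "variables": variables})

msg = make_state_message("button_press", {"temperature": 298.0, "rate": 0.042})
```

The key design point is that the simulation reports raw variable values and leaves all judgment about correctness to the assessment engine, which is what keeps the feedback under the teacher's control.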

The modifications to the JeLSIM tools would allow any JeLSIM simulation, past or future, to be used with assessment engines.

The assessment engine would be modified so as to allow:

  • The assessment engine to send information to the simulation to set the initial state of the simulation variables. Randomisation of the starting variables within a specified range would be a possibility and could be done either by an assessment engine that supports randomisation or within the simulation.
  • Specification of new "simulation" question types and generation of the applet tags that these will require.
  • The assessment engine to receive information from the applet.
  • Specification of the variables and their values or ranges that form the answer. (These variables may be different from those specified in setting the initial simulation variables in the first bullet point).
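Two of the points above, randomising the starting variables within a specified range and marking answers against specified variable ranges, can be illustrated as follows. The variable names and ranges are invented for illustration; neither function reflects a real assessment-engine interface.

```python
import random

def randomise_initial_state(ranges, seed=None):
    """Draw a starting value for each simulation variable from a
    teacher-specified (low, high) range."""
    rng = random.Random(seed)
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def within_answer_range(answers, expected_ranges):
    """Mark correct only if every answer variable falls inside its
    expected range; missing answers are marked wrong."""
    return all(lo <= answers.get(name, float("nan")) <= hi
               for name, (lo, hi) in expected_ranges.items())

initial = randomise_initial_state({"volume_ml": (10.0, 50.0)}, seed=1)
```

As noted above, the randomisation could live on either side of the interface; placing it in the assessment engine has the advantage that the engine then knows the initial values when computing the expected answer ranges.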

The sophistication of feedback and assessment that could be provided obviously depends on the assessment engine; those with features such as randomisation, questions with multiple parts and steps could provide greater functionality.

Other forms of assessment and feedback
Although the primary focus of this document is on the combination of simulation and assessment engine, there are other forms of feedback and formative assessment that are possible, but probably outside the scope of current assessment engines.

The possibility exists of collecting data from the simulation

  • At regular pre-defined intervals.
  • Any time a variable is changed (providing a history of student activity).

Given the availability of such data, it might be possible to provide tools that could intervene with hints, suggestions or contingent questions depending on student actions. Such data could also provide information for debriefing students after exercises, or give the teacher insight into the strategies adopted by the student in tackling a problem.
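The history of student activity described above could be captured with a structure along these lines. This is a sketch of the idea only; the class and field names are invented, and no such component exists in the current JeLSIM tools.

```python
import time

class ActivityHistory:
    """Record every simulation variable change with a timestamp, giving a
    trace that could later support debriefing or automated hints."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable clock, useful for testing
        self.events = []

    def record_change(self, name, old, new):
        self.events.append({"t": self._clock(), "variable": name,
                            "old": old, "new": new})

    def changes_to(self, name):
        """All recorded changes to one variable, in order of occurrence."""
        return [e for e in self.events if e["variable"] == name]
```

A teacher reviewing `changes_to("volume_ml")`, say, could see whether a student varied one reagent systematically or jumped between values at random, the kind of insight into strategy mentioned above.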

The choice of assessment engine
Ultimately, it is desirable that the combined simulation-assessment system should work with any assessment engine. Much of the work involved in completing the first integration can be re-used with subsequent engines. Considerations in the choice of assessment engine include:

  • Degree of QTI compliance.
  • Availability of randomisation of questions.
  • Existence of optional steps within questions.

The choice of simulation tool
The JeLSIM tools are ideal for the simulation component of the system because they allow many interfaces to be constructed from a single model by a non-programmer. As each of these interfaces is capable of being used with an assessment engine, it will be possible to build new questions from existing material with very low overheads.

Online assessment of practical experiments
The chemical kinetics simulation resources were produced after a review of the concerns of teachers over the use of online practicals. As a result, they do not attempt to mimic the mechanics of an experiment and the dexterity required to perform it, but rather to get the student to duplicate the thought processes required to complete the experiment. The type of assessment that can be carried out with the resources should therefore be more in line with what teachers find acceptable.

Assessment of practical activities is costly and time-consuming. Methods of using computer-based assessment for both summative and formative assessment of practicals that reduce these overheads would be welcomed**. There is a range of learning contexts in which these resources can be used, all of which can potentially benefit from formative assessment, and some from summative assessment. Contexts in which these resources may be used include:

  1. As a simulated experiment (or a variant of the real experiment to avoid direct replacement).
  2. As preparation for the student in advance of the laboratory exercise.
  3. For revision.
  4. To provide activities in experimental design.
  5. To support subject teaching in chemical kinetics.

If full integration of an assessment engine with the JeLSIM tools were undertaken, a wide range of new forms of computer-based assessment would be possible within each of these contexts. These are detailed in turn below.

**It is important to be aware that there will be considerable resistance to the idea of substituting computer-based experiments for real "hands on" practicals for summative assessment. Since the intention of this project has never been to replace the "hands on" experiment, it is hoped that the resistance will be less.

The simulated experiment
The simulated experiment could be assessed either formatively or summatively. The prescribed outcomes for particular practical activities could in many cases be directly assessed. Students can be given tailored feedback during the experiment, and information on how they performed could also be provided to the teacher. A range of examples of different types of assessment is given below; the choice of whether each is appropriate for summative as well as formative assessment of a simulated experiment rests with the teacher.

  • Assessment of the students' skills in choosing an appropriate graph, appropriate axes, ranges and labels.
  • Assessment of the accuracy with which students plot data points on a graph.
  • If the experiment is to determine a value, the computer can assess how accurately students have calculated the value from their own experimental data.
  • When students have to convert data readings into another form, for instance calculating a concentration from a volume reading, the concentration of a reagent in the solution, or the rate of the reaction, the computer can check the correctness of the calculation.
  • Can students define volumes of reagents in a series of experiments so that they constitute a "fair" or controlled experiment? Feedback can be given on the validity of their experiment.
  • Some practical experiments require the student to make a written conclusion, e.g. about trends the experiment has shown and the reasons for them. Such assessment is outside the realm of the simulation, but may well be possible within an assessment engine.

Combination of such assessments throughout a practical exercise could provide authentic assessment of the experimental process.
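Checking the accuracy of a student's calculation against the student's own experimental data, as described above, amounts to recomputing the expected value and marking within a tolerance. The sketch below uses a generic concentration = moles / volume conversion as a stand-in for whatever conversion a given experiment requires; the function names and the 5% tolerance are illustrative choices, not prescribed by the project.

```python
def expected_concentration(moles, volume_l):
    """Concentration in mol/L implied by the student's own recorded data."""
    return moles / volume_l

def mark_calculation(student_value, expected, rel_tol=0.05):
    """Accept the student's value if it lies within rel_tol (here 5%)
    of the value recomputed from their data."""
    return abs(student_value - expected) <= rel_tol * abs(expected)

# e.g. a student who recorded 0.5 mol in 2.0 L and reported 0.26 mol/L
ok = mark_calculation(0.26, expected_concentration(0.5, 2.0))
```

Marking against the student's own readings, rather than against a single "right" answer, is what makes this compatible with randomised starting conditions: each student can receive different data yet still be marked fairly.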

Preparation for "hands on" laboratories
When using the resources as preparation for a "hands on" laboratory, formative assessment is required, giving the student feedback and explanations where mistakes are made, and on the accuracy of any calculations undertaken. It is probably not appropriate to undertake summative assessment and provide students with a mark during such an activity.

Revision
The requirements for this type of use are similar to those of the previous section. It is possible however, that the student may like their work to be marked; this would depend on the purpose of their revision.

Experimental design
Criticisms have been levelled at typical practical experiments in that they tend to resemble recipes, where the student simply follows instructions. Although syllabi suggest that students should be given the opportunity to devise their own experiments, this is rarely practicable in the laboratory. The demonstration "investigative" exemplar shows how such a problem might be posed to the student. If an assessment engine were combined with a simulation in the style of the "investigative" interface (where the simulation is used as a tool by the student to design an experiment to solve a problem), then the process of experimental design could be assessed as well. The student can be given feedback on the validity of their experiments as they devise the experiment, or at the end of the exercise, depending on the type of assessment and the educational aims of the exercise.

Chemical kinetics teaching
The nature of the JeLSIM tools means that once a model has been built, any number of visualisations can be constructed for that model. There is potential for a number of different forms of assessment using simulations produced with the JeLSIM tools. At its simplest, there can be simple multiple-choice or numeric-answer questions where simulated data is required to pose a question or must be used by the student to answer it, e.g. reading data from a graph. It would also be possible to pose questions such as those which:

  • Ask the student to predict what will happen to a graph when a variable is changed, or to draw an output graph for a given scenario, with feedback from the simulation available to show the correct answer.
  • Ask the student to drive the simulation to a given state.
  • Test the student's ability to read and analyse data from graphs.
  • Test the student's ability to interpret graphs and calculate parameters from example data, e.g.:
    • The order of a reaction from a concentration vs. time plot.
    • The rate constant for a given reaction.
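The last of these, determining a rate constant, shows how the marking side could recompute the expected answer from the simulated data. The sketch below estimates a first-order rate constant k by a least-squares fit of ln(C) = ln(C0) - kt; the data at the bottom are synthetic values generated for illustration, not output from the actual kinetics simulation.

```python
import math

def first_order_rate_constant(times, concentrations):
    """Estimate k for a first-order reaction from concentration vs. time
    data, via a least-squares fit of ln(C) against t."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    mean_x = sum(times) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(times, ys))
             / sum((x - mean_x) ** 2 for x in times))
    return -slope  # ln C falls with slope -k for a first-order reaction

# Synthetic data for a reaction with k = 0.5 (time units arbitrary)
times = [0.0, 1.0, 2.0, 3.0]
concs = [1.0, math.exp(-0.5), math.exp(-1.0), math.exp(-1.5)]
k = first_order_rate_constant(times, concs)
```

With noiseless data the fit recovers k = 0.5 exactly; with randomised simulated data the assessment engine would instead mark the student's value against the fitted value within a tolerance.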

Testing deeper understanding
The range of assessment possible with the new system would be greatly increased. Questions which require higher order skills could be more easily set. Teachers could potentially gain better understanding of the problems their students face and their misconceptions.

Although almost certainly outside the capability of an assessment engine, it might be possible for the teacher to use histories of student activity to gain insight into the way in which the student tackled the problem and whether they employed random guesswork rather than reflective thought.

Summary

It is clear that a combination of simulation with an assessment engine could provide considerable advances on current practice in computer-aided assessment. It would allow the production of more complex question types that test deeper understanding and allow authentic assessment. Potentially it allows computer-aided assessment of practical experimentation that could reduce costs and pressure on resources.

References
  • Thomas, R. and Neilson, I. (1995) Harnessing simulations in the service of education: the Interact simulation environment. Computers and Education, 25, 21-29.
  • Pilkington, R. and Grierson, A. (1996) Generating explanations in a simulation-based learning environment. Int. J. Human-Computer Studies, 45, 527-551.


This page was last updated May 15th 2003 :CM
© 2003 Jelsim Partnership