Assessing practical methods and improving them is challenging, but essential if we want students to really think about the chemistry they’re doing. Emily Seeber recommends classroom activities that help embed the thought processes involved
Arguably some of the most challenging aspects of practical chemistry are evaluating methods and suggesting ways of modifying them to get more accurate and reliable* results. These skills are now examined within the new GCSE papers at ages 14–16 in England.
Evaluation is a higher-order thinking skill, which requires students to understand the question, the method, the results and how all of these relate to the underlying theory. While we have a wealth of practical and theoretical knowledge to draw on, students have limited experience, yet are expected to evaluate practicals they have never seen before. This means we need to give them tools to structure their thinking about evaluation.
Bringing rigour to ‘fair testing’
Students arrive from primary school able to talk about fair testing, and they can often recognise when a method is unfair, but not explain why it is a problem. This isn’t something they improve at without practice. Discussing what will happen to the results if you don’t follow the principles of fair testing should be a regular theme of classroom talk about practical work.
Students also struggle to evaluate the method for an investigation in a rigorous way. One useful way to get students to structure their thinking about fair testing is to get them to use ‘SAME’.
- Subjectivity: Did the practical involve any aspect which was down to human judgement?
- Amounts: Were the same amounts of all of the control reagents used in each experiment?
- Methods: Was the method the same every time?
- Equipment: Was the same equipment used in each experiment?
If the answer is ‘no’ to amounts, methods or equipment, students should focus on that aspect in their evaluation. Once they have identified the error, suggesting the improvement becomes straightforward: if different amounts were used, make them the same. If the answer is ‘yes’ to subjectivity, students should also state that in their answers. The SAME approach can be a useful quick fix for 15–16 year olds who find it difficult to know where to start with their evaluation.
One way to help students evaluate methods is to build up to written evaluations from a set of data. Start by scaffolding some of the skills students will need to achieve this.
A key aspect of evaluating methods is recognising which method best fits the investigative question. Present students with a research question, such as, ‘Is the reaction between hydrochloric acid and sodium hydroxide exothermic or endothermic?’ Build students’ evaluative skills in a structured way by giving them two methods to choose from for the research question, rather than asking them to come up with the ‘improved’ one. For example, for this question, two possible methods could be:
- Measure out 25 cm³ of hydrochloric acid into a beaker and place a thermometer in the beaker. When the temperature reading is stable, add 25 cm³ of sodium hydroxide solution and stir with the thermometer. Record the temperature every 15 seconds for 3 minutes.
- Put some hydrochloric acid into a beaker and place a thermometer in the beaker. When the temperature reading is stable, add some sodium hydroxide solution and stir with the thermometer. Record whether the temperature increases or decreases.
A task like this helps students think about the kind of data required to answer the investigative question, and prompts high quality classroom discussions. After students have decided which is the best method, ask them to explain their reasoning. You can further challenge some students to suggest a research question that fits the other method. This also makes a great revision task covering a range of experiments for ages 15–16.
Improving accuracy and reliability
There are two classes of improvements students can make to a method: those that improve accuracy and those that improve reliability. Students often conflate the two in their exam answers, costing them valuable marks.
Try asking students to sort cards showing suggested improvements to an experiment into a pile for accuracy and a pile for reliability.
Once students have sorted the cards into an accuracy and a reliability pile, they can sort each pile in order from ‘most effective’ to ‘least effective’ improvements. This helps them to think about the impact of the improvements they can make. For example, when determining the enthalpy of combustion for alcohols, ‘Use a pipette to measure the water into the copper calorimeters instead of a measuring cylinder’ has a negligible impact on accuracy when compared to ‘Put lids over the copper calorimeters’. So, even though students did not construct the suggestions themselves, they are critically engaging with them.
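A brief look at the calorimetry relations that sit behind this practical helps students see why the two improvements differ so much in impact (the uncertainty figures below are illustrative, not taken from any particular exam board):

```latex
q = m c \Delta T, \qquad \Delta H_\mathrm{c} = -\frac{q}{n}
```

A measuring cylinder might contribute an uncertainty of around one per cent to the mass of water, \(m\), so replacing it with a pipette gains very little. Heat escaping from an uncovered calorimeter, by contrast, suppresses the measured \(\Delta T\) itself, which feeds directly into \(q\) and can introduce a far larger error, which is why the lid outranks the pipette.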
Follow this up with a planning activity. Students use the key points they identified that improve accuracy and reliability to plan another experiment, like determining enthalpies of neutralisation. This consolidates the relationship between designing and evaluating methods.
These ideas work best if they are built into the regular practice of teaching practical work. Students should be selecting their methods and justifying them on a regular basis, not just at the end of the course.
Furthermore, lots of the required practicals at ages 14–16 have specific evaluative points, which need to be highlighted in context. These need to be annotated onto schemes of work so they are not forgotten. For example, the methods for improving separation during chromatography – changing the solvent or using longer paper – will not come up in any other context during the course.
If students are evaluating practicals, they are doing the most thinking about chemistry they can. And that seems like something we should be facilitating in our laboratories.
* The word reliability can be ambiguous for students as it is used in various contexts in everyday language. The Association for Science Education’s guide to the language of measurement [pdf] provides further information on using terms such as reliability, repeatability and reproducibility. You may wish to check which terms your exam board prefers.