Judging teaching effectiveness during initial teacher education: Examining the duplexity of professional judgment and consistency

Published: 9 September 2024

Thought piece

A study by Prof Sarah Anderson explores judgment-making among educators in the UK, highlighting the complexity of evaluating teaching effectiveness and suggesting that a flexible, systems-based approach is needed to balance quality with professional autonomy.

In today's education system, where accountability is a key focus, evaluating teaching effectiveness has become a central issue in initial teacher education (ITE). With increased scrutiny and inspections, the importance of teaching standards and professional judgment has risen. However, this has led to tensions between the principles of professionalism, such as autonomy and expertise (see OECD, 2016), and the demands of a market-driven approach that emphasizes uniformity, standardization, and compliance (Biesta, 2022). In some cases, this shift has resulted in inspection models that threaten programs with reduced resources or even closure (Hulme et al., 2023). Both teachers and teacher educators have voiced concerns about the de-professionalization of teaching and what they see as excessive accountability. In fact, in 2020, Professor Linda Darling-Hammond, a leading figure in educational policy, argued that teaching had not yet achieved professional status.

This blog post discusses a study that explored the role of professional judgment among classroom-based mentor teachers, university teacher educators, and university-based school experience tutors in observing and assessing teaching effectiveness. Participants (approximately 100 in total) were chosen through purposeful sampling from three ITE programs in England, Scotland, and Wales. Social Judgment Theory (SJT) (Cooksey, 1996), with its focus on the context of judgment and on the cues and policies that judges use, provided an ideal framework for the research. The study used a comparative, embedded, and descriptive multiple-case study design, with a mixed-methods approach to data collection and analysis, followed by a cross-case synthesis (Yin, 2018).

Data were gathered through document reviews of teaching standards, a video observation task, and focus groups. Teaching standards and evaluation tools were aligned with the 10 standards from the Education International and UNESCO Global Framework of Professional Teaching Standards. Participants watched a 15-minute teaching video, answered questions about the effectiveness of the teaching observed, explained their thought process and rationale for their judgments, and expressed their views on factors identified in prior research (such as criteria, training, bias, and experience). Quantitative analysis identified patterns of agreement and disagreement among participants, while qualitative and cross-case analysis explored the underlying themes.

The research highlights differences between the three nations by examining how standards are applied, the challenges of devolved education policies, and issues with compliance-driven evaluation. The results reveal the complexity of judgment-making within ITE settings, shaped by numerous interconnected factors that create varying levels of attention, pressure, and resource allocation. The findings suggest a new model (see Figure 1) that accommodates the natural variability of a human endeavor that requires flexibility, while also safeguarding the education system by recognizing the limits to variation needed to maintain quality.

‘There are no set prescriptions for judgment-making; too often there is a search for linear, straightforward solutions when a multi-modal systems approach is actually needed.’

For example, while personal judgment is more subjective and individual-focused, professional judgment should be objective, informed by expertise, and aligned with established standards. Too much variability in judgments can undermine the rigor required for entry to the profession, while too little flexibility can stifle growth and innovation. Judgment-making is complex and socially situated, and while it cannot be fully controlled, it can be shaped and reshaped through dialogue and deliberation to achieve greater quality and relevance.

This model can be applied broadly to decision-making in ITE, as well as specifically to judgment-making, by embracing principles of complex systems such as interconnectedness, uncertainty, adaptability, and feedback loops (Cochran-Smith et al., 2014; Martin et al., 2019). The research has implications for continuous improvement in institutions and their networks, and it encourages a review of assessment practices. These findings pave the way for further discussion of professional teaching standards and of the gap between expectations and realities in assessing teaching in ITE.

The full blog can be read on the British Educational Research Association webpage.

Funding: This work was supported by the Society for Educational Studies 2022 National Award.

This blog post relates to a paper presented at the BERA Conference 2024 and WERA Focal Meeting on Wednesday 11 September at 16:00. Find out more by searching the conference programme.
