Archived, for now.
Today I had my first NQT observation of the year, with my head of department and head of faculty. It's not a graded observation; it's simply about providing constructive feedback. There are three of these interspersed with observations by the overall NQT tutor/mentor, so six observations in all. Those three count towards assessment points, so they track how I'm doing.
I won’t go into much detail about the lesson itself. It was ok; some good, some could be improved. The summary of feedback and targets for me to aim for in my next observation and in all my teaching going forward are:
Firstly, to remove the 'knowledgey' side of my lessons, for want of a better phrase. The critique goes that I spent more time than is required getting them to do activities on 'lower-order' skills, recall and knowledge, that are worth relatively little in exam papers, and this time could be more efficiently spent. An activity, to use today's example, that gives a series of sources about Mormons in the American West and asks students to a) identify Mormon beliefs and b) explain why they were so hated, could be redesigned by just *giving them* the Mormon beliefs (the lower-order knowledge) and getting them to focus on inferring and explaining from the sources why they were hated. This can then be extended into an evaluative task (Bloom's) in which they rank, say, what they think the three most important reasons were for such hatred of the Mormons (a task I did run, but after the observers had left).
This encouragement has come up a few times: give the knowledge across, don't make them work to identify/describe/recall or even explain ('middling' skills), and try to move towards higher-order skills as quickly as possible. The fact that they will be working with the knowledge means it should go in anyway.
Secondly, to make better use of ‘whole class AfL’ rather than just sampling individual students when going over an activity or task. Furthermore, the AfL needs to be higher-order, evaluative or judgement based (ranking, continuum lines, prioritising etc) rather than explanation or factual understanding.
Finally, to institute a no-hands policy so I can broaden my questioning beyond the safety net of keen, enthusiastic students that I regularly go to when asking questions. This is something I want to do and have always been anxious/scared to do, for a couple of reasons: the safety net is safe for a reason, and I worry about forgetting student names. Neither excuse is good enough.
So it's broadly clear where my department and faculty heads think I can improve. They see clear evidence of engagement, my marking is good, I have a great rapport with my class, and my planning for the lesson was good, with activities that built upon each other, made sense, and followed a logical structure. I am, however, stuck in a 'knowledgey' pit and need to grow a bit more.
I also need to follow a bit of internal target-setting. My own reflections about lessons I’ve had lately, especially today (not my observation lesson) have seen a couple of flaws crop up. On the one hand, I expect a certain degree of independence from students and the ability to get on with work given. On the other, I need to be more aware of where modelling or scaffolding is required and give the students the tools to complete the work given, rather than hack away ‘independently’ and give up because it’s hard to access. This is especially true with my setted year 9 classes.
Finally, I need to trust my own instinct more about when group work is appropriate. My school and colleagues are enthusiastic about it, but without clear, well-defined parameters it becomes a complete waste of time, as one person does all the work and the rest chat. This then leads to my frustration and warning the class about needing to sort their ideas out and whatnot, when in fact I've set them up this way – by not structuring it clearly enough, or not thinking of a more appropriate task.
I can’t give them only half the tools to succeed, or the wrong environment to succeed in, and not be surprised when many fail to clear the bar.
Half-term is at the end of this week and it gives me a chance to recharge and refocus.
Another post inspired by Embedded Formative Assessment by Dylan Wiliam. On p61:
“I often ask teachers, “What are your learning intentions for this period?” Many times, teachers respond by saying things like, “I’m going to have the students…” and then specify an activity. When I follow up by asking what the teacher expects the students to learn as a result of the activity, I am often met with a blank stare.” (p61)
I've had this question, and it's sometimes hard to find an answer when faced with such a blunt, and often glib, request, framed as if you're stupid or clearly in the wrong for designing a task in which a colleague, head of department, or mentor clearly sees no worth.
What are they learning?
To me this question has a rather large built-in assumption: that every activity should progress the learning in some way – some new skill, piece of knowledge, evaluation, etc. But what about consolidation? What about going over old material to practise recall? As the leaked Assessment Without Levels report stated, sometimes consolidation itself is progress. Listening to explanation or exposition of difficult concepts was critiqued to me as 'they're listening, they're not learning', but who is to say what is going on inside the head? Who is to say the two are mutually exclusive?
It’s a question with an agenda, and usually asked by someone with one too. In my limited experience, the person asking it is not just keen for ‘learning’ to be happening, but to be happening in a specific way, and some ways are ‘not learning’, as previously mentioned.
…what the teacher expects the students to learn as a result of the activity…
Does every activity have an answer to this? Can it be accurately measured and understood at the end of the activity? Is ‘learning’ something that can actually be assessed at the end of an activity or are we really just looking at instant recall and performance data?
These are all questions I don’t really know the answer to yet but I think they are valuable things to ask, to challenge the underlying assumption inherent in the question and to consider whether it’s a question really worth asking.
I feel that, early in my career, I’ve been caught off guard by it. Posed as a challenge, the question undermines tasks I’ve designed for a lesson and I approach the answer like I’ve already failed: the task isn’t promoting something new, it isn’t equipping them with a new skill or piece of content. In fact, the answer is readily available: they’re learning to apply themselves and improve their answers, they’re learning to summarise and explain, they’re learning to listen and comprehend the information that’s being given. Sometimes these things are not active. Or engaging. Or exciting. But sometimes listening can be learning. Reading and summarising, reworking, comprehension questions, memorising – learning is occurring, but without bells and whistles.
I’ve been using some time this summer to get on with some reading that I wasn’t able to do during my PGCE year, and one of the books I am working through is Embedded Formative Assessment by Dylan Wiliam. This post is a reflection on Chapter 3 of that book, in which I’m trying to reconcile the advice given by Wiliam with what, in the early stage of my career, I feel good practice looks like in History.
While reading this chapter, the following quote stood out.
‘As teachers, we are not interested in our students’ ability to do what we have taught. We are only interested in their ability to apply their newly acquired knowledge to a similar but different context.’ (p60)
How accurate is this for history teaching? How realistic is it?
History is a knowledge-based subject. While second-order concepts matter for assessing students' grasp of the domain, they aren't what the curriculum is based around. Otherwise the National Curriculum would have periods set aside for Interpretations or Change and Continuity or Cause and Consequence, rather than, say, 1066–1509. History is defined by knowing stuff and being able to do things with that stuff. But the stuff is central. Wiliam states that success criteria need to be separated from the context of the learning: they should be about doing things with the stuff, and being able to apply the stuff to 'similar but different' contexts.
This leads me to two thoughts.
One, it lends itself towards an activity-based method of planning: thinking about what students will be doing in ways that can be formatively assessed, rather than about the content they need to know. I don't think the two are mutually exclusive, and it might be acceptable at KS3, where the curriculum is vague, but it certainly isn't applicable at GCSE or A-Level, where content is king. If they have to know how the Gulf of Tonkin Incident was manipulated into a cause for military action, that's what you'll teach. And that's what you'll assess. The success criteria for that are understanding that piece of content, not whether it can be presented in different ways, or whether it then sheds light on the causes of war somewhere else. Contexts are always unique, and trying to apply knowledge elsewhere takes valuable time from the body of knowledge that is required by the exam. You can't have a contextless question in history. At least, not one that is answered properly.
Second, it suggests that second order concepts are what history should be teaching. If you can learn about cause in the French Revolution, you can then apply the principles to the First World War. To me, this turns a method of measuring understanding (the concepts) into the aim of teaching itself.
Ultimately history comes back to being a content-based subject. Substantive knowledge is the lifeblood pumping through it. So what does this actually mean for learning intentions and success criteria in a historical context, if the criteria and the context of learning are part of the same whole?
The intention/objective: this will almost always be content-based, and best framed as an enquiry. How far were religious reasons the primary cause of the Spanish Armada? How did King Harold die? This might change if you're doing a specific skills unit, say with new Year 7s, or teaching exam skills to GCSE or A-Level students (yet in both cases, the skills will not be taught independent of content).
The success criteria: these have the scope to be broader and skill-based, such as source analysis, writing an extended answer, or improving an example piece – but they have to go alongside the substantive. I cannot judge the success of the first question above if students don't know of other reasons, their relative importance, and the context of the time – and this would go hand in hand with success criteria that measure, say, evaluating competing factors against each other, or using evidence in support of a claim. Both are transferable history skills, but they don't exist in a vacuum: I can't then measure whether, say, the Eleven Years' Tyranny was the primary cause of the English Civil War through students' ability to use evidence in support of a claim, because they don't know that content.
This brings me back to the root of my difficulty with the original claim – that we are not interested in their ability to do what we have taught. Surely that is what we are interested in – but maybe not the only thing we are interested in? In history, surely success criteria have a substantive role to play, not just a conceptual one? And is this just stating the obvious?