Blog Carnival: Influential Education Research

Blog Carnival 2: “What education research has most influenced your practice?”

 

I was a bright-eyed and bushy-tailed first-timer at ViCE/PHEC 2015, my first national education conference. Chemistry and physics brought together in a wonderful exchange. And exchange they did. I first encountered Peer Instruction in a workshop by Anna Wood and Ross Galloway, where they justified its existence and (this is key) used it on the audience. An audience that contained me, a student educated in a time and a place where educational innovation meant using pictures.

 

Mind.

Blown.

 

My irrational, emotional enthusiasm for the technique led me to the evidence for its effectiveness. I started my first ever teaching post at Glasgow University with enough autonomy to use Peer Instruction as a drop-in enhancement in the middle five minutes of my lectures. And the results were pleasing. An increase in PI quiz scores doesn’t always correlate with learning gain, but the final exam results (and student feedback) supported my gut. Trust, but verify.

Because of my exposure to the results of that educational research, I have never delivered a full lecture without a peer instruction component. I’d call that an influence on practice.

 

(honourable mention for pre-labs and Jen Evans!)


Blog Carnival – Memorable Teaching?

The excellent Katherine Haxton recently issued a call for something I’d heard about in other contexts, but never here: a blog carnival. A perfunctory glance at my posting history will reveal that I’m not a natural blogger, or even really a micro-blogger. I think it’s the motivation rather than the medium, so I’m glad for an excuse to tell a story about My Most Memorable Tea-ching Session So Far. Truth may be assumed, or hoped for, or found in spirit but not in substance.

On one occasion, I was facilitating a jigsaw group-work session that involved student discussion, digestion, and presentation of material. Breakout groups would look at a chunk of a problem then present back to each other – their first experience of public speaking. The only possible way I’d be taken by surprise would be the ludicrously unlikely joint circumstances that a) students didn’t know in advance they’d be giving a presentation, and b) one of them had an extreme reaction to unexpected public speaking.

Turns out, nobody told them they’d be presenting, and someone in my group had an extreme reaction to unexpected public speaking. The first and hopefully last time that someone’s run away while I’ve been teaching.

Some of my earliest teaching experiences were in small group tutorials, and I firmly believed then in the power of hospitality. I had a tutorial box, full of mugs, coffee, teabags, a kettle, and the occasional fresh milk. I’m not an authoritarian; approachability isn’t a USP, but it’s all I’ve got. Plus, the literature has my back: people are more receptive when holding a warm drink than a cold one. I cater any tutorial with fewer than six scheduled attendees, and it really helped to get students onside in the early days of my career, largely as a substitute for experience.

Back to the teaching session: this was an emergency, and I needed the best tool I had: tea. I delegated one student to rescue their friend from a nearby bathroom and another to put a table near a power socket; the hospitali-tea box was deployed and the entire group shortly fed and watered (there are bourbons as well, I’m not a monster). Hearts settled, an exemption from speaking issued, and we carried on without further incident.

So when you look for a learning technology, remember that it needn’t be a VLE, or a smartboard, or even an electricity-free pedagogical framework. Technology can be something as simple as hospitality.

ViCEPHEC17 roundup!

Another summer, another fantastic ViCEPHEC to provide a vital burst of excitement and ideas for the new semester. This being my third such conference, I’m starting to feel like I might be finding my feet a little – to echo Michael’s recent blog, I’m starting to put aside the overwhelming urge to implement everything I see.

Immediately before the conference, there were two great events. The first was Labsolutely Fabulous, a showcase of laboratory experiments and practical work, which I was woefully late for but still managed to whoosh around a few demonstrations of, including some great microscale work from Bob Worley of CLEAPSS, much discussed but never before met. I also came away with a pipe-cleaner molecular model of water from Kristy Turner, which graced my lanyard for the rest of the conference and will grace my living-room from now on. The second was a satellite meeting of the Teaching Fellows Network, much-renamed and invaluable.

One theme that stood out for me at the Teaching Fellows Network, and across the whole conference, was the difficulty of challenging signature pedagogies. I saw case after case where introducing too much innovative practice, too quickly, would result in student rebellion or poor satisfaction scores. Despite research indicating that teaching quality is unrelated to student satisfaction, we heard of multiple academics being punished, and in some cases denied promotion, on the basis of poor reception. Simon Lancaster even found himself in the position of potentially having to advise himself to reduce the extent of flipping in a course, in the face of a direct inverse correlation between learning gain and student satisfaction – the peril of being a director of teaching whose remit includes your own…

It’s become increasingly clear that setting the tone of the culture is important to elicit change – both within a department, and in the places our students come from. It’s easy to blame “the system” for unhelpful student preconceptions, but when that’s code for blaming secondary education, it’s even more vital to listen to the intersecting experience of teacher-lecturers like Kristy Turner, David Read, and Sir John Holman. It’s hardly a problem unique to our little corner of humanity that we need to cast less blame and build more bridges.

My main new piece of good practice came right at the start of the conference, from Suzanne Fergus, who gave voice to a habit I’ve used haphazardly and accidentally: put the why first. Give your lecture a context on day 1, minute 1. I’ve long argued that it’s far more important to spend time making your subject relevant and engaging than it is to cram in another sliver of content, and it’s great to have a voice with some weight that I can cite. Suzanne also spoke about Miller’s pyramid of competency in lab skills – something I’ll be looking into locally in the next year.

Continuing the lab skills theme, Robin Stoodley of UBC presented work they had done to categorise the cognitive tasks of the undergraduate teaching laboratory, revealing a real narrowness of experience: many of the tasks were repeated across many experiments, while many others appeared in only a single experiment – organic chemistry being a particular culprit. This framework would, I think, be useful for categorising experimental work across all families of Domin’s descriptors, and useful to me as I begin to add elements of inquiry to my first-year curriculum (very much following some unpublished work of Jenny Burnham in this area).

Finally in the lab theme, Jenny Slaughter presented an important observation that echoes what we already know to be true: student retention is directly linked to interaction with graduate teaching assistants! It’s a powerful reminder not to neglect the training of these students, as they represent most of the direct staff contact between students and the university, certainly in first year. Also vitally important in the lab is safety education. Both Liverpool and Bristol deny students entry to the lab without a passing mark on a pre-lab safety quiz, and James Gaynor of Liverpool spoke of a robust and integrated approach to H&S that involves giving students access to official COSHH forms directly, as part of lab preparation.

Hopefully, in editing down my 10,000 words of conference notes into this single blog post, I’ve also managed to reduce my cache of new ideas for implementation down into something small enough to tackle before #vicephec18…

Bring on the next semester!

Career progression in UK Chemistry HE

I’ve been working full-time in Higher Education for about two years now, and the precursor scramble of postdocs, contracts, and CV buffing has left me with a lasting interest in what makes a person appealing – initially to interview panels, but latterly to the internal promotional power structures of universities.

At last year’s ViCEPHEC16 in Southampton, Jenny Burnham led a satellite pre-meeting of chemistry teaching fellows (and other early-career teachers), where the focus was career progression (there is some scholarship on this from outwith chemistry, but not much). As part of the discussion, we were tasked with identifying our own institution’s career progression criteria, and a theme emerged of a balance between leadership and scholarship. Based on this and other discussions around career tracks within peer support groups at Strathclyde, I’ve added a third vertex: good practice. Neglected though it may be in many institutions, it can carry real weight – I started my career at Glasgow, loosely under the guidance of Bob Hill, who I believe (though I may be wrong) was prof’d on the basis of just being a damn fine teacher.

Anyway, the three points of the “promotion triangle” I’ve identified are:

  1. Pedagogical research
  2. Influence
  3. Good practice

A university’s promotion criteria will usually favour one or two of these over the others, and a mismatch between those criteria and personal strengths can be as frustrating as it is prevalent – for every good practitioner bemoaning the need to publish, there’s a pedagogical researcher being told to stop writing and start teaching. The definitions are also fuzzy:

  • Elements 1 and 2 are related – high-impact publications of research could be taken as influence, but publishing an account of practice would probably not be recognised as an academic, scholarly (REFable…) work.
  • Element 2 is probably the hardest to pin down – external examination, conference presentations, textbook authorship, institutional education policy – these can all contribute.
  • Element 3 is probably the hardest to evidence, and much has been written, better than I could, on the tyranny of student awards and the role of likability in the TEF.

Rather than try to pretend I’m particularly well-read in this area, I instead want to bring some questions out of this: what would it look like to become a professor in these areas? Are there more? Has anyone ever been promoted for administrative excellence? Do you think this paradigm should be de-emphasised or dismantled? Does it work for anyone? Is it still sexist? Have I asked too many questions?

Answers on a tweetcard, discussion needed and valuable!

MICER17 Reflection 6: Georgios Tsaparlis

This is a reflection on a specific MICER17 conference session; for an overview of the conference, start reading here.

Prof. Georgios Tsaparlis finished up the day with the RSC Education award lecture on problem-solving. My takeaways from this session were to do with the long-lasting problem of … problems! Dorothy Gabel observed in 1984 (the year of my birth) that students will frequently attempt to use algorithmic approaches without understanding the underlying problem – it seems that students never change.

Students are also adept at turning problems into exercises – using familiarity to drop the cognitive load of the task at hand under their own working memory capacity, and in so doing becoming adept at that which was once challenging, but without understanding the problem. It reminds me of the “novice, expert, strategic” approaches to problem solving, where we all collectively attempt to reduce complexity and our cognitive load.

MICER17 Reflection 5: Keith Taber

This is a reflection on a specific MICER17 conference session; for an overview of the conference, start reading here.

Keith S Taber (editor of CERP) gave a fantastic double session on research ethics, and the importance of having a widely-known middle initial. The pre-reading for this session inspired thought, once more, around what really constitutes educational research. Keith has a number of editorials on this, arguing that studying a local implementation of a generally effective pedagogical technique is not really research. To count as research, it should have control data – and unless the control data comes from previous years, splitting a cohort and running a control in a way known to be disengaging is potentially unethical unless the technique is legitimately novel; in which case, it should be studied alongside best practice, rather than against a placebo. (The reference escapes me, but it puts me in mind of a flaw in medicinal chemistry statistics, where a new intervention is significant against placebo but not significant against existing best practice – which is itself not significant against placebo – leading to inappropriate conclusions.)

What are some of the reasons these studies happen anyway? Perhaps institutional resistance (Does it work here? Prove it before you change something properly), and perhaps personal doubt (I know it works, but will it work in my hands?). Do I, as a physical scientist, simply trust educational research findings less? Does the increased variation of human research scare me? I would suggest framing both of these issues the same way: We have to put the onus on the person resisting change, whether ourselves or our institution, to prove that the literature supporting change is flawed beyond simply saying “It might not work in our context”.

My takeaway from Keith’s talk was his walk through notable failures of ethics in the history of medicine and psychology: although the Stanford prison experiment wasn’t on the agenda, we looked at Milgram and Tuskegee, and discussed the factors that can lead researchers into a situation that is grossly unethical when observed externally. Milgram tells us that people will follow the suggestions of authority into deeply uncomfortable places – deferring our moral judgement in the process. Do we as experimenters (or interviewers) risk accidentally expressing our authority in inappropriate ways? Or can we collectively deceive ourselves that the course of action we are on is justified by the tenets of utilitarianism, as in the extreme example of the Tuskegee incident?

My table had a particularly insightful discussion around the purpose of the debrief – voluntary consent that only becomes informed at the conclusion of the experiment, lest the information affect the outcome. In the Stanford Prison Experiment, trauma was inflicted that goes beyond a simple debrief or disclaimer – and it has left people deeply affected, even decades later. Is it ethical to traumatise someone if it’s all explained later as fakery? We thought probably not.

All this might seem like a far cry from educational ethics, but badly-implemented research could see students subjected to inappropriately difficult tests, potentially harming their self-efficacy and even their self-belief. Poorly designed studies can also waste valuable donated time. We also risk a lack of oversight if we are the gatekeepers of our own students – departmental or faculty ethics boards are meant to provide this oversight, but it often amounts to nothing more than a rubber stamp. If we run an experiment with students who view us as a lecturer or leader, can we be sure they feel no implicit coercion? No link between participation and good grades?

We then had an extensive discussion of ethics in publication: pointing out the limitations of your findings, not mis-citing sources, and knowing when and when not to reveal personally-identifying information. Keith identified a number of “cargo cult behaviours” (my own words) which are seen as making research ethical. Destroying research data and anonymising participants were two given examples, and I would add university ethics boards under certain circumstances – it is possible for a group of people used to assessing medical interventions to rubber-stamp an educational ethics application, but that does not prevent the possibility of straying into subtly coercive behaviour as an interviewer/experimenter. I have no oversight just because my forms are in order!

For a far more elegant summary of the talk, Dr Kristy Turner was also at the conference and sketched several of the talks; her tweet is embedded below with permission, gratefully received!

MICER17 Reflection 4: Graham Scott

This is a reflection on a specific MICER17 conference session; for an overview of the conference, start reading here.

Drilling down further into data collection methods was Graham Scott, talking about interviews for data collection. Many useful dos and don’ts were shared, such as the importance of interviewing in a neutral, distraction-free environment, without a strong power imbalance between interviewer and interviewee. Lecturers interviewing their students, and vice versa, were both flagged as problematic!

The importance of testing out your data collection was reiterated (an emergent theme for this conference). Pilot your interview on a single participant, as you may discover whole questions and subject areas that deserve an entry in your rota. The idea of open questions to prompt discussion in focus groups was also raised, with interviewees provided with lists of prompt questions to bridge dry spells. It also helps speed up transcription of a group conversation if the mediator addresses people by name!

As a group to interview, our table picked those students who don’t turn up to lectures. Conversation largely focused on getting people to engage with the interview itself, with the possibility of telephone interviews or even instant messaging. Telephone interviews are both hindered and helped by the lack of body language: you can’t read student emotions, but nor can you prejudice the conversation with body language of your own.

Some other takeaways from this session were references. Firstly, a paper from Graham exploring the motivations to share educational practice among biology educators. Why do we bother publishing, or talking, or attending conferences?

Secondly, a look at barriers to the adoption of fieldwork, where teachers were given a presentation of exemplar good practice, followed by a single question: “Why won’t this work in your context?” It’s something of a personal weakness (using my local context as a reason not to trust education research findings), so I imagine some of the findings will dovetail nicely with Terry McGlynn’s outstanding blog piece from last year, Education research denialism in STEM faculty. We are all in thrall to pragmatic teaching factors, but perhaps part of the reason we get stuck in a loop of “can’t fix the leak, too busy bailing” is that we just don’t trust the sealant?

For a far more elegant summary of the talk, Dr Kristy Turner was also at the conference and sketched several of the talks; her tweet is embedded below with permission, gratefully received!