This is a reflection on a specific MICER17 conference session; for an overview of the conference, start reading here.
Dr Orla Kelly talked about evaluating classroom practice, how we do it, and why we do it. The session focused a lot on action research, and then on the different ways we can collect data.
As a novice educator and mostly-practitioner, I still have very little idea about exactly what Action Research is – having had several cracks at understanding it at previous conferences, I think I’m finally starting to get it. I think it simply refers to any systematic evaluation of one’s own teaching practice, carried out while teaching, with the findings fed back in immediately. Last year, I ran a lecturer evaluation survey mid-way through my lecture course and changed my practice as a result – could this be action research?
Regardless of the precise definition, action research needs to have both research and action! As an example from her own practice, Orla talked about one of her papers in which she collected student data on a scheme to introduce problem-based learning into recipe-based labs, an endeavour that was only partially successful, for a number of reasons.
A chunk of the session was spent discussing the pros and cons of different methods of data gathering and handling – my group looked at focus groups specifically. Our table had a divergent conversation about grounded theory, and the idea that if you bring any preconceived notions into the interview or the data analysis, it isn’t grounded theory. While the meat of this session was extremely useful to all, I am not really in a position to conduct data collection beyond survey design for now.
As I am not well-versed enough in the topic of Orla’s talk to do it justice, I highly recommend you check out her sessional pre-reading – lest my absence of words be seen as problematic.