Survey creation #1

Fair to say that I was very concerned and frustrated when I realised I had about 2 weeks to create my survey, but I am feeling better about it now. I’m also thankful that I spent hours and hours a while back playing around with Qualtrics, the survey creation software the university uses, learning how it works and what it can do. It has been a pretty rocky journey creating the survey over the last few days.

[Stock image from Microsoft Office]

Anna had been talking about using an instrument that someone else had already validated, so I spent a long time looking at various self-efficacy scales. That included general ones as well as more specific ones for maths, writing and the like. It took me over a day to realise that NONE of them were ever going to be suitable, because when it comes down to it, self-efficacy scales are there to measure someone’s self-efficacy (high, low etc.). But for my project I don’t actually care whether their self-efficacy is high, low or somewhere in the middle. I only want to know whether it has changed, yes or no, because of the enabling program they did and then, most importantly, specifically WHAT CAUSED it to change?

I probably should have realised that at the start, but I got there in the end! My next problem was breaking down academic self-efficacy into specific skills or tasks, because Bandura says it is much better to look at specific things, for example writing an essay in the correct structure, rather than something general such as “writing skills.” So what are the specific things that students do in enabling programs? Well, they are all different, so that is a tough one! In my own experience, going from one university to another, I can confidently say that there are lots of things covered by both programs, but they are still very different.

So I thought I would have another look at the benchmarking project that one of my bosses, Chris Cook, was a part of. It was led by Suzie Syme, whom I have met online at a couple of the NAEEA events. They compared 3 different enabling programs and have now moved on to doing the same thing with 12 programs. Their main finding was that although the assessment tasks were different, the learning outcomes (or learning objectives, if you prefer that terminology) were very similar. They did a thematic analysis of some kind, I think, and came up with 11 common learning outcomes.

So, knowing that Suzie Syme is going to be on my confirmation of candidature panel, I took those 11 learning outcomes and “translated” them into tasks or skills. For example, the first learning outcome is “Knowledge of and ability to engage appropriately with university systems, expectations, academic conventions”, and from my experience the main “university system” that students need to learn is the Learning Management System (LMS), such as Moodle, Blackboard or Canvas.

From the 11 learning outcomes I ended up with 21 skills or tasks that I could ask students about. Now I need to know a few different things. Did their confidence in each thing go up? Down? Sideways? Then, if it did change (either up or down), what caused that? Was it one event, a lightbulb moment? Or did it happen slowly over time? Really, I want to know what situations/opportunities we should re-create to help improve students’ confidence with each of the skills they need to be able to do.

I threw together the demographic questions without too many problems; I copied them from a previous survey I had done and just adjusted them slightly for my purposes here. Then I set up a table with the 21 skills/tasks down one side and 4 columns. The first column asks for their confidence level when they first started the enabling program. The next column is their confidence at the end. The third asks what changed their confidence level, based on the four sources of self-efficacy that Bandura proposes: practice (something you did), feedback (something someone said), observation (watching someone else do it) or feelings (the way you felt about it changed). Those labels might need to be re-worded or altered, but I think the idea is there. Then the last column asks whether the change happened in a lightbulb moment or over time.
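
In case it helps to picture the table, here is a rough sketch in Python of how one row of responses could be coded up for analysis later. To be clear, this is just me thinking out loud: the skill name, the 1 to 5 scale and the wording of the options are all placeholders I made up, not the actual survey items.

    # Rough sketch of one row of the 21-skill table
    # (placeholder wording, not the real survey questions)
    from dataclasses import dataclass
    from typing import Optional

    # Plain-language labels for Bandura's four sources of self-efficacy
    SOURCES = ("practice", "feedback", "observation", "feelings")
    TIMING = ("lightbulb moment", "over time")

    @dataclass
    class SkillResponse:
        skill: str                    # e.g. "navigating the LMS" (made-up example)
        confidence_start: int         # rating when they started (assume a 1-5 scale)
        confidence_end: int           # rating at the end of the program
        source: Optional[str] = None  # one of SOURCES, if confidence changed
        timing: Optional[str] = None  # one of TIMING, if confidence changed

        def change(self) -> int:
            """Positive = up, negative = down, zero = sideways."""
            return self.confidence_end - self.confidence_start

    # One made-up response for one of the 21 rows
    row = SkillResponse("navigating the LMS", 2, 4, "practice", "over time")
    print(row.skill, row.change())  # navigating the LMS 2

Coding the change as end minus start would make the up/down/sideways question a simple sign check, which should make the analysis stage easier.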

I showed Stuart yesterday morning and he said he really liked my approach, which translates to “he loves it.” So that has really taken the pressure off, but now I have to validate this darn survey I have created, and that might be easier said than done!
