Module Evaluation Questionnaires (MEQs) provide a way of measuring students’ opinions of teaching quality. However, a raw score without context is a fairly meaningless indicator of quality. This recipe card shows how you can analyse contextual information and qualitative data to build a richer picture and identify areas for development.
Recipe Card Detail
|Module Evaluation Questionnaire scores can be a useful way of evaluating our teaching, but it is important that we provide context for them to be meaningful.|
WE WILL LOOK AT: Additional data about your sample that you need in order to provide context; comparator data that can be used to analyse feedback relative to other courses
PREPARATION: You will need current MEQ reports
TIMING: 30-60 minutes to analyse and contextualise
|1. SELECT QUESTIONS|
Consider what aspects of your teaching and learning practice you are interested in and highlight those questions for analysis. You may be interested in how students view the course as a whole, including organisation and assessment, or more interested in questions that look at your individual feedback as a teacher.
|2. GET CONTEXT DATA |
The MEQ reports are provided for each module. Each report highlights the number of students who responded out of the total invited, and the response rate. It is important that you consider and report these, because small sample sizes (n < 20) and low response rates (< 50%) mean the data have less validity and should be interpreted cautiously.
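The validity check above can be sketched as a small helper. This is an illustrative sketch only: the function name and the example figures are hypothetical, and the thresholds (n < 20, response rate < 50%) are the ones given in this step.

```python
# Hypothetical helper to flag low-validity MEQ samples.
# Thresholds (n < 20, response rate < 50%) follow the guidance above.

def check_meq_sample(responses: int, invited: int) -> list[str]:
    """Return warnings about sample validity for an MEQ report."""
    warnings = []
    rate = responses / invited
    if responses < 20:
        warnings.append(f"small sample: n={responses} (< 20)")
    if rate < 0.5:
        warnings.append(f"low response rate: {rate:.0%} (< 50%)")
    return warnings

# Example figures (made up): 12 responses from 40 invited students.
print(check_meq_sample(responses=12, invited=40))
```

A report that triggers either warning should still be discussed, but with the caveat stated alongside any scores.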
|3. EXAMINE COMPARATOR DATA |
Each question provides comparative data showing the median scores for other contexts where that question was used (department, faculty, and university for all core questions; department only for additional questions). It is important that you consider your score in relation to these comparators: a median score of 4 out of 5 might sound good until you realise that the faculty median is 4.5.
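The comparison can be made explicit by tabulating the gap between a module score and each comparator median. A minimal sketch, with made-up figures; the function name and comparator values are assumptions for illustration:

```python
# Illustrative comparison of a module's median score against its
# comparator medians (all figures below are invented examples).

def compare_to_benchmarks(score: float, comparators: dict[str, float]) -> dict[str, float]:
    """Return the signed gap between the module score and each comparator median."""
    return {level: round(score - median, 2) for level, median in comparators.items()}

# A median of 4.0 looks strong until set against a faculty median of 4.5.
print(compare_to_benchmarks(4.0, {"department": 4.2, "faculty": 4.5, "university": 4.1}))
```

Negative gaps point to comparators the module sits below; these are the scores worth a closer look rather than the raw figure itself.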
|4. LOOK AT THE SPREAD |
As well as looking at the measures of central tendency (e.g. median) it is worth considering the spread of answers, as given by the standard deviation. For example, feedback might be highly polarised (students either love it or hate it) and this will not be captured in the central figure.
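The point about polarised feedback can be illustrated with two invented response sets that share the same median but differ sharply in spread (the data are assumptions for illustration):

```python
# Why spread matters: same median, very different distributions.
# Response sets below are invented example data on a 1-5 scale.
import statistics

consistent = [4, 4, 4, 4, 4, 4, 4]   # everyone broadly agrees
polarised = [1, 1, 1, 4, 5, 5, 5]    # students love it or hate it

for name, scores in [("consistent", consistent), ("polarised", polarised)]:
    print(name,
          "median:", statistics.median(scores),
          "stdev:", round(statistics.stdev(scores), 2))
```

Both sets report a median of 4, but the standard deviation exposes the split: a high standard deviation is a prompt to look at the comments for what divides the cohort.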
|5. ANALYSE CONTEXTUAL COMMENTS |
The numerical feedback from an MEQ can only tell some of the story. It is important also to explore the textual comments to analyse the ‘why’ as well as the ‘what’. The simplest approach is basic coding: read through the comments once to identify common themes, then read again to count how many times each theme is mentioned. You may need to contact your module leader to request this information.
|6. ADD NARRATIVE |
This is your opportunity to provide context to the feedback, identify good practice and consider opportunities to address weaker areas. This brings together the numerical and textual data to gain clearer insights into what we can learn from the MEQ data.
|7. EVIDENCE |
When reporting MEQ data, be clear which questions you are referencing and include the sample size, response rate, and comparator data. Use the textual data to add colour, but be clear about how representative the views presented are.