Stage 4. Monitoring and Evaluation: results and feedback

This relates to the outcomes and impact of the learning/development experience. It includes (and is sometimes synonymous with) the outcomes of formative and summative assessment, but it also addresses whether or not a particular teaching or learning activity, module, programme or CPD initiative has been successful. The monitoring and evaluation process feeds into the adaptation of teaching/learning activities and informs larger-scale review (including formal quality assurance), which takes place after the teaching/development has been implemented and assessed.

Thus not only will there be a consideration of whether learning outcomes have been achieved, but feedback will also be reviewed from teachers and learners on levels of satisfaction with the overall experience and its broader outcomes, such as greater confidence, employment prospects, value for money and access to further study. All of these will inform a formal or informal quality evaluation and form a vital part of the quality cycle described here, feeding in a continual loop into the adaptation (Stage 5) and future planning of learning, teaching and related activities. More importantly, it is here that formal quality processes are most closely engaged and where qualitative and quantitative data on quality are collated.

QA question: How will you know that it works?

Areas to consider and quality questions

Learning Outcomes

What were the expected outcomes when planning the course?
What were the real outcomes?
Have the learning outcomes been achieved / did the real outcomes match the expected outcomes?
What were the reasons for achieving / not achieving the learning outcomes? Reflect on the possible reasons at various stages of the course: planning, implementation, materials, etc.

Feedback

What feedback from learners has been collected, e.g. assessment scores, attendance, engagement with learning, satisfaction surveys?
What are you going to do with learner feedback, e.g. course modification/review?
What institutional feedback and follow-up systems are required?

Quality measures

How do you recognise good quality?
What formal mechanisms and/or bodies for quality assurance are involved?
What is the employability rate of graduates and what jobs have they gone into?

Reporting

Who needs to know the outcomes – internal and external stakeholders?
How should the results be communicated, and what is the impact, e.g. inclusion in league tables, recruitment, funding?

Progression

Have learners/participants been attending regularly (face-to-face or online/out of class sessions)?
Were tasks completed and submitted on time? If not, why not?
Is there evidence of problems with knowledge, understanding and skills (competencies) which may affect learning outcomes, participation etc.?

Attitudes

Is there evidence that learners are, or have been, motivated and engaged in the learning activity, e.g. are they preparing and participating appropriately?
Have good relationships been established between learners and teacher/facilitator, and between learners?
How have learners responded to learning methods/activities (do they appear to be working/have worked for this group)?

Adaptation

Is additional support required, e.g. additional skills development, conversation practice, revision tasks?
Do teaching/learning methods need to be revised or reviewed now, e.g. do you need to make changes before/after the activity/module etc. has been completed?

Advice

A range of factors contributes to the evaluation of a learning activity; test scores alone do not necessarily give information on the quality of the learning experience. Both qualitative and quantitative data are needed to evaluate whether the learning outcomes have been achieved and whether the programme and its practical implementation have been successful. Review and reflection are best fostered through a collaborative, dialogical relationship between teachers and learners, between teachers and their colleagues, and between learners and their peers.

Additional note: There should be clear internal coherence/alignment of teaching and learning objectives, methods of delivery, tasks and activities, materials, and evaluation of both the process and the products. When evaluating outcomes, look at the whole system and the whole teaching/learning process.

Case study example: Content and Language Integrated Learning (CLIL)

Language support system for International (English-medium) Master's Programmes, University of Jyväskylä, Finland

This extract from a case study illustrates a range of ways in which evaluation can be used to inform planning and to review a programme of study.
The support system was originally set up after an institutional evaluation, conducted in 2000, of the English-medium teaching offered at the University of Jyväskylä to exchange students. Through the evaluation, key problem areas for both students and teaching staff were identified, and support courses were started in academic study skills, writing and presentation, as well as in the pedagogical and communication issues involved in teaching multilingual and multicultural groups in English.
On the basis of feedback from both students and programme staff, the support system works well and contributes greatly to the discourse competence that students demonstrate in their writing and presentation assignments for the subject studies. A further collaborative institutional evaluation was conducted in 2007 to review all international Master's programmes. On the basis of its findings, new areas of development were identified and are now being addressed. This kind of work is never completed, but through systematic reviewing it is possible to enhance the quality of these programmes continuously.

Read the full case study: Download (pdf)

This project has been funded with support from the European Commission. This communication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
