The National Student Survey puts pressure on lecturers to provide ‘enhanced’ experiences. But, argues Frank Furedi, the results do not measure educational quality and the process infantilises students and corrodes academic integrity
One of the striking features of a highly centralised system of higher education, such as that of the UK, is that the introduction of new targets and modifications to the quality assurance framework can have a dramatic impact in a very short space of time. When the National Student Survey was introduced in 2005, few colleagues imagined that, just a few years down the road, finessing and managing its implementation would require the employment of an entirely new group of quality-assurance operatives. At the time, the NSS was seen by many as a relatively pointless public-relations exercise that would have only a minimal effect on academics' lives. It is unlikely that even its advocates expected the NSS to acquire a life of its own and become one of the most powerful influences on the form and nature of the work done in universities.