On ranking methodologies: Phil Baty, why not follow Malaysia's example?

Posted on 14/06/2012


Given the multiplicity of rankings, and all the passionate debates about their value, I would like to briefly share the example of the SETARA rating system, implemented in Malaysia by the Ministry of Higher Education since 2009. It is a very original system, one that could usefully be replicated on a larger scale by our preferred ranking providers…
As mentioned, SETARA is not a ranking but a rating: institutions receive stars, from 1 (weak) to 6 (outstanding). The methodology is very interesting and, to my knowledge, rather unique. Like many rankings, it uses all kinds of statistics provided by the higher education institutions on academics, staff-to-student ratios, facilities, and so on. But the methodology also relies on two other sets of data, gathered by surveying graduates and employers.
On the graduate side, every graduating student has to fill in an online questionnaire; at some universities, completing it is even a condition for graduating. The questions cover the graduate's satisfaction with the facilities and the academic staff, whether the curriculum is comprehensive and challenging, the extent of the student's participation in extracurricular activities, but also the employment rate six months after graduation, the average monthly starting pay, and average satisfaction.

The other section of the data collection is based on an employer satisfaction survey, which is also a very unusual approach for an official rating system. Employers rate their overall satisfaction with the graduates, their intention to employ graduates from the institution, and finally the graduates themselves against seven generic graduate attributes: communication skills; critical thinking and problem solving; teamwork skills; continuous learning and information management; entrepreneurial skills; ethical and professional skills; and leadership skills.
Of course, implementing such a comprehensive survey at the international level could be difficult and costly… unless you take a cue from other multinational surveys. In entrepreneurship, for example, the Global Entrepreneurship Monitor (GEM), developed by Babson College and London Business School in 1999, is now the largest annual assessment of entrepreneurial activity, with 85 countries involved. GEM is built on partnerships with one academic institution in each country, which takes charge of administering the annual questionnaires and which, in exchange, gets access to the latest results for its researchers and can promote itself to students as being at the forefront of entrepreneurship studies. Being one of the country correspondents is seen as prestigious. Such partnerships could be developed for higher education rankings and, even if the result were not as comprehensive as the Malaysian data collection, they would certainly generate, for the first time, useful primary data to feed into a ranking system…