NGO rankings: a bad idea poorly implemented

The Global Journal has published an inaugural list of the “Top 100 Best NGOs.” David Algoso has a critique of the methodology (or rather, the lack thereof) and of the list’s failure to recognize the immense differences across sectors. At best, one hopes the editors simply wanted to give us a list of a few NGOs they really liked, and the ranking was a mere ‘accident.’ But this is likely not the case, judging by the response posted in the comments section by Jean-Christophe Nothias, the editor of the journal.

Unfortunately, such rankings matter, particularly since thousands of NGOs are constantly looking for donations and millions of donors are trying to find a good investment for their good intentions. The Global Journal does both a great disservice. First, it picks only one hundred of the many deserving organizations; instead of matching donors with appropriate ‘causes,’ it further contributes to high inequality and a concentration of wealth at the top of the nonprofit sector. Second, the ranking provides no meaningful information about the organizations and their effectiveness. Charity watchdogs have long struggled to rate not-for-profits and for many years focused only on overhead spending. As we have argued elsewhere, financial efficiency is, at best, a weak proxy for effectiveness and, at worst, significantly weakens the nonprofit sector by forcing organizations to neglect organizational growth and engage in a ‘race to the bottom’ (.pdf; Nonprofit Overhead Cost Project).

Charity watchdogs, including Charity Navigator, are currently revising their ratings to include more meaningful information about individual organizations. These ratings will still include financial data, but will also provide more information about what organizations actually do. This is a step in the right direction. Unfortunately, the Global Journal seems to have missed this whole discussion.