Obama and USAID: the need for genuine evaluation

Written by Ajoy Datta

What comes first for USAID – evidence or policy?
Over the last decade, many would argue that the United States Agency for International Development (USAID) has increasingly focused on the US State Department goal of transformational diplomacy, with an emphasis on countries that are politically important. Its top five recipients, for example, are Iraq, Afghanistan, Sudan, Colombia and Egypt. This is neither regressive, as governance is clearly a key issue on the development agenda, nor a new phenomenon: Andrew Natsios, former administrator of USAID, argued that foreign aid has risen with the urgency of national security threats, as in post-war Europe with the Marshall Plan and during the Cold War with the Alliance for Progress.

But with development implicitly tied, as many would believe, to foreign policy objectives, programme evaluation has increasingly focused on reporting activities and outputs for budgeting and accountability purposes, rather than on changes in the welfare of the poor. For example, the USAID clearinghouse contained only 31 impact evaluations (which assess how an intervention affects the final welfare outcomes of beneficiaries) a year between 2004 and 2006 – a small number considering the several hundred projects that USAID funds every year. Further, fear that negative evaluations would play into the hands of foreign aid critics in Congress and the State Department has meant that many evaluations have been hidden, limiting the chances of learning from either successes or failures. Global indicators under common objectives and cross-cutting themes have been favoured over country-specific monitoring frameworks, enabling easier aggregation and accountability to US stakeholders, namely Congress. One could argue that this has been, in essence, less about evaluation and more about information systems management. There is fear amongst some that policy drives evidence, rather than evidence driving policy.


A big spender, but where and how?
In 2007, the USA spent almost $22 billion on development aid, 90% of which was channelled through its bilateral operations such as the Millennium Challenge Corporation (MCC) and the US President’s Emergency Plan for AIDS Relief (PEPFAR). The latter – launched in 2003 and the largest international health initiative in history dedicated to a single disease – has committed $18.8 billion, of which almost 60% was spent by 2007. And over $7 billion has been given to the MCC by the US Congress since 2004. These mammoth amounts often dwarf the national budgets of developing countries. Questions remain, though, about whether the money spent has achieved the hoped-for changes in people’s lives. What impacts have HIV and AIDS control efforts had on the health of populations, for example? What has changed as a result of democracy and governance assistance? What are the underlying factors that determine success or failure? Is USAID improving its performance as a result of learning?

PEPFAR, for instance, was pioneered by the Bush administration amid a perception of HIV and AIDS in sub-Saharan Africa as a threat to national security. According to mainstream public opinion, it has been a success, both at home and abroad. This success, however, is based mainly on statistics such as the number of newly infected people receiving treatment, which tell us little about the quality of the treatment, or whether this treatment has reduced AIDS-related deaths. The Institute of Medicine has been critical of this approach, as have several experts within USAID itself. Some within USAID feel that it may be too early to document impact such as prevention, and that PEPFAR is, as its name suggests, an ‘emergency’ programme. Nevertheless, PEPFAR has been framed as a success story in a context in which US foreign policy, especially with regard to its efforts in Iraq and Afghanistan, has been heavily criticised. A case, then, of ‘policy-based evidence making’.

Promoting cooperation not fear
Things are beginning to change, though. An Executive Order – a directive issued by the US President – was published shortly after Barack Obama was elected, after being held in draft for more than a year. It documents weaknesses over the last decade and aims to strengthen evaluation in the interests of impact, transparency and learning.

So what does a learning culture look like? It is a culture in which an organisation engages in self-examination and learning based on real evidence. It is a culture in which experimentation and change are encouraged.

To foster such a culture, first, senior management need to demonstrate leadership and commitment to create a management regime based on outcomes and impact, using appropriate – and not necessarily experimental – methods: in other words, results that respond to country needs. Second, organisational support structures need to be resurrected, including a responsive knowledge and documentation centre to meet USAID’s needs for information, analysis, evaluation and decision-making, backed by proper incentives to ensure rigorous rather than merely positive evaluations. Third, capacity building, professional development and training guided by best practices in monitoring and evaluation (M&E) must be directed towards both programme and M&E staff within USAID and its partners. Finally, mechanisms should be established to help USAID absorb and disseminate the results of its work and evaluation, as well as its own research and the research of others. While many believe that a heavy focus on accountability in USAID may have promoted an evaluation culture of fear, it is hoped that the Obama administration can step forward to promote a culture of learning and cooperation.

"