Constructing the IIAG

The IIAG is the most accurate picture of what is going on in Africa, based on data, not personal views or political bias.

-Mo Ibrahim

How the Ibrahim Index of African Governance is built

The Ibrahim Index of African Governance (IIAG) measures the quality of governance in every African country on an annual basis. It does this by compiling data from diverse sources to build an accurate and detailed picture of governance performance.

The broad aim of the IIAG is to better inform and sustain the debate on African governance by providing a transparent and user-friendly resource to:

  • support citizens, governments, institutions and the private sector to accurately assess the delivery of public goods, services and policy outcomes
  • encourage data-driven narratives on governance issues
  • help determine, debate and strengthen government performance.

MIF defines governance as the provision of the political, social and economic public goods and services that every citizen has the right to expect from their state, and that a state has the responsibility to deliver to its citizens.

The IIAG focuses on outputs and outcomes of policy, rather than declarations of intent, de jure statutes and levels of expenditure.

10 years of the Index

The IIAG was launched in 2007 and has evolved to become the most comprehensive assessment of African governance. The 2016 IIAG is the tenth iteration and builds on the work of the previous nine years.

This annual refinement means that the IIAG data set is updated whenever practical improvements are identified. Whenever new historical data are made available, or the structure of the IIAG is strengthened, the entire data set is updated back to 2000. Each new release therefore supersedes previous versions of the Index, which should no longer be used.

Framework of the IIAG

The IIAG is composed of four overarching categories: Safety & Rule of Law, Participation & Human Rights, Sustainable Economic Opportunity and Human Development. These categories are made up of a total of 14 sub-categories, each of which is populated by a number of indicators. Each indicator measures a narrow governance concept and captures an aspect of its sub-category topic.

This framework allows the user to analyse governance performance across both specific and broad governance concepts, at different results tiers: Overall Governance, category, sub-category and indicator level.
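As an illustration only, the hierarchy described above could be represented as a nested structure. The four category names below are those used by the IIAG; the sub-categories and indicators shown are a small, incomplete selection included purely to show the shape of the framework.

    # Illustrative sketch of the IIAG results tiers, not the full framework.
    # Category names are from the IIAG; the sub-categories and indicators
    # listed here are a partial, illustrative selection only.
    iiag_framework = {
        "Safety & Rule of Law": {"Personal Safety": ["..."]},
        "Participation & Human Rights": {"Participation": ["..."]},
        "Sustainable Economic Opportunity": {
            "Infrastructure": ["Digital & IT Infrastructure", "..."],
        },
        "Human Development": {"Health": ["..."]},
    }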

Construction of the IIAG – a few technical explanations

As governance is not directly measurable, it is necessary to determine the most suitable set of proxy indicators. The IIAG takes into account potentially diverse viewpoints by making use of a variety of data sources and indicators.

The Foundation does not collect primary data, but rather collates data provided by respected external sources. The 2016 IIAG consists of 95 indicators from 34 data providers. There are several types of data used: Official Data, Expert Assessments, Opinion Surveys and Public Attitude Surveys, which are new for 2016.

To ensure a robust and comprehensive analysis, the data are required to cover at least 33 of the 54 countries on the continent and provide at least two years’ worth of data for these countries since 2000. The most recent data for these 33 countries can be no more than three years old.
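As a rough sketch of these inclusion criteria in Python (the function, the thresholds expressed as parameters and the data layout are all illustrative, not MIF's actual tooling):

    def meets_inclusion_criteria(indicator_data, latest_year,
                                 min_countries=33, min_years=2, max_age=3):
        """Check the coverage rules described above.

        indicator_data is assumed to map country -> {year: value} for the
        years since 2000; this layout is illustrative, not MIF's format.
        """
        covered = [
            country for country, series in indicator_data.items()
            if len(series) >= min_years                  # at least two data years since 2000
            and latest_year - max(series) <= max_age     # most recent data no more than three years old
        ]
        return len(covered) >= min_countries             # at least 33 of the 54 countries covered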

Clustered indicators

Indicators measuring a specific governance concept are sometimes available from multiple sources. For example, data measuring political violence are available from two different data providers: Armed Conflict Location & Event Data Project and Political Terror Scale. To improve the accuracy of the indicator measurement and avoid double counting, these measures are combined into a single clustered indicator, which is the average of its underlying sub-indicators.

Certain measures capture a governance concept that is too narrow for inclusion as a stand-alone indicator. For example, IT Infrastructure from the Economist Intelligence Unit and Mobile Phone Subscribers, Household Computers and Household Internet Access, all taken from the International Telecommunication Union, are clustered together to become sub-indicators of the indicator Digital & IT Infrastructure. The same inclusion criteria are applied to sub-indicators as to stand-alone indicators.
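A minimal sketch of how a clustered indicator could be computed as the average of its underlying sub-indicators (the scores shown are made up and the function is illustrative):

    def clustered_score(sub_indicator_scores):
        """Combine sub-indicator scores into one clustered indicator score
        by simple averaging, e.g. the two political-violence measures or
        the four Digital & IT Infrastructure sub-indicators."""
        scores = [s for s in sub_indicator_scores if s is not None]
        return sum(scores) / len(scores) if scores else None

    # Hypothetical Digital & IT Infrastructure cluster (made-up scores)
    digital_it = clustered_score([72.0, 55.5, 40.0, 63.2])  # -> 57.675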

Handling missing data

Most indicators included in the IIAG have missing data points over the time series. As this can have an effect on a country’s aggregate scores, estimates are provided for missing data, following a statistical process called imputation.

According to this process, if data are missing outside the existing time series (before the first or after the last available data point), they are replaced by the nearest existing data point. If data are missing inside the time series, they are estimated from the neighbouring data points, taking values incrementally higher or lower so that the series moves smoothly between them.
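A minimal sketch of this imputation step, assuming a yearly series stored as a list with None marking missing values; values at the ends are carried outward from the nearest data point and gaps in the middle are filled by stepping between the neighbouring points. This mirrors the process described above but is illustrative, not MIF's code.

    def impute(series):
        """Fill missing values (None) in a yearly score series."""
        known = [i for i, v in enumerate(series) if v is not None]
        if not known:
            return series
        filled = list(series)
        for i in range(len(filled)):
            if filled[i] is not None:
                continue
            prev = max((k for k in known if k < i), default=None)
            nxt = min((k for k in known if k > i), default=None)
            if prev is None:          # missing before the first data point
                filled[i] = series[nxt]
            elif nxt is None:         # missing after the last data point
                filled[i] = series[prev]
            else:                     # missing inside: step between the neighbouring points
                weight = (i - prev) / (nxt - prev)
                filled[i] = series[prev] + weight * (series[nxt] - series[prev])
        return filled

    # e.g. impute([None, 50.0, None, 60.0, None]) -> [50.0, 50.0, 55.0, 60.0, 60.0]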

Normalisation

Given that the data utilised in the construction of the IIAG come from 34 separate data providers that present their data on different scales, it is necessary to standardise all data. This is done through a statistical process called normalisation, whereby the raw data for each indicator are transformed using the min-max normalisation method. This process allows all scores to be published in common units and within the same bounds of 0-100, where 100 is always the best possible score.

The application of this normalisation method means that a score of 100 relates to the best possible score within the group of 54 African countries between 2000 and the latest data year.
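A minimal sketch of this min-max normalisation, assuming the minimum and maximum are taken across all 54 countries and all years of an indicator's raw data. The inversion option, for indicators where higher raw values reflect worse performance, is an assumption added here so that 100 remains the best possible score; it is illustrative, not MIF's code.

    def normalise(raw_values, higher_is_better=True):
        """Min-max normalise raw indicator values onto a 0-100 scale,
        with the min and max taken across all countries and years.
        If higher raw values indicate worse performance, the scale is
        inverted so that 100 is still the best score (assumption)."""
        lo, hi = min(raw_values), max(raw_values)
        if hi == lo:
            return [50.0 for _ in raw_values]   # arbitrary constant when there is no spread
        scores = [100.0 * (v - lo) / (hi - lo) for v in raw_values]
        return scores if higher_is_better else [100.0 - s for s in scores]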

Data aggregation

The IIAG uses a transparent, simple and replicable method of data aggregation. Simple averages are calculated at each tier of the Index structure: indicator scores are averaged to produce sub-category scores, sub-category scores to produce category scores, and category scores to produce the Overall Governance score.
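A minimal sketch of this simple-average aggregation through the tiers, assuming a nested {category: {sub-category: {indicator: score}}} layout for one country (an illustrative layout, not MIF's data format):

    def mean(values):
        return sum(values) / len(values)

    def aggregate(country_scores):
        """Roll indicator scores up to an Overall Governance score by
        simple averaging at each tier, as described above."""
        sub_category_scores = {
            cat: {sub: mean(list(indicators.values()))
                  for sub, indicators in subs.items()}
            for cat, subs in country_scores.items()
        }
        category_scores = {cat: mean(list(subs.values()))
                           for cat, subs in sub_category_scores.items()}
        overall = mean(list(category_scores.values()))
        return sub_category_scores, category_scores, overall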

Analysis of the Index: measurement errors and uncertainty

The Foundation publishes standard errors and confidence intervals alongside the composite IIAG and category scores to reflect degrees of uncertainty. These will be available on our website after the launch on 3 October.

The standard errors and confidence intervals allow users of the IIAG to distinguish, to a certain degree, between changes in IIAG scores that can confidently be treated as actual changes in the state of governance and changes that may simply reflect noise, or that are too small to be ascribed a high likelihood of being significant. This allows users to make more sophisticated use of the governance information the IIAG provides.
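As one illustration of how these intervals might be read, the sketch below flags a year-on-year change as distinguishable from noise only when the two published confidence intervals do not overlap. This is a simple reading heuristic under that assumption, not a test prescribed by MIF.

    def change_is_distinguishable(interval_year1, interval_year2):
        """Return True when two (lower, upper) confidence intervals do not
        overlap, i.e. the score change is unlikely to be explained by
        measurement noise alone. A heuristic, not a formal significance test."""
        lo1, hi1 = interval_year1
        lo2, hi2 = interval_year2
        return hi1 < lo2 or hi2 < lo1

    # e.g. change_is_distinguishable((48.0, 52.0), (53.0, 57.0)) -> True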

Building the IIAG is a rigorous process that is being constantly refined. We hope you will find the results stimulating and challenging, and welcome your feedback.