Democratic Design: The Comparative Study of Electoral Systems Project

By W. Phillips Shively

This article appeared in the October 2005 issue of Public Opinion Pros and has been posted here with the permission of the publisher. It is the third in a series of planned Public Opinion Pros articles that make use of the Comparative Study of Electoral Systems (CSES).


The Comparative Study of Electoral Systems (CSES) is a collaboration of about fifty national election studies from around the world. The project creates a dataset of individual respondents, drawn from countries with varying political institutions, who answer a common set of political questions. Participating countries collect a common dataset comprising both a nationally representative postelection survey module and a module of system-level institutional data, designed to answer questions about how political institutions affect citizens’ political behavior and their perspectives on democracy. Given the wide variation among these countries with regard to electoral rules, presidential or parliamentary government, systems of federalism or central control, and lines of political conflict, among other things, our combination of institutional and mass-level survey data provides the first really good opportunity to examine a number of classic, critical questions about democratic design.

Here are a few examples of specific avenues of inquiry that guided the original development of the project, forming the core of research that is now beginning:

  • Does satisfaction with democracy vary with the degree to which power is located in the regions, as compared with the capital? And does this relationship depend on the structure of social divisions in the country?
  • Do voters vote more strategically under some electoral systems than others?
  • Are voters more likely to judge the government on the basis of the economy’s performance in a presidential system (where responsibility focuses on a single individual) than in a parliamentary system based on coalitions?

Each national study agrees to donate approximately fifteen minutes of a national postelection survey and to drop into this segment a module designed by the CSES planning committee, an international committee of scholars involved in election surveys. The end result is a dataset of perhaps seventy or eighty thousand respondents, drawn from a number of countries, that includes a set of commonly coded demographic background variables, the module of survey questions, and a detailed mass of information about each country’s democratic institutions.

Actually, the term “electoral systems” in the title of the project is a bit of a misnomer. The project was originally conceived to allow investigation of how citizens respond in their voting behavior to variations in electoral systems, but it has broadened to include variation in other institutions as well, such as federalism and presidential or parliamentary democracy. And in practice, of course, once the survey data have been gathered across a set of countries, investigators can insert into the dataset as contextual variables anything that varies across the countries—economic performance, cultural variables, ethnic or religious mixes of populations, and so on.
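To make that merging step concrete, here is a minimal sketch, in Python with the pandas library, of how an investigator might attach a country-level attribute to the respondent-level file. The file name, the column names (resp_id, country, gdp_growth, regime), and the contextual values are all hypothetical stand-ins chosen for illustration, not identifiers or figures from the actual CSES codebook.

    import pandas as pd

    # Respondent-level survey data: one row per respondent.
    # Hypothetical file and columns (resp_id, country, vote choice, ...).
    respondents = pd.read_csv("cses_respondents.csv")

    # Country-level contextual data: one row per country, with invented
    # illustrative values rather than real figures.
    context = pd.DataFrame({
        "country": ["DE", "MX", "KR"],
        "gdp_growth": [1.8, 5.2, 7.0],  # hypothetical election-year growth, %
        "regime": ["parliamentary", "presidential", "presidential"],
    })

    # A one-to-many left join keeps every respondent and copies the
    # country-level values onto each row, turning macro attributes into
    # contextual variables for micro-level analysis.
    merged = respondents.merge(context, on="country", how="left")

The same one-to-many merge works for any attribute that varies across countries but not across respondents, which is what makes the combined micro- and macro-level design so flexible.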

CSES was initially fostered by the International Committee for Research into Elections and Representative Democracy. The first survey module went into the field in 1996 and ran through 2001; a window this large was required in order to accommodate the varying schedules of national elections across so many countries. A second module is now in the field and will be completed in 2006. Planning is nearing completion for a third module to run from 2007 through 2012.

Every survey module covers certain basic political attitudes and perceptions, such as party identification, a left-right placement of each major political party, and satisfaction with the workings of democracy; these core items are repeated from module to module.

In addition, each module addresses some particular theoretical question. The first module addressed especially questions of strategic voting behavior; the second focused on the distinction between majoritarian and consensus-based democratic institutions; and the third will feature the nature of the political choices offered to individuals, and how those choices affect individual decisions.

The CSES datasets are publicly available, with no embargo, on the project website. A list of all collaborators is also available there, as is the membership of the current planning committee, contact information, and other information about the project. The project bibliography, available on the site, shows a considerable body of work published already from what is still a young project.

Collaborations like this—not just CSES, but also the World Values Surveys and the various Barometer series—have been increasing in number in recent years. As these endeavors become more widespread, it may be useful to look back at our experience and note some of the issues we have had to address in setting up the CSES project:

  • Importance of central infrastructure. Though the bulk of the work is done in the participating surveys, some central infrastructure is necessary in order to process the donated surveys, check them for internal consistency, add contextual variables on national political institutions, work with less-experienced survey teams, maintain the central repository and website, and so forth. Our project suffered for the first few years from an underfunded, and therefore less than fully satisfactory, central support structure. From the start we housed a support staff at the National Election Studies project at the University of Michigan, but for the first few years it survived hand-to-mouth on small grants from Michigan and the University of Minnesota. Since 2001, with funding from the National Science Foundation, things have gone well.
  • Keeping the drop-in module short. Since we rely on the generosity (and enlightened self-interest) of participants to provide space in postelection surveys for our module, we must not abuse their hospitality. It is hard to squeeze multiple research concerns into a fifteen-minute segment, but we have been pretty ruthless in avoiding question-creep.
  • Mode. Inevitably, the participating studies vary as to interviewing mode, from face-to-face interviews to mailed questionnaires. All we can really do about this is to document survey administration in a good deal of detail and include that information, coded in usable ways, in the dataset.
  • Standardized questions. Although we design a uniform module and ask participants to use it as a drop-in in their surveys, a number of participants choose to modify certain questions. Just as one example, left-right placement of parties is nearly meaningless for respondents in Korea and Taiwan, and though the investigators there have generously included the question in their surveys, they have reported that it increases respondents’ fatigue and hostility. In several instances, participating surveys have unilaterally made changes in questions. (For this reason, among others, the notes to the CSES codebook are a good deal longer than the codebook itself.)
  • Missing variables. In a number of instances, participants have also simply dropped certain questions from the module, which creates missing values for those variables. (The sketch following this list shows how analysts can check for both dropped items and mode differences before pooling the data.)
  • Language and comparability. Cross-national research must always deal with problems of translation and comparability. We do not have funds to provide central translation facilities, so we depend on our collaborators to translate the planning committee’s module into their national language, and to provide us with an English-language version of their codebook. The Cadillac for translation, of course, is back-translation. In a few instances, participants have done this, but most of the time simple translation has been used. We do make available on the CSES website each collaborator's original survey instrument in the original language, so that those using the data can check directly on how a question was asked.
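As a concrete illustration of the last few points, the sketch below shows how an analyst might use the mode documentation and check for dropped module items before pooling data across countries. It is written in Python with pandas, and the file name and column names (country, mode, lr_self) are hypothetical stand-ins, not actual CSES codebook identifiers.

    import pandas as pd

    # Pooled respondent-level file; hypothetical name and columns.
    df = pd.read_csv("cses_module1.csv")

    # 1. Mode check: tabulate interviewing mode by country, using the
    #    survey-administration information coded into the dataset.
    print(pd.crosstab(df["country"], df["mode"]))

    # 2. Missing-variable check: which countries fielded no
    #    left-right self-placement item at all?
    missing_share = df.groupby("country")["lr_self"].apply(lambda s: s.isna().mean())
    dropped = missing_share[missing_share == 1.0].index.tolist()
    print("Countries without the left-right item:", dropped)

    # Pool only the countries that fielded the item, or alternatively
    # model mode effects explicitly instead of ignoring them.
    analysis_df = df[~df["country"].isin(dropped)]

Checks of this kind do not remove the comparability problems, but they make the analyst's choice to pool, to drop, or to model the differences a deliberate one.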

Except for the question of infrastructure, each of the points above is essentially some aspect of the central problem of projects such as the CSES. Because the project produces a public good (the dataset, which is made available immediately to all scientists in the world), the problem of incentives for participation obviously comes to the fore. What does a survey get in return for its expensive gift of a fifteen-minute segment in a national study? Or, to put it another way, what incentives does the project have to encourage participation, standardization of questions, and use of the full module rather than cherry-picked items?

One possible incentive would be to make the dataset less than a fully public good, by embargoing the data for some period, perhaps a year or two, during which only participating studies would be allowed to analyze it and publish work from it. We have avoided doing this, and are willing to tolerate a little less control in return for the benefits of a fully public good. Fortunately, most studies have found that participation in the project is viewed as a positive thing by their local funders, and this has given them a considerable incentive to participate in a way that at least does not deviate markedly from what the project needs. In a few cases the project has used the ultimate “incentive” by refusing to accept proffered contributions that departed too greatly from the common design. But this is always a bit of a balancing act.

The bottom line is that the CSES project has made available for the first time a body of data that allows us to address some very important questions about the interaction between citizens and their nations’ political institutions.


W. Phillips Shively is professor of political science at the University of Minnesota and chair of the CSES planning committee, 1997-2003.



Questions? E-mail cses@umich.edu.
Comparative Study of Electoral Systems, www.cses.org