Guest Post: Understanding Attitudes Toward Corruption

By Nara Pavão*

Brazil recently hosted the trial of the century, in which prominent politicians were accused of, and convicted of, involvement in a large-scale corruption scheme allegedly concocted by some of the most important figures of the Workers’ Party (PT). The scheme supposedly consisted of exchanging legislative support for large monthly payments. This unprecedented event – the trial and conviction of the nation’s foremost politicians – has brought the topic of corruption to the center of public debate. Although the Supreme Court convicted these politicians, not all citizens who have thoughtfully assessed the case agree with the ruling. The unique nature of the judgment, a singular case in the juridical history of the country, has generated controversy and disagreement, a clear indication that interpretations of information about corruption are not as homogeneous or simplistic as some would have us believe. While some Brazilians were persuaded by the information surrounding the scandal—accusations, media reports, and hard evidence—others were skeptical of this information and continued to believe in the innocence of the politicians.

When investigating what leads voters to take action against corrupt politicians, the rather scarce literature on the topic, generated primarily within political science, emphasizes a lack of information: voters support corrupt politicians because they lack sufficient information about these politicians’ misdeeds. The reality, however, is far more complex. From the standpoint of voting behavior, we should expect a more multifaceted account of attitude formation. Why should information about politicians’ involvement in corruption automatically translate into negative attitudes toward corrupt politicians? What if individuals have different levels of tolerance for corruption? What if they interpret information about corruption differently, becoming more or less likely to be persuaded to believe the accusations?

Because the ready availability of information is commonly treated as an “antidote” to political corruption (Winters, Testa, and Fredrickson 2012), little is known about what induces well-informed voters to nonetheless support corrupt politicians. Similarly, we know very little about how information about corruption translates into attitudes toward corruption, and even less about the factors that may moderate the impact of that information on individuals’ attitudes.

Opinion data from Brazil offers us some basis to begin thinking more comprehensively about the question of tolerance toward corruption and about the real role of information in leading citizens to adopt less tolerant attitudes toward corruption.

As we begin thinking in this vein, we should consider one surprising finding: the percentage of survey respondents who admit that they tolerate corruption is striking, particularly given that social desirability bias likely deflates such admissions (the socially desirable answer is to not tolerate corruption).

[Figure: Avaliação do Presidente Lula, do Congresso e outros assuntos, 2005. DATAFOLHA]

Furthermore, according to an opinion poll conducted in Brazil in 2005, those who sympathize with the Workers’ Party tend to be better informed about the corruption accusations against their preferred party; nevertheless, they are precisely the ones who believe that there is less corruption in Lula’s government than the evidence suggests. According to the same data, information about corruption (both awareness of the corruption scandals and the extent to which the individual is informed about them) does not predict citizens’ tolerance for corruption.

[Figure: Avaliação do Presidente Lula, do Congresso e outros assuntos, 2005. DATAFOLHA]

Inconclusive though they may be, these data should stimulate us to think more about attitudes toward corruption, not only in Brazil but elsewhere. The challenge is to move the debate beyond lack of information to the problem of how citizens react to corruption and the extent to which they are willing to take action against corrupt politicians. Perhaps disseminating information about corruption and increasing transparency—initiatives perceived as essential to good governance—are not the sole antidotes to corruption in politics. Rather, the remedy may depend on something we still lack: a comprehensive understanding of citizens’ real attitudes toward corruption and of how information about corruption scandals impacts such attitudes.

* Editor’s note: Nara Pavão is a PhD candidate in Political Science at the University of Notre Dame. She specializes in Comparative Politics and conducts research on public opinion, voting behavior, and corruption in Brazil, Argentina, and Colombia.

Guest Post: Obamacare and The Divided Welfare State

By Alex Armstrong*

Reading Jacob Hacker’s The Divided Welfare State: The Battle over Public and Private Social Benefits in the United States, I was struck by how accurately it illustrated the struggle for universal health insurance in America. Even though it was published in 2002, I believe the book offers important insights into the contemporary debate around health care reform, particularly why it took so long to happen and why it ultimately took the form it did.

Hacker begins with a pair of questions that have troubled social justice advocates for the last century. First, he asks: “Why are public social programs in the United States less generous, less complete, and less integrated into national economic policy than those typically found abroad?” And, additionally, “Why is the United States the only affluent capitalist country that does not guarantee universal or near-universal health insurance?”

Hacker critiques the way other scholars have answered the first question. Many of the techniques used to estimate the generosity and completeness of American social programs are inaccurate, he says, because they ignore the structure of the U.S. welfare regime. Much of American welfare provision is actually conducted by the private sector, with the government encouraging the growth of private pensions and fringe benefits through tax expenditures and subsidies. Thus, simply comparing the percentage of GDP spent on social programs across countries obscures the unusual American case. When these tax expenditures are considered alongside direct social spending, the U.S. rises to slightly above average among developed nations.

But this unique structure poses a riddle of its own: why does the United States have a “divided welfare state”? Furthermore, the division isn’t uniform – why do we observe a universal federal program for retirement pensions but no equivalent for health insurance?

The answer is not simply American exceptionalism. “Let us cease to conceive of outcomes as rooted in national identities,” Hacker says. And indeed, this is not a story of rugged individuals without need or desire for government intervention.

Instead, the American welfare state exists in its divided form as a result of the type of institutions that existed at the critical juncture when reforms were possible. According to Paul Pierson (2000), “Political development is punctuated by critical moments or junctures that shape the basic contours of social life.” Hacker’s book examines these critical junctures to explain the divergent paths of pension plans and health insurance.

The critical juncture in American politics arrived with President Franklin D. Roosevelt and the New Deal. Social Security – direct government provision of retirement pensions – was possible because the private sector occupied only a “supplementary” role at the time. Before the New Deal, private pensions were rare and often restrictive (they were frequently revoked, for example, for workers who went on strike). These pensions were used chiefly as career incentives and were funded solely by employers: workers had no “moral claim” to collect benefits, and at least one study suggests fewer than 10% of workers ever did. This supplementary role allowed reformers to build a core role for government in the provision of retirement pensions. And because they had provided only fringe benefits, employers were able to accommodate themselves to the new government program: their private pensions continued to play a supplementary role after the introduction and expansion of Social Security.

In contrast, New Deal reformers were uneasy about any attempt to provide government health insurance. Blue Cross and Blue Shield had been expanding in the years prior to Roosevelt’s inauguration, and had already obtained favorable tax treatment in many states. The private sector was occupying a “core” role in the provision of health care insurance, and would not relinquish its position easily. An industry – complete with interest groups and entrenched institutions – had sprung up around insurance provision, and it left little room for the government to act. Additionally, the power of groups like the American Medical Association helped stymie any momentum for reform, and the critical juncture closed without any serious attempt at a government health insurance program.

In the years that followed the Great Depression, the relative positions of government and the private sector were reinforced through a process of increasing returns. Just as employers grew accustomed to providing fringe benefits above and beyond Social Security, the government learned to work at the edges of a private health insurance industry. Even Medicare, though it signified an expansion of federal authority, helped the private sector by removing high-risk individuals from private insurance pools.

In some ways, then, Obamacare represents the government’s final acceptance of its supplementary role. A true public option – “Medicare for all” – was an unrealistic goal, if Hacker’s analysis is correct. Instead, Obamacare provides government-backed incentives and penalties that bring Americans into the private insurance industry. Far from the government takeover of medicine that its critics claim it to be, President Obama’s signature legislative accomplishment may be the ultimate accommodation to America’s uniquely divided welfare state.

* Editor’s note: Alex Armstrong is a PhD student in Political Science at Yale University and a guest contributor to The Smoke-Filled Room.