Saturday, March 10, 2018

Technology lock-in accidents

image: diagram of molten salt reactor

Organizational and regulatory features are sometimes part of the causal background of important technology failures. This is particularly true in the history of nuclear power generation. The promise of peaceful uses of atomic energy was enormously attractive at the end of World War II. In abstract terms the possibility of generating useable power from atomic reactions was quite simple. What was needed was a controllable fission reaction in which the heat produced by fission could be captured to run a steam-powered electrical generator.

The technical challenges presented by harnessing nuclear fission in a power plant were large. Fissionable material needed to be produced as useable fuel sources. A control system needed to be designed to maintain the level of fission at a desired level. And, most critically, a system for removing heat from the fissioning fuel needed to be designed so that the reactor core would not overheat and melt down, releasing energy and radioactive materials into the environment.

Early reactor designs took different approaches to the heat-removal problem. Liquid metal reactors used a metal like sodium as the fluid that would run through the core, removing heat to a heat sink for dispersal; water reactors used pressurized water to serve that function. The sodium breeder reactor design appeared to be a viable approach, but incidents like the Fermi 1 disaster near Detroit cast doubt on the wisdom of using this approach. The reactor design that emerged as the dominant choice in civilian power production was the light water reactor. But light water reactors presented their own technological challenges, including most especially the risk of a massive steam explosion in the event of a power interruption to the cooling plant. In order to obviate this risk, reactor designs incorporated multiple levels of redundancy to ensure that no such power interruption would occur, and much of the cost of construction of a modern light water power plant is dedicated to these systems -- containment vessels, redundant power supplies, etc. In spite of these design efforts, however, light water reactors at Three Mile Island and Fukushima did in fact melt down under unusual circumstances -- with particularly devastating results in Fukushima. The nuclear power industry in the United States essentially died as a result of public fears of the possibility of meltdown of nuclear reactors near populated areas -- fears that were validated by several large nuclear disasters.

What is interesting about this story is that there was an alternative reactor design, developed by US nuclear scientists and engineers in the 1950s, that involved a significantly different solution to the problem of harnessing the heat of a nuclear reaction and that posed a dramatically lower level of risk of meltdown and radioactive release. This is the molten salt reactor, first developed at the Oak Ridge National Laboratory in the 1950s as part of the loopy idea of creating an atomic-powered aircraft that could remain aloft for months. This reactor design operates at atmospheric pressure, and the technological challenges of maintaining a molten salt cooling system are readily solved. Because there is no water in the cooling system, the greatest danger in a nuclear power plant, a violent steam explosion, is eliminated entirely: molten salt will not turn to steam. Chinese nuclear energy researchers are currently developing a next generation of molten salt reactors, and they may well succeed in designing a reactor system that is both more efficient in terms of cost and dramatically safer with respect to low-probability, high-cost accidents (link). This technology also has the advantage of making much more efficient use of the nuclear fuel, leaving a dramatically smaller amount of radioactive waste to dispose of.

So why did the US nuclear industry abandon the molten-salt reactor design? This seems to be a case of lock-in by an industry and a regulatory system. Once the industry settled on the light water reactor design, that choice was embedded by the Nuclear Regulatory Commission in its regulations and licensing requirements for new nuclear reactors. It was subsequently extremely difficult for a utility company or a private energy corporation to invest in the research, development, and construction costs that would be associated with a radical change of design. There is currently an effort by an American company to develop a new-generation molten salt reactor, and the process is inhibited by the knowledge that it will take a minimum of ten years to gain certification and licensing for a commercial plant based on the new design (link).

This story illustrates the possibility that a process of technology development may get locked into a particular approach that embodies substantial public risk, and it may be all but impossible to subsequently adopt a different approach. In another context Thomas Hughes refers to this as technological momentum, and it is clear that there are commercial, institutional, and regulatory reasons for this "stickiness" of a major technology once it is designed and adopted. In the case of nuclear power the inertia associated with light water reactors is particularly unfortunate, given that it blocked other solutions that were both safer and more economical.

(Here is a valuable review of safety issues in the nuclear power industry; link.)

Saturday, March 3, 2018

Consensus and mutual understanding

Groups make decisions through processes of discussion aimed at framing a given problem, outlining the group's objectives, and arriving at a plan for how to achieve the objectives in an intelligent way. This is true at multiple levels, from neighborhood block associations to corporate executive teams to the President's cabinet meetings. However, collective decision-making through extended discussion faces more challenges than is generally recognized. Processes of collective deliberation are often haphazard, incomplete, and indeterminate.

What is collective deliberation about? It is often the case that a collaborative group or team has a generally agreed-upon set of goals -- let's say reducing the high school dropout rate in a city or improving morale on the plant floor or deterring North Korean nuclear expansion. The group comes together to develop a strategy and a plan for achieving the goal. Comments are offered about how to think about the problem, what factors may be relevant to bringing the problem about, what interventions might have a positive effect on the problem. After a reasonable range of conversation the group arrives at a strategy for how to proceed.

An idealized version of group problem-solving makes this process both simple and logical. The group canvasses the primary facts available about the problem and its causes. The group recognizes that there may be multiple goods involved in the situation, so the primary objective needs to be considered in the context of the other valuable goods that are part of the same bundle of activity. The group canvasses these various goods as well. The group then canvasses the range of interventions that are feasible in the existing situation, along with the costs and benefits of each strategy. Finally, the group arrives at a consensus about which strategy is best, given everything we know about the dynamics of the situation.

But anyone who has been part of a strategy-oriented discussion asking diverse parties to think carefully about a problem that all participants care about will realize that the process is rarely so amenable to simple logical development. Instead, almost every statement offered in the discussion is both ambiguous to some extent and factually contestable. Outcomes are sensitive to differences in the levels of assertiveness of various participants. Opinions are advanced as facts, and there is insufficient effort expended to validate the assumptions that are being made. Outcomes are also sensitive to the order and structure of the agenda for discussion. And finally, discussions need to be summarized; but there are always interpretive choices that need to be made in summarizing a complex discussion. Points need to be assigned priority and cogency; and different scribes will have different judgments about these matters.

Here is a problem of group decision-making that is rarely recognized but seems pervasive in the real world. This is the problem of recurring misunderstandings and ambiguities within the group of the various statements and observations that are made. The parties proceed on the basis of frameworks of assumptions that differ substantially from one person to the next but are never fully exposed. One person asserts that the school day should be lengthened, imagining a Japanese model of high school. Another thinks back to her own high school experience and agrees, thinking that five hours of instruction may well be more effective for learning than four hours. They agree about the statement but they are thinking of very different changes.

The bandwidth of a collective conversation about a complicated problem is simply too narrow to permit ambiguities and factual errors to be tracked down and sorted out. The conversation is invariably incomplete, and it often takes shape because of entirely irrelevant factors like who speaks first or most forcefully. It is as if the space of the discussion is in two dimensions, whereas the complexity of the problem under review is in three dimensions.

The problem is exacerbated by the fact that participants sometimes have their own agendas and hobby horses that they continually re-inject into the discussion under varying pretexts. As the group fumbles towards possible consensus these fixed points coming from a few participants either need to be ruled out or incorporated -- and neither is a fully satisfactory result. If the point is ruled out some participants will believe their inputs are not respected, but if it is incorporated then the consensus has been deformed from a more balanced view of the issue.

A common solution to the problems of group deliberation mentioned here is to assign an expert facilitator or "muse" for the group who is tasked to build up a synthesis of the discussion as it proceeds. But it is evident that the synthesis is underdetermined by the discussion. Some points will be given emphasis over others, and a very different story line could have been reached that leads to different outcomes. This is the Rashomon effect applied to group discussions.

A different solution is to think of group discussion as simply an aid to a single decision maker -- a chief executive who listens to the various points of view and then arrives at her own formulation of the problem and a solution strategy. But of course this approach abandons the idea of reaching a group consensus in favor of the simpler problem of an individual reaching his or her own interpretation of the problem and possible solutions based on input from others.

This is a problem for organizations, both formal and informal, because every organization attempts to decide what to do through some kind of exploratory discussion. It is also a problem for the theory of deliberative democracy (link, link).

This suggests that there is an important problem of collective rationality that has not been addressed either by philosophy or management studies: the problem of aggregating beliefs, perceptions, and values held by diverse members of a group into a coherent statement of the problem, causes, and solutions for the issue under deliberation. We would like to be able to establish processes that lead to rational and effective solutions to problems that incorporate available facts and judgments. Further, we would like the outcomes to be non-arbitrary -- that is, given an antecedent set of factual and normative beliefs by the participants, we would like to imagine that there is a relatively narrow band of policy solutions that will emerge as the consensus or decision. We have theories of social choice -- aggregation of fixed preferences. And we have theories of rational decision-making and planning. But a deliberative group discussion of an important problem is substantially more complex. We need a philosophy of the meeting!
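Even the simpler problem -- aggregating fixed preferences -- already misbehaves, as the classic Condorcet cycle shows. Here is a toy illustration in code; the participants, options, and rankings are invented for the example, not drawn from any real deliberation:

```python
from itertools import combinations

# Three participants' rankings over three strategies, best to worst.
# Hypothetical data, purely for illustration.
rankings = {
    "Ann":    ["lengthen_day", "tutoring", "smaller_classes"],
    "Bashir": ["tutoring", "smaller_classes", "lengthen_day"],
    "Chen":   ["smaller_classes", "lengthen_day", "tutoring"],
}

def majority_prefers(a, b):
    """True if a strict majority ranks option a above option b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

options = ["lengthen_day", "tutoring", "smaller_classes"]
for a, b in combinations(options, 2):
    winner = a if majority_prefers(a, b) else b
    print(f"{a} vs {b}: majority prefers {winner}")

# Each pairwise vote has a clear 2-1 winner, yet the majorities cycle:
# lengthen_day beats tutoring, tutoring beats smaller_classes, and
# smaller_classes beats lengthen_day -- no coherent group ranking exists.
```

Every individual ranking here is perfectly coherent; it is only the aggregation that fails. A deliberative discussion, where the rankings themselves shift as participants talk, is harder still.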

Tuesday, February 27, 2018

Computational social science

Is it possible to elucidate complex social outcomes using computational tools? Can we overcome some of the issues for social explanation posed by the fact of heterogeneous actors and changing social environments by making use of increasingly powerful computational tools for modeling the social world? Ken Kollman, John Miller, and Scott Page make the affirmative case for these questions in their 2003 volume, Computational Models in Political Economy. The book focuses on computational approaches to political economy and social choice. Their introduction provides an excellent overview of the methodological and philosophical issues that arise in computational social science.
The subject of this book, political economy, naturally lends itself to a computational methodology. Much of political economy concerns institutions that aggregate the behavior of multiple actors, such as voters, politicians, organizations, consumers, and firms. Even when the interactions within and rules of a political or economic institution are relatively simple, the aggregate patterns that emerge can be difficult to predict and understand, particularly when there is no equilibrium. It is even more difficult to understand overlapping and interdependent institutions.... Computational methods hold the promise of enabling scholars to integrate aspects of both political and economic institutions without compromising fundamental features of either. (kl 27)
The most interesting of the approaches that they describe is the method of agent-based models (link, link, link). They summarize the approach in these terms:
The models typically have four characteristics, or methodological primitives: agents are diverse, agents interact with each other in a decentralized manner, agents are boundedly rational and adaptive, and the resulting patterns of outcomes often do not settle into equilibria.... The purpose of using computer programs in this second role is to study the aggregate patterns that emerge from the "bottom up." (kl 51)
Here is how the editors summarize the strengths of computational approaches to social science.
First, computational models are flexible in their ability to encode a wide range of behaviors and institutions. Any set of assumptions about agent behavior or institutional constraints that can be encoded can be analyzed. 
Second, as stated, computational models are rigorous in that conclusions follow from computer code that forces researchers to be explicit about assumptions. 
Third, while most mathematical models include assumptions so that an equilibrium exists, a system of interacting political actors need not settle into an equilibrium point. It can also cycle, or it can traverse an unpredictable path of outcomes. 
The great strength of computational models is their ability to uncover dynamic patterns. (kl 116)
And they offer a set of criteria of adequacy for ABM models. The model should explain the results; the researcher should check robustness; the model should build upon the past; the researcher should justify the use of the computer; and the researcher should question assumptions (kl 131).
To summarize, models should be evaluated based on their ability to give insight and understanding into old and new phenomena in the simplest way possible. Good, simple models, such as the Prisoner's Dilemma or Nash bargaining, with their ability to frame and shed light on important questions, outlast any particular tool or technique. (kl 139)
A good illustration of a computational approach to problems of political economy is the editors' own contribution to the volume, "Political institutions and sorting in a Tiebout model". A Tiebout configuration is a construct within public choice theory where citizens are permitted to choose among jurisdictions providing different bundles of goods.
In a Tiebout model, local jurisdictions compete for citizens by offering bundles of public goods. Citizens then sort themselves among jurisdictions according to their preferences. Charles M. Tiebout's (1956) original hypothesis challenged Paul Samuelson's (1954) conjecture that public goods could not be allocated efficiently. The Tiebout hypothesis has since been extended to include additional propositions. (kl 2012)
Using an agent-based model they compare different sets of political institutions at the jurisdiction level through which policy choices are made; and they find that there are unexpected outcomes at the population level that derive from differences in the institutions embodied at the jurisdiction level.
Our model departs from previous approaches in several important respects. First, with a few exceptions, our primary interest in comparing the performance of political institutions has been largely neglected in the Tiebout literature. A typical Tiebout model takes the political institution, usually majority rule, as constant. Here we vary institutions and measure performance, an approach more consistent with the literature on mechanism design. Second, aside from an example used to demonstrate the annealing phenomenon, we do not explicitly compare equilibria. (kl 2210)
And they find significant differences in collective behavior in different institutional settings.
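The core mechanism of a Tiebout-style model is simple enough to sketch in a few lines of code. The sketch below is a minimal illustration of citizens "voting with their feet", not the authors' actual model: the population size, the uniform preference distribution, and the median-resident rule standing in for majority-rule policy choice are all assumptions made for this example.

```python
import random

random.seed(1)

N_CITIZENS, N_JURISDICTIONS, ROUNDS = 90, 3, 20

# Each citizen has an ideal level of public-goods spending (heterogeneous agents).
ideals = [random.uniform(0, 1) for _ in range(N_CITIZENS)]
# Citizens start in randomly assigned jurisdictions.
location = [random.randrange(N_JURISDICTIONS) for _ in range(N_CITIZENS)]

def policy(j):
    """Majority rule approximated as the median ideal of current residents."""
    residents = sorted(ideals[i] for i in range(N_CITIZENS) if location[i] == j)
    return residents[len(residents) // 2] if residents else 0.5

for _ in range(ROUNDS):
    policies = [policy(j) for j in range(N_JURISDICTIONS)]
    # Each citizen moves to the jurisdiction whose policy is closest to her ideal.
    for i in range(N_CITIZENS):
        location[i] = min(range(N_JURISDICTIONS),
                          key=lambda j: abs(ideals[i] - policies[j]))

# After sorting, residents of a jurisdiction are more alike than the population.
for j in range(N_JURISDICTIONS):
    members = [ideals[i] for i in range(N_CITIZENS) if location[i] == j]
    print(f"jurisdiction {j}: {len(members)} residents, policy {policy(j):.2f}")
```

Running the loop, the population sorts itself into relatively homogeneous jurisdictions with distinct policies -- an aggregate pattern that emerges from the "bottom up", in the editors' phrase. Varying the policy rule (the `policy` function here) is the kind of institutional comparison the authors pursue.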

ABM methodology is well suited to the kind of research problem the authors have posed here. The computational method permits intuitive illustration of the ways that individual preferences in specific settings aggregate to distinctive collective behaviors at the group level. But the approach is not so suitable to the analysis of social behavior that involves a higher degree of hierarchical coordination of individual behavior -- for example, in an army, a religious institution, or a business firm. Furthermore, the advantage of abstractness in ABM formulations is also a disadvantage, in that it leads researchers to ignore some of the complexity and nuance of local circumstances of action that lead to significant differences in outcome.

Saturday, February 24, 2018

Nuclear accidents

diagrams: Chernobyl reactor before and after

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents -- Chernobyl and Fukushima in particular -- and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, both in research labs and military and civilian applications. So what is the situation of safety in the nuclear sector? Jim Mahaffey's Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the celebrated and well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey refers to hundreds of accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons that have had less public awareness. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed "normal accidents" (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:
  • a Japanese fishing boat is contaminated by fallout from Castle Bravo test of hydrogen bomb; lots of radioactive fish at the markets in Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)
Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns during the past forty years, Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey's summary of "Broken Arrow" events -- the loss of atomic and fusion weapons:
Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge. 
Chuck Hansen [U.S. Nuclear Weapons - The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)
There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission are often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry in which the material is stored makes a critical difference in whether it goes critical. Fissionable material is often transported and manipulated in liquid solution; and the shape and configuration of the vessel in which the solution is held makes a difference to the probability of exponential growth of neutron emission -- leading to runaway fission of the material. Mahaffey documents accidents that occurred in nuclear materials processing plants that resulted from plant workers applying what they knew from industrial plumbing to their efforts to solve basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
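The point about geometry can be made with a back-of-the-envelope calculation. Roughly speaking, neutron production scales with the volume of fissile solution while neutron leakage scales with surface area, so a compact sphere is the most dangerous shape and a long narrow pipe among the safest. The numbers below are my own toy illustration of surface-to-volume ratios, not actual criticality physics or anything from Mahaffey's book:

```python
import math

# Toy illustration: for a fixed volume of solution, compare the surface
# area of a compact sphere with that of a long narrow "safe geometry" pipe.
# More surface per unit volume means more neutron leakage, hence less
# chance of a runaway chain reaction.

VOLUME = 10_000.0  # cm^3, an arbitrary fixed quantity of solution

def sphere_surface(volume):
    """Surface area of a sphere holding the given volume."""
    r = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * r ** 2

def cylinder_surface(volume, radius):
    """Surface area of a closed cylinder of the given radius and volume."""
    h = volume / (math.pi * radius ** 2)
    return 2 * math.pi * radius * (radius + h)

compact = sphere_surface(VOLume := VOLUME)      # most compact possible shape
tall_pipe = cylinder_surface(VOLUME, radius=3)  # a narrow column of solution
print(f"sphere:      {compact:.0f} cm^2 of leaking surface")
print(f"narrow pipe: {tall_pipe:.0f} cm^2 of leaking surface")
```

The same quantity of solution exposes roughly three times as much surface in the narrow pipe as in the sphere, which is why a plumber's instinct to consolidate liquid into one roomy tank is exactly the wrong instinct in a fissile-materials plant.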

Second, there is a fault at the opposite end of the knowledge spectrum -- the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).
The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)
There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.
[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why. 
Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”  
Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it. 
Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)
This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident -- a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)

Sunday, February 11, 2018

Folk psychology and Alexa

Paul Churchland made a large splash in the philosophy of mind and cognitive science several decades ago when he cast doubt on the categories of "folk psychology" -- the ordinary and commonsensical concepts we use to describe and understand each other's mental lives. In Paul Churchland and Patricia Churchland, On the Contrary: Critical Essays, 1987-1997, Paul Churchland writes:
"Folk psychology" denotes the prescientific, commonsense conceptual framework that all normally socialized humans deploy in order to comprehend, predict, explain, and manipulate the behavior of humans and the higher animals. This framework includes concepts such as belief, desire, pain, pleasure, love, hate, joy, fear, suspicion, memory, recognition, anger, sympathy, intention, and so forth.... Considered as a whole, it constitutes our conception of what a person is. (3)
Churchland does not doubt that we ordinary human beings make use of these concepts in everyday life, and that we could not dispense with them. But he is not convinced that they have a scientifically useful role to play in scientific psychology or cognitive science.

In our ordinary dealings with other human beings it is both important and plausible that the framework of folk psychology is approximately true. Our fellow human beings really do have beliefs, desires, fears, and other mental capacities, and these capacities are in fact the correct explanation of their behavior. How these capacities are realized in the central nervous system is largely unknown, though as materialists we are committed to the belief that there are such underlying neurological functionings. But eliminative materialism doesn't have a lot of credibility, and the treatment of mental states as epiphenomena of the neurological machinery isn't convincing either.

These issues generated a great deal of discussion in the philosophy of psychology from the 1980s onward (link). But the topic seems all the more interesting now that tens of millions of people are interacting with Alexa, Siri, and the Google Assistant, and are often led to treat the voice as emanating from an intelligent (if not very intelligent) entity. I presume that it is clear that Alexa and her counterparts are currently "question bots" with fairly simple algorithms underlying their capabilities. But how will we think about the AI agent when the algorithms are not simple; when the agents can sustain lengthy conversations; and when the interactions give the appearance of novelty and creativity?

It turns out that this is a topic that AI researchers have thought about quite a bit. Here is the abstract of "Understanding Socially Intelligent Agents—A Multilayered Phenomenon", a fascinating 2001 IEEE article by Persson, Laaksolahti, and Lönnqvist (link):
The ultimate purpose with socially intelligent agent (SIA) technology is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence. Such user-centred SIA technology, must consider the everyday knowledge and expectations by which users make sense of real, fictive, or artificial social beings. This folk-theoretical understanding of other social beings involves several, rather independent levels such as expectations on behavior, expectations on primitive psychology, models of folk-psychology, understanding of traits, social roles, and empathy. The framework presented here allows one to analyze and reconstruct users' understanding of existing and future SIAs, as well as specifying the levels SIA technology models in order to achieve an impression of social intelligence.
The emphasis here is clearly on the semblance of intelligence in interaction with the AI agent, not the construction of a genuinely intelligent system capable of intentionality and desire. Early in the article they write:
As agents get more complex, they will land in the twilight zone between mechanistic and living, between dead objects and live beings. In their understanding of the system, users will be tempted to employ an intentional stance, rather than a mechanistic one. Computer scientists may choose system designs that encourage or discourage such anthropomorphism. Irrespective of which, we need to understand how and under what conditions it works.
But the key point here is that the authors favor an approach in which the user is strongly led to apply the concepts of folk psychology to the AI agent; and yet in which the underlying mechanisms generating the AI's behavior completely invalidate the application of these concepts. (This approach brings to mind Searle's Chinese room example concerning "intelligent" behavior; link.) This is clearly the approach taken by current designs of AI agents like Siri; the design of the program emphasizes ordinary language interaction in ways that lead the user to interact with the agent as an intentional "person".

The authors directly confront the likelihood of "folk-psychology" interactions elicited in users by the behavior of AI agents:
When people are trying to understand the behaviors of others, they often use the framework of folk-psychology. Moreover, people expect others to act according to it. If a person’s behavior blatantly falls out of this framework, the person would probably be judged “other” in some sense, e.g., children, “crazies,” “psychopaths,” and “foreigners.” In order for SIAs to appear socially intelligent, it is important that their behavior is understandable in terms of the folk-psychological framework. People will project these expectations on SIA technology and will try to attribute mental states and processes according to it. (354)
And the authors make reference to several AI constructs that are specifically designed to elicit a folk-psychological response from the users:
In all of these cases, the autonomous agents have some model of the world, mind, emotions, and of their present internal state. This does not mean that users automatically infer the “correct” mental state of the agent or attribute the same emotion that the system wants to convey. However, with these background models regulating the agent’s behavior the system will support and encourage the user to employ her faculty of folk-psychology reasoning onto the agent. Hopefully, the models generate consistently enough behavior to make folk-psychology a framework within which to understand and act upon the interactive characters. (355)
The authors emphasize the instrumentalism of their recommended approach to SIA capacities from beginning to end:
In order to develop believable SIAs we do not have to know how beliefs-desires and intentions actually relate to each other in the real minds of real people. If we want to create the impression of an artificial social agent driven by beliefs and desires, it is enough to draw on investigations on how people in different cultures develop and use theories of mind to understand the behaviors of others. SIAs need to model the folk-theory reasoning, not the real thing. To a shallow AI approach, a model of mind based on folk-psychology is as valid as one based on cognitive theory. (349)
This way of approaching the design of AI agents suggests that the "folk psychology" interpretation of Alexa's more capable successors will be fundamentally wrong. The agent will not be conscious, intentional, or mental; but it will behave in ways that make it almost impossible not to fall into the trap of anthropomorphism. And this in turn brings us back to Churchland and the critique of folk psychology in the human-human cases. If computer-assisted AI agents can be completely persuasive as mentally structured actors, then why are we so confident that this is not the case for fellow humans as well?

Friday, February 9, 2018

Cold war history from an IR perspective

Odd Arne Westad's The Cold War: A World History is a fascinating counterpoint to Tony Judt's Postwar: A History of Europe Since 1945. There are some obvious differences -- notably, Westad takes a global approach to the Cold War, with substantial attention to the dynamics of Cold War competition in Asia, Africa, Latin America, and the Middle East, as well as Europe, whereas Judt's book is primarily focused on the politics and bi-polar competition of Communism and liberal democratic capitalism in Europe. Westad is a real expert on East Asia, so his global perspectives on the period are very well informed. Both books provide closely reasoned and authoritative interpretations of the large events of the 1950s through the 1990s. So it is very interesting to compare them from an historiographic point of view.

The feature that I'd like to focus on here is Westad's perspective on these historical developments from the point of view of an international-relations conceptual framework. Westad pays attention to the economic and social developments that were underway in the West and the Eastern bloc; but his most frequent analytical question is, what were the intentions, beliefs, and strategies of the nations which were involved in competition throughout the world in this crucial period of world history? Ideology and social philosophy play a large role in his treatment. Judt too offers interpretations of what leaders like Truman, Gorbachev, or Thatcher were trying to accomplish; but the focus of his historiographical thinking is more on the circumstances of ordinary life and the social, economic, and political changes through which ordinary people shaped their political identities across Europe. In Westad's framework there is an underlying emphasis on strategic rationality -- and failures of rationality -- by leaders and national governments that is more muted in Judt's analysis. The two perspectives are not incompatible; but they are significantly different.

Here are a few illustrative passages from Westad's book revealing the orientation of his interpretation around interest and ideology:
The Cold War originated in two processes that took place around the turn of the twentieth century. One was the transformation of the United States and Russia into two supercharged empires with a growing sense of international mission. The other was the sharpening of the ideological divide between capitalism and its critics. These came together with the American entry into World War I and with the Russian Revolution of 1917, and the creation of a Soviet state as an alternative vision to capitalism. (19)
The contest between the US and the USSR over the future of Germany is a good example.
The reasons why Stalin wanted a united Germany were exactly the same reasons why the United States, by 1947, did not. A functional German state would have to be integrated with western Europe in order to succeed, Washington found. And that could not be achieved if Soviet influence grew throughout the country. This was not only a point about security. It was also about economic progress. The Marshall Plan was intended to stimulate western European growth through market integration, and the western occupation zones in Germany were crucial for this project to succeed. Better, then, to keep the eastern zone (and thereby Soviet pressure) out of the equation. After two meetings of the allied foreign ministers in 1947 had failed to agree on the principles for a peace treaty with Germany (and thereby German reunification), the Americans called a conference in London in February 1948 to which the Soviets were not invited. (109)
And the use of development aid during reconstruction was equally strategic:
For Americans and western European governments alike, a major part of the Marshall Plan was combatting local Communist parties. Some of it was done directly, through propaganda. Other effects on the political balance were secondary or even coincidental. A main reason why Soviet-style Communism lost out in France or Italy was simply that their working classes began to have a better life, at first more through government social schemes than through salary increases. The political miscalculations of the Communist parties and the pressure they were under from Moscow to disregard the local political situation in order to support the Soviet Union also contributed. When even the self-inflicted damage was not enough, such as in Italy, the United States experimented with covert operations to break Communist influence. (112)
Soviet miscalculations were critical in the development of east-west power relations. Westad treats the Berlin blockade in these terms:
The Berlin blockade, which lasted for almost a year, was a Soviet political failure from start to finish. It failed to make west Berlin destitute; a US and British air-bridge provided enough supplies to keep the western sectors going. On some days aircraft landed at Tempelhof Airport at three minute intervals. Moscow did not take the risk of ordering them to be shot down. But worse for Stalin: the long-drawn-out standoff confirmed even to those Germans who had previously been in doubt that the Soviet Union could not be a vehicle for their betterment. The perception was that Stalin was trying to starve the Berliners, while the Americans were trying to save them. On the streets of Berlin more than half a million protested Soviet policies. (116)
I don't want to give the impression that Westad's book ignores non-strategic aspects of the period. His treatment of McCarthyism, for example, is quite astute:
The series of hearings and investigations, which accusations such as McCarthy’s gave rise to, destroyed people’s lives and careers. Even for those who were cleared, such as the famous central Asia scholar Owen Lattimore, some of the accusations stuck and made it difficult to find employment. It was, as Lattimore said in his book title from 1950, Ordeal by Slander. For many of the lesser known who were targeted—workers, actors, teachers, lawyers—it was a Kafkaesque world, where their words were twisted and used against them during public hearings by people who had no knowledge of the victims or their activities. Behind all of it was the political purpose of harming the Administration, though even some Democrats were caught up in the frenzy and the president himself straddled the issue instead of publicly confronting McCarthy. McCarthyism, as it was soon called, reduced the US standing in the world and greatly helped Soviet propaganda, especially in western Europe. (120)
It is interesting too to find areas of disagreement between the two historians. Westad's treatment of Leonid Brezhnev is sympathetic:
Brezhnev and his colleagues’ mandate was therefore quite clear. Those who had helped put them in power wanted more emphasis on planning, productivity growth, and welfare. They wanted a leadership that avoided unnecessary crises with the West, but also stood up for Soviet gains and those of Communism globally. Brezhnev was the ideal man for the purpose. As a leader, he liked to consult with others, even if only to bring them onboard with decisions already taken. After the menacing Stalin and the volatile Khrushchev, Brezhnev was likeable and “comradely”; he remembered colleagues’ birthdays and the names of their wives and children. His favorite phrases were “normal development” and “according to plan.” And the new leader was easily forgiven a certain vagueness in terms of overall reform plans as long as he emphasized stability and year-on-year growth in the Soviet economy.... Contrary to what is often believed, the Soviet economy was not a disaster zone during the long reign of Leonid Brezhnev and the leadership cohort who came into power with him. The evidence points to slow and limited but continuous growth, within the framework provided by the planned economy system. The best estimates that we have is that the Soviet economy as a whole grew on average 2.5 to 3 percent per year during the 1960s and ’70s. (367)
By contrast, Judt treats Brezhnev less sympathetically and as a more minor figure:
The economic reforms of the fifties and sixties were from the start a fitful attempt to patch up a structurally dysfunctional system. To the extent that they implied a half-hearted willingness to decentralize economic decisions or authorize de facto private production, they were offensive to hardliners among the old guard. But otherwise the liberalizations undertaken by Khrushchev, and after him Brezhnev, presented no immediate threat to the network of power and patronage on which the Soviet system depended. Indeed, it was just because economic improvements in the Soviet bloc were always subordinate to political priorities that they achieved so very little. (Judt, 424)
Perhaps the most striking contrast between these two books is the scope that each provides. Judt is focused on the development of postwar Europe, and he does an unparalleled job of providing both detail and interpretation of the developments over these decades in well over a dozen countries. Westad is interested in providing a global history of the Cold War, and his expertise on Asian history and politics during this period, as well as his wide-ranging knowledge of developments in Africa, the Middle East, and Latin America, permits him to succeed in this goal. His representation of this history is nuanced and insightful at every turn. The Cold War unavoidably involves a focus on the USSR and the US and their blocs as central players; but Westad's account is by no means eurocentric. His treatments of India, China, and Southeast Asia are especially strong, and his account of turbulence and faulty diplomacy in the Middle East is particularly timely for the challenges we face today.

*        *         *

Here are a couple of interesting video lectures by Westad and Judt.

Tuesday, January 30, 2018

Declining industries

Why is it so difficult for leaders in various industries and sectors to seriously address the existential threats that sometimes arise? Planning for marginal changes in the business environment is fairly simple; problems can be solved, costs can be cut, and the firm can stay in the black. But how about more radical but distant threats? What about the grocery sector when confronted by Amazon's radical steps in food selling? What about Polaroid or Kodak when confronted by the rise of digital photography in the 1990s? What about the US steel industry in the 1960s when confronted with rising Asian competition and declining manufacturing facilities?

From the outside these companies and sectors seem like dodos incapable of confronting the threats that imperil them. They seem to be ignoring oncoming train wrecks simply because these catastrophes are still in the distant future. And yet the leaders in these companies were, generally speaking, talented, motivated men and women. So what are the organizational or cognitive barriers that make it difficult for leaders to successfully confront the biggest threats they face?

Part of the answer seems to be the fact that distant hazards seem smaller than the more immediate and near-term challenges that an organization must face; so there is a systematic bias towards myopic decision-making. This sounds like a Kahneman-Tversky kind of cognitive shortcoming.
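The myopia point can be made concrete with a small numerical sketch. The hyperbolic discount function and all of the loss figures below are illustrative assumptions chosen only to show the shape of the bias, not empirical estimates:

```python
# Minimal sketch: steep (hyperbolic) discounting makes a distant, much
# larger hazard "feel" smaller relative to a near-term one.
# The parameter k and the loss figures are illustrative assumptions.

def hyperbolic_weight(delay_years, k=0.5):
    """Perceived weight of an outcome delayed by `delay_years`."""
    return 1.0 / (1.0 + k * delay_years)

near_loss = 10      # modest loss expected next year (arbitrary units)
distant_loss = 100  # existential loss expected in 15 years

felt_near = near_loss * hyperbolic_weight(1)
felt_distant = distant_loss * hyperbolic_weight(15)

print(f"felt near-term loss: {felt_near:.2f}")
print(f"felt distant loss:   {felt_distant:.2f}")
```

With these (hypothetical) settings, a hazard ten times larger is felt as less than twice as pressing as the near-term one, which is the pattern of systematically myopic planning described above.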

A second possible explanation is that it is easy enough to persuade oneself that distant threats will either resolve themselves organically or that the organization will discover novel solutions in the future. This seems to be part of the reason that climate-change foot-draggers take the position they do: that "things will sort out", "new technologies will help solve the problems in the future." This sounds like a classic example of weakness of the will -- an unwillingness to rationally confront hard truths about the future that ought to influence choices today but often don't.

Then there is the timeframe of accountability that is in place in government, business, and non-profit organizations alike. Leaders are rewarded and punished for short-term successes and failures, not prudent longterm planning and preparation. This is clearly true for term-limited elected officials, but it is equally true for executives whose stakeholders evaluate performance based on quarterly profits rather than longterm objectives and threats.

We judge harshly those leaders who allow their firms or organizations to perish because of a chronic failure to plan for substantial change in the environments in which they will need to operate in the future. Nero is not remembered kindly for his dedication to his fiddle. And yet at any given time, many industries are in precisely that situation. What kind of discipline and commitment can protect organizations against this risk?

This is an interesting question in the abstract. But it is also a challenging question for people who care about the longterm viability of colleges and universities. Are there forces at work today that will bring about existential crisis for universities in twenty years (enrollments, tuition pressure, technology change)? Are there technological or organizational choices that should be made today that would help to avert those crises in the future? And are university leaders taking the right steps to prepare their institutions for the futures they will face in several decades?

Thursday, January 25, 2018

Gaining compliance

Organizations always involve numerous staff members whose behavior has the potential for creating significant risk for individuals and the organization but who are only loosely supervised. This situation unavoidably raises principal-agent problems. Let's assume that the great majority of staff members are motivated by good intentions and ethical standards. That means that there are a small number of individuals whose behavior is neither ethical nor well intentioned. What arrangements can an organization put in place to prevent bad behavior and protect individuals and the integrity of the organization?

For certain kinds of bad behavior there are well understood institutional arrangements that work well to detect and deter the wrong actions. This is especially true for business transactions, purchasing, control of cash, expense reporting and reimbursement, and other financial processes within the organization. The audit and accounting functions within almost every sophisticated organization permit a reasonably high level of confidence in the likelihood of detection of fraud, theft, and misreporting. This doesn't mean that corrupt financial behavior does not occur; but audits make it much more difficult to succeed in persistent dishonest behavior. So an organization with an effective audit function is likely to have a reasonably high level of compliance in the areas where standard audits can be effectively conducted.

A second kind of compliance effort has to do with the culture and practice of observer reporting of misbehavior. Compliance hotlines allow individuals who have observed (or suspected) bad behavior to report that behavior to responsible agents who are obligated to investigate these allegations. Policies that require reporting of certain kinds of bad behavior to responsible officers of the organization -- sexual harassment, racial discrimination, or fraudulent actions, for example -- should have the effect of revealing some kinds of misbehavior, and deterring others from engaging in bad behavior. So a culture and expectation of reporting is helpful in controlling bad behavior.

A third approach that some organizations take to compliance is to place a great deal of emphasis on the moral culture of the organization -- shared values, professional duty, and role responsibilities. Leaders can support and facilitate a culture of voluntary adherence to the values and policies of the organization, so that virtually all members of the organization fall in the "well-intentioned" category. The thrust of this approach is to make large efforts at eliciting voluntary good behavior. Business professor David Hess has done a substantial amount of research on these final two topics (link, link).

Each of these organizational mechanisms has some efficacy. But unfortunately they do not suffice to create an environment where we can be highly confident that serious forms of misconduct do not occur. In particular, reporting and culture are only partially efficacious when it comes to private and covert behavior like sexual assault, bullying, and discriminatory speech and behavior in the workplace. This leads to an important question: are there more intrusive mechanisms of supervision and observation that would permit organizations to discover patterns of misconduct even if they remain unreported by observers and victims? Are there better ways for an organization to ensure that no one is subject to the harmful actions of a predator or harasser?

A more active strategy for an organization committed to eliminating sexual assault is to attempt to predict the environments where inappropriate interpersonal behavior is possible and to redesign the setting so the behavior is substantially less likely. For example, a hospital may require that any physical examinations of minors must be conducted in the presence of a chaperone or other health professional. A school of music or art may require that after-hours private lessons are conducted in semi-public locations. These rules would deprive a potential predator of the seclusion needed for the bad behavior. And the practitioner who is observed violating the rule would then be suspect and subject to further investigation and disciplinary action.

Here is a perhaps farfetched idea: a "behavior audit" that is periodically performed in settings where inappropriate covert behavior is possible. Here we might imagine a process in which a random set of people who might have been in a position to experience inappropriate behavior are periodically selected for interview. These individuals would then be interviewed with an eye to surfacing possible negative or harmful experiences that they have had. This process might be carried out for groups of patients, students, athletes, performers, or auditioners in the entertainment industry. And the goal would be to uncover traces of the kinds of behavior involving sexual harassment and assault that are at the heart of recent revelations in a myriad of industries and organizations. The results of such an audit would occasionally reveal a pattern of previously unknown behavior requiring additional investigation, while the more frequent results would be negative. This process would lead to a higher level of confidence that the organization has reasonably good knowledge of the frequency and scope of bad behavior and a better system for putting in place a plan of remediation.
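The sampling step of such a behavior audit is simple to mechanize. The sketch below is a hypothetical illustration, not a recommended implementation; the group labels, sample size, and seed are all assumptions for the example:

```python
# Hypothetical sketch of the "behavior audit" sampling step: periodically
# draw a random subset of the people who were in settings where covert
# misconduct is possible, for confidential follow-up interviews.
import random

def draw_audit_sample(population, sample_size, seed=None):
    """Return a random subset of `population` to be interviewed.

    A fixed `seed` makes the draw reproducible for audit records.
    """
    rng = random.Random(seed)
    if sample_size >= len(population):
        return list(population)
    return rng.sample(population, sample_size)

# e.g. everyone who had after-hours lessons or unchaperoned appointments
exposed_group = [f"person-{i}" for i in range(200)]
interviewees = draw_audit_sample(exposed_group, sample_size=20, seed=42)
print(interviewees[:5])
```

Random selection matters to the design: because no one knows in advance who will be interviewed, the audit has a deterrent effect even in the (typical) periods when every interview comes back negative.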

All of these organizational strategies serve fundamentally as attempts to solve principal-agent problems within the organization. The principals of the organization have expectations about the norms that ought to govern behavior within the organization. These mechanisms are intended to increase the likelihood that there is conformance between the principal's expectations and the agent's behavior. And, when they fail, several of these mechanisms are intended to make it more likely that bad behavior is identified and corrected.

(Here is an earlier post treating scientific misconduct as a principal-agent problem; link.)

Saturday, January 20, 2018

Actors in historical epochs

I've argued often for the idea that social science and historical explanations need to be "actor-centered" -- we need to ground our hypotheses about social and historical causation in theories of the pathways through which actors embody those causal processes. Actors in relation to each other constitute the "substrate" of social causation. Actors make up the microfoundations of social causes and processes. Actors constitute the causal necessity of social mechanisms.

In its abstract formulation this is little more than an expression of ontological individualism (link). But in application it represents a highly substantive research challenge. In order to provide concrete accounts of social processes in various cultural and historical settings, we need to have fairly specific theories of the actor in those settings (link): what motivates actors, what knowledge do they have of their environment, what cognitive and practical frameworks do they bring to their experiences of the world, what do they want, how do they reason, how do they relate to other actors, what norms and values are embedded in their action principles?

Rational choice theory and its cousins (desire-belief-opportunity theory, for example) provide what is intended to be a universal framework for understanding action. But as has been argued frequently here, these schemes are reductive and inadequate as a general basis for understanding action (link). It has also been argued here that the recent efforts to formulate a "new pragmatist" theory of the actor represent useful steps forward (link).

A very specific concern arises when we think carefully about the variety of actors found in diverse historical and cultural settings. It is obvious that actors in specific cultures have different belief systems and different cognitive frameworks; it is equally apparent that there are important and culture-specific differences across actors when it comes to normative and value commitments. So what is needed in order to investigate social causation in significantly different cultural and historical settings? Suppose we want to conduct research on social contention along the lines of work by Charles Tilly, with respect to communities with widely different cultural assumptions and frameworks. How should we attempt to understand basic elements of contention such as resistance, mobilization, and passivity if we accept the premise that French artisans in Paris in 1760, Vietnamese villagers in 1950, and Iranian professionals in 2018 have very substantial differences in their action principles and cognitive-practical frameworks?

There seem to be several different approaches we might take. One is to minimize the impact of cultural differences when it comes to material deprivation and oppression. Whatever else human actors want, they want material wellbeing and security. And when political or social conditions place great pressure on those goods, human actors will experience "grievance" and will have motives leading them to mobilize together in support of collective efforts to ameliorate the causes of those grievances.

Another possibility is to conclude that collective action and group behavior are substantially underdetermined by material factors, and that we should expect as much diversity in collective behavior as we observe in individual motivation and mental frameworks. So the study of contention is still about conflicts among individuals and groups; but the conflicts that motivate individuals to collective action may be ideological, religious, culinary, symbolic, moral -- or material. Moreover, differences in the ways that actors frame their understandings of their situation may lead to very different patterns of the dynamics of contention -- the outbreak and pace of mobilization, the resolution of conflict, the possibility of compromise.

Putting the point in terms of models and simulations, we might think of the actors as bundles of cognitive and practical processing algorithms who decide what to do based on their beliefs and these decision rules. It seems unavoidable that tweaking the parameters of the algorithms and beliefs will lead to very different patterns of behavior within the simulation. Putting the point the other way around, the successful mobilization of Vietnamese peasants in resistance to the French and the US depended on a particular setting of the cognitive-practical variables of these individual actors. Change those settings and, perhaps, you change the dynamics of the process and you change history.
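A toy simulation makes the sensitivity claim vivid. The sketch below borrows a Granovetter-style threshold model of mobilization rather than any specific model from the literature discussed here; the threshold distribution and all parameter values are illustrative assumptions:

```python
# Toy agent-based sketch: actors are simple belief + decision-rule
# bundles, and small changes to those settings produce very different
# collective outcomes. Each agent joins a protest once the fraction
# already mobilized exceeds its personal threshold (a Granovetter-style
# threshold model; all parameters are illustrative assumptions).
import random

def run_mobilization(n_agents, threshold_mean, rounds=50, seed=0):
    """Return the final mobilized fraction after synchronous updating."""
    rng = random.Random(seed)
    thresholds = [max(0.0, rng.gauss(threshold_mean, 0.1))
                  for _ in range(n_agents)]
    # agents whose threshold is zero mobilize unconditionally at the start
    mobilized = [t <= 0.0 for t in thresholds]
    for _ in range(rounds):
        frac = sum(mobilized) / n_agents
        mobilized = [m or t <= frac for m, t in zip(mobilized, thresholds)]
    return sum(mobilized) / n_agents

low = run_mobilization(500, threshold_mean=0.15)   # easily moved actors
high = run_mobilization(500, threshold_mean=0.45)  # reluctant actors
print(f"mobilized (low thresholds):  {low:.2f}")
print(f"mobilized (high thresholds): {high:.2f}")
```

Shifting the mean threshold by a few tenths flips the outcome from a full cascade to near-total quiescence, which is the "change the settings, change history" point in miniature.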

*         *         *

Clifford Geertz is one of the people who has taken a fairly radical view on the topic of the constituents of the actor. In "Person, Time, and Conduct in Bali" in The Interpretation Of Cultures he argues that Balinese culture conceives of the individual person in radically unfamiliar ways:
One of these pervasive orientational necessities is surely the characterization of individual human beings. Peoples everywhere have developed symbolic structures in terms of which persons are perceived not baldly as such, as mere unadorned members of the human race, but as representatives of certain distinct categories of persons, specific sorts of individuals. In any given case, there are inevitably a plurality of such structures. Some, for example kinship terminologies, are ego-centered: that is, they denote the status of an individual in terms of his relationship to a specific social actor. Others are centered on one or another subsystem or aspect of society and are invariant with respect to the perspectives of individual actors: noble ranks, age-group statuses, occupational categories. Some -- personal names and sobriquets -- are informal and particularizing; others -- bureaucratic titles and caste designations -- are formal and standardizing. The everyday world in which the members of any community move, their taken-for-granted field of social action, is populated not by anonymous, faceless men without qualities, but by somebodies, concrete classes of determinate persons positively characterized and appropriately labeled. And the symbol systems which denote these classes are not given in the nature of things -- they are historically constructed, socially maintained, and individually applied. (363-364)
In Bali, there are six sorts of labels which one person can apply to another in order to identify him as a unique individual and which I want to consider against this general conceptual background: (1) personal names; (2) birth order names; (3) kinship terms; (4) teknonyms; (5) status titles (usually called "caste names" in the literature on Bali); and (6) public titles, by which I mean quasi-occupational titles borne by chiefs, rulers, priests, and gods. These various labels are not, in most cases, employed simultaneously, but alternatively, depending upon the situation and sometimes the individual. They are not, also, all the sorts of such labels ever used; but they are the only ones which are generally recognized and regularly applied. And as each sort consists not of a mere collection of useful tags but of a distinct and bounded terminological system, I shall refer to them as "symbolic orders of person-definition" and consider them first serially, only later as a more or less coherent cluster. (368)
Also outstanding in this field is Robert Darnton's effort to reconstruct the forms of agency underlying the "great cat massacre" in The Great Cat Massacre: And Other Episodes in French Cultural History; link.

Monday, January 15, 2018

The second American revolution

The first American Revolution broke the bonds of control exercised by a colonial power over the actions and aspirations of a relatively small number of people in North America in 1776 -- about 2.5 million people. The second American Revolution promises to affect vastly larger numbers of Americans and their freedom, and it is not yet complete. (There were about 19 million African-Americans in the United States in 1960.)

This is the Civil Rights revolution, which has been underway since 1865 (the end of the Civil War); which took increased urgency in the 1930s through the 1950s (the period of Jim Crow laws and a coercive, violent form of white supremacy); and which came to fruition in the 1960s with collective action by thousands of ordinary people and the courageous, wise leadership of men and women like Dr. Martin Luther King, Jr. When we celebrate the life and legacy of MLK, it is this second American revolution that is the most important piece of his legacy.

And this is indeed a revolution. It requires a sustained and vigilant struggle against a powerful status quo; it requires gaining political power and exercising political power; and it promises to enhance the lives, dignity, and freedoms of millions of Americans.

This revolution is not complete. The assault on voting rights that we have seen in the past decade, the persistent gaps that exist in income, health, and education between white Americans and black Americans, the ever-more-blatant expressions of racist ideas at the highest level -- all these unmistakeable social facts establish that the struggle for racial equality is not finished.

Dr. King's genius was his understanding from early in his vocation that change would require courage and sacrifice, and that it would also require great political wisdom. It was Dr. King's genius to realize that enduring social change requires changing the way that people think; it requires moral change as well as structural change. This is why Dr. King's profoundly persuasive rhetoric was so important; he was able to express through his speeches and his teaching a set of moral values that almost all Americans could embrace. And by embracing these values they themselves changed.

The struggle in South Africa against apartheid combined both aspects of this story -- anti-colonialism and anti-racism. The American civil rights movement focused on uprooting the system of racial oppression and discrimination this country had created since Reconstruction. It focused on creating the space necessary for African-American men and women, boys and girls, to engage in their own struggles for freedom and for personal growth. It insisted upon the same opportunities for black children that were enjoyed by the children of the majority population.

Will the values of racial equality and opportunity prevail? Will American democracy finally embrace and make real the values of equality, dignity, and opportunity that Dr. King expressed so eloquently? Will the second American revolution finally erase the institutions and behaviors of several centuries of oppression?

Dr. King had a fundamental optimism that was grounded in his faith: "the arc of the moral universe is long, but it bends toward justice." But of course we understand that only long, sustained commitment to justice can bring about this arc of change. And the forces of reaction are particularly strong in the current epoch of political struggle. So it will require the courage and persistence of millions of Americans committed to these ideals if racial justice is finally to prevail.

Here is an impromptu example of King's passionate commitment to social change through non-violence. This was recorded in Yazoo City, Mississippi in 1966, during James Meredith's March against Fear.

Populism's base

Steve Bannon may have lost his perch in the White House and at Breitbart, but the themes of white supremacy, intolerance, bigotry, and anti-government extremism that drive radical nationalist populism survive his fall. In The New Minority: White Working Class Politics in an Age of Immigration and Inequality, Justin Gest attempts to explain how this movement has been able to draw support from white working class men and women -- often in support of policies that are objectively harmful to them. Here is how he describes his central concern:
In this book, I suggest that these trends [towards polarization] intensify an underlying demographic phenomenon: the communities of white working class people who once occupied the political middle have decreased in size and moved to the fringes, and American and European societies are scrambling to recalibrate how they might rebuild the centrist coalitions that engender progress.
The book makes use of both ethnographic and survey research to attempt to understand the political psychology of these populations of men and women in Western Europe and the United States -- low-skilled workers with limited education beyond secondary school, and with shrinking opportunities in the economies of the 2000s.

A particularly interesting feature of the book is the ethnographic attempt Gest makes to understand the mechanisms and content of this migration of political identity. Gest conducted open-ended interviews with working class men and women in East London and Youngstown, Ohio in the United States -- both cities that were devastated by the loss of industrial jobs and the weakening of the social safety net in the 1970s and 1980s. He calls these "post-traumatic cities" (7). He addresses the fact that white working class people in those cities and elsewhere now portray themselves as a disadvantaged minority.
There and elsewhere, the white working class populations I consider are consumed by a nostalgia that expresses bitter resentment toward the big companies that abandoned their city, a government that did little to stop them from leaving, and a growing share of visible minorities who are altering their neighborhoods’ complexion. (10)
The political psychology of resentment plays a large role in the populations he studies -- resentment of a government that fails to deliver, resentment of immigrants, resentment of affirmative action for racial minorities. The other large idea that Gest turns to is marginality -- the sense among these groups that their voices will not be heard and that the powerful agents in society do not care about their fates.
Rather, this is to say that—across the postindustrial regions of Western Europe and North America—white working class people sense that they have been demoted from the center of their country’s consciousness to its fringe. And many feel powerless in their attempts to do something about it. (15)
And resentment and marginality lead some individuals to a political stance of resistance:
Unimpressed with Labour’s priorities, profoundly distrustful of government, and unwilling to join forces with working class immigrants, Barking and Dagenham’s working class whites are now engaged in a largely unstructured, alternative form of minority politics. They tend to be focused on local affairs, fighting for scarce public resources and wary of institutionalized discrimination against them. The difficulty has been having their claims heard, and taken seriously. (71)
The resentments and expressions of marginality in Youngstown are similar, with an added measure of mistrust of large corporations like the steel companies that abandoned the city and a recognition of the pervasive corruption that permeates the city. Here is Evelyn on the everyday realities of political corruption in Youngstown:
The more I saw, the more I realized that money can buy your way out of anything. Then you see your sheriff get indicted, your congressman dishonored, our prosecutor in prison, and a mayoral nominee with a cloud over his head. The Valley has been embroiled in political corruption for a long time, and people just look out for themselves. It makes you sick. You don’t see it firsthand, the corruption, but you know it’s there. (128)
The overriding impression gained from these interviews and Gest's narrative is one of hopelessness. These men and women of Youngstown don't seem to see any way out for themselves or their children. The pathway of upward mobility through post-secondary education does not come up at all in these conversations. And, as Case and Deaton argue from US mortality statistics (link), social despair is associated with life-ending behaviors such as opioid abuse, alcohol abuse, and suicide.

Gest's book lays the ground for thinking about a post-traumatic democratic politics -- a politics that is capable of drawing together the segments of American or British society who genuinely need progressive change and more egalitarian policies if they are to benefit from economic progress in the future. But given the cultural and political realities that Gest identifies among this "new minority", it is hard to avoid the conclusion that crafting such a political platform will be challenging.

Monday, January 8, 2018

Trust and organizational effectiveness

It is fairly well agreed that organizations require a degree of trust among the participants in order for the organization to function at all. But what does this mean? How much trust is needed? How is trust cultivated among participants? And what are the mechanisms through which trust enhances organizational effectiveness?

The minimal requirements of cooperation presuppose a certain level of trust. As A plans and undertakes a sequence of actions designed to bring about Y, his or her efforts must rely upon the coordination promised by other actors. If A does not have a sufficiently high level of confidence in B's assurances and compliance, then he will be rationally compelled to choose another series of actions. If Larry Bird hadn't trusted his teammate Dennis Johnson, the famous steal would not have happened.

First, what do we mean by trust in the current context? Each actor in an organization or group has intentions, engages in behavior, and communicates with other actors. Part of communication is often in the form of sharing information and agreeing upon a plan of coordinated action. Agreeing upon a plan in turn often requires statements and commitments from various actors about the future actions they will take. Trust is the circumstance that permits others to rely upon those statements and commitments. We might say, then, that A trusts B just in case --
  • A believes that when B asserts P, this is an honest expression of B's beliefs.
  • A believes that when B says he/she will do X, this is an honest commitment on B's part and B will carry it out (absent extraordinary reasons to the contrary).
  • A believes that when B asserts that his/her actions will be guided by his/her best understanding of the purposes and goals of the organization, this is a truthful expression.
  • A believes that B's future actions, observed and unobserved, will be consistent with his/her avowals of intentions, values, and commitments.
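The four conditions above are conjunctive: A trusts B only if all of them hold. A toy formalization (my illustration, not from the post) makes that structure explicit -- the field names below are hypothetical labels for the four bullet points:

```python
from dataclasses import dataclass

@dataclass
class BeliefsAboutB:
    """A's beliefs about B, one flag per condition in the definition."""
    honest_assertions: bool      # B's statements express B's actual beliefs
    keeps_commitments: bool      # B's stated commitments will be carried out
    guided_by_org_goals: bool    # B's avowed orientation to org purposes is truthful
    consistent_when_unobserved: bool  # B behaves consistently even when not watched

def trusts(beliefs: BeliefsAboutB) -> bool:
    """A trusts B just in case all four conditions are believed to hold."""
    return (beliefs.honest_assertions
            and beliefs.keeps_commitments
            and beliefs.guided_by_org_goals
            and beliefs.consistent_when_unobserved)

# A single failed condition -- e.g. doubting B's unobserved conduct -- defeats trust.
print(trusts(BeliefsAboutB(True, True, True, True)))   # True
print(trusts(BeliefsAboutB(True, True, True, False)))  # False
```

The point of the sketch is simply that trust, on this definition, is fragile: doubt about any one dimension of B's conduct is enough to undermine it.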
So what are some reasons why mistrust might rear its ugly head between actors in an organization? Why might A fail to trust B?
  • A may believe that B's private interests are driving B's actions (rather than adherence to prior commitments and values).
  • A may believe that B suffers from weakness of the will, an inability to carry out his honest intentions.
  • A may believe that B manipulates his statements of fact to suit his private interests.
  • Or less dramatically: A may not have high confidence in these features of B's behavior.
  • B may have no real interest in, or intention of, behaving in a truthful way.
And what features of organizational life and practice might be expected to either enhance inter-personal trust or to undermine it?

Trust is enhanced by individuals having the opportunity to get acquainted with their collaborators in a more personal way -- to see from non-organizational contexts that they are generally well intentioned; that they make serious efforts to live up to their stated intentions and commitments; and that they are generally honest. So perhaps there is a rationale for the bonding exercises that many companies undertake for their workers.

Likewise, trust is enhanced by the presence of a shared and practiced commitment to the value of trustworthiness. An organization itself can enhance trust in its participants by performing the actions that its participants expect the organization to perform. For example, an organization that abruptly and without consultation ends an important employee benefit undermines employees' trust that the organization has their best interests at heart. This abrogation of prior obligations may in turn lead individuals to behave in a less trustworthy way, and lead others to have lower levels of trust in each other.

How does enhancing trust have the promise of bringing about higher levels of organizational effectiveness? Fundamentally this comes down to the question of the value of teamwork and the burden of unnecessary transaction costs. If every expense report requires investigation, the amount of resources spent on accountants will be much greater than in a situation where only the outlying reports are questioned. If each vice president needs to defend himself or herself against the possibility that another vice president is conspiring against him or her, then less time and energy are available to do the work of the organization. If the CEO doesn't have high confidence that her executive team will work wholeheartedly to bring about a successful implementation of a risky investment, then the CEO will choose less risky investments.
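The expense-report example can be put in rough arithmetic terms. The sketch below is a toy model of my own (the numbers are illustrative assumptions, not drawn from the post): a low-trust organization audits every report, while a high-trust one investigates only the small fraction that look anomalous.

```python
def audit_cost(n_reports: int, cost_per_audit: float, audit_fraction: float) -> float:
    """Total cost of reviewing a given fraction of expense reports."""
    return n_reports * audit_fraction * cost_per_audit

# Low trust: every one of 10,000 reports is investigated.
low_trust = audit_cost(n_reports=10_000, cost_per_audit=50.0, audit_fraction=1.0)

# High trust: only the ~5% of outlying reports are questioned.
high_trust = audit_cost(n_reports=10_000, cost_per_audit=50.0, audit_fraction=0.05)

print(low_trust)   # 500000.0
print(high_trust)  # 25000.0
```

Under these assumed figures the low-trust regime spends twenty times as much on monitoring; the difference is pure transaction cost, resources diverted from the actual work of the organization.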

In other words, trust is crucial for collaboration and teamwork. And an organization that manages to help to cultivate a high level of trust among its participants is likely to perform better than one that depends primarily on supervision and enforcement.

(See Fergus Lyon, Handbook of Research Methods on Trust: Second Edition (Handbooks of Research Methods in Management series) for recent empirical work on trust.)