Distributed moral responsibility: locating accountability in complex systems

When the screws come loose

The Francis Scott Key Bridge in Baltimore collapses in March 2024, causing the deaths of six construction workers, closing a key shipping port, and disrupting the regional economy on multiple levels. In January, a door plug blows off a plane shortly after takeoff, leading to the temporary grounding of Boeing 737 MAX 9 planes, a cascade of investigations and lawsuits, and a decent SNL skit that I’m sure Alaska Airlines would sooner forget.

Image: National Transportation Safety Board investigation of Alaska Airlines Flight 1282, a Boeing 737-9 MAX, in Portland, Oregon (5 January 2024).

These events captured our collective attention, disbelief, and outrage—and rightly so. They are visible and visceral examples of complex technical systems failing, leading to distress, injury, and even death. Catastrophes like these are also followed by confusion and murkiness about who or what is at fault: whom can we blame, and then hold accountable, to make sure it doesn’t happen again? Locating responsibility is not always easy, whether in the legal, reputational, monetary, or moral sense. While there are lawsuits and the court of public opinion to sort out several aspects of responsibility, establishing moral responsibility can be particularly tricky. Here we can turn to moral philosophy and ethical frameworks to unpack who is responsible, and ideally point to some preventive solutions that go beyond who pays for damages, which CEO is fired, or which company goes bankrupt.

Distributed systems

Planes and bridges are (sometimes literally) concrete examples of complex, distributed systems. They involve multiple layers of manufacturers, contractors, operators, regulators, and customers, all interacting to produce and/or use physical infrastructure that gets people and products from point A to point B. More abstract distributed systems are also quite salient in our day-to-day lives. Social media platforms are one example: companies like Facebook or Twitter/X develop a technical platform that runs on hardware made by other tech companies (e.g., smartphones, tablets, computers), is made available through another technical vendor (browsers or app stores), and is serviced by different carriers (e.g., Verizon, T-Mobile, etc.). Users commingle to make and consume content, and advertising dollars provide the financial engine underneath it all. Social media companies rather famously don’t want their platforms to be considered “publishers,” due to the extra liability and responsibility that would bring, in a pretty classic example of “don’t shoot the messenger” mentality.

A distributed system I’ve previously studied in depth is that of direct-to-consumer genetic testing (DTC-GT) companies, such as 23andMe and Ancestry DNA, and third-party interpretation tools. DTC-GT companies are the ones who receive spit samples from customers, analyze subsets of their DNA (directly or via a contracting laboratory), and return interpreted reports that can run the gamut from health risks to genetic ancestry inference and relative finding. Third-party tools, on the other hand, take a file of DNA variants generated by the DTC-GT company and provided to the consumer, who then uploads the file for additional analyses or interpretation. In July 2018, the New York Times broke a story in which a customer received false information that they were at increased risk for colon cancer (Lynch syndrome) after getting a 23andMe test and running their data through one of the first third-party tools created, Promethease. While it was actually 23andMe that had incorrectly measured the customer’s DNA at the genetic variant in question, Promethease was instead widely blamed for providing an incorrect interpretation. Notably, the critique of Promethease is not totally unwarranted, as the tool links users’ genetic information to a wide range of putative research results, rather than confirmed, clinical-grade findings. Through research interviews with developers of third-party tools, I’ve come to see them as similar to social media companies in that they don’t want to be blamed or held accountable for the information they serve up – they’re “platforms, not publishers,” in some cases simply “bridging to the literature.”

Moral responsibility

Distributed systems, whether physical, abstract, or some combination of the two, seem inescapable. This brings us back to the question of how to determine who or what is responsible when the system fails. I recently came across the term “distributed moral responsibility” (DMR) in a philosophy article by Luciano Floridi at the Oxford Internet Institute. DMR is about locating responsibility for “distributed moral actions,” which are the end product of a series of actions that are each individually morally neutral, i.e., not good or bad when taken alone. Rather, it is the sum of the parts that leads to a “morally loaded” outcome, and thus the need to dissect the chain of events and actions that led to it. Floridi notes that one of the challenges of DMR is that classical ethics often focuses on the intention of an actor; we see that in law as well, where establishing intent is key for the prosecution. In a distributed moral action, by contrast, each actor along the chain of events might not even be aware of contributing to the ultimate outcome, much less intend to do so.

Without intention or awareness, how do we then assign responsibility? Floridi argues that “…a successful strategy to tackle the problem of DMR is to formulate a mechanism that, by default, back propagates all the responsibility for the good or evil caused by a whole causally relevant network to each agent in it, independently of the degrees of intentionality, informed-ness and risk aversion of such agents.” In other words, everyone in the distributed system is held responsible for the outcome. In fact, they are “equally and maximally” responsible, rather than subject to some attempt to tease out who might be a little more or less so. The point of this approach, according to Floridi, is that “if all the agents know that they will all be responsible for [the outcome], it is more likely that [the outcome] may not occur, as they may restrain themselves and each other,” i.e., through social pressure.
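To make the mechanism concrete, here is a minimal sketch, in Python, of what “back propagating” equal and maximal responsibility might look like. The causal network, agent names, and function are hypothetical illustrations of my own, not anything specified in Floridi’s paper:

```python
# A minimal sketch of Floridi's "back propagation" of responsibility.
# The network structure and agent names below are hypothetical
# illustrations, not taken from Floridi's paper.

def back_propagate_responsibility(causal_network: dict[str, set[str]],
                                  outcome: str) -> dict[str, float]:
    """Assign full (1.0) responsibility for `outcome` to every agent in
    its causally relevant network, regardless of intent or awareness."""
    # Walk backwards from the outcome to find every contributing agent.
    responsible: set[str] = set()
    frontier = [outcome]
    while frontier:
        node = frontier.pop()
        for agent in causal_network.get(node, set()):
            if agent not in responsible:
                responsible.add(agent)
                frontier.append(agent)
    # "Equally and maximally": no discounting by intention or degree.
    return {agent: 1.0 for agent in responsible}

# Hypothetical example: who contributed, directly or indirectly?
network = {
    "bad_outcome": {"operator", "manufacturer"},
    "manufacturer": {"subcontractor", "regulator"},
}
print(back_propagate_responsibility(network, "bad_outcome"))
# Every agent found gets 1.0, even the indirect contributors.
```

Note that nothing in the assignment step consults intention or awareness: membership in the causally relevant network is the only thing that matters.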

Floridi addresses some anticipated objections to his argument: namely, that it is unfair to assign responsibility without intention, that it is unrealistic compared to an approach that simply punishes system leaders, and that it will promote too much risk aversion. My main questions about how this approach would work in practice are (1) how to draw the boundaries of who or what is part of the system, and thus responsible, and, relatedly, (2) how those inside the boundary will know it. Systems operate in the messiness of the real world, and there might not be a bright line between who’s in and who’s out when it comes time to cast the net of responsibility. Instead, it’s more like dropping a rock in a pond: a series of concentric circles ripples outward, stark at first and then decreasingly perceptible the further you get from the center. But when do you truly get back to flat, unperturbed water?
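To see why the boundary question bites, here is a contrasting toy sketch of the “ripple” picture (again my own illustration, not Floridi’s): responsibility decays with causal distance from the failure, and an arbitrary cutoff decides who counts as being in the pond at all. Every number here is an assumption, which is exactly the problem.

```python
# A sketch of the "rock in a pond" worry: a graded alternative where
# responsibility decays with causal distance from the outcome. The
# decay rate and cutoff are arbitrary assumptions of mine, which is
# precisely the boundary-drawing problem discussed above.

def rippled_responsibility(distances: dict[str, int],
                           decay: float = 0.5,
                           cutoff: float = 0.1) -> dict[str, float]:
    """Responsibility = decay ** distance; agents whose share falls
    below `cutoff` are outside the net. Where to set `cutoff` (i.e.,
    when the water counts as flat again) is the open question."""
    return {agent: decay ** d
            for agent, d in distances.items()
            if decay ** d >= cutoff}

# Hypothetical causal distances (hops from the failure event).
distances = {"operator": 1, "manufacturer": 1, "subcontractor": 2,
             "parts_supplier": 3, "industry_lobbyist": 5}
print(rippled_responsibility(distances))
# operator/manufacturer get 0.5, subcontractor 0.25, parts_supplier
# 0.125; industry_lobbyist (0.03125) falls below the cutoff and is "out".
```

Floridi’s flat assignment sidesteps the arbitrariness of the decay and cutoff, but only by pushing the same question into deciding which agents belong to the causally relevant network in the first place.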

Due to that lack of clarity, agents in the system might not know they’re in it. It might then only be after a distributed moral action occurs/“blows up,” and the responsibility is “back propagated,” that the participants become clear. So I’m not sure this could be a fully preventive (vs. reactive) approach. That said, I like the collectivist aim of pulling together to create good outcomes, rather than pointing fingers at a few individuals after bad ones.

In the meantime, we’ll continue to ride in airplanes and nervously joke about sitting in the aisle seat rather than the window seat.
