The Impact Edit: When Harm Looks Reasonable

How automated systems in buildings make consequential decisions about people while accountability disappears across the chain

In the 1930s, a federal agency, the Home Owners' Loan Corporation, produced a set of color-coded maps that sorted American neighborhoods into investment grades. Green meant safe. Red meant hazardous. The practice, which came to be known as redlining, looked systematic and objective from the outside; its outputs were widely adopted, and its logic moved through the financial system efficiently.

So efficiently, in fact, that no single actor ever had to reckon with what the aggregate of those decisions was producing. Mortgage lenders applied the grades. Underwriters executed against them. Developers followed the capital. Each step was locally reasonable. The cumulative outcome was the organized disinvestment of entire communities across generations, and accountability dissolved at every point where one actor handed the logic to the next.

The tools are different now

Across the built environment, automated systems are increasingly positioned at the exact decision points that determine how people experience a building, whether they can enter it, what it costs them to stay, and how they are monitored while they are there.

  • Tenant screening platforms can rank applicants before a human reviews the file.

  • Dynamic pricing engines can adjust renewal rates on signals that no lease officer has reviewed.

  • Maintenance triage tools can decide whose request gets attended to first.

  • Security systems can flag behavior as anomalous and escalate it on their own.

These could be classified as efficiency tools, but each one makes consequential decisions about specific people, at scale, and the chain of responsibility for those decisions is at least as fragmented as it was when one agency drew a map and handed it to a thousand lenders.

When harm belongs to everyone and no one

There is a concept in law called diffuse causation that I keep returning to. It describes situations where harm is real and traceable in aggregate but impossible to assign to any single decision or actor. It is why it took decades to litigate successfully against tobacco companies, against lead paint manufacturers, against the institutions that packaged mortgage products they knew were unsound.

Redlining was diffuse causation at urban scale. Gerrymandering is diffuse causation at democratic scale. In both cases, each actor in the chain made a locally reasonable decision, and the harm accumulated in the spaces between them, real and measurable but belonging, formally, to no one.

A decision system nobody can fully explain

What strikes me in almost every conversation is not that people are unaware these systems exist. It is that almost nobody has been given the conditions to know what the system is actually optimizing for, or what would happen if its outcomes were challenged.

The individual responsible for monitoring it has rarely been named, because the question was never built into the procurement process. Accountability is distributed across the value chain exactly as it has always been in the built environment: not through negligence, but through a set of organizational habits that predate the technology and were never designed to absorb it. Except now the decisions are faster, the logic is opaque, and the people affected by them have even less visibility than before.

The question behind Liveable’s next brief

Liveable's next brief lands in a couple of weeks, and it looks at this question from every angle of the value chain: who is making these decisions, who carries the exposure when something goes wrong, and what responsible governance actually looks like as a practical standard rather than an aspiration. But before it does, I want to leave the central question open, because I think it deserves more than a brief or a report can give it on its own.

When an automated system makes a consequential decision about a person inside a building, and that decision causes harm, who is responsible?

Not legally, necessarily, though that question is arriving faster than most of us realize. But institutionally. Culturally. In the room where the system was selected, and in the board meeting where the outcomes will eventually need to be explained.

We have built an entire industry around proving that buildings are good for people. Certifications, ratings, frameworks, reports, and a growing vocabulary of human impact that moves through specifications, procurement decks, and investor presentations with remarkable confidence. (I know because I helped build some of it.)

What we have not built is any serious infrastructure for what happens when the building decides something about you.

More on this soon,

Gayathri


The Conversation Continues...

This post is part of our ongoing exploration into how automated systems and the concept of "diffuse causation" are quietly reshaping accountability in the built environment. As problem-solvers, we believe the best insights emerge when diverse perspectives meet. Have you encountered similar challenges or discovered different approaches? Share your story.

Connect with us as we continue to prototype, test, and learn: 

Subscribe to our newsletter 

Join us on LinkedIn

Explore our resources

We acknowledge that social sustainability is always a work in progress. These insights represent our current understanding, shaped by our partners, communities, and continuous learning.
