The Architecture of Unaccountability: How Impersonal Rules and Computer Algorithms Trigger the Agentic Shift


Imagine sitting at your desk on a random Tuesday, working, when a digital form appears on your screen. It asks you to click a button to deny someone’s health insurance claim flagged by the system as not meeting the expected standard. You know they desperately need it and that the system is not entirely fair. It’s biased towards the company’s income preservation, of course. What would you do?

Would you click the button and feel totally in control?

Would you think you’re just doing your job?

And most importantly, whose decision would it be? Yours? The system’s? Would the internal company procedures be responsible? Or maybe the people who created the procedures (and the system)? 

If this “button clicking to do harm” rings the same bell as the infamous obedience and authority studies, you are right to spot the parallel. That is the baseline. But nowadays we hardly ever experience someone in a lab coat giving orders, yet the mechanism is still in place. So what happened? Can authority be distributed, systemic, artificial, and no longer dressed in a uniform?

What if I told you that the moment you read the words ‘company policy’, your brain undergoes a physical neurological shift that’s basically designed to strip away your sense of free will, leaving you convinced that you didn’t really make the choice at all?

If you’re reading this and thinking, ‘not me; I’m an independent thinker. If my work required me to do something unethical, I’d just walk away,’ you are not alone. We all think it and really enjoy thinking that we are not only moral but also very agentic. 

We walk around believing we are the undisputed authors of our actions. But I want to show that, just as the human brain actively protects itself from the moral weight of old-Yale-school obedience by rewriting reality when placed in a hierarchy, it protects itself in the same way when it meets impersonal rules and decisions made by computer systems.

The survival of the obedient

The first thing to do, though, is to bust a myth: the assumption that obedience is just a character flaw. When we see someone blindly following a destructive order, our cultural narrative tells us that person is weak or cowardly or inherently malicious. But the anthropological and biological data point in a completely different direction.

Obedience is actually an evolutionary adaptation. It is a survival mechanism that is completely hardwired into our species. And what is even weirder is that it’s a feature – not a bug. 

To properly understand it, we have to look at the energy economics of a social group in the wild. If every single member of a group believed they were completely autonomous and had an equal right to a rare piece of food or the best sleeping spot, we would experience constant fighting and chaos. Such a state would make the whole group susceptible to external threats; the more energy spent on internal fights, the more vulnerable the group would be on the outside. Not to mention that by fighting, members would also risk injury, which does not improve survival rates either.

So nature had to invent a mechanism to allow groups to stay together without killing each other over every minor dispute, and that mechanism is the social hierarchy and ranking to achieve group cohesion. (Sidanius & Pratto, 2012)

In groups, we naturally self-organise into ranks, and submissive behaviour is essentially a social defence shield signalling that rank. By displaying it, one could save oneself harm and still obtain some resources within the group. So this is a win, even if a minor one, from the survival perspective.

Over millennia, we developed a lot of related behaviours. These are specific physical gestures of submission that basically act as an off-ramp for aggression. It’s a topic that deserves a separate article (which may happen since it aligns nicely with the work that I do). Suffice to say that we evolved to recognise status signals and instinctively submit to them back when we still lived in small bands. Unfortunately, this instinct has not undergone any massive shift in the transition from hunter-gatherers to massive civilisations. It still exists as a cognitive shortcut, the well-known authority heuristic. It serves two basic functions – it saves us from harm and it preserves our cognitive bandwidth.

The moral GPS

We instinctively defer many decisions to the expertise of authority – or those posing as authority; think of celebrities and influencers or anyone and anything posing as knowledgeable. The signals may include evangelising others on a specific aspect of life, acting as if offering deep insight, setting up rules, and giving orders. We instinctively deem an authority legitimate if we perceive a person or – which is important here – an institution as wiser, more capable, or just possessing more vital information than we do. (Koski et al., 2015) The most crucial element here is that we judge those figures as competent, even when there is no true competence, only perceived competence. Because of that perceived status, we hand them over the right to make decisions for us. Almost as if we turned off our own internal compass and relied on the GPS signal they provide. 

Do you often follow the GPS in your car? I guess you do. It saves the mental effort of navigating. You just trust the satellite knows better than you do. But, and we all know this, sometimes the GPS drives us straight into a lake, and unless we pay attention to the decisions made by the machine, we may only realise it when our tyres hit the water.

The mere fact that we outsource our decisions using all kinds of heuristics (or outsource them to systems, like GPS) is not a problem on its own. However, it becomes one when we offshore morality.

In a real-life scenario, if you stop looking at the road signs and look only at the screen, you may end up in the middle of a cornfield. But in a social hierarchy and moral decision-making, if you stop looking at the moral road signs and look only at the authority figure, you may end up harming someone while thinking it is not your doing at all (just as you probably do not take responsibility for a faulty GPS route).

The question that appears here is this: if this authority heuristic is an evolutionary shortcut built into our biology for survival, does that mean giving up our agency is entirely involuntary? What happens in our brains when we encounter an authority?  

To answer that, let’s examine the neural mechanics of what Stanley Milgram called the agentic shift.

The time machine

Milgram proposed that humans operate in two distinct psychological states. The first is the autonomous state. This is your default mode. But when you enter a hierarchical structure and accept the authority of someone above you, you undergo an agentic shift and view yourself merely as an agent, an instrument. You become a conduit for carrying out another person’s orders.

Your internal moral compass basically gets muted, and your primary focus becomes the technical execution of the task the authority assigned to you. It sounds pretty Orwell-ish, to be honest… 

Interestingly, this is where modern neuroscience comes into play. Specifically, the research led by Emilie Caspar and later replicated by Nil Akyüz measures the ‘sense of agency’ by tracking how the brain perceives the flow of time. (Akyüz et al., 2024; Caspar et al., 2016)

The brain utilises a mechanism called intentional binding. In every action-reaction scenario, there is a fraction of time required for the action to create the reaction. When we are convinced that the action happens as a result of our own free will, our brain binds the action and the outcome closer together in our subjective experience of time. We literally experience the gap as shorter than it is.

This temporal binding is the implicit pre-reflective marker of our sense of agency. It is our brains’ invisible signature communicating, ‘I am the one who caused that to happen,’ and saving this data in memory.

In the above-mentioned studies, when participants were coerced into taking an action, that intentional binding weakened significantly. The perceived time between their actions and the reactions increased, even though the physical delay was exactly the same as when they acted freely. The coerced brain experienced a longer gap. The coerced action felt less connected to the outcome, more like a passive event than a voluntary one. The brain literally altered the subject’s perception of reality to distance them from the consequences of the act.
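To make the measurement concrete, here is a minimal toy sketch (in Python) of the logic of such binding paradigms: participants estimate the action-to-tone interval, and binding is the amount by which the perceived interval shrinks. The delay, compression magnitudes, and trial counts below are illustrative assumptions, not data from Caspar’s or Akyüz’s experiments.

```python
import random

random.seed(1)

ACTUAL_DELAY_MS = 250  # fixed action-to-tone interval used in binding paradigms


def estimate_interval(condition, n=40):
    """Simulate interval estimates: free-choice trials are perceived as
    compressed (strong binding), coerced trials much less so.
    The compression values are illustrative, not the studies' data."""
    compression = 60 if condition == "free" else 15  # ms of perceived shrinkage
    return [ACTUAL_DELAY_MS - compression + random.gauss(0, 10) for _ in range(n)]


def mean_binding(estimates):
    """Binding = actual delay minus mean perceived delay (bigger = stronger agency)."""
    return ACTUAL_DELAY_MS - sum(estimates) / len(estimates)


free = mean_binding(estimate_interval("free"))
coerced = mean_binding(estimate_interval("coerced"))
print(f"free-choice binding: {free:.0f} ms")
print(f"coerced binding:     {coerced:.0f} ms")
```

The point of the sketch is only the direction of the effect: under coercion the perceived gap widens, so the binding score drops.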

This means that our brains change the perception of time at the millisecond level before conscious thought even happens to strip away our feeling of responsibility. When the external GPS is turned on, we disengage from the actions we take. 

The researchers make a very strict distinction here between the feeling of agency and the judgement of agency. The judgement of agency is the conscious, explicit rationalisation which happens after the fact in the prefrontal cortex, when we explain what we did and why post hoc.

But the feeling of agency, the temporal binding, is implicit. It happens deep in the sensorimotor network. We don’t decide to alter our time perception, but our biology does it for us as an automatic defence mechanism against the massive cognitive dissonance and emotional burden of potentially inflicted harm.

The majority of us do not inherently want to hurt people, and when we are forced to do it, our brains register the moral conflict: obeying the authority to survive in the hierarchy means taking an action that violates our internal morals. To resolve that unbearable tension, our brains sever the neurological link between the proverbial finger on the button and the electric shock that hits the victim. It is as if the mechanism, on its own, suggested that we are merely a biological extension of the commander.

What is fascinating here is that those who give the morally vague orders also do not feel the responsibility. Caspar’s study included a condition to test it. The researchers measured the commander’s temporal binding over the ultimate outcome. As you can likely guess, the commander’s sense of agency dropped significantly compared to when they executed the action themselves. As a result, nobody feels responsible.

It’s the ultimate architecture of unaccountability.

The person giving orders, creating a procedure, defining the impersonal rules, or creating an unfair algorithm basically delegates the execution of the morally ambivalent work to someone else. They do not feel the neurological weight of the action because they do not physically press the button, do not respond to the claims, and do not deny people the resources they need. Their brain doesn’t bind the creation of the situation to the physical outcome.

Meanwhile, the worker executing the work does not feel responsible either because the action was dictated by someone or something else. 

Everyone’s brain is secretly telling them, ‘I didn’t really do this’. That is a chillingly accurate metaphor for modern organisational structures with loads of bureaucracy, rigid rules, and computer systems deciding things for us and allowing us to say, ‘I would love to help you, but I can’t; the systems do not allow me to approve the claim.’ 

In addition to the time trick our brains play on us, there is also a true time gap with bureaucracy, which helps to strengthen the disengagement. It is the temporal distance of bureaucratic procedures, where an action taken today doesn’t impact the victim for, say, six months. And there is still more – often it’s not only time but also geography at play. Bureaucracy and systems distance the decision-maker from the impacted individual physically. With modern remote working and computer algorithms deciding on a global scale, the person ‘clicking the button’ no longer needs to stay close to the one who gets hit by the decision. Physical and temporal distance aligns closely with what another giant of the old times said – Albert Bandura. I guess that if he could have seen the current post-COVID world, he would have said that we have created environments perfectly calibrated to neutralise human empathy.

Euphemisation of harm

Bandura spent his career outlining the precise psychological strategies people use to turn off their internal moral control so they can perform or be complicit in harmful acts without feeling guilty. One of those was diffusion of responsibility – the same diffusion of responsibility visible in the Caspar et al. (2016) temporal binding results, which suggested that no one across the hierarchy takes the blame.

There are seven other strategies in Bandura’s work. One of the most insidious is the sanitisation of language – using euphemisms to minimise the results, harm, or injustice. 

Bureaucracy also runs on euphemisms. We instinctively know that language shapes reality. So an organisation will never use language that triggers moral scrutiny. A military bureaucracy doesn’t talk about bombing civilians. They discuss servicing the target and collateral damage. A corporate bureaucracy doesn’t talk about devastating families by firing breadwinners. They talk about right-sizing the organisation, synergistic restructuring or reducing redundancies. When we strip the blood and the humanity out of the vocabulary, suddenly the brain doesn’t trigger an emotional response. 

By increasing procedural distance, forcing you to navigate five screens of a software program, the bureaucracy is perfectly designed to reduce intentional binding. It is an engine for generating unaccountability.

We could even go further and combine sanitised language with the displacement of responsibility shown in Caspar et al. (2016) into, for example: ‘the algorithm flagged your account for efficiency reasons; I can’t do anything,’ and we have a perfect phrase of moral disengagement. It is the ultimate institutional authority, the bureaucratic algorithm. The system possesses maximum structural power and minimal human empathy. If it sounds violent to you, you are not the only one.

Bureaucratic violence & dead zones of imagination

The anthropologist David Graeber argued that bureaucracy itself is a form of distributed authority. Graeber’s work is essential for understanding modern obedience. He introduces the concept of bureaucratic violence in which bureaucracy acts as an impersonal, unyielding authority figure. (Graeber, 2015)

You do not need a human boss commanding you because the standard operating procedure, the drop-down menu on the software, the intake form, that is the authority. The human using it may be apologising for whatever happens to you, but the system is the dictator.

We defer to a piece of software or a printed policy manual the same way we defer to a human in a uniform. Bureaucracy perfectly hijacks the authority heuristic mentioned earlier. We perceive bureaucratic systems as proxies for a massive, wiser institution or a wise group of people who create the rules. The bureaucracy presents itself as objective, rational, and completely immune to human bias. It feels safer on the moral ground. And for many, psychological safety, even when offered via rigidity, is everything.

Plus, deferring to the rulebook massively reduces our cognitive load. If you are an HR manager processing a termination, you don’t have to evaluate the deep moral dilemma that could occur as a result of judging someone’s unique circumstances. You just follow the severance checklist. The procedure absorbs the moral weight.

A document may be utterly unjust and psychologically harmful, but it looks like it can’t hurt people physically. Yet according to Graeber, the paperwork is merely the polite, sanitised face of state or corporate force which does not accept disobedience. Even in a corporate environment, disobedience can result in the physical removal of a person from an office. Because that underlying threat of violence exists, the bureaucratic system does not have to debate you. It does not have to persuade you. It just issues the mandate, and this dynamic creates what Graeber calls ‘dead zones of the imagination’.

In any deeply unequal relationship, the person at the bottom of the hierarchy (in this case, the petitioner) has to do an immense amount of imaginative, empathetic labour. They have to constantly anticipate what the person in power wants, how they’re feeling, and how to navigate their often unclear rules to make things happen. The people at the bottom are performing exhausting mental gymnastics, trying to fit into vague expectations or leverage an ambiguous situation. They are doing all the interpretive work, while the bureaucrat at the top doesn’t have to do any of that. The bureaucratic system doesn’t have to imagine anything. The clerk behind the glass doesn’t have to empathise with your unique, nuanced situation. They simply point to a sign that says, ‘Form 104B required’. The threat of force backing the bureaucracy allows the system to remain entirely rigid and unimaginative. (Graeber, 2012)

That lack of imagination, that refusal to engage with human complexity, is what Graeber means by structural violence. It forces the powerless to do all the empathetic work while the powerful remain wilfully blind. And this concerns both the creators of the rules and their implementers, as neither needs to worry about the petitioner’s fate. Bureaucracy is the ultimate machine for generating the agentic shift.

That’s how we externalise the source of the rule to the system, which allows everyone inside the system to dodge moral responsibility. Or maybe some of us more than others… 

Another brick in the wall

Even though the environment is so good at pushing all of us to conform, there are still specific traits that make us more vulnerable to the agentic shift.

Paradoxically, the traits that strongly correlate with higher obedience are high conscientiousness and high agreeableness. The very traits we actively put on our resumes – ‘agreeable’, ‘highly conscientious’, ‘team player’, and ‘detail-orientated’ – are the exact same traits that make us most vulnerable to doing terrible things just because an authority figure told us to. (Bègue et al., 2015; Hwang, 2023)

Think about what a highly conscientious person values. They prioritise duty, order, structure, and technical accuracy. They want to do the job right, on time, and in a way that makes it useful to the person who requested it. In the context of a hierarchical shift, the conscientious individual is more prone to viewing the processes and rules as the defining parameters of their duty rather than to dwelling on their moral repercussions. If they focus their cognitive energy only on the technical execution of the standard operating procedure, which, in the case of highly conscientious individuals, may have an anxiety-reducing effect, they offshore the moral assessment of the task to the person who designed the system. Their virtue becomes their blind spot.

Highly agreeable people are characterised by a desire for social harmony, empathy, and a deep aversion to interpersonal conflict. In such a case, pointing at a process while looking a boss in the eye and saying, ‘no, I will not do this’ causes massive social friction. It is incredibly uncomfortable, even physically, for an agreeable person to disrupt the harmony of the room. So to maintain social equilibrium with the authority figure standing right next to them, they comply with the destructive order affecting the anonymous victim. For them, being condemned in the group they are part of, i.e., the company, and becoming an outcast because they did not follow a process is almost lethal. From the psychological safety perspective, it also makes more sense for the agreeable to keep the closer relationship intact for ingroup harmony and peace, even at the cost of sacrificing external individuals.

Last but not least, it is also easier to empathise with a person who is physically close than with some unknown victim (Schiano Lomoriello et al., 2018), so the remote situation and the petitioner’s anonymity present an additional challenge. As if that were not enough, the coworkers and bosses are also part of the ingroup, while the victim is likely in the outgroup, which skews empathy further (Molenberghs et al., 2016).

Thank God for the whistleblowers… 

The remaining 35%

At this stage, we could easily conclude that the architecture of the modern world is designed, whether intentionally or accidentally, to make us obedient cogs who feel zero responsibility for the outcomes of our paperwork. If we combine that with the neurological reality of temporal binding and the psychological reality of engaged followership, we start to understand the absolute bane of our existence – bureaucracy.

But before we conclude that we are all completely helpless victims handing all our agency to our own neurology, we have to look closer at the historical data.

The original headline of Milgram’s study shocked people by saying something along the lines of ‘65% of people will electrocute a stranger to death.’ So the question remains: what happens inside the very important 35% of people who didn’t increase the voltage to 450V?

More than a third of the participants in the main study looked the authority figure in the eye and said, ‘No, I’m done.’ If the agentic shift were an automatic biological reflex, how could we explain the whistleblowers?

Surprisingly, we do not have to answer that on our own. Milgram already did, only a bit more quietly. He didn’t just run one experiment. He ran over two dozen variations, tweaking the variables to see what increased or decreased obedience. The results ranged wildly from 0% to 100%, depending on the specific context.

For example, if the experimenter gave the orders over a telephone and was distant, instead of standing in the room, obedience dropped to 23%. If two other teachers in the room who were secretly actors refused to continue, obedience plummeted to just 10%. Which brings us to the conclusion that we aren’t blindly obedient. It is much more complex than that.

Based on that, Haslam and Reicher argue that the participants in Milgram’s lab were not falling into a passive, unthinking agentic state. They were making an active choice. They were actively identifying with the authority’s cause – in this case, the glorified ‘Science’. (Reicher et al., 2012)

The experimenter not only wore a formal lab coat and spoke with extreme technical precision but also came from a renowned institution and offered a slogan of ‘a very important study of learning mechanisms’. More importantly, the narrative in this research was that the continuation, i.e., increasing the voltage, was absolutely vital for the advancement of humans.

According to Haslam and Reicher, obedience is a function of social identity. In that high-pressure moment, the one who delivers the electric shocks, the teacher, is torn between two identities: to align with the victim, who represents ordinary humanity, or to align with the experimenter, who represents the noble pursuit of science.

The obedient participants convinced themselves that the short-term pain of this one individual was a necessary sacrifice for the greater good of human knowledge. So they were not acting out of mindless passivity, but out of a twisted sense of virtue. And the data supports this beautifully, even if we only analyse the language of the instructions.

Milgram had a script of four specific prods to use when a participant hesitated. When you look at the phrases the experimenter used, it becomes obvious what actually happened. They were: 

  1. ‘Please continue’ – neutral.
  2. ‘The experiment requires that you continue’ – evoking the ‘greater good’ narrative and a shared goal.
  3. ‘It is absolutely essential that you continue’ – poking at the individual’s importance.
  4. ‘You have no other choice. You must go on.’ – entrapment.

Haslam and Reicher found that whenever the experimenter resorted to No. 4, obedience didn’t increase. It collapsed. Participants routinely responded to that direct order by saying, ‘yes, I do have a choice,’ and they walked out of the lab.

The moment the experimenter says ‘you have no choice,’ they stop being a partner in a shared scientific endeavour and become the participant’s adversary. The participant suddenly realises they are not on a team. They are being subjugated.

However, sentence two, ‘the experiment requires that you continue’, was incredibly effective in making people increase the voltage. It is no surprise – this statement is a true appeal to the shared mission. It is tantamount to saying ‘we are in this together and science needs you.’

The conclusion? The people who shocked the learner with 450 volts could simply be enthusiastic believers in the mission. They were still offshoring moral responsibility, only not to the authority figure but to the abstract concept of the shared ‘greater good’. This is also supported by Birney et al. (2024).

So the 35% could simply be less mission-oriented, or better skilled at separating themselves from narratives and goals once those stop being morally clear.

Which is doubly important given the paradigm shift in how we view obedience and authority. We don’t often have a malevolent dictator screaming at us or giving us direct commands. We have shared missions, company goals, rules, and ‘objective’ algorithms (though I would argue that they are only as objective as the people designing them and, in the case of AI, as the data they are trained on).

So how does it happen that some of us can individuate better in such morally ambivalent situations and stand their ground?

The biology of the whistleblowers

Even in the most rigid, terrifying corporate structures or under authoritarian regimes, some people refuse to comply. Some people rebel against the red tape. And there are a few elements that make them ‘built’ for it or less susceptible to the mainstream narrative. 

One of them is the developmental stage we reach, i.e., how well we grow out of heteronomy towards autonomy in our emotional and cognitive structure. It has everything to do with how well we are able to separate ourselves not only from other people but also from the systems we are part of, countering the aforementioned identification with the ingroup or shared mission for the sake of safety, peace, and belonging.

It is not just a developmental psychology narrative. Nowadays, we can actually map this difference inside the physical brain. Advancements in functional MRI technology have allowed us to scan the brains of individuals while they’re subjected to coercive commands. The neuroimaging shows that people who successfully resist orders exhibit higher baseline activation in specific regions, notably the temporoparietal junction, or TPJ, and the angular gyrus. (Tricoche et al., 2024, 2025)

The TPJ is a fascinating hub in the brain. It is heavily involved in complex cognitive processes like perspective-taking, empathy, and, crucially, maintaining the boundary between the self and others. Think of it as a cognitive firewall against someone or something else’s identity overriding your own. In an engaged followership scenario, the firewall drops and the obedient person merges their identity with the authority figure. 

But when the TPJ is highly active, the individual maintains a rigid, clear distinction between their own moral agency and the command being given by the external authority, recognising the suffering of the potential victim as simply wrong. They refuse to dissolve into the system. They keep their psychological distance. That explains the mental resilience.

But there is more…

Defying an authority figure induces what Milgram termed ‘moral strain’. It triggers an acute stress response: cortisol spikes and the heart races. And this brings me – to my own great surprise when I found the research – to a physiological marker: heart rate variability, or HRV. (Balconi & Fronda, 2019; Vezirian et al., 2025)

A high resting HRV is indicative of a highly adaptable, resilient autonomic nervous system. It means your parasympathetic nervous system, the rest-and-digest mode, can effectively hit the brakes on your sympathetic nervous system, i.e., the fight-or-flight mode. In other words, high HRV indicates an ability to regulate stress emotionally and physically. In the context of obedience, the confrontation of saying ‘No’ can be terrifying. People with higher HRV are biologically better equipped to tolerate the acute stress of that moral conflict without breaking down; they are physically more resilient.
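For the curious, HRV is typically quantified from the beat-to-beat (RR) intervals of the heart; one standard time-domain index is RMSSD, the root mean square of successive differences. A minimal sketch in Python, with made-up RR series:

```python
import math


def rmssd(rr_intervals_ms):
    """Root mean square of successive differences - a standard
    time-domain HRV index reflecting parasympathetic activity."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


# Illustrative beat-to-beat (RR) intervals in milliseconds:
flexible = [850, 910, 870, 930, 860, 920, 880]  # variable rhythm -> high HRV
rigid = [800, 805, 798, 802, 801, 799, 803]     # metronome-like  -> low HRV

print(f"high-HRV RMSSD: {rmssd(flexible):.1f} ms")
print(f"low-HRV RMSSD:  {rmssd(rigid):.1f} ms")
```

A flexible, variable rhythm yields a high RMSSD; a metronome-like rhythm yields a low one, which is the ‘rigidity’ the obedience studies correlate with.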

While someone with low basal HRV might mentally want to resist, their body panics; the physical anxiety of the confrontation is too overwhelming, so they comply simply to escape the physiological distress. They use the agentic shift as a subconscious coping mechanism to calm their racing heart. The social and physiological cost of defiance is simply too steep for them to pay.

A question needs to be asked here, though, because HRV is not fixed at birth – it can be improved through exercise. So does exercising correlate with lower obedience and better conflict resilience? Food for thought (or for a tiny study).

Can we counter the bureaucratic shift?

In Milgram’s time, no one was thinking about HRV, let alone fMRI and looking into people’s brains to see what the research was showing. And although Bandura’s career overlapped with the development of fMRI, I doubt he expected that what he was writing about could one day be observed physically. It is fascinating, then, that what they studied can now actually be seen on computer screens.

And although modern psychology tends to distance itself from the old-timers, almost as if it wanted to obliterate them by marking them as ‘not up to our brilliant new standards’, neuropsychology confirms that they had something important to say, and that it is still valid. 

Just take a look at the excerpt from Bandura’s paper:

Bureaucratization, automation, urbanization and high geographical mobility lead people to relate to each other in anonymous, impersonal ways. In addition, social practices that divide people into ingroup and outgroup members produce human estrangement that fosters dehumanization. Strangers can be more easily depersonalized than can acquaintances. (Bandura, 1999)

Sounds disturbingly current, doesn’t it? We know that the problem persists. The question that remains, then, is whether we want to solve it.

Or whether we can solve it at all, given that the agentic shift is not merely a retrospective narrative used to avoid punishment. How to counter unaccountability when it is simply part of the brain’s neurophysiology?

The answer may actually lie in the very same neuroscience. Apparently, intentional binding is strongest when participants have a high number of action alternatives and weakest when they have no choice. The Barlas & Obhi (2013) study investigated the relationship between intentional binding and the number of action alternatives available to a person.

The researchers had participants press a button to cause a tone and measured their intentional binding, i.e., their perception of time. But they varied how many choices the participant had. In one condition, participants had no choice: they were forced to press the single available button. In another, they could freely choose between three buttons. In the final condition, they could choose from seven different buttons.

The binding effect, the implicit neurological marker of feeling responsible, was significantly strongest when participants had the maximum number of alternatives, the seven buttons. It sat in the middle with three choices, and it was at its absolute weakest when participants were forced to press just one button. So freedom of choice isn’t just a philosophical concept – having options literally increases our brain’s physical connection to our actions.
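The shape of that design can be mimicked with a toy simulation. The compression magnitudes below are illustrative assumptions chosen only to reproduce the qualitative pattern (more alternatives, stronger binding), not Barlas & Obhi’s actual measurements.

```python
import random

random.seed(7)

ACTUAL_DELAY_MS = 250  # fixed button-to-tone interval

# Illustrative mapping: more alternatives -> stronger perceived compression.
# These magnitudes are assumptions, not Barlas & Obhi's data.
COMPRESSION_MS = {1: 10, 3: 35, 7: 60}


def simulated_binding(n_alternatives, trials=50):
    """Mean binding (actual minus mean perceived delay) for a choice condition."""
    comp = COMPRESSION_MS[n_alternatives]
    estimates = [ACTUAL_DELAY_MS - comp + random.gauss(0, 12) for _ in range(trials)]
    return ACTUAL_DELAY_MS - sum(estimates) / len(estimates)


for n in (1, 3, 7):
    print(f"{n} button(s): binding of roughly {simulated_binding(n):.0f} ms")
```

The monotone rise from one button to seven is the whole point: the binding score tracks the number of alternatives, not the physical delay, which never changes.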

Agency is inextricably intertwined with the presence of alternatives, even if those alternatives exist only internally, e.g., the option of walking away from the situation. Therefore, to combat the creeping passivity of bureaucracy, we must intentionally design systems that preserve and highlight choice.

Bureaucracies often hide discretion to ensure uniformity. But, according to the studies, systems should be redesigned to explicitly show employees where they have the authority to make a judgement call. And crucially, the organisation should publicly reward responsible discretion, not just blind compliance. Otherwise, if an employee ethically bends a rigid rule to achieve a more humane outcome and the company punishes them for violating protocol, the company is training its workforce to become agentic zombies. If blind obedience pays, people learn blind obedience.

This does not mean giving people the illusion of choice by offering them two predefined processes to choose from. It is about empowering them to make the right decisions. It is also about providing them with the comfort of psychological safety to decide outside of the rules, even when imperfectly. (Edmondson, 2011; Mehrotra, 2024) 

The second part of the solution likely revolves around restoring the transparency of consequences. As Bandura argued, bureaucracy dilutes responsibility through temporal and physical distance. To reclaim responsibility, we should shorten these feedback loops, which is in line with one of Milgram’s variations: when the teacher had to physically place the learner’s hand on the shock plate, obedience dropped. Proximity to the victim destroys the agentic shift. Modern systems therefore need mechanisms that bring the decision-maker face-to-face with the consequences of their administrative actions. Empathy is the antidote to the agentic shift, and empathy requires proximity.

This also applies to modern politics. When we keep our country’s leaders safely stored in bunkers, asking of them as little as pressing a button to start a war, we follow Bandura’s moral disengagement script perfectly. How different would the world look if those very same leaders ‘Alexander the Great’-ed their people to the actual battlefield? I often wonder…

The third strategy based on the research is to reframe identity. This goes back to Haslam and Reicher’s engaged followership model. We know that people don’t obey blindly. They obey when they identify strongly with the cause. Therefore, we should actively move employees away from identifying with the rule book and pivot them toward identifying with the ultimate human outcome.

If an employee’s internal narrative is, ‘my identity is a diligent processor of these 50 forms’, they will shut down their empathy to get the paperwork done. But if their identity is anchored in ‘my mission is to ensure 50 vulnerable people receive the aid they need to survive’, their behaviour may change entirely.

And if you are thinking now, ‘but wait, we need those processes because people make silly mistakes when allowed to think independently,’ then I would challenge you on this. Maybe what you see is an effect of how you select people, not only of the lack of empowerment, which we know is a self-fulfilling prophecy. Maybe the selection process should focus on their autonomy, their ability to follow values rather than procedures, and their identification with the actual mission, when the mission is right, not exploitative or harmful. Maybe we should move from ‘skills’ to ‘mindsets and values’ on our resumes…

Having said all that, I also want to leave you with one final, provocative thought. I have spent over 5000 words analysing human bureaucracy, human policies, and human-created static algorithms. But we are rapidly, aggressively accelerating into an era of automated decision-making and artificial intelligence, which we can perceive – and I think we are already falling into this trap – as the ultimate flawless, impersonal authority.

We already eagerly surrender our agency to paper forms and flawed rules just to avoid responsibility. What happens to the human brain when the ultimate mathematically wise, infallible algorithmic black box tells us what to do? Are we on the precipice of a mass global algorithmic agentic shift? If an AI system fires an employee, or denies a mortgage, or identifies a military target, will we happily blame the code for our moral failures? Do we just shrug and say that the neural network required it? Who will hold the power then? 

And the most important question might be this: are we creating AI to ultimately hand over the burden of decision-making and the related uncomfortable moral responsibility? It’s something for you to mull over.

Sources:

Akyüz, N., Marien, H., Stok, F. M., Driessen, J. M. A., de Wit, J., & Aarts, H. (2024). Obedience induces agentic shifts by increasing the perceived time between own action and results. Scientific Reports, 14(1), 16769. https://doi.org/10.1038/s41598-024-66499-8

Balconi, M., & Fronda, G. (2019). Physiological Correlates of Moral Decision-Making in the Professional Domain. Brain Sciences, 9(9), 229. https://doi.org/10.3390/brainsci9090229

Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3(3), 193–209.

Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71(2), 364–374. https://doi.org/10.1037/0022-3514.71.2.364

Barlas, Z., & Obhi, S. (2013). Freedom, choice, and the sense of agency. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00514

Bègue, L., Beauvois, J.-L., Courbet, D., Oberlé, D., Lepage, J., & Duke, A. (2015). Personality Predicts Obedience in a Milgram Paradigm. Journal of Personality, 83, 299–306. https://doi.org/10.1111/jopy.12104

Birney, M. E., Reicher, S. D., & Haslam, S. A. (2024). Obedience as “Engaged Followership”: A Review and Research Agenda. Philosophia Scientiæ. Travaux d’histoire et de Philosophie Des Sciences, (28–2), Article 28–2. https://doi.org/10.4000/11ptx

Caspar, E. A., Christensen, J. F., Cleeremans, A., & Haggard, P. (2016). Coercion Changes the Sense of Agency in the Human Brain. Current Biology, 26(5), 585–592. https://doi.org/10.1016/j.cub.2015.12.067

Edmondson, A. (2011). Psychological Safety, Trust, and Learning in Organizations: A Group-level Lens. Trust and Distrust in Organizations: Dilemmas and Approaches.

Graeber, D. (2012). Dead zones of the imagination: On violence, bureaucracy, and interpretive labor: The Malinowski Memorial Lecture, 2006. HAU: Journal of Ethnographic Theory, 2(2), 105–128. https://doi.org/10.14318/hau2.2.007

Graeber, D. (2015). The Utopia of Rules. https://davidgraeber.org/books/the-utopia-of-rules/

Hwang, J. J. (2023). The Impact of Individual Differences and Personality on Obedience. Journal of Education, Humanities and Social Sciences, 8, 1384–1389. https://doi.org/10.54097/ehss.v8i.4491

Koski, J., Xie, H., & Olson, I. R. (2015). Understanding Social Hierarchies: The Neural and Psychological Foundations of Status Perception. Social Neuroscience, 10(5), 527–550. https://doi.org/10.1080/17470919.2015.1013223

Mehrotra, R. (2024). Psychological safety in high-performance teams. International Journal of Advanced Research, 12(07), 1680–1687. https://doi.org/10.21474/IJAR01/19218

Meinert, J., & Krämer, N. C. (2022). How the expertise heuristic accelerates decision-making and credibility judgments in social media by means of effort reduction. PLoS ONE, 17(3), e0264428. https://doi.org/10.1371/journal.pone.0264428

Molenberghs, P., Gapp, J., Wang, B., Louis, W. R., & Decety, J. (2016). Increased Moral Sensitivity for Outgroup Perpetrators Harming Ingroup Members. Cerebral Cortex, 26(1), 225–233. https://doi.org/10.1093/cercor/bhu195

Reicher, S. D., Haslam, S. A., & Smith, J. R. (2012). Working Toward the Experimenter: Reconceptualizing Obedience Within the Milgram Paradigm as Identification-Based Followership. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 7(4), 315–324. https://doi.org/10.1177/1745691612448482

Schiano Lomoriello, A., Meconi, F., Rinaldi, I., & Sessa, P. (2018). Out of Sight Out of Mind: Perceived Physical Distance Between the Observer and Someone in Pain Shapes Observer’s Neural Empathic Reactions. Frontiers in Psychology, 9, 1824. https://doi.org/10.3389/fpsyg.2018.01824

Sidanius, J., & Pratto, F. (2012). Social dominance theory. In Handbook of theories of social psychology, Vol. 2 (pp. 418–438). Sage Publications Ltd. https://doi.org/10.4135/9781446249222.n47

Tricoche, L., Rovai, A., Bue, S. L., & Caspar, E. A. (2025). Neuro-cognitive specificities in prosocial disobedience: A comparative fMRI study of civilian and military populations. PLOS ONE, 20(7), e0328407. https://doi.org/10.1371/journal.pone.0328407

Tricoche, L., Rovai, A., & Caspar, E. (2024). When the brain says “No!”: An MRI study on the neural correlates of resistance to immoral orders. Imaging Neuroscience, 2, imag–2–00392. https://doi.org/10.1162/imag_a_00392

Vezirian, K., Beffara, B., & Bègue, L. (2025). Non-significant results as for the association between heart rate variability, personality, and the objectification of lab-animals into the conduct of animal testing. Social Neuroscience, 20(2), 53–66. https://doi.org/10.1080/17470919.2025.2486967
