I suppose the various rules of thumb/guidelines that people tend to use when making moral decisions don’t always mesh together entirely comfortably. One could probably set up an extreme hypothetical situation that will make just about anyone holding absolutely and steadfastly to any one such principle rather uncomfortable.
Most people have an idea that minimizing harm to the maximum number of people is generally good (or, alternatively, maximizing good for the many). If a plane flying above a city is suddenly about to crash, then generally everyone will agree that the pilot pointing it towards less populous areas (suburbs) is better than pointing it right at the city centre (minimizing loss of life). In this situation, there’s no real conflict with other principles. This situation has many similarities with the standard trolley problem, which is perhaps why many people tend to think it’s ok to flip the switch.
There’s usually an idea that actively stepping in to cause harm is worse than more passively not intervening, e.g. throwing someone into a river versus just being too afraid to jump in and rescue someone who is drowning. The standard trolley problem seems more non-interventionist. Of course, there are limits to this too, i.e. to the distinction between acts of commission and acts of omission. If one is staying in one’s rich uncle’s house, sees that he has slipped on some bathroom tiles, hit his head, become unconscious and is lying face down in the water, and one just looks the other way (thinking: I’m now going to get all that money in the will much sooner than expected), well, the difference between that and murder is getting a bit narrow. OK, it’s a somewhat less actively planned decision (more opportunistic), but it’s still a decision to let someone die.
I suppose that’s why pushing the fat man onto the tracks does not seem like the right choice to most people: it’s an active act of harm (turning a switch seems a lot more passive, more like choosing between unalterable outcomes). There’s probably also the view that a line has been crossed. There is also always the option of throwing oneself onto the tracks (gulp), not that even a fat man is ever likely to stop a real train!
The fat man trolley problem also makes a certain type of consequentialism/utilitarianism look really bad (its adherents look like psychopaths, really). Though perhaps it’s just that a psychopath would have no real issue with the active harm part, even if the associated concept of doing this for some overall good might not mean much to him. There are, though, versions of utilitarianism that lead to something more like deontological ethics.
Peter Singer is one of the most well-known utilitarians. IMO he is definitely not a psychopath (he has done rather nice work on animal rights, effective altruism etc.), but some of his style of argumentation does end up in very discomfiting places, e.g. euthanasia for severely disabled newborns: society allows this for the unborn, so he has asked why not for infants (what’s the difference, other than one of degree?). I was recently listening to a debate he took part in on assisted suicide. He definitely made some cogent arguments. However, I suppose my main issue with this approach is the feeling that there are lines that probably shouldn’t be crossed (a slippery slope argument). Such laws have existed in the Netherlands for around 20 years, and there is a degree of evidence of a slippery slope: initially the cases were mostly the less controversial hard ones, but access has been extended to psychological and not just terminal illness (e.g. a woman who lost her sight got access to assisted suicide), to children with the consent of parents, etc. Singer does argue that there hasn’t been that much of a slippery slope (I’m not sure I agree), though he admits this type of argument is the strongest counterargument against his positions. IMO it’ll probably take another 20 years before we get a fuller idea of where this leads. So within utilitarianism, I think there’s room for approaches more like deontological ethics, where the view is that sometimes there are hard, suboptimal but easily-arbitrated lines/rules/boundaries that shouldn’t be crossed (because perhaps the longer-term corrosive effects on society would be worse).
Of course, there can be trolley problems to sort out the utilitarian pseudo-deontologists from the actual true hard deontologists too. What if one has to push a fat man off a height to stop a James Bond supervillain who is about to activate a superweapon that will incinerate all human and other life on earth? I suppose an (at least skinny) pseudo-deontological utilitarian in this case should really push the fat man (assuming his fatness is crucial for some reason; if I were less lazy I could probably come up with a less ridiculous example), given that the superweapon will burn away any slippery slopes along with the rest of the earth! I guess the true deontologist would just have to let the Earth be incinerated?
I suppose also that often we as mere humans just don’t really know the true rules of the game. If there were a Deity up in the sky who had laid down a fixed set of rules, with disobedience resulting in a long or infinite spell in some unpleasant place, then even a believing utilitarian might end up following some rule-based ethics. The same goes for various understandings of rebirth or kamma. Members of some cults can end up with very contorted ethical systems; I suppose ultimately there must be some responsibility on people for the belief system they use to frame their moral approach. And if one is trying, in a given situation, to maximize some utility or maximize the good for all concerned, that can be incredibly difficult without knowing all the facts and possible implications. That’s where handy access to some kind of pocket omniscient personal oracle would be mightily convenient (the rather odd Greek accounts of Socrates and his daimonion come to mind as a version of that), but utilitarians generally just have to muddle along with partial and limited facts and hope that what they think is help doesn’t ultimately turn out to harm. A major weakness of utilitarianism is that it is not some benign, wise, dispassionate, omniscient arbiter making the utility calls but mere messy and rather fallible humans.
For the fat man/supervillain situation, belief might matter too. If one believed there were infinite worlds out there, into which everyone would be reborn somewhere, and that likely everyone there had already been through multiple civilizational/world destructions over the course of some enormously long chain of transmigration, then perhaps a deontologist might cling to that. Though perhaps the idea that they, their families, friends and loved ones were about to imminently get fried might be a more pressing consideration??!! But I suspect there are many people who just would not be able to do it even in extremis. I don’t think that’s wrong (the hypothetical supervillain would be the guilty party after all). Though, in an extreme enough situation (and this is very extreme), many probably would push the fat man too. I’m not sure that would be wrong either.