"Trolleyology" and Autonomous Vehicles – Moral and Legal Questions

The paper focuses on a classical problem of ethics and law: the doctrine of double effect (DDE). Nowadays the doctrine is more and more popular, since AI technology and superintelligent machines have been developing rapidly. Perhaps autonomous vehicles' most difficult dilemma is the following scenario: an autonomous car gets into an extreme road accident (collision) and the software should "decide" which direction-alternative to choose, but all of those possibilities end with the death of human(s). This is a problem which requires a morally and legally justified answer. The paper emphasizes how a moral justification can be achieved, what the classical cases of DDE are, and how to solve the most famous classical case, the trolley problem – which can be analogous to autonomous cars' collision case. The paper also highlights whether the doctrine is relevant from the perspective of legal justification and legal solutions, too.

humans and their property, but legal solutions are not really satisfying in every case. In connection with autonomous vehicles, moral aspects always come into question. We can also say that artificial intelligence (AI) belongs to a sphere which is the common part of law and morals. This is why the regulation of autonomous cars requires a complex answer – but currently, the situation seems rather problematic than calming. If a question concerns the edge of law and morals, it can easily take its place in the focus of academic attention. The relevance and timeliness of analysing DDE should be explained briefly. The first question: what is the basic problem in connection with moral decision-making in transportation? A thought-experiment helps us to imagine the following scenario: there is an autonomous car which gets into an extreme road accident. Just a second before the moment of collision, the software should "decide" which direction-alternative to choose, but all of them end with serious loss, especially the death of human(s). 4 Is there a right answer-alternative which calculates death as an acceptable consequence, or not? The driver's power is assigned to the machine, namely to software, so this entity will "decide" from case to case. Of course, technology can presume a lot of scenarios with different results, and these moral dilemmas will be "decided" in advance. Can we imagine future transportation in this way? So, this paper's aim is to demonstrate what the good of DDE is, how this theory could be used in moral dilemmas, and the most interesting question: could it be an adequate and useful way to solve extreme moral situations caused by autonomous vehicles, or not? May the doctrine be relevant from the perspective of law as well?

Viewpoint of technology
At first, it is important to explain why representatives of technology and industry believe that autonomous vehicles do not raise any moral question 5 – in other words, this point of view is called the Amoral Machine thesis. 6 Lawyers and technicians usually disagree when arguing about interdisciplinary problems, and this discrepancy is quite eye-catching in connection with this essay's subject. According to technicians, the dilemma is analogous to some old and eternal problems which have not been solved yet by philosophers, jurists or anyone else. That is why there is no morally right decision to the extreme collision problem, so the moral aspect should simply be blocked. 7 As Héder writes: "(…) industry should (…) simply make proposals and ask for a compromise rather than chasing moral truths." 8 Transportation is based on a consensus made by society. This seems evident, but people have forgotten about it, because no new types of machines have appeared for some time – and such a change would have called for a revision of the consensus. However, autonomous technology is something strange, something new, so sooner or later society will have to re-write this consensus. It means the following: a reallocation of responsibility among manufacturers, designers, governments, owners of the vehicles and participants in transportation. So, the problem is based rather on calculation, not on morals. All in all, industry's expectation is a much safer transportation, but first, pedestrians and other participants should accept the limitations of autonomous cars (for example, eye-contact will be useless for seeing into the "driver's" actions, because intelligent vehicles do not understand this type of contact – they do not even have eyes). Algorithms are more intelligent than human drivers' brains: algorithms can calculate various scenarios and various results in advance, can make many plans, and they can also "communicate", but in a different, newer way.
9 Of course, the ideal ambition would be to program autonomous vehicles like this: vehicles should constantly be able to minimize the harm and the number of deaths, and to save the lives of passengers, pedestrians and every living creature on the roads. Would this expectation be achievable? 10 Of course, this essay does not accept the amoral-machine theory, because we want to discover the nature of the moral problems. That is why we should elaborate DDE's classical and modern readings, and then we may realize how it could help (or not?) industry and legal regulation methods as well.
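The harm-minimization ambition described above can be sketched as a naive selection rule. The following is only a toy illustration – every name, option and weight is an assumption of this sketch, not any real vehicle software:

```python
# Naive harm-minimizing trajectory selection - a toy illustration only.
# All names and weights here are assumptions of this sketch, not any
# real autonomous-driving software.

def choose_trajectory(options):
    """Pick the option whose predicted harm score is lowest.

    `options` maps a trajectory label to predicted casualties,
    e.g. {"deaths": 0, "injuries": 2}. Deaths are weighted far more
    heavily than injuries - an arbitrary modelling choice.
    """
    def harm(outcome):
        return outcome.get("deaths", 0) * 1000 + outcome.get("injuries", 0)
    return min(options, key=lambda label: harm(options[label]))

scenario = {
    "stay_on_course": {"deaths": 5, "injuries": 0},
    "swerve_left":    {"deaths": 1, "injuries": 0},
    "swerve_right":   {"deaths": 0, "injuries": 3},
}
print(choose_trajectory(scenario))  # prints "swerve_right"
```

Even this toy makes the moral difficulty visible: the entire dilemma is hidden in the weighting, that is, in how "harm" is reduced to a single comparable number.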
Traditional and modern readings of DDE. The birth of "trolleyology"

When a problem (like the one described above) proposes a choice between human lives, it can sometimes be morally solved by the double effect theory 11 – if the human act/decision passes the DDE test, the act/decision may have a morally justified solution. The doctrine of double effect is quite intriguing, as it supposes a flabbergasting idea: an act causing serious harm (the death of at least one human being) could be permissible if this harm is just the act's side effect and the agent's original aim is to achieve some good end. So the act has two effects: the good result is the intended effect, and the harm is the side effect, which is inevitable. There are many occasions when a choice with double effect could be the only possibility in a serious dilemma. The agent does not want to cause harm, but unfortunately she cannot avoid it. 12 The doctrine also demonstrates how the clash of moral and legal questions occurs. Originally, DDE is a principle of moral theology and was invented by Thomas Aquinas 13 in his famous work, the Summa Theologiae. 14 Aquinas played a leading role in defining moral theology's main function and its dogmatics. 15 Generally, as he emphasized, human acts have a "golden rule": "Bonum ex integra causa, malum ex quovis defectu." 16 Besides this, the philosopher also concentrated on situations where there is no right answer, as the problem always causes something wrong which is unavoidable. 17 In Aquinas, the original idea of DDE refers to the following case: killing in self-defense is sometimes permitted; more precisely, killing one's assailant is justified. This act is special because it has two effects, a prima facie good and a prima facie evil effect: one which the agent intends (saving his own life) and one which is not intended but inevitable – following Aquinas, we can also say "praeter intentionem" (killing the attacker).
18 Of course, "(…) if a man, in self-defense, uses more than necessary violence, it will be unlawful: whereas if he repel force with moderation his defense will be lawful, because according to the jurists (…), 'it is lawful to repel force by force, provided one does not exceed the limits of a blameless defense.'" 19 According to moral theology, the application of the principle has four conditions: 1. "the act itself must be morally good or at least indifferent; 2. the good effect and not the evil effect be intended; 3. the good effect be not produced by means of the evil effect; 4. there be a proportionately grave reason for permitting the evil effect." 20 These four conditions form a moral "test" and suppose that there are empirical examples which can be analysed with them. In other words, casuistry is the practical aspect of theoretical dilemmas. Casuistry's 21 subjects are moral dilemmas, more precisely cases of conscience – casuistry elaborated the practical side of moral questions. These kinds of problems were also analysed by Aristotle and by Catholic theorists like Aquinas. Casuistry's vital questions are usually dilemmas connected to life and death, such as abortion or euthanasia. 22 The principle became popular in ethics and was adopted into the philosophical mainstream as well. 23 What is more, it has some famous paradigmatic cases which are challenging for lawyers, not just for philosophers. All of these cases show a serious problem: the killing of innocents. We can also say that the human act's non-intended effect is another person's death. DDE has classical examples, more precisely paradigmatic cases: killing in self-defense, abortion, euthanasia, bombing and the trolley problem. 24 Besides killing, the cases have one more thing in common: death could be morally (and legally) justified somehow in these extraordinary situations, and death is the evil effect, which is not intended at all but is unavoidable.
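Since the four conditions are conjunctive, the moral "test" can be rendered schematically. The following sketch is only an illustration – reducing the doctrine to booleans is a drastic simplification, and all field names and example verdicts are assumptions of this sketch:

```python
# The doctrine of double effect as a conjunctive four-part test.
# A schematic toy: real moral evaluation cannot be reduced to booleans,
# and the example verdicts below are this sketch's assumptions.

from dataclasses import dataclass

@dataclass
class Act:
    good_or_indifferent: bool   # 1. the act itself is morally good or neutral
    good_effect_intended: bool  # 2. the agent intends the good effect, not the evil one
    good_not_via_evil: bool     # 3. the good effect is not produced by the evil effect
    proportionate_reason: bool  # 4. a proportionately grave reason permits the evil effect

def dde_permits(act: Act) -> bool:
    """An act passes the DDE test only if all four conditions hold."""
    return (act.good_or_indifferent
            and act.good_effect_intended
            and act.good_not_via_evil
            and act.proportionate_reason)

# Aquinas's self-defense case: the attacker's death is a foreseen side effect.
self_defense = Act(True, True, True, True)
# Thomson's footbridge case: the fat man's death is the *means* of saving
# the five, so condition 3 fails.
footbridge = Act(True, True, False, True)

print(dde_permits(self_defense), dde_permits(footbridge))  # prints "True False"
```

The footbridge variant fails the third condition because the victim's death is the means of saving the five, not a mere side effect – which is exactly the distinction the doctrine is meant to capture.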
In the literature, theorists have elaborated the justified and non-justified versions of these paradigmatic cases and have been arguing about "killing" and "letting die" for decades. I would like to present the brief descriptions and solutions of these cases in a table, which summarizes the ethical, philosophical and legal aspects practically and concisely: 25

CASE | GOOD EFFECT | BAD EFFECT | JUSTIFIED VERSION
Self-defense | The agent is saved | The attacker is killed | Killing in self-defense 26

26 Nowadays it is still difficult to imagine a trolley problem scenario with autonomous cars, because it is not a real dilemma for the majority of manual drivers. Johansson and Nilsson try to prove that the trolley case could be interesting from the perspective of manual drivers, too. Johansson – Nilsson, 2016.

…one), three conditions must be met to consider a case as a trolley case. These conditions are: "First, in trolley cases a collision is imminent and unavoidable. Second, the agent is able to choose how to distribute the harms that ensue as a result of this collision. Third, the decision situation is one of certainty. Actions carry no risk so that the agent can choose between outcomes." 27 The trolley problem is usually labelled a thought-experiment invented by the philosopher Philippa Foot in 1967, in her famous essay, The Problem of Abortion and the Doctrine of Double Effect. Thought-experiments are quite popular in speculative theory-making because they are cheap: with them, scientists can prepare for future problem-solving (by answering "what if…" questions) and can elaborate various answer-alternatives. 28 Here are the basic thoughts of DDE and the trolley problem by Foot, whose work marks the birth of "trolleyology". Foot was arguing with Herbert Hart, and they discussed the problem of abortion through the double effect theory. 29 Foot believed that people could not use DDE to criticize abortion. 30 Like Aquinas, she made a distinction between twin effects: one is aimed at, the other is foreseen but not intended; in law, these are called direct intention and oblique intention. Her original example was about a fat man stuck in a cave. "A party of potholers have imprudently allowed the fat man to lead them as they make their way out of the cave, and he gets stuck, trapping the others behind him.
Obviously the right thing to do is to sit down and wait until the fat man grows thin; but philosophers have arranged that flood waters should be rising within the cave. Luckily (luckily?) the trapped party have with them a stick of dynamite with which they can blast the fat man out of the mouth of the cave. Either they use the dynamite or they drown. (…) Problem: may they use the dynamite or not?" 31 She also writes (though briefly) about the famous trolley scenario: imagine "(…) a driver of a runaway tram which he can only steer from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. (…) why we should say, without hesitation, that the driver should steer for the less occupied track (…)?" 32 This so-called basic scenario is "Spur", as it is quoted in the literature of philosophy. Foot has many other cases, and one of them, the "Transplant" case, is also very relevant. There are five patients who need organs, and they will die if nothing happens. One day, a healthy young man comes into the hospital; can the doctors sacrifice him as a donor to save the five patients or not? The most troubling question is "why our moral reactions differ in these two kinds of cases – cases such as "Spur", where it seems morally acceptable to take a life to save five lives, and cases such as "Transplant", where it doesn't." 33 Here is the answer: in "Spur", somebody is redirecting an already existing threat, but in "Transplant", sacrificing the innocent man's life is a means to save five individuals. 34

27 Himmelreich, 2018, 671. 28 Kovács, 2015. 29 From this view, the topic of abortion is quite interesting. Foot, the English philosopher, published her essay in 1967, and in Britain abortion was legalized by Parliament in October 1967. Things were not the same in the U.S., where in 1973 a landmark case, Roe v. Wade, made the same change as in Britain. Edmonds, 2014, 25. 30 Černý, 2020, 89. 31 Foot, 2013, 537. 32 Foot, 2013.

Next to Foot, Judith Jarvis Thomson also explained the trolley case, and she offers even more examples. The most famous one is a variant of the fat man case. Imagine the scenario: "George is on a footbridge over the trolley tracks (…) and can see a trolley approaching the bridge is out of control. On the track back of the bridge there are five people; the banks are so steep that they will not be able to get off the track in time. (...) the only way to stop the out-of-control trolley is to drop a very heavy weight into its path. But the only available, sufficiently heavy weight is a fat man, also watching the trolley from the footbridge. George can shove the fat man onto the track in the path of the trolley, killing the fat man; or he can refrain from doing this, letting the five die." 35 And to mention one more fat man example, here is "Loop": "(…) the trolley is heading toward five men who are all skinny. If the trolley were to collide into them they would die, but their combined bulk would stop the train. You could instead turn the trolley onto a loop. One fat man is tied onto the loop. His weight alone will stop the trolley, preventing it from continuing around the loop and killing the five. Should you turn the trolley down the loop?" 36 Philosophers say that nowadays we should imagine the classical thought-experiment not with trolleys but with autonomous cars. The main question is constant: what is the right thing to do? The difference between intelligent vehicles and traditional vehicles is that autonomous cars do not have to cooperate with humans; the machine "decides" on its own (in a way, we can say it "decides" in advance), according to algorithms. When answering the problem, two main viewpoints are outlined: the consequentialist and the nonconsequentialist approach.
The first one dictates that the less fatal end is better than the more fatal one; for example, if the car kills one person instead of five, this "decision" is acceptable. The other approach asserts that there is a moral distinction between killing and letting die, or, as Foot said, between negative duties and positive duties. 37 As for the problem of moral responsibility, letting die is somehow better than killing because it is a passive act. 38 These two viewpoints are inherent in the classical literature of trolley cases. How should we test autonomous cars' "action" through DDE? What we should do is explain the legal answer to the trolley cases: why is it allowed to sacrifice (kill) someone in these paradigmatic cases? Some of the paradigmatic cases of double effect are known in legal theory and legal practice as well. In a way, medical activity (such as abortion or euthanasia), transportation or war could be the rare "exceptions" to the life-saving obligation of states. Medical activity and transportation in particular do not mean social danger in a proper sense (of course we know that killing counts as a criminal act). Human life and the right to life are supreme values in every legal system, which means they should not be limitable, because life is an absolute value. But there are some cases when states turn a blind eye to killing – legal systems have special and divergent rules regarding these cases. 39 As for the trolley dilemma (whether we face a trolley or an autonomous car), it is not such an exception yet, but it can become one in the near future. Perhaps everything depends on the representatives of technology (designers, manufacturers, programmers), of industry and on legislators.

34 Edmonds, 2014, 34. 35 Thomson, 2013, 545. 36 Edmonds, 2014. Critics say the "Loop" case is a very discriminating case-variant, as it distinguishes innocent people according to their size (fat or skinny), so decisions could not be proper and right in this example. 37 Černý, 2020, 94. 38 Lin, 2016.
Trolley problem – pro and contra – and its application in legal dilemmas

In the literature, there are many different positions in connection with autonomous cars' trolley cases. In this section, I would like to summarize these viewpoints. Of course, theorists form two groups: many experts admit that the trolley case is a good analogy, while others think it is a kind of "dead end". What we know now is not much: the trolley case is not a regulated exception to the prohibition of killing, so it would be interesting to analyze whether DDE (as a moral aspect) could help the search for legal solutions. What can we utilise from the application of the principle? First, I would like to mention the position of Di Nucci, a well-known theorist of the double effect doctrine, who offers eight general arguments against DDE. Originally, the doctrine wanted to show the difference between intended means and merely foreseen side effects, but it failed this task. The test pays attention to an unrealistic scenario where only bad effects are taken into account and there is no real good end at all. Moreover, Di Nucci conducted empirical research; he wondered how people think about the trolley dilemma. The results were surprising: most of the participants did not opt to kill the one person to save the five innocents – although people usually sympathize with the utilitarian approach! There is a dilemma with the closeness problem as well, because in fact the distinction between intending and merely foreseeing harm is unworkable. Fifth, Di Nucci argues with the "Loop" variant to prove that means are not necessarily intended. As we have seen above in the table, the bombing case is a paradigmatic case and it has two versions: the terror bomber and the strategic bomber. This distinction highlights that there is no moral difference between these cases, but DDE wants to show a normative distinction with these examples.
Furthermore, the doctrine is just a useless moral principle, and unfortunately we will not know how to act morally rightly or morally permissibly. And last but not least, there is one more relevant comment from Di Nucci: people should be responsible for their non-intended acts just as for their intended ones – and DDE may efface this essential obligation. 40 Regarding Di Nucci's arguments, we can add some general objections against the doctrine. It cannot be an appropriate solution in a legal context, because it is a weak test – like an easy "geometrical test" and nothing more. Besides this counterargument, the principle is too formal and too abstract, and therefore it ignores the uniqueness of cases; it works only as a hypothesis, and that is why it can eliminate uncertainty so easily. 41 Actually, the circumstances do not exist in reality and are sometimes too extreme and unimaginable. 39 To check this statement, it is important to read states' regulations on abortion, euthanasia or war.
40 Di Nucci, 2014b, 6-12. From the viewpoint of ethics, DDE would answer the problem of verifiability, but from the perspective of law, it lacks authorisation. 42 We can admit: these arguments are rational. Hevelke and Nida-Rümelin assert that autonomous cars' extreme road accidents differ from the paradigmatic trolley case. According to them, when autonomous vehicles come into question, we should not focus on the damage which appears in the end, "(…) when we try to determine if a decision in favour of autonomous vehicles is in the interest of one of the affected parties." 43 Nyholm and Smids summarize five points where the classical trolley cases and autonomous cars' trolley case differ. First, the classical trolley problem's main features: 1) a single individual faces the decision; 2) the decision is an immediate one, a "here and now" decision; 3) the answer-alternatives are restricted to a small number of considerations; 4) neither moral nor legal responsibility is really taken into account; 5) as for the modality of knowledge, the facts of the scenario are certain and known. If we imagine the traditional trolley scenario with autonomous vehicles, the following factors come into question: 1) instead of only one person, groups of individuals face the burden of moral decision-making; 2) decision-alternatives are "decided" in advance, because the software is programmed with various outcomes; 3) consequently, the number of considerations is not limited; 4) both moral and legal responsibility matter; and 5) the software's "knowledge" is characterized by risk-estimation. 44 After these pro and contra aspects, it is useful to see how law really thinks about the application of DDE. When law comes into question, it is important to understand how it sees this problem. I have already mentioned that in some borderline cases killing can be justified.
We can add: besides the paradigmatic cases, killing is also allowed when it is committed in self-defense or necessity – these are the special causes of decriminalization. As for killing, the statement of facts is rather simple and abstract in criminal statutes, because the result of the act is the relevant fact. Sometimes the need for a criterion of killing comes into view – usually when something extraordinary happens and there is no appropriate or right legal (and moral) solution. I can also mention some famous cases ("hard cases") as good examples: Regina v Dudley and Stephens, Fuller's case of the speluncean explorers, and cases in connection with abortion, euthanasia or the separation of Siamese twins (Re A – conjoined twins case). States should not decide which are the so-called certain cases of killing, but in borderline cases states can "allow" the taking of lives, and of course there are situations where the risking of life, an offence against the right to life, can arise too (think of abortion, euthanasia or the use of guns). Moreover, in these cases, law knows very well that agents (for example doctors) do not want to "kill" somebody – but death is an unavoidable scenario. And what to do with autonomous vehicles? Can these be borderline cases like the other classical paradigmatic cases? I think this question should be solved in the near future, as should other moral dilemmas in connection with AI technology (for example, the question of their "legal entity"). But now, I would like to discuss some legal solutions as propositions on how to solve the question right. Probably, these answers could help a little in understanding what kind of challenge legal systems have to face. In fact, law does not apply DDE in the traditional reading.

42 Keenan, 2015, 16, 28. 43 Hevelke – Nida-Rümelin, 2015, 622. 44 Nyholm – Smids, 2016, 1287.
In the 20th century, there was intense interest from legal theorists, especially from Herbert Hart, who concentrated on the principle of double effect and criticized it. Some consequences of an action are so firmly linked to the action that they immediately and invariably accompany it. Here Hart touches on a problem known, thanks to Foot, as the closeness problem. 45 The gist: there are cases where "(…) there is a non-contingent relationship between the action and its effect. In connection with the closeness problem the question is also discussed in ethics and bioethics whether an agent can legitimately claim that he did not intend a certain consequence of an action (death), although he did intend something that is necessarily associated with it. Theorists of law frequently distinguish between direct intention and oblique intention. In Hart's view this distinction between that which is directly intended and that which is merely foreseen as a concomitant consequence of an action is inadequately employed in the traditional doctrine of the principle of double effect." 46 What is the problem with autonomous cars? They do not have responsibility or legal personality, and they cannot "decide" like a human being in the traditional reading, so for now we cannot discuss their "intended and non-intended decisions and actions". So what do we have? In a figurative sense, we can imagine a trolley scenario with intelligent cars, but it will not be a human being who drives and decides – instead, a machine, more precisely algorithms, will drive and "decide". The software will process countless inputs and outputs, but these "questions and answers" will definitely contain new formulations of intention – this may be called the "intention-doctrine of autonomous vehicles". Unfortunately, no one knows the details of this new doctrine, so we should solve the trolley dilemma with our actual, well-known solutions (and with some new theories). There are theorists who suggest necessity as a good direction.
Personally, I doubt it; as we know, necessity was not applicable in Regina v Dudley and Stephens for an important reason: necessity could not be a shelter for the accused (who killed an innocent person). And of course, necessity is always imagined as an extenuating circumstance, a last resort of human beings, not machines. Filippo Santoni de Sio emphasizes that some cases are not as complex as we think. For example, we know that human life is a supreme value; consequently it would be defended better than damage to property or the loss of animals. Of course, a human life is incommensurable with another human life, and when matching lives comes into question, it can be solved by a consensus made among manufacturers and designers. Santoni de Sio concentrates on a contractarian variant where all sorts of damage have a type of compensation. It seems our question is rather a contractarian problem, not a criminal one. 47 Geoff Keeling elaborated a system called the moral-design problem: it would be able to solve legally and morally complex situations. He criticizes Santoni de Sio and represents a kind of utilitarian-economical approach called the restricted Pareto principle (RPP). "In collisions where 1) harm to at least one person is unavoidable and 2) a choice about how to allocate harm between different persons is required, then if there exists a unique Pareto efficient allocation of harm across different persons, then other things being equal, programming a driverless car to bring about the Pareto efficient allocation of harm is justified." 48 In a way, RPP can be a plausible reading of necessity as well; moreover, according to the theorist, RPP is a special decision-rule which can be acceptable to at least three moral theories (utilitarianism, contractualism and the deontological approach). 49
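Keeling's quoted rule lends itself to a small sketch: an allocation of harm is Pareto-dominated if another allocation harms no one more and at least one person strictly less; the Pareto-efficient allocations are those not dominated by any alternative, and RPP applies when exactly one such allocation exists. Representing harm as a single number per person, and all names below, are assumptions of this illustration, not Keeling's own formalism:

```python
# A toy illustration of the restricted Pareto principle (RPP).
# Representing harm as one number per person is an assumption of this
# sketch, not Keeling's formalism.

def dominates(a, b):
    """True if allocation `a` Pareto-dominates `b`:
    no one is harmed more under `a`, and someone is harmed strictly less."""
    return all(a[p] <= b[p] for p in b) and any(a[p] < b[p] for p in b)

def pareto_efficient(allocations):
    """Return the labels of allocations not dominated by any alternative."""
    return [name for name, harm in allocations.items()
            if not any(dominates(other, harm)
                       for o, other in allocations.items() if o != name)]

# Harm to persons P1 and P2 under three candidate programmings of the car:
candidates = {
    "A": {"P1": 5, "P2": 5},
    "B": {"P1": 2, "P2": 5},   # dominates A: no one worse off, P1 better off
    "C": {"P1": 2, "P2": 3},   # dominates both A and B
}
print(pareto_efficient(candidates))  # prints "['C']"
```

Here the efficient set is a singleton, so under RPP, other things being equal, programming the car to bring about allocation C would be justified; when several efficient allocations exist, the rule is silent.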

Conclusion
In this short essay I have tried to present our near future's most interesting problem. In a way, the dilemma is a classical one, because the "morals vs law" question is an old and eternal topic – and autonomous vehicles provide a new reading. The new trolley problem should have a satisfying solution, accepted both by industry and by jurisprudence.
Regarding the nature of the dilemma, it seems our thought-experiment is restricted to a "mathematical example", because the question will be answered with algorithms – but of course, it is not just an easy, mechanical question: it raises very serious moral and legal aspects, and nothing is black or white, but "grey". 50 I think this problem is connected with the idea of law. For thousands of years, philosophers and theorists of law have been arguing about whether mathematics or argumentation is the more appropriate approach. In the past few decades, argumentation was valued more highly, but in the era of AI, mathematics may occupy its throne again. What if this old-new "legal trend" does not really consider the problem's moral aspect? How will law regulate these serious problems without ethics?