Who Acts When Autonomous Weapons Strike? The Act Requirement for Individual Criminal Responsibility and State Responsibility

Abstract

This essay examines the theories according to which ‘actions’ carried out by autonomous weapon systems enabled by strong artificial intelligence in detecting, tracking and engaging with the target (‘intelligent AWS’) may be seen as an ‘act’ of the weapon system for the purpose of legal responsibility. The essay focuses on the material act required for the commission of war crimes related to prohibited attacks in warfare. After briefly presenting the various conceptions of the act as an essential component of the material element of criminal offences, it argues that the material act of war crimes related to prohibited attacks is invariably carried out by the user of an ‘intelligent AWS’. This also holds true in the case of so-called ‘unintended engagements’ during the course of a military attack carried out with an intelligent AWS. The essay moves on to examine the question of whether, in the case of the use of intelligent AWS by the armed forces of a state, the ‘actions’ of intelligent AWS — including those not intended by the user — are attributable to the state. It demonstrates that under a correct understanding of the concept of ‘act of state’ for the purpose of attributing state responsibility under international law, such attribution is unquestionable. It underlines that suggesting otherwise would bring to a breaking point the possibility of establishing violations by states of international humanitarian law in the conduct of hostilities.

1. Introduction

Much has been written on the issue of the responsibility gap associated with the development of autonomous weapons systems (AWS). The debate, though presented in general terms,1 more specifically concerns weapon systems that are enabled by ‘strong’ artificial intelligence2 intended for targeting, i.e. detecting, tracking and engaging with the target (hereinafter ‘intelligent AWS’). These weapon systems, once activated, operate (or can operate) without the supervision or control of the user in performing their assigned tasks and functions. Weapon systems of this type are still at an early stage of development, given the difficulty of ensuring that they can be used in a manner compliant with the relevant rules of international humanitarian law.3 Owing to the specific characteristics of their algorithms, which are based on self-learning methods, the way in which such a system performs its assigned tasks and functions cannot be fully predicted by the programmer or user. Intelligent AWS that present a high risk of unpredictability in the execution of crucial functions in the targeting cycle could therefore be indiscriminate, and thus prohibited by international humanitarian law.4 For instance, their use might not guarantee compliance with the principle of distinction, which under the international humanitarian law rules on the conduct of hostilities requires the parties to an armed conflict to distinguish at all times between civilians and combatants, or with the other relevant rules on prohibited attacks.5

Currently, weapon systems enabled by ‘strong’ artificial intelligence include certain types of loitering munitions (also known as ‘suicide’ or ‘kamikaze’ drones), which have been used in many recent and ongoing conflicts.6 Further developments in the area of intelligent AWS cannot be ruled out, and these would extend the scope and capabilities of systems such as intelligent loitering munitions.

Generally speaking, the difficulties that may arise with regard to the attribution of responsibility for harm caused by systems enabled by strong artificial intelligence are manifold and concern many fields of law, owing to the progressive development of such systems in numerous fields of human activity.7 The crux of the matter is the impossibility of assigning responsibility to the programmer or user, whether culpable or malicious, for harm caused by such systems, given the inherent unpredictability of the way in which the system performs the functions and tasks assigned to it.8 As far as weapons are concerned, the issue has been addressed mainly with respect to the responsibility gap that would arise from so-called ‘unintended engagements’ in armed conflict,9 namely from military attacks carried out with AWS resulting in harm to persons or objects not intended by the human operator but caused by a failure of the system.10

Here the debate centres primarily on the possibility of criminal responsibility gaps in respect of war crimes, which, as is well known, consist of serious violations of international humanitarian law. Such crimes can only be committed during an armed conflict and must present a nexus with the conflict. In principle, therefore, if an intelligent AWS does not operate in compliance with international humanitarian law because of a failure of the system, the programmer of the AWS cannot be responsible for war crimes. The programmer’s activities usually take place in peacetime and are not linked to a specific armed conflict in which the weapon system could be used.11 A potential avenue for the criminal responsibility of the programmer when there is a failure of the system during the use of an intelligent AWS in hostilities might be to ‘individualise’ the obligation enshrined in Article 36 of the 1977 First Additional Protocol to the 1949 Geneva Conventions on the protection of the victims of warfare (hereinafter ‘First Additional Protocol’). That article states that ‘[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare’, States Parties to the First Additional Protocol have an obligation to determine ‘whether its employment would, in some or all circumstances, be prohibited by [the] Protocol or by any other rule of international law applicable to the High Contracting Party’. If one conceptualizes the obligation under Article 36 as an obligation that is also addressed to the individuals involved in the process of studying, developing, acquiring or adopting a new weapon, its serious violation could give rise to their criminal responsibility for the failure to determine whether the new weapon is unlawful.

Leaving aside this hypothesis, which remains to be verified and explored, the problem of the responsibility gap with respect to war crimes due to unintended engagements of intelligent AWS has always centred on the user (be it the operator or the military commander who decides on the use of the weapon) for the war crimes related to prohibited attacks. The debate has focused particularly on the mens rea required for the commission of such crimes.12 However, given the nature and degree of autonomy of intelligent AWS, the question also arises as to whether prohibited attacks caused by a failure of the system can be considered an act of the user for the purpose of establishing the actus reus of a war crime. A similar issue arises if one moves up to the level of the international responsibility of the state party to the armed conflict using the intelligent AWS. Under the default regime, state responsibility under international law arises if the state has committed an internationally wrongful act. In turn, this requires that an act of the state is in breach of an international obligation incumbent on that state. Since such an act of the state exists where the conduct of a person or group of persons is attributable to that state, can one attribute to the state prohibited attacks caused by a failure of the system and not by human conduct?

This article will approach these two issues in turn.

2. ‘Actions’ of the Intelligent AWS and the Material Act of War Crimes

It is well known that, in modern criminal systems, the essential constituent elements of the offence are: the external, or objective, element, called the actus reus in the Anglo-American tradition; and the subjective element, i.e. the mental state of the potentially responsible subject required by the criminal law. According to the conventional theory, the actus reus consists of an essential component, namely the act required to constitute the offence (which can also be an omission), and it may also require circumstances attending the act and a result of the act. This apparently simple assertion actually conceals very complex issues, as demonstrated by the long-standing debate in criminal doctrine on the general theory of the offence and, as far as we are concerned here, on what is to be understood by an act.13

Without going into the details of this intricate debate, it is worth exploring a well-established aspect, or assumption, of it: that modern criminal law systems and concepts of criminal responsibility are built around human actions and volition.14 This is axiomatic with respect to the criminal responsibility of natural persons. However, it is also indirectly true in the case of the criminal responsibility of legal persons. The theoretical explanations put forward for the responsibility of the latter are in fact based either on imputing to the entity the acts and volitions of the natural persons acting on its behalf, or alternatively on the criminogenic or improper organization of the entity, which is, in any case, the result of the human acts and volitions that created it.15

A. Criminal Theories of the Act and Action of Artificial Intelligence Systems

In doctrine, there are those who theorize that artificial intelligence systems can be considered criminally responsible subjects,16 rejecting the thesis expressed in the maxim machina delinquere (et puniri) non potest (machines cannot commit crimes and cannot be punished).17 Regarding the actus reus, this doctrine does not hesitate to assert that the tasks accomplished by such systems are comparable to a human act for the purposes of criminal responsibility.18 The premise underlying such assertions is a purely materialistic conception of the criminal act, which disregards any connection with its voluntariness. Indeed, it is argued that for the purposes of criminal responsibility, the act is simply the ‘material performance through factual-external presentation, whether willed or not’.19 Accordingly, it is affirmed that ‘artificial intelligence technology is capable of performing “acts”, which satisfy the conduct requirement’ and that ‘[t]his is true not only for strong artificial intelligence technology, but for much lower technologies as well’.20 Therefore:

[w]hen a machine, (e.g., robot equipped with artificial intelligence technology) moves its hydraulic arms or other devices of its, it is considered an act. That is correct when the movement is a result of inner calculations of the machine, but not only then. Even if the machine is fully operated by human operator through remote control, any movement of the machine is considered an act.21

The materialistic conception of the act stands in antithesis to the traditional theory according to which the criminally relevant act is a bodily movement or non-movement based on the will of the person, also understood as an expression of the freedom of self-determination of that person and of their lordship over their body.22 Those who follow this traditional approach categorically deny that artificial intelligence systems, including those capable of acting in physical space on the basis of self-learning algorithms, can perform a criminally relevant act.23 This is because their actions would still originate from a self-learning algorithm established by the programmer and developer, who would therefore remain the ‘real’ drivers of the action.24 The unpredictability of the choices and actions of the intelligent system is therefore understood as an unpredictability produced by this algorithm, and not as a manifestation of intelligent action. In other words, an artificial intelligence system ‘does not act, but it is acted upon’.25

The same conclusion, namely that the actions of artificial intelligence systems cannot be considered acts in the sense of criminal law, is also reached if one adheres to the theory that an act is any conduct that has an impact in the social sphere.26 As has rightly been observed, current artificial intelligence systems ‘are still too young to have gathered … [the] “critical mass” of social meaning and importance’ necessary for the recognition of their own and independent action.27

In any case, the possibility of considering the action of an artificial intelligence system as an act of the system itself — an essential prerequisite for making the system subject to criminal law — seems to be confined to theoretical debate. As things stand, when a fact of criminal relevance results from the action of intelligent systems, it is the user’s responsibility that is at stake — at least in situations where the user is required to exercise direct supervision over the system’s operation. For example, in the United States, the driver of a Tesla travelling in Autopilot mode was convicted over a 2019 car accident in a suburb of Los Angeles that caused the death of two people in a Honda Civic.28 This was the first prosecution in the United States for vehicular homicide caused by a car travelling autonomously. It seems, however, that this circumstance was not relevant in determining the criminal responsibility of the Tesla driver, precisely because the driver is still required to be vigilant and Autopilot is marketed by Tesla as a driver assistance system.29 If one considers that autopilot technology of this kind is also present in some types of autonomous weapon systems currently in use (e.g. loitering munitions such as Switchblade drones), one realizes that the use of the technology in question is not necessarily ‘safer’, but merely more convenient for the user (at least until something goes wrong).30

B. Unintended Engagements of Intelligent AWS and War Crimes Related to Prohibited Attacks

Having set aside the thesis that the action of the intelligent system is an act of the system itself for criminal law purposes, one can now examine whether unintended engagements of an AWS resulting in prohibited attacks are acts of the user of the weapon system for the purpose of the actus reus of the relevant war crimes.

To this end, a few clarifications are in order. First, precisely because we are discussing the use of weapons in the conduct of hostilities, and assuming that these weapons do not by their nature operate indiscriminately, the potentially relevant war crimes are those relating to prohibited attacks. These crimes penalize attacks that are unlawful under international humanitarian law, such as attacking civilians (in violation of the principle of distinction) or attacking a military target causing civilian collateral damage disproportionate to the anticipated military advantage (in violation of the so-called principle of proportionality).31 The actus reus of these war crimes is formulated differently in the relevant international instruments. In particular, these crimes are formulated as crimes of conduct in Article 8 of the Rome Statute establishing the International Criminal Court,32 while Article 85(3) of the First Additional Protocol to the Geneva Conventions includes the occurrence of a harmful event (causing death or serious injury to body or health).33 Leaving aside this important difference,34 what is common to all descriptions of the actus reus of crimes of unlawful attacks is the act itself (directing an attack/making an attack/launching an attack)35 and the need for certain circumstances to be present. One circumstance common to all war crimes, as is well known, is that the act was committed in the course of an armed conflict and is associated with it (the so-called ‘nexus’).36 The other circumstances vary depending on the crime. In most cases, it is required that the objects of the attack are persons who under international humanitarian law enjoy immunity from military attack (such as civilians), in keeping with the principle of distinction. A more complex formulation is the one that requires the attack against a military target to have caused death or injury among civilians, or damage to the natural environment, disproportionate to the expected military advantage. The latter war crime concerns attacks in violation of the principle of proportionality, which permits attacks against military targets causing incidental harm to civilians and other persons and property immune from attack only if that harm is not excessive in relation to the anticipated military advantage. Military attacks in violation of this principle (i.e. attacks against military targets that cause incidental damage disproportionate to the anticipated military advantage) fall into the category of attacks of an indiscriminate nature.37

For criminal law purposes, when describing the material element of the offence, it is important to distinguish between the act and the circumstances that may be required for its criminality, because the subjective element required for the commission of the offence may differ in those regards.38 For example, in interpreting the war crime related to the prohibition of attacking civilians (as a serious breach of the First Additional Protocol), the International Criminal Tribunal for the former Yugoslavia (ICTY) seems to have required intentionality only with respect to the act (making an attack). In contrast, it held that recklessness is sufficient with respect to the circumstance (i.e. that civilians were the object of the attack).39 Other courts have followed this approach,40 deciding that attacks of an indiscriminate nature other than those consisting in disproportionate attacks are punishable as war crimes of directing an attack on civilians, and can thus be charged even without intention to make civilians the primary object of the attack.41 In contrast, the Elements of Crimes (to be applied by the International Criminal Court) state that intentionality is required with respect to both the act (directing/launching the attack) and the circumstance (the object of the attack, e.g. civilians).42 It thus appears that, absent intent to attack civilians (or other persons or objects immune from attack), the International Criminal Court can prosecute as war crimes only indiscriminate attacks consisting in a violation of the principle of proportionality, as these are expressly covered by its Statute when committed in the context of an international armed conflict.43 Other attacks of an indiscriminate nature, such as an attack not specifically directed against a military target, would not seem to be prosecutable as war crimes.44 They lack an express legal basis and cannot amount to directing an attack against civilian targets (or other persons or property immune from military attack).45

For our purposes, it is important to note that it is the one who employs an intelligent AWS for a military attack who performs the act (directing/making/launching an attack), even if the identification, selection and engagement of the target is carried out by means of the algorithm embedded in the system. In the event that the circumstance necessary for the criminalization of the act materializes (for the war crime of attacking civilians, the fact that the latter are targeted), it is of no relevance to the actus reus that this is a consequence of a failure of the AWS.46 This applies to military attacks conducted with any kind of conventional weapon. If, due to a failure of the weapon, the attack does not strike the intended military objective, but instead strikes civilians or other persons and property protected from military attack,47 the existence of the material act of the offence cannot be called into question. The crucial issue will mainly concern the presence of the required subjective element with respect to the materialization of the circumstance required for the criminality of the act48 and, more generally, the ‘culpability’ of the user.49

3. Use of Intelligent AWS as an Act of the State and an Element of an Internationally Wrongful Act

A. Attribution to the State of Unintended Engagements Caused by Intelligent AWS

Usually, the attribution to the state of unintended engagements resulting from military attacks carried out with intelligent AWS does not raise specific challenges. If members of the armed forces of a state employ a weapon system, the rule codified in Article 4 of the Articles on State Responsibility of the International Law Commission (‘ILC Articles on State Responsibility’) applies. For the purpose of identifying the existence of a wrongful act of the state, this rule permits attribution to the state of the acts (whether actions or omissions) of persons or groups of persons who are organs (de jure or de facto) of the state.50 The armed forces of a state are typically de jure organs of that state. Thus, whatever the weapon employed by the armed forces, the military attack is attributable to the state. If the military attack is conducted in violation of the relevant rules of international humanitarian law, then there will be an internationally wrongful act of the state.51 The state will therefore be internationally responsible for the unlawful act committed,52 unless there exists a circumstance precluding wrongfulness.53

It should also be considered that the rule of attribution enshrined in Article 4 is ‘reinforced’ in international humanitarian law, including with regard to the responsibility of the state party to the conflict arising from violations of the rules on the conduct of hostilities. Indeed, Article 91 of the First Additional Protocol states that a party to the conflict ‘shall, if the case demands, be responsible to pay compensation’ and that ‘[i]t shall be responsible for all acts committed by persons forming part of its armed forces’.54 This rule, read in conjunction with Article 7 of the ILC Articles on State Responsibility, makes it possible to attribute to the state all acts committed by persons who are members of its armed forces, including acts committed ultra vires as an organ of the state. According to some commentators, however, this rule would go even further than Article 7 and would also allow all acts committed by members of the armed forces acting in their private capacity to be attributed to the state party to the conflict.55 The reason, as was well explained by Kalshoven, is that ‘members of an armed force at war stand a greater chance than do other State organs of becoming entangled in ambiguous situations where it may be unclear whether they are acting in their capacity as an organ of the State’.56 If one accepts this interpretation, it is clear that Article 91 of the First Additional Protocol broadens the sphere of attribution to the state of acts committed by members of the armed forces in their capacity as private individuals. This is also possible by virtue of the ILC Articles on State Responsibility, which in Article 55 provide for the applicability of lex specialis rules in respect of the responsibility of states.57

However, in a recent study, Boutin has claimed that the attribution to the state of military attacks carried out with intelligent AWS raises particular challenges.58 On the assumption that these systems are ‘independent and endowed with a degree of autonomous agency’, ‘the link to any human conduct [would be] too vague and weak to ground attribution of conduct’ to the state.59 Instead, it would be possible, for the purposes of attribution, to conceptualize these systems as operating under the direction and control of the state within the meaning of Article 8 of the ILC Articles on State Responsibility.60 This would also allow the attribution to the state of the ‘ultra vires’ acts of the system within the meaning of Article 7 of the ILC Articles.61

The approach followed by Boutin resonates with the writings of other commentators in related fields. For instance, regarding the attribution to the state of violations of jus ad bellum through cyber-attacks, it has been argued that the ‘advent of true autonomous agents could really require new interpretations or new formulations’ with respect to the question of ‘agency’, i.e. the attribution of individual conduct to the state.62 Accordingly, the development of autonomous agents would exacerbate the attribution issues already inherent in cyber-attacks, due to the autonomy of the decisions of the systems employed, which would make command and control by the competent persons and bodies ‘hard to achieve’.63 Therefore, so the argument continues, ‘[t]heoretically, a true autonomous agent could exceed its assigned tasks and engage in what could legally be defined as “use of force”’, and one may therefore wonder whether ‘in this case, … the nation state behind the agent’s creation [should] be deemed responsible’.64 The solution suggested is to consider the possibility of recognizing autonomous agents per se as ‘state agents’ for the purposes of attribution and targeting (thus adopting criteria to distinguish whether they are ‘civil’ or ‘military’: e.g. for software bots, ‘through mandatory signatures or watermarks embedded in their codes’).65

However, these and similar propositions are not fully convincing. Let me first briefly summarize the view put forward by Boutin. I will then show that it rests on an incorrect understanding of the notion of an act of the state for the purposes of international responsibility, as accepted in the ILC Articles on State Responsibility.

B. The Alleged Need for Causality between Human Conduct and Breaches of International Law

The assertion by Boutin that the use of intelligent AWS resulting in breaches of international law may render inoperative, for the purposes of international state responsibility, the rules on the attribution of acts of persons to a state66 is based on a twofold premise. The first is that the rules of attribution of a wrongful act to a state ‘unequivocally [hinge] upon actions or omissions by human beings’ and that ‘the existence of human conduct is therefore a precondition for state responsibility’.67 The second premise is that, for attribution to occur, there must be ‘a causal link between acts or omissions by a human being and the occurrence of a breach of international law’.68 Given the characteristics of systems that make use of artificial intelligence (autonomy, opacity, unpredictability), so the argument goes, it is therefore necessary to establish which human conduct is relevant for attributing to the state the wrongful act caused by the use of such systems.69

Accordingly, in discussing the issue of attribution in relation to the use of artificial intelligence systems in the military field, Boutin presents various scenarios to determine in which cases such a causal link exists. The first scenario is where the system operates under the direct and genuine control of a human operator at a tactical level. In this case, Boutin argues, there would be no attribution problem: if the operator’s conduct is attributable to the state under one of the rules in the ILC Articles on State Responsibility, the operator’s action or omission would directly link the state to the occurrence of a breach of international law.70 Likewise, for Boutin there would be no attribution problem in a second scenario, namely in the case of an AWS that, once activated, operates autonomously but whose decisions the operator can ‘override’. Boutin argues that in this scenario the human conduct relevant to attribution would not be that of the operator, since the latter would have only a limited ability to ‘abort’ the attack conducted by the system. The causal link with the occurrence of a breach should instead be found in the conduct of the political and military decision-makers who authorized and established the parameters for the use of systems operating in autonomous mode once activated. This in turn would allow the breach of international law resulting from the use of these almost fully autonomous systems to be attributed to the state.71 According to Boutin, there would also be no attribution problems in a third scenario, which is when an autonomous system merely aids decision-making by a human operator. In this scenario, the system provides information and/or makes recommendations, but it is the operator who decides whether to act in accordance with them. However, so Boutin claims, it would be difficult — if not impossible — for the operator to decide differently from the information or recommendations acquired, which would imply a tenuous link between the operator’s conduct and the occurrence of a breach of international law. Instead, she claims, it would be possible to identify such a causal link in the conduct of those in the chain of command who decided to employ the decision-support system in question, as these persons are in a position to assess the appropriateness of using the system, the degree of operator control necessary in the circumstances, and so on.72 As previously mentioned, according to Boutin, problems of attribution would arise in a fourth scenario: the use of intelligent AWS capable of operating without the possibility of operator intervention. Boutin posits that in this scenario the causal link between the human conduct — evidently consisting only in having deployed the weapon system — and the occurrence of a breach of international law, caused by an autonomous activity of the weapon system, would be too tenuous to allow attribution to the state.73 This is why the solution she suggests for the purpose of attribution consists in conceptualizing these systems as operating under the direction and control of the state within the meaning of Article 8 of the ILC Articles on State Responsibility.

Paradoxically, this line of reasoning reveals its weakness precisely because it succeeds in concluding in favour of attribution to the state of breaches of international law caused by the operation of the autonomous systems in all scenarios except the fourth. In particular, the attribution of the relevant human conduct to the state in the second and third scenarios seems to be based on a very broad understanding of causality (the human decision to use the intelligent system has brought about the result). This (implicit) understanding of causality, however, could also be applied to the fourth scenario, and one fails to understand why this should not be the case.74 However, if causation for the purpose of attribution is so broadly understood, it is an unnecessary criterion in all cases where there is an activity of a person whose acts are attributable to the state. Rather, the challenge would be to establish causation where a failure to act (omission) results in a violation of international law. Partly because of these challenges, the International Law Commission decided to exclude any requirement of causation between the conduct of persons and a breach of international law for determining the existence of an act of state in the area of state responsibility.75

C. The Act of the State in the ILC Articles

The work of the ILC confirms that the attribution of acts of persons or groups of persons to a state does not require that there be causality between the act and the occurrence of a breach of international law. Indeed, the then Special Rapporteur Roberto Ago, on whose reports the formulation of the attribution rules was based, stated that the attribution of the act of persons or groups of persons to a state is a ‘normative’ operation, which has ‘nothing to do with a link of natural causality or with a link of “material” or “psychological” character’.76 In support of this assertion, Ago quoted in a footnote to his report the opinion of some scholars, including Anzilotti, according to whom: ‘Legal imputation is … clearly distinguishable from causal relationship; an act is legally deemed to be that of a subject of law not because it has been committed or willed by that subject in the physiological or psychological sense of those words, but because it is attributed to him by a rule of law’.77

The assumption underlying this theoretical approach is that the state, as a subject of international law, is not ‘merely an abstract idea or a figment of the imagination’ but is instead a ‘real entity, in municipal law as well as in international law’.78 At the same time, however, the state, ‘as a legal person, is not physically capable of conduct’, and ‘it is obvious that all that can be attributed to a State is the action or omission of an individual or a group of individuals, whatever their composition may be’.79 Hence ‘there are no activities of the State which can be called “its own” from the point of view of natural causality as distinct from that of legal attribution’.80

The position taken by Ago was accepted by the ILC. The commentary to Article 3 of the ILC Articles on State Responsibility indeed states that ‘[t]he attribution of conduct to the State as a subject of international law is based on criteria determined by international law and not on the mere recognition of a link of factual causality’.81 It goes on to clarify that the attribution rules formulated by the ILC only establish which conduct is to be considered an ‘act of the state’ for the purposes of its international responsibility, but in themselves have no relevance for establishing the unlawful nature of that conduct.82

In essence, the attribution rules formulated by the ILC, following the approach proposed by Roberto Ago, are not based on the assumption that the conduct of a person or group of persons must have caused a breach of international law for that conduct to be considered an act of the state. As one commentator has rightly pointed out, this would in fact introduce a distinction between an ‘event contrary to international law’ and an ‘internationally wrongful act’ of which there is no trace in the ILC Articles on State Responsibility.83 As has been observed in the doctrine, the basis of the attribution rules in the ILC Articles on State Responsibility is in fact ‘functional’ (and not causal) in nature. In other words, the conduct of persons or groups of persons is attributed to the state when there is a connection between the conduct and the functions of the state, since the state — conceived as an organization — can only act through persons or groups of persons.84 For the purposes of attribution, therefore, it does not matter whether the persons caused the breach; what matters is that they acted in the performance of a function of the state.

If one follows this approach, then it is clear that, with respect to the use of any intelligent system (including AWS), the question raised by some authors as to the problems that might arise in attributing the actions of such systems to the state appears to be beside the point. If the systems in question are deployed by persons whose conduct is attributable to the state according to the criteria formulated by the ILC, it is of no relevance whether the occurrence of the breach is causally traceable to an ‘autonomous’ action of the system.

4. Conclusions

This article has clarified that systems enabled by so-called ‘strong’ artificial intelligence, however capable they may be of operating autonomously from the user and of performing actions that the user cannot foresee, are still only ‘tools’ of the user. This is particularly true for intelligent AWS. However technologically advanced they may be, under international humanitarian law such weapons are still only means of warfare, namely means that the parties to an armed conflict may use (under certain conditions) to conduct hostilities.

With regard to criminal liability for war crimes in the case of ‘unintended engagements’ resulting from the use of intelligent AWS, we are still far from being able realistically to foresee criminal liability of the systems themselves. If this solution were to be reached in the future, it would not be because of the impossibility of regarding the user of the weapon system as the author of the material act constituting the relevant war crimes. These crimes are in fact those generically referred to as ‘unlawful attacks’, the material act of which is to direct, make or launch a military attack. Natural persons unquestionably carry out this act, whatever means of warfare they have chosen to use. Crimes of ‘unlawful attacks’ also invariably require various circumstances to be present for the act to be criminal. The person who directed, made or launched the attack using an intelligent AWS may intend the materialization of such circumstances, or it may be the result of a failure of the weapon system. All this, however, relates to the sphere of the subjective element of the offence, which in some formulations consists of intent and in others includes recklessness. Even under the most demanding formulations of the subjective element as to the circumstances, it would be too hasty to claim that an ‘unintended engagement’ resulting from the use of an intelligent AWS is always ‘unintended’ from a criminal law point of view. For instance, there could be intent on the part of one who conducts a military attack with an intelligent AWS that has previously produced ‘unintended engagements’. Finally, yet to be explored is the possible criminal liability of the programmers and developers of intelligent AWS in light of the obligation to verify the potential unlawfulness of new means and methods of conducting hostilities enshrined in Article 36 of the First Additional Protocol.

For the purposes of the international responsibility of the state for violations of international humanitarian law, the question of attributing the ‘unintended engagement’ of an intelligent AWS to the state that used the weapon simply does not arise. One reaches this conclusion in light of the concept of ‘act of the state’ as it relates to the international responsibility of the state for internationally wrongful acts. Furthermore, it bears reiterating that under international humanitarian law the rules on prohibited attacks directly bind the belligerents and parties to the conflict.85 Belligerents and parties to an armed conflict must therefore comply with these prohibitions in all circumstances, irrespective of the type of weapon used in the course of a military attack, including where the attack is carried out with an intelligent AWS.86 Unintended engagements in the course of the attack, whatever their cause, are therefore attributable to the state that directed and launched the attack. To doubt this by asserting that, in light of the specific technology built into a weapon system, the unintended engagement might not qualify as an act of the state that used the weapon for the purposes of its international responsibility is — in my opinion — a dangerous intellectual exercise.

The absence of precise international obligations of transparency and information on how states conduct their military operations is already a matter of serious concern for those who would like to ensure that violations of the applicable rules can be investigated.87 Adding the ‘black box’ of systems enabled by strong artificial intelligence88 to the ‘black box’ resulting from the lack of transparency and information on the conduct of hostilities would further weaken such possibilities. States that use intelligent AWS could argue that prohibited military attacks are the result of an operation by intelligent software, which cannot be explained or predicted by the user and therefore cannot be attributed to them. Even to contemplate such a possibility is unacceptable.

As is well-known, technological advances — including those in the field of armaments — risk making existing legal regulations obsolete. This is also true in the field of artificial intelligence. In the interpretation and application of existing law, and in devising future regulations, it is necessary to remain anchored in human intelligence.

Acknowledgement

This essay is published as part of the research project that I have led, entitled ‘Lethal Autonomous Weapon Systems and War Crimes: Who Is To Bear Criminal Responsibility for Commission?’ (Project 10001C_176435), and funded by the Swiss National Science Foundation (SNSF). I would like to thank Guido Acquaviva, Claus Kreß and Thomas Weigend for providing detailed and prompt feedback on an advanced draft of this paper.

© The Author(s) (2024). Published by Oxford University Press.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
