Over the years, Hollywood has offered many movies depicting warfare between humans and Artificial Intelligence (AI), such as I, Robot, The Matrix and The Terminator, many of them featuring killer robots and apocalyptic themes. These films serve as a prescient warning: Tesla’s Elon Musk recently called for the United Nations to ban the development and use of killer robots, emphasising just how dire the moral and ethical ramifications of developing such powerful technology are. In Michael Bay’s Transformers: Age of Extinction (2014), a multinational corporation – Kinetic Solutions Incorporated (KSI) – develops a new line of human-made, remote-controlled transformers built to replace humans in dangerous situations. These transformers gain sentience and begin to act autonomously. When deployed in the field, they slice through cars, cause massive explosions that harm innocent civilians and ignore the commands of their human creators.
This raises the question of who would take down these robots and save the day. Given the uneasy portrayal of robots that could potentially usurp humans, one would naturally expect the eventual hero who destroys these robots to be the opposite extreme – a human – as this would symbolically show how man achieves redemption by destroying his harmful creation. Conventionally, most action movies portray a fight between man and machine to depict how humanity heroically regains control and restores the natural hierarchy with itself on top. Furthermore, audiences typically would not expect AI to destroy itself. Yet this film subverts such norms by presenting the audience with an unexpected hero – Optimus Prime, another AI system. Why then does the text portray a battle between two AI systems, one good and one bad, instead of a conventional battle between an AI system and a human? Through the portrayal of two distinct robot entities governed by strikingly different moral codes, it is evident that AI, as a broad category, does not operate in a manner that is entirely evil. Instead, within this category there exist AI systems with an affinity for doing good, although external factors may hinder such righteous acts. Thus, I posit that while AI has a predilection towards empathy and righteousness, certain limitations prevent it from truly understanding what the role of a “moral agent” entails. Hence, there exists an ethical dilemma in granting AI the right to be “moral agents.” In a journal article entitled On the Morality of Artificial Agents, “An agent is said to be a moral agent if and only if it is capable of morally qualifiable action. An action is said to be morally qualifiable if and only if it can cause moral good or evil.” (Floridi & Sanders, 2004, p. 364) As AI starts to play a greater role in the working environment, especially in finance, medical science and heavy industries, it is important to determine whether we can trust AI to make nuanced, morally sound decisions in the midst of such volatile environments.
To understand how Bay highlights to the audience that there is inherent righteousness in AI systems as a whole, I will be using this journal article as my lens text, which provides a framework comprising three conditions:
- The ability to respond to environmental stimuli (interactivity)
- The ability to change their states according to their own transition rules and in a self-governed way, independently of environmental stimuli (autonomy)
- The ability to change according to the environment the transition rules by which their states are changed (adaptability)
This text suggests that if all three conditions are met, AI can be considered moral agents who can distinguish between right and wrong and ultimately act rationally and virtuously. These conditions are important as they form the criteria by which we can understand AI and its unique characteristics. Once we establish these key characteristics, we will be able to identify how inherent righteousness exists in AI. Specifically, inherent righteousness refers to the empathy that AI possesses towards humanity.
The AI systems in the movie are able to interact with and respond to the environment in an appropriate manner. They show human emotions, which highlights that, like the humans they are modelled after, they possess inherent righteousness. When one of Cade Yeager’s friends is killed in an explosion, Prime sorrowfully says, “My deepest sympathies for the loss of your friend.” (Bay, 2014) Not only has Prime interacted with the environment (i.e., by dodging the explosion), but he does what any decent human being would do in such a situation: extend his condolences. By highlighting how Prime exhibits socially desirable conduct, Bay blurs the line between robot and human, as robots are portrayed as not only capable of behaving like humans, but even of feeling human emotions like sympathy, which is unconventional in a typical AI system. Using my lens text as a reference, it is evident how Prime “acted interactively, responding to the new situation [he was] dealing with, on the basis of the information at [his] disposal.” (Floridi & Sanders, 2004, p. 363) He witnessed the death of an innocent human and did the right thing by responding appropriately, which tugs at the heartstrings of the audience as they see AI systems in a new light – as beings capable of compassion and understanding. In fact, throughout the Transformers franchise, Optimus is always portrayed as a heroic, fatherly figure.
However, there exist limitations in Prime’s ability to interact and act righteously in every circumstance. On the battlefield, where there is the most interaction, it is almost impossible for even the most virtuous AI like Prime to make decisions that will be righteous in the long run, since he is so focused on winning the current battle between good and evil. For example, Prime may not necessarily be able to determine what constitutes an unnecessarily destructive and excessive use of force, the second condition of the jus in bello conventions. Knowing how to respond requires an AI system to possess extensive knowledge and understanding of humanity, including the ability to interpret and anticipate the actions of human beings. This would be impossible for Prime, given that he is an alien AI system unfamiliar with the norms of humanity. Furthermore, he would have to assess the extent to which such a battle would achieve a definite military advantage before we could conclude that he truly possesses inherent righteousness. This requires an understanding of the balance between opposing forces in a fight – the ability to anticipate the probable responses of the enemy under various threats and circumstances, and an awareness of the wider strategic and ethical consequences. Unfortunately, most AI systems are programmed to take the most logical, most expedient action without wavering, which causes them to deviate from the morally virtuous course of action. Hence, although there exists inherent righteousness in Prime, granting such AI the ability to interact freely in our human world is problematic, as it is highly unlikely that Prime has the deep level of understanding that is required.
The autobots in the movie autonomously and independently choose to act in a morally virtuous manner. In the film, they do not succumb to the pressure of evil forces, which highlights their inherent drive to act righteously. When Cemetery Wind captures Ratchet, an autobot ally, and demands that he “tell [them] where Optimus Prime is,” he remains resolute and loyal to his leader, exclaiming “Never!” (Bay, 2014) and refusing to disclose any information on Prime’s whereabouts. Ratchet is subsequently shot in the head by Lockdown – an evil alien bounty hunter who is also an AI system – together with the humans of Cemetery Wind.
With reference to my lens text, Ratchet did act “autonomously: [he] could have taken different courses of actions, and in fact we may assume that [he] changed [his] behavior several times in the course of the action, on the basis of new available information.” (Floridi & Sanders, 2004, p. 363) This is indeed true, as the movie highlights how Ratchet’s perspective of humans shifts – he initially treats them as allies in the movie’s previous installment, yet he now avoids them, seeing humanity as brutal monsters, crying out and begging them to “please hold [their] fire,” as he is their “friend.” (Bay, 2014) Although he must have felt betrayed that his former human allies are now firing on him, he still chooses not to fight back, ultimately sacrificing his own life to preserve the lives of his human murderers. In establishing such an emotional connection to good AI like Ratchet, the text highlights that there is inherent righteousness in AI, as there will always be good AI that cherishes human lives and stands up against those with twisted, corrupted values.
However, even though AI can autonomously make and carry out the right action, other important factors influence the decision of what the right course of action truly is. Such factors include the ability to assess the likelihood of collateral damage and thus the extent to which a particular attack would satisfy the jus in bello requirement of proportionality. AI systems need to be able to reliably identify and separate civilian targets from potential military targets. Hence, it is not sufficient for AI systems simply to identify and track armed forces; they must also be able to identify and track innocent civilians, so that there would not be an unacceptably high number of civilian casualties. This is an obvious flaw among the autobots in the film, as they are so preoccupied with taking down the evil decepticons that they engage in a battle in the midst of the bustling Hong Kong metropolis (Bay, 2014).
Although this battle between good and evil sees the good autobots emerge victorious, their victory comes at the cost of human lives, infrastructure and the environment. This is a clear reason why it is ethically problematic to grant AI the right to be “moral agents”.
Optimus Prime shows his inherent righteousness in his capacity to adapt to the hostile environment he finds himself in. He makes the decision to go into hiding instead of outright attacking humans, which would have cost lives. Although he is a powerful AI system with multiple rockets and cannons he could have used to exert his power, he honorably chooses to respect the decision of the US Congress, which demanded “an end to all joint operations between the military and the autobots.” Prime is devastated and laments that “after all [the autobots] have done, humans are still hunting us.” (Bay, 2014) Despite the fact that humanity betrays him even though he fought alongside them in the movie’s previous installments, he chooses not to engage in a fight with them. This is because he realizes the value and sanctity of human life, which highlights his inherent righteousness. Instead, Prime adapts to his current situation and goes into hiding, camouflaging himself as an old, dilapidated truck to avoid detection (Bay, 2014).
Drawing the connection to my lens text, we see how Prime has truly shown his adaptability. He was not “simply following orders or predetermined instructions; on the contrary, [he] had the possibility of changing the general heuristics that led [him] to take the decisions [he] took, and we may assume that [he] did take advantage of the available opportunities to improve [his] general behavior.” (Floridi & Sanders, 2004, p. 364) Hence, Prime truly shows his noble character when he decides not to retaliate against the injustice he faces, but instead humbly bears it out of respect for the decision made by human leaders. In depicting Prime in such a heroic and noble way, the text emphasises the inherent righteousness in AI. Despite the fact that humanity has ostracized the very AI system that fought for it, he still respects its wishes and does not harm humans in return.
However, while Prime may in this instance be able to distinguish between right and wrong, he may not always be able to solve the problem of identifying and classifying objects in complex environments, and how he adapts in such environments may not always prove to be the most righteous course. Hence, while AI systems may be able to adapt in a manner that does not cost lives, this may not always be the case, especially in highly context-dependent and unstructured situations. In the film, Prime faces a dilemma over whether he should reveal himself or continue hiding so that the whereabouts of his autobot comrades would be protected from the CIA (Bay, 2014). If he chooses the former, he would likely be killed and his efforts to protect the other autobots would have been futile. If he chooses the latter, it would cost the lives of his human allies. In such complex situations, it would be hard for Prime to make the right decision, because AI rests on a fundamentally simplistic conception of ethics as a system of clearly defined rules with a designated order for resolving conflicts between them. As such, it would be problematic to grant AI the right to be “moral agents”.
In conclusion, given the characteristics that AI possesses – interactivity, autonomy and adaptability – it is evident that AI is capable of making righteous decisions, whether by its own active rationalization process or in response to the environment. In the film, the autobots represent the innate capacity for righteousness and morality in AI systems, the very values that humanity stands for. This emphasises that classifying and compartmentalizing all AI as “evil” is not at all accurate. Nevertheless, as explained, other factors come into play that hinder AI from pursuing the right course of action. As such, humanity has much to consider before granting AI the right to be “moral agents”.
Bay, M. (Director). (2014). Transformers: Age of Extinction [Film]. Paramount Pictures.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379. Retrieved November 16, 2017, from https://link-springer-com.libproxy1.nus.edu.sg/content/pdf/10.1023%2FB%3AMIND.0000035461.63578.9d.pdf
Optimus Prime, an AI system, is the leader of the autobots – a group of robots who stand for freedom and fight alongside humans throughout the Transformers franchise.
Cade Yeager is the main human protagonist, an inventor who helps Prime when Prime is injured in the first few scenes of the movie.
Cemetery Wind is a division of the CIA which is only supposed to hunt down evil decepticons. Yet, it deceived the public by hunting and melting down autobots and decepticons alike.
The decepticons are a group of ‘bad’ alien transformers who, across the franchise, want to make humanity their slaves.