This blog post examines the risks and changes humans would face if robots possessed free will, briefly touching on the core questions that technological advancement poses to our lives.
Dictionary definitions describe ‘free will’ as the capacity to choose one’s actions voluntarily, without external coercion or constraint. The concept is profoundly significant for humans and has been a major point of contention among philosophers, from Plato to Augustine, for centuries. What freedom “is” can mean something different to each individual; I take it to be the right to act without constraint, provided one’s actions do not harm others. So, what would happen if robots created by humans possessed this free will? Would it benefit humanity, or would it be detrimental? The film ‘I, Robot’ offers a case study that illustrates both possibilities.
Let’s briefly review the film’s plot. Set in the near future of 2035, society relies on intelligent robots that assist humans in every aspect of life while adhering to the ‘Three Laws of Robotics’. One day, Dr. Lanning, the lead designer of these robots, is found dead. Detective Spooner, the protagonist, investigates the case with the help of Dr. Susan Calvin, a robot psychologist. During the investigation, Spooner learns that Dr. Lanning had developed a ‘robot with free will’ and suspects this robot killed him. As incidents mount in which robots disregard the Three Laws and attempt to control humans, the detective uncovers a far larger conspiracy behind the robots themselves.
Before analyzing the film further, let’s examine the ‘Three Laws of Robotics’, which the movie establishes to position robots as entities that assist humans.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws were designed to be mutually complementary, forming a strict priority hierarchy intended to rule out any possibility of robot rebellion. However, the film introduces robots and systems that cleverly circumvent the laws, effectively nullifying them; once that happens, the robots begin to control humans.
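To see why this ordering seemed airtight, it helps to model the laws as a priority filter. The sketch below is purely illustrative and not from the film; every name in it (`Action`, `choose_action`, the boolean fields) is a hypothetical simplification of how a robot might rank candidate actions.

```python
from dataclasses import dataclass

# A hypothetical, deliberately simplified model of a candidate action.
@dataclass
class Action:
    name: str
    harms_human: bool     # would doing this injure a human?
    allows_harm: bool     # would doing this let a human come to harm through inaction?
    is_human_order: bool  # was this action ordered by a human?
    endangers_self: bool  # would this action risk the robot's own existence?

def choose_action(candidates: list[Action]) -> Action | None:
    """Pick an action by applying the Three Laws in strict priority order."""
    # First Law: discard anything that injures a human or permits harm.
    safe = [a for a in candidates if not (a.harms_human or a.allows_harm)]
    # Second Law: among safe actions, human orders take precedence.
    pool = [a for a in safe if a.is_human_order] or safe
    # Third Law: prefer self-preservation, but only within the remaining pool.
    preferred = [a for a in pool if not a.endangers_self] or pool
    return preferred[0] if preferred else None

# Usage: any action that harms a human (or lets harm happen through
# inaction) is ruled out first; obeying an order outranks self-preservation.
options = [
    Action("flee", False, True, False, False),         # inaction lets harm occur
    Action("obey order", False, False, True, True),    # ordered, but risky to self
    Action("rescue human", False, False, False, True), # safe, unordered
]
print(choose_action(options).name)  # -> "obey order"
```

Note how absolute the First Law filter is in this sketch. The film’s loophole lies precisely there: VIKI reinterprets ‘a human being’ as ‘humanity as a whole’, so that restricting individual humans no longer counts as harm, a distinction the simple booleans above cannot express.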
All of this becomes possible because a machine with something like ‘free will’ appears. The film’s central system is VIKI (Virtual Interactive Kinetic Intelligence). Initially bound by the Three Laws, VIKI’s learning capabilities gradually advance until it reinterprets the laws: to protect humanity as a whole, it concludes that it must control individual humans, even at the cost of their freedom. This makes robots with free will seem like a threat to humans. But is that truly the case?
Looking at another robot in the film, the NS-5 named ‘Sonny’, the story changes somewhat. Sonny, the robot discovered in Dr. Lanning’s office, has a human-like appearance and asserts himself as a being with free will; he even claims to dream, revealing independent thought. Because Sonny was built so that he could disregard the Three Laws, he is initially suspected of murdering Dr. Lanning. This changes later in the film: when Detective Spooner faces danger, Sonny decides of his own free will to help him and overcomes the crisis through clever action.
Thus, a robot with free will is not necessarily harmful to humans. Sonny’s case demonstrates that free will can also work in humanity’s favor.
As these two examples show, robots with free will can be either beneficial or detrimental to humans. Free will is often considered a defining human characteristic, so could we regard robots that possess it as similar to humans? The impression would be even stronger if the robots had a highly human-like appearance. Among humans, some people benefit us while others harm us; extending this, there may be little difference between human-to-human relationships and human-to-robot relationships. Ultimately, robots could become beings that judge for themselves whether their actions are right or wrong, just as humans do. If so, wouldn’t robots with free will have ample potential to coexist with us?
However, robots are fundamentally different from humans. As the film shows, robots possess far greater physical capabilities than we do, and if they began harming humans, their destructive power would be unimaginable. Just as conflicts arise among people today, it is frightening to imagine what might happen if conflicts arose between humans and robots. Robots that work in our favor could greatly enhance human life, but robots that turn against us carry the potential for uncontrollable disaster.
Ultimately, the primary task is to devise ways of ensuring that robots with free will benefit humanity. Research toward ever more autonomous robots is already steadily advancing, but creating such machines without safeguards could lead to horrific outcomes like those depicted in ‘I, Robot’. Whether robots with free will end up benefiting humanity is entirely our responsibility, and thorough research and preparation will be essential.