In this blog post, we philosophically explore whether robots can feel emotions and love, and whether such feelings can truly be considered real, using the film ‘The Bicentennial Man’ as our lens.
The film The Bicentennial Man tells the story of Andrew, a household robot who converses with humans, experiences emotions, and engages in creative activities. In the latter part of the film, Andrew, who has acquired a human-like appearance, falls in love with the granddaughter of the woman he once loved. He repeatedly petitions the court to recognize him as human so that his marriage to a human can be approved, but he is denied each time. Only after becoming a “being capable of death” is he finally recognized as human, and he meets his end.
Watching this film raised a fundamental question: Can a robot created by humans truly become a sentient being with emotions, thoughts, and will? I believe that no matter how far technology advances, a robot cannot become a true sentient being. Problems remain that cannot be solved simply by increasing a computer’s processing speed or memory capacity. Robots are fundamentally built on computers, and computer systems have inherent limitations.
Computers are fundamentally digital devices. That is, their memory and storage media, such as hard disks, hold discrete sequences of binary data made up of 0s and 1s. The CPU, which acts as the computer’s brain, calculates, stores, and processes information using this binary data. Even when a computer handles seemingly continuous analog data, it is actually working with a digital approximation. This is the same principle by which a monitor’s image looks more like a natural picture as the resolution increases and each pixel shrinks.
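The point about digital approximation can be made concrete with a small sketch. The code below (an illustrative example, not from the film or any specific system) samples a continuous sine wave and rounds each sample to a fixed number of binary levels; the more bits, the finer the grid and the smaller the error, just as more pixels make a monitor image look smoother.

```python
import math

def quantize(value, bits):
    """Round an analog value in [-1, 1] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((value + 1.0) / step) * step - 1.0

# Sample a continuous sine wave and store it as digital data.
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]

for bits in (2, 4, 8):
    digital = [quantize(s, bits) for s in samples]
    max_error = max(abs(a - b) for a, b in zip(samples, digital))
    print(f"{bits} bits -> max quantization error {max_error:.4f}")
```

However many bits we add, the representation never becomes truly continuous; it only approximates the analog signal more finely.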
Computer programs receive input data, perform operations according to predefined code, and deliver a result as output. In other words, they follow set procedures rather than engaging in creative activity; they cannot perform core human functions such as creative thinking or genuine understanding.
An example supporting this argument is the ‘Chinese Room’ thought experiment proposed by philosopher John Searle. It imagines a person who knows only English confined to a room, equipped with a rulebook pairing Chinese questions with Chinese answers. When a question written in Chinese arrives from outside, the person copies the matching Chinese answer from the rulebook and sends it back. From the outside, the room appears to understand Chinese, but the person inside grasps no meaning at all; he merely manipulates symbols according to rules. Similarly, a program can only mimic human understanding; it cannot achieve true comprehension.
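The Chinese Room is, in effect, a lookup table, and that makes it easy to sketch in code. The toy program below (the rulebook entries are my own illustrative placeholders, not from Searle’s paper) answers Chinese questions by pure symbol matching, with no grasp of what either the question or the answer means:

```python
# A toy "Chinese room": questions map to canned answers by symbol
# matching alone. The entries are illustrative placeholders.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫安德鲁。",  # "What is your name?" -> "My name is Andrew."
}

def chinese_room(question: str) -> str:
    # Look the symbols up in the rulebook; if no rule matches,
    # hand back a fixed fallback reply.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # The room "answers" without understanding anything.
```

From outside, the function seems to converse in Chinese; inside, there is nothing but string comparison.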
Human understanding involves intentionality. When a person sees a picture of an apple, they don’t just process the information that it is an apple; they also consider contextual meaning, such as ‘why am I looking at a picture of an apple in this situation?’ Just as the meaning of an apple differs in an art class versus a math class, humans make comprehensive judgments considering the context. Furthermore, one person might see an apple and think “That looks delicious,” while another might see it as “A good fruit for breakfast.” Programs cannot possess this kind of intentionality and integrated understanding.
Robots not only fail to achieve human-like intentional understanding; they also struggle to implement emotion and will. To simulate emotions, a robot might represent feelings as numerical values and mimic good or bad moods with a random function. But can this truly express complex human emotions like falling in love or a wounded heart? Human emotions cannot be explained by randomness alone; they form a complex flow in which thought, feeling, and will influence one another. Changing one’s will through thought, or one’s emotional state through will, lies beyond a robot’s capability.
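To see how shallow the “numbers plus randomness” approach is, here is a minimal sketch of exactly the kind of model the paragraph criticizes. The class and its parameters are my own assumptions for illustration; no real emotion architecture is implied.

```python
import random

class RobotMood:
    """A crude emotion simulation: mood is one number nudged at random.
    This illustrates the model being criticized, not a theory of emotion."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)  # seedable for reproducibility
        self.mood = 0.0                 # -1.0 = "sad", +1.0 = "happy"

    def tick(self):
        # Drift the mood by a random amount, clamped to [-1, 1].
        self.mood = max(-1.0, min(1.0, self.mood + self.rng.uniform(-0.2, 0.2)))
        return self.mood

    def label(self):
        # Reduce the number to a coarse emotional label.
        if self.mood > 0.3:
            return "happy"
        if self.mood < -0.3:
            return "sad"
        return "neutral"

robot = RobotMood(seed=42)
for _ in range(10):
    robot.tick()
print(robot.label())
```

The simulation produces mood swings, but nothing in it corresponds to falling in love or a wounded heart: the “emotion” is a single drifting number with no connection to thought or will.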
Although new forms of computers, such as quantum computers and DNA computers, have recently been developed, they offer speed on certain problems rather than a new kind of thought; in principle, they cannot compute anything beyond what conventional computers can. Likewise, while heuristic theories model how humans approach optimal-choice problems, they fail to explain why humans make irrational and contradictory choices.
Humans are not mere personalities but beings with souls. They harbor a primal longing for the divine, a fear of death, and an aspiration toward an eternal world. While modern science studies the unconscious, its meaning and origin remain unknown territories. Even as science advances, can it resolve these spiritual questions? Since the human soul and unconscious play crucial roles in shaping personality, this cannot be viewed as a simple problem.
Classical physicalists argue that the human soul or mind is merely a byproduct of physical brain activity. On this view, mental states are themselves physical and governed by physical laws. But this leads to a contradiction: it denies human free will. Free will means the capacity to decide one’s actions independently, without external constraint. Yet under physicalist determinism, everything is determined by physical causes, which effectively negates the existence of free will.
One of the natural laws surrounding us is the Second Law of Thermodynamics, the law of entropy. According to this law, energy changes are irreversible and flow toward increasing disorder. Yet humans, through free will, build tall buildings, cross oceans, and create an orderly, creative world. This is difficult to explain as a merely random phenomenon following physical laws.
Thus, there are three reasons why robots cannot become persons. First, robots cannot possess the same intentional understanding as humans. Second, they cannot embody complex emotions and will. Third, it is impossible for them to embody what belongs to the human soul and unconscious. While the film depicted these as problems the future might solve, the crucial obstacles to creating a robot with personhood remain difficult challenges.