"If a slave says to his master, "You are not my master," and is proven guilty of this [of falsifying in respect thereto], his owner may cut off his ear." - Hammurabi's Code (Law 282)
Around 1760 B.C., Hammurabi, the sixth king of Babylon, thought of himself as a righteous ruler and proclaimed that he had been ordained by the gods to enlighten the land and further the well-being of mankind through laws that came to be known as Hammurabi's Code. In his eyes, slavery was just and natural.
Fast forward to the nineteenth century. Abraham Lincoln, President of the United States of America, stated in a now-famous letter, "If slavery is not wrong, nothing is wrong. I cannot remember when I did not so think, and feel."
In modern times, slavery stands in stark contrast to our idea of the sanctity of human life. Many moral philosophers (Aristotle, Plato and Thomas Aquinas, to name a few) saw slave labour as just, whereas most philosophers today would agree that slavery is innately wrong.
I have started with the example of slavery, but there are many more. Views on issues pertaining to religion, homosexuality, gender equality, racism, the treatment of minorities and many others have undergone drastic changes since we first began to concern ourselves with them. The evolution of moral thought is a fascinating subject, and one of utmost significance at this particular juncture of human history, as we begin to break away from the limitations, technological or otherwise, that constrained our forefathers.
So, if we were to define the direction of morality over time, how would we generalise the trend? In my opinion, and as one can readily observe, as time progresses we are moving rapidly towards loosening restrictions, with heavier input from scientific thinking than before. Previously, morality was seen predominantly as a theological and philosophical issue, with the role of science restricted to describing how the world works rather than determining what is right or wrong. Today, however, as we strive to understand moral issues, the origin of moral thinking, and how the relationship between brain and mind gives rise to it, we rely more and more on science and scientists for answers. In support of this, neuroscientist, philosopher and author Sam Harris has argued that we overestimate the relevance of many arguments against a science of morality. Harris argues that scientific principles are appropriately applied in this domain because "human well-being entirely depends on events in the world and on states of the human brain. Consequently, there must be scientific truths to be known about it."
I think that this shift of the burden of explaining morality towards science is significantly responsible for why we have become more liberal and open-minded over time. There is little scientific ground for restricting most of the behaviours that religion or other philosophical traditions discourage. Science finds nothing wrong with homosexual relationships, and nothing natural or just (to Hammurabi's dismay) in slavery, racism or gender disparity. As cognitive psychologist, linguist and author Steven Pinker points out, much of our recent social history, including the culture wars between liberals and conservatives, consists of the moralisation or amoralisation of particular kinds of behaviour. Even when people agree that an outcome is desirable, they may disagree on whether it should be treated as a matter of preference and prudence or as a matter of sin and virtue. Some behaviours, such as smoking, have been moralised, turned from matters of personal choice into moral failings. At the same time, many behaviours have been amoralised, switched from moral failings to lifestyle choices; these include divorce, being a working mother and homosexuality. Pinker aptly captures this by arguing that there seems to be a Law of Conservation of Moralisation: as old behaviours are taken out of the moralised column, new ones are added to it.
Lastly, there is the important matter of artificial intelligence (AI). How will our morality address the coexistence of robots and humans? I will note an observation about Isaac Asimov's Three Laws of Robotics to illustrate one dimension of the issue:
First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Notice how the laws implicitly assume and establish the supremacy of humans over robots in the hierarchy. We can contrast these laws with Hammurabi's laws addressing masters and slaves. As AI becomes more and more advanced, will there ever come a time when we think there should be no hierarchy of this sort between robots and humans? And given the general consensus that, at some point in the future, AI may catch up to human-level intelligence, who will then get to decide on such moral dilemmas? Robots? Or we humans?
Mahir A. Rahman is a Research Associate at Bangladesh Institute of Development Studies (BIDS).