Cite this asCheng EC (2022) Balancing the Benefits and Ethical Concerns of Using Robots. Trends Comput Sci Inf Technol 7(3): 091-093. DOI: 10.17352/tcsit.000056
Copyright License© 2022 Cheng EC. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Over the past few decades, accelerating improvements in Artificial Intelligence (AI) technology have enabled robots to perform ever more complicated and personalized tasks with greater autonomy, at times surpassing human ability. Although AI robots can increase productivity and improve the quality of work, they can also cause unintended consequences. Probably the most obvious is the replacement of humans in jobs, which raises the unemployment rate and leads to associated social problems. The potential for harm goes deeper than lost employment, however, extending to the unintended consequences of inadequate design and to deliberate misuse. This paper summarises the essential arguments for the benefits of using robots from an economic perspective, as well as the ethical concerns and potential harms associated with humans' increasing reliance on robots. It then suggests a code of ethics for designing robots so that humans can obtain the greatest benefits while reducing the potential for harm.
There is no doubt that AI robots can increase productivity, efficiency, quality, and consistency in many situations [1,2]. Since AI robots are installed with sensors and actuators that are more capable than the senses of humans, they are not subject to the same environmental requirements as humans and can thus work in dangerous conditions. They are also much more accurate in carrying out their tasks. AI robots can facilitate more pre-emptive and less disruptive maintenance and repair in sectors such as nuclear power plant decommissioning and space exploration, or in hazardous environments such as that after a severe earthquake. This ensures that humans are not put in danger and reduces work-related injuries and fatalities across many sectors. In terms of assisting economic production, AI robots can provide services that require increased flexibility, such as adaptive manufacturing and fast delivery, and meet rising customer expectations and end-user requirements, offering higher quality and better service. Moreover, unlike humans, AI robots do not get bored with their work and they can carry out the same task repeatedly without ever tiring.
Robots complement human activity; substituting robotic technology for human work while creating new areas of work will be the future trend for AI robots. Research has assessed the economic opportunities of Robotics and Autonomous Systems (RAS) across various sectors in the UK where key future economic opportunities exist for AI robots. From an economic perspective, AI robots can be more cost-efficient than human labor since they can operate longer without rest and execute tasks accurately and quickly to a predetermined standard. Introducing AI robots can thus effectively reduce labor costs where an aging population means fewer and fewer young people entering the workplace. In places where labor is scarce and restrictions apply to imported labor, robots can relieve such shortages. They can take on repetitive tasks, freeing humans to concentrate on the most critical tasks that require human input. In the healthcare sector, AI robots can drastically relieve the burden associated with looking after an aging population. In other sectors, such as environmental and natural resource management, AI robots can also play an important role. Given the tendency for the unit costs of new technology to fall over time, innovations in robotics will make it increasingly viable, and necessary, to use robots.
As with everything, there are always some downsides to contend with. With robots replacing humans in some jobs, the unemployment rate will increase as a result. The out-of-work humans will then generate other social and economic problems, such as people suffering from depression due to being unable to find alternative work, and businesses that previously depended on the spending of employees having to close. Very often the robots themselves are very costly – in terms of the initial cost, maintenance, replacement components, running costs, and the need to be programmed accurately for the task. Indeed, as with a regular desktop or mobile computer, the power of a robot depends entirely upon the skill and creativity of the programmer. This is because a robot can only do what it is told to do – it cannot, on its own initiative, suggest improvements to a procedure. Carrying out a programmed task may in fact cause harm to humans, so the programmer has to keep in mind the safety protocols necessary to protect humans and other robots. Although robots can be superior to humans in some respects, in others they are less dextrous. While robot brains may be vastly more powerful than those of several decades ago, they are as yet no match for the sheer complexity of the human brain and do not possess a human's ability to understand what they see. Perhaps the greatest danger arising from the use of AI robots lies in the uses to which they are put. Some uses may be perfectly harmless and agreeable, such as a robot working on the assembly line in a car factory. Other uses, however, may require close human supervision and human judgment, such as in search and rescue, on battlefields, and in law enforcement. Indeed, it is these areas that carry the greatest possibility of misuse, whether inadvertent or deliberate, leading to possible mass-scale destruction [1,2].
Researchers have highlighted several potential ethical and societal challenges with AI robots. First, and most obviously, people may become unemployed because of automation. Secondly, the intensity of use and how we will work with automation are uncertain. Thirdly, there is a risk of the loss of human skills due to increased reliance on the robot workforce and the drive to achieve technological excellence. Fourthly, AI can be used for destructive and undesirable tasks. Finally, and though it may sound like science fiction, it is within the realms of possibility that a superior AI could lead to the extinction of humanity.
For AI robots to have any beneficial future with humans, developers and key stakeholders must therefore play a crucial role in addressing these ethical issues. As such, to better ensure that robots are used ethically and to prevent misuse, it is highly desirable to introduce a code of ethical practice for designing robots, backed by legislation. Researchers have explored the challenges from the perspectives of users and workers with respect to privacy and security, legal uncertainties, the autonomy and agency of robot technologies, the dehumanization of interactions, employment for humans, and the replacement of human interactions. The study concluded that uncertainty and responsibility concerning these areas must be addressed.
Before turning to a discussion of what a code of ethical practice for robot design might entail, it is worth noting that a code of ethics for robots is not as far-fetched as it might appear. During the early decades of the 20th century, when the notion of robots was adapted for entertainment in film and novels, the potential for harm to humans had already been recognized. Probably the most well-known of the early attempts at formulating a code of ethics was that of the science fiction author Isaac Asimov, who devised his famous Three Laws of Robotics to protect humans from potential harm by robots. These Three Laws first appeared in a short story in 1942, and Asimov continued to refer to them in his later robot fiction.
There is no doubt, even to the average person, that inappropriate design and misuse of AI robots may injure humans or allow humans to come to harm. Researchers have analyzed the ethical principles, policies, and regulations proposed for the design and use of robots in Europe and North America with a view to creating legally enforceable regulations for robotics. The general principles are as follows:
− Robotic systems should complement professionals
− Robots must not replace humanity
− Robots must not intensify zero-sum arms races
− Robots must always carry an indication of the identity of their creators, controllers, and owners
Going further into the specifics of ethical processes, researchers have suggested five ethical principles for designing and using robots.
As to the negative impact of robots on human life, there appears to be no general consensus yet. Researchers applied a value-sensitive design methodology, collecting and comparing the perspectives of different stakeholders on the use of social robots in the Netherlands. The views of teachers, parents, the robot industry, government policymakers, and children were categorized into several values: psychological welfare and happiness, applicability, learning topics, data collection and usage, learning beyond the classroom, freedom from bias, usability, friendship and attachment, trust and deception, human contact, privacy, security and safety, responsibility and accountability, and autonomy and flexibility. The findings showed that stakeholders were uncertain or held differing views on issues such as liability – who would be responsible and accountable if a robot caused damage. Another concern was whether children might prefer robots over their peers or teachers, and whether friendship with robots might affect their interaction with human peers, such as picking up cues and signs. Finally, privacy, data security, and data ownership were other issues left unresolved in the participants' discussions.
Having outlined the general and specific ethical principles above, it is the author's opinion that a code of ethical conduct should be introduced to all computing professionals who work with robots. Researchers have raised four main themes for a code of practice in designing future robot systems: 1) ethical principles, 2) professional responsibilities, 3) professional leadership principles, and 4) compliance. Research in 2020 suggested that robot designers should consider how their robots affect human behavior and should continually review their robotics applications in both technological and psychological terms. To ensure that robots will not harm people and society, researchers have proposed an ethical framework for smart robots that treats them as moral and passive tools, as recipients of ethical behavior in society, as moral and active agents, and as ethical impact-makers in society. Developers of robots should address all four ethical perspectives simultaneously, together with ethical, social, cultural, and technical considerations.
Robots have great potential to help humans in many areas of human endeavor. However, there is an equally great risk of harming humans, whether by design or through neglect. A code of ethical conduct is necessary to balance the benefits and impact of using robots in society. Along with the code, government policy and legislation to regulate the design and use of robots should also be considered to reinforce the trend toward making robots as safe as possible. In addition, other areas such as public policy, improvements or reforms in education, standards, data policies, digital connectivity, digital security, and tax policies will all help to accommodate the growing use of robotics in society [11,12]. From the legal perspective, standards for how robots perform their duties and operate, together with safety standards, should be formulated and put into effect. Regulations are also needed to provide producer certainty, protect consumers, facilitate innovation, address and define legal liability, define terms, and provide the basis for allocating different technologies to different regulatory procedures. Then there are data privacy issues to consider, as robots often collect data on the people they encounter. Robot security is also important, as robots can be vulnerable to attack or hacking. Only with a strong ethical code of practice, associated legislation, and these other supporting measures can the use of robots attain its full potential for the benefit of humans.