An Exclusive Tech and Law Center Interview with UN and ISO experts in Robots and Regulation, thanks to our distinguished fellow  

Dr. Yueh-Hsuan Weng, ROBOLAW.ASIA Initiative

 

Thank you very much for accepting this interview! Can you please tell us a bit about your background? What led you to become involved in robot ethics, and when did you begin pursuing laws for robots?

 

Yueh-Hsuan Weng:

I am a co-founder of the ROBOLAW.ASIA Initiative at Peking University, and formerly a researcher with the EU FP7 project ROBOLAW and the Humanoid Robotics Institute at Waseda University. I received my Ph.D. from Peking University Law School in 2014; my dissertation was titled “The Study of Safety Governance for Service Robots: On Open-Texture Risk.”

 

I began my adventure in law and robotics at Waseda University in Tokyo, which is known for its world-renowned Humanoid Robotics Institute. During my stay there in 2004 as an exchange student, I decided to write a term report introducing the history of Waseda’s robotics research. During my interviews, many researchers unexpectedly introduced me to the importance of a revolutionary concept called “Human-Robot Co-Existence,” and it later became the root of my research.

Prof. Dr. Christof Heyns, UN Special Rapporteur on Extra-judicial, Summary or Arbitrary Executions

 

 

Christof Heyns:

I am a professor of Human Rights Law at the University of Pretoria. I was appointed United Nations Special Rapporteur on extra-judicial, summary or arbitrary executions in 2010. This mandate focuses on the right to life and threats to that right. In that context I have submitted reports to the Human Rights Council as well as the General Assembly on the use of force by the police, threats against specific groups such as journalists, and armed drones. In 2013 I presented a report to the Human Rights Council on autonomous weapons systems in armed conflict, and I have subsequently presented a report to the General Assembly on the use of unmanned systems (whether armed drones or autonomous weapons systems) in law enforcement.

 

Prof. Dr. Gurvinder S. Virk, ISO TC184/SC2

 

 

Gurvinder S. Virk:

I am a professor of robotics and have been involved in research and development of new robot solutions since 1995. These have evolved from robots for hazardous environments (where there are no humans, due to the dangerous conditions) to service robots (where humans are everywhere and close human-robot interaction is needed). Around 2002 it became apparent to me that the regulations of that time, which covered only industrial robots, were inappropriate for the new emerging service robots, and this was a barrier to commercialisation of our R&D results. Industrial robots are largely designed to operate in workcells, and humans should stay outside due to the danger of harm. Collaboration between industrial robots and humans was, and still is, very closely regulated due to safety concerns. The new service robot applications demanded close human-robot interaction, and even human-robot contact while the robot is operating. This was not allowed. I therefore approached ISO (ISO TC184/SC2) around 2004 with some colleagues from CLAWAR (a Network of Excellence on climbing and walking robots aimed at widening the application of robots) to raise the issue and highlight the need for new regulations. We were invited to an ISO meeting in 2005 where we presented the situation, and after an international ballot and call for experts a new Advisory Group was set up to investigate the situation officially within ISO. I was asked to chair this group, and about 30 international experts from Japan, Korea, Europe and the USA came forward to work on the issue. After one year I presented our results, and several new robot standardisation working groups were created under ISO TC184/SC2. These were:

WG1: Robot vocabulary

WG7: Personal care robot safety

WG8: Service robots

 

I was invited to chair WG7, which produced the EN ISO 13482 safety standard for personal care robots, allowing human-robot interaction and human-robot contact. It was published in February 2014.

 

Other working groups on medical robots (JWG9, JWG35 and JWG36) have been created since; these are joint projects between ISO and IEC TC62, which focuses on medical electrical equipment. This is because medical robots will also be medical electrical equipment, and the medical regulations are different from the “robots as machines” regulations considered previously.

 

In addition, work on modularity for service robots (WG10) has recently started. I am intimately involved in all these new developments and lead three of the working groups (WG7, JWG9 and WG10).

 

 

Yueh-Hsuan, you are fighting for laws that will guide how humans interact with robots. Can you tell us more about this?

 

Yueh-Hsuan Weng:

The recent incident involving the Japanese robot Pepper is worth discussing, as it offers clues for thinking about the emerging issue of co-existence with robots as members of our society. The question now is: are robot laws necessary?

 

First of all, we should be aware of the importance of public laws and regulations. This does not refer to the constitutional debate over whether robots should be recognized as subjects of law; rather, it concerns public regulation of the design, manufacture, sale, and use of advanced robotics. Furthermore, robot ethics and legal regulation should not always exist in parallel, because from a regulatory perspective robot law is simply the union of robot ethics and robotics.

 

 

When we consider international public law, an urgent need is a set of new regulations for lethal autonomous weapons. Do you believe we have to ban killer robots? What are the current challenges?

 

Christof Heyns:

I am of the view that fully autonomous weapons should be banned - in other words, those systems or usages that do not allow meaningful human control. This is because I do not think machines can adequately make decisions concerning distinction and proportionality, and as such they pose a danger to the lives of civilians. But even if they can make such decisions in specific cases, it also undermines the dignity of those targeted to have the decision whether they will live or die taken by a robot. The people targeted are literally reduced to the numbers used in an algorithm - they are reduced to being merely targets.

 

And then there is the issue of responsibility. The right to life is violated if there is not a proper system of accountability for possible violations. The question is who will be responsible when things go wrong - as they invariably do with the use of force - if humans have not exercised meaningful control. Responsibility in law and in ethics is to a large extent tied to and premised on control. With a system of machines taking lethal decisions, it is difficult to see who can be held accountable.

 

 

Yueh-Hsuan, what is your opinion on Prof. Christof Heyns’ belief that fully autonomous robots meant to be used as weapons should be banned?

 

Yueh-Hsuan Weng:

Yes, high risks accompany any military action by lethal autonomous systems. I believe that the public must be in the loop at all times. However, there is a regulatory gap in controlling the circulation of core RT components. To quote my mentor Prof. Atsuo Takanishi at Waseda HRI: “Technically, there is a gray zone between Service Robots and Military Robots.”

 

 

Prof. Heyns, do you support the existence of autonomous robots not necessarily designed to kill but to aid militaries? For example, robots that are designed for surveillance.

 

Christof Heyns:

Yes, in many cases such robots can enhance human control and decision-making, and indeed also save lives.

 

 

Also, when it comes to autonomous robots that are not intended to be used as weapons, do you believe a set of laws is necessary to guide human interactions with such robots?

 

Christof Heyns:

My concern is with using autonomous weapons systems (AWS) to perform critical functions - the release of force.

 

When robots gain high autonomy, should we consider an active safety regulation like Isaac Asimov’s Three Laws of Robotics?

 

Yueh-Hsuan Weng:

Apart from the “Risk Monitoring” mechanism, which will arrive in the near future, we will need another, more sophisticated “Risk Control” mechanism to reduce “Open-Texture Risk” - risk arising from unpredictable interactions in unstructured environments, when robots need high levels of autonomy to perform more unpredictable behaviors in the presence of humans.

 

Asimov’s Three Laws of Robotics may sound like a feasible way to implement a “Risk Control” mechanism. However, in a previous study in 2009, I argued that the Three Laws of Robotics are unfeasible, based on three potential problems: “Machine Meta-ethics,” “Formality,” and “Regulation.”

 

For example, “Third Existence” robots are not able to obey human laws written in our natural language, owing to their limited ability to interpret terms and clauses comprehensively - this is the “Formality” problem. As for self-conscious “HBI” robots, they do not face the “Formality” difficulty, but we have to worry whether they might spontaneously violate human rules, or even create their own “Robot Law 2.0” for human beings to obey. For my part, I have proposed a “Legal Machine Language,” in which ethics are embedded into robots through code; it is designed to resolve issues associated with Open-Texture Risk - something the Three Laws of Robotics cannot specifically address.
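To make the contrast with natural-language rules concrete, here is a minimal sketch of how a “Legal Machine Language” rule might be encoded. This is only an illustration of the general idea, not Dr. Weng’s actual specification; every name, rule, and numeric limit below is hypothetical.

    # Minimal sketch of a "Legal Machine Language": safety rules encoded as
    # machine-checkable predicates rather than natural-language clauses.
    # All names, rules and limits here are hypothetical illustrations.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        name: str
        max_speed_mps: float          # commanded speed (metres/second)
        min_human_distance_m: float   # closest predicted distance to a human

    @dataclass
    class Rule:
        rule_id: str
        description: str
        permits: Callable[[Action], bool]   # True if the action is allowed

    RULES: List[Rule] = [
        Rule("R1", "Slow down near humans",
             lambda a: a.max_speed_mps <= 0.5 or a.min_human_distance_m > 2.0),
        Rule("R2", "No high-speed contact",
             lambda a: not (a.min_human_distance_m < 0.1
                            and a.max_speed_mps > 0.25)),
    ]

    def action_is_lawful(action: Action) -> bool:
        """An action is permitted only if every encoded rule allows it."""
        return all(rule.permits(action) for rule in RULES)

    # A slow approach to a nearby user satisfies both rules:
    print(action_is_lawful(Action("approach_user", 0.3, 0.5)))  # True

Because each rule is an executable predicate over measurable quantities agreed in advance, there is nothing left for the robot to “interpret,” which is precisely the open-texture problem that natural-language laws pose.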

 

 

Recently, there was surveillance footage of a man kicking a robotic clerk. Can you talk about your reaction when you saw or heard about this footage, and how it fits in with what you are working on?

 

Yueh-Hsuan Weng:

The incident received immense scrutiny from the public because it involved a human-like sociable machine being treated inappropriately. When I heard about the footage, I was not surprised at all, as incidents like this have occurred before. During the 19th century, steam-powered locomotives were deemed “monsters” and treated inappropriately in Shanghai and Yokohama when they were first introduced to Asian society. Had an object such as an ATM or a vehicle been vandalized instead of Pepper, the moral impact would have been much less. An evolved set of ethical principles for sophisticated, intelligent machinery like Pepper has yet to be developed.

 

 

Why do you think current laws, such as those covering damage to property, are not enough to protect robots? Why do you believe a new set of laws is necessary?

 

Yueh-Hsuan Weng:

My main argument is that current laws do not help human beings project their empathy while interacting with humanoid robots. We may soon need a new set of laws, such as a “Humanoid Morality Act,” to give robots a special legal status called the “Third Existence.” In addition, similar to pet owners, “Third Existence” owners should bear higher civil liability; this would also help shield robot manufacturers from excessive product liability arising from advanced robots’ behavioral uncertainty.

 

 

How would you respond to people who say “it’s just a robot, they can’t feel or think anything” and therefore believe new laws are not necessary?

 

Yueh-Hsuan Weng:

First of all, the morality of the “Humanoid Morality Act” is human-centered. For the foreseeable future, robots will remain “objects of law” even if they can’t feel or think, yet we may still need new laws to govern their daily interactions with human beings.

 

In addition, from the perspective of risk management, one possibility is to develop a “Robot Safety Governance Act” as an extension of existing machine safety regulations. These technical norms, located at the base of “Robot Law,” would ensure the safety of the new human-robot co-existence.

 

 

Prof. Virk, unlike existing industrial robot safety standards, ISO 13482 is the world’s first safety standard for service robots. Furthermore, it could have a structural and far-reaching impact on next-generation robot safety certification, product liability, ethics, and insurance. What is ISO 13482’s role in realizing safety governance for next-generation robots?

 

Gurvinder S. Virk:

ISO 13482 presents the safety requirements for personal care robots. Personal care robots are defined as contributing directly to the quality of life of humans, rather than being focused on manufacturing applications. The standard defines the internationally agreed consensus on how manufacturers should design these new robots to allow close human-robot interaction, so that there is protection against litigation in the event of an accident. This is the main aim of international safety standards: to provide rules that have been formulated in an open, democratic manner. Of course, the standard does not cover issues of negligence, incompetence, etc., but a manufacturer who complies with the regulations, and has certified evidence to this effect, will have protection in legal suits. This is most important in an area of technology that is rapidly evolving and changing, which means the standards must be reviewed regularly. Normally all standards are reviewed on a five-year cycle, but as ISO 13482 is so new, we have decided to consider formally within WG7 whether it is already worthwhile to review it, even though it was only published in February 2014. It is probably too early, but it is interesting to note how soon the international community is thinking about reviewing the new standard.

 

In addition to the new safety requirements, it is important to be able to classify the sectors in a clear manner, so that everyone knows whether a robot is an industrial robot or a personal care robot, since each must comply with different requirements. As the robot sectors grow and evolve, each with its own regulations, there is likely to be confusion at the sector boundaries; in some cases it will not be clear whether a robot is an industrial robot or a personal care one. For example, consider an exoskeleton robot designed to help the movements of a human. This is a personal care robot (defined as a physical assistant robot in ISO 13482). If it is used to help a worker perform his job in a manufacturing application, does that make it an industrial robot? This situation has been discussed, and the consensus is that since the exoskeleton improves the quality of life of the human worker, and does not improve the manufacturing process directly, it is a personal care robot. Other cases will clearly arise where the situation is trickier and more difficult to resolve, and this is likely to lead to legal cases if accidents occur. ISO 13482 can provide guidance on such legal issues.
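As a rough illustration of that consensus (the names below are mine, not taken from ISO 13482), the boundary question can be read as a rule keyed to what the robot directly improves rather than where it operates:

    # Hypothetical sketch of the WG7 consensus described above: an exoskeleton
    # is classified by what it directly improves, not by where it is used.
    from enum import Enum

    class RobotSector(Enum):
        INDUSTRIAL = "industrial robot"
        PERSONAL_CARE = "personal care robot"

    def classify(improves_manufacturing_directly: bool) -> RobotSector:
        """A robot that improves the worker's quality of life, rather than
        the manufacturing process directly, counts as personal care - even
        when it is worn on a factory floor."""
        if improves_manufacturing_directly:
            return RobotSector.INDUSTRIAL
        return RobotSector.PERSONAL_CARE

    # An exoskeleton helping a worker move: still a personal care robot.
    print(classify(improves_manufacturing_directly=False).value)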

 

There is also the situation of misuse. If a manufacturer designs a robot as an industrial robot and it is used incorrectly as a personal care robot (or vice versa), problems are likely to arise. The limits on the liability of the manufacturer are unclear, because concerns related to foreseeable misuse of a product need to be addressed. This is defined as “use of a machine in a way not intended by the designer, but which can result from readily predictable human behaviour,” yet what is and is not foreseeable in this sense is unclear, so misuse can be quite a grey area! Two examples may help in this respect:

  1. A vacuum cleaner robot used to implement a real-world Frogger game on real highways. I cannot imagine this could have been foreseen by anybody (my opinion).
  2. A vacuum cleaner trapping the hair of a person who was sleeping on the floor in Korea. Sleeping on the floor is common in Korea, so a vacuum cleaner operating where people are asleep could have been foreseen by the manufacturer (my opinion)!

 

ISO 13482 does not consider ethical issues, and it would be useful to have some global guidance on them. However, regional and national views are likely to differ, so ranges of possible uses and limits would be good to have. I am not sure how we can gather such ethical perspectives.

 

According to Dr. Weng’s proposal, we might need two special regulations for next-generation robots: the “Humanoid Morality Act” and the “Robot Safety Governance Act.” Do you believe that better regulation of robot safety would require safety requirements to comply with administrative law (e.g., EC Directives), rather than keeping them as voluntary requirements?

Gurvinder S. Virk:

I am not sure about this. EC Directives are law, and whether we need EC Directives for robot products is unclear at present. Maybe robots are fine being treated as “any other product” for the moment, but when the degree of autonomy has advanced much further, we may need more specific rules and regulations to accommodate advanced intelligent robots and robot systems. Currently the regulatory framework cannot handle such advanced autonomous systems. Systems that have the ability to adapt and learn from experience are also not covered by current regulations. This is because systems are currently tested at a point in time and certified as such; if a system has the ability to learn, its software will change due to the new capabilities acquired through self-learning, and it loses its certification. Hence a self-learning mode is not a good option for commercially sold products if safety issues can arise.

 

Could you please explain the Risk Assessment mechanism from the ISO safety framework for service robots?

Gurvinder S. Virk:

There is no risk monitoring mechanism as such, but rather a risk assessment and risk reduction methodology, which I will describe. I hope this is OK.

There is a type A standard, ISO 12100 (Safety of machinery - General principles for design - Risk assessment and risk reduction). Type A means it applies to all machines, which is a huge area; robots have up to now essentially been designed and regulated as machines. Medical robot regulations are in the pipeline but none have as yet been published.

ISO 12100 presents a methodology that has to be adopted in the design of machine products to ensure safety issues are addressed. The process is structured to assist designers and manufacturers via the three-step method, which is as follows:

Step 1: Inherently safe design measures. This means that, wherever possible, only inherently safe machines should be designed.

Step 2: Safeguarding and complementary protective measures. This step identifies all the hazards, and the harm that can be caused under single-fault conditions, and introduces design modifications to reduce the likelihood of harm to an acceptable level.

Step 3: Information for use. This is information presented to the user indicating any remaining issues that must be considered in order to operate the machine in an acceptably safe manner.

This process is expected to be followed to ensure that no unacceptable risk likely to cause harm remains during use; a rough sketch of the loop is given below.
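As an illustrative sketch of that loop, the following assumes a hypothetical severity-times-likelihood scoring scheme and an arbitrary acceptance threshold; ISO 12100 itself does not prescribe these particular numbers, only the three-step order.

    # Illustrative ISO 12100-style risk assessment and reduction loop.
    # Severity/likelihood scores and the threshold are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Hazard:
        description: str
        severity: int     # 1 (minor) .. 4 (fatal)
        likelihood: int   # 1 (rare)  .. 4 (frequent)

        @property
        def risk(self) -> int:
            return self.severity * self.likelihood

    ACCEPTABLE_RISK = 4   # hypothetical acceptance threshold

    def three_step_method(h: Hazard) -> Hazard:
        # Step 1: inherently safe design - reduce severity at the source,
        # e.g. lower speeds and forces.
        if h.risk > ACCEPTABLE_RISK:
            h.severity = max(1, h.severity - 1)
        # Step 2: safeguarding and complementary protective measures -
        # reduce likelihood, e.g. guards or sensors that stop motion.
        if h.risk > ACCEPTABLE_RISK:
            h.likelihood = max(1, h.likelihood - 2)
        # Step 3: information for use - the residual risk is documented
        # for the user rather than reduced further.
        return h

    h = three_step_method(Hazard("arm crushes hand during handover", 3, 3))
    print(h.risk)  # residual risk after steps 1 and 2: 2 (acceptable)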

 

In the long term, do you believe it is important to consider a safety-abiding “Ethics by Design” principle, with embedded code limiting autonomous robots’ behavioral risks?

Christof Heyns:

Those who programme robots that can hurt people must certainly take ethical considerations into account.

Gurvinder S. Virk:

Yes, I think ethics should be introduced into the design process. Currently, international standards and regulations do not do this. The UK is working on a national document on ethical robot design: BS 8611, Guide to the ethical design and application of robots and robotic systems. The work extends the safety assessment procedures of ISO 12100, which aim to prevent “harm,” by defining “ethical harm” and using the same approach to develop an ethical risk assessment and risk reduction process. The key definitions are as follows:

  • Harm: physical injury or damage to health
  • Ethical harm: anything likely to compromise psychological and/or societal and environmental social wellbeing

Clauses are developed in BS 8611 for many key ethical issues, such as privacy and confidentiality, human dignity, cultural diversity, legal issues, and medical and military applications.

The document is currently being developed but is expected to be published as a UK document soon.

 

Yueh-Hsuan Weng:

Yes, I believe that for a human-robot co-existence society to exist in the future, an “Ethics by Design” principle, embedded in code to limit autonomous robots’ behavioral risks, is inevitable.

 

However, for highly autonomous robots that behave like human beings, it is not enough to entrust robot manufacturers with applying the principle under a policy in which the code of ethics is merely a professional responsibility. This will not ensure safety, because robot manufacturers have to prioritize consumers’ preferences, otherwise they may lose market share to their competitors.

 

In this case we should consider a “Code is Law” policy - the code of ethics should not simply be the manufacturers’ self-imposed responsibility, but should become part of statute law or “Technical Norms.” Although this would enable the code of ethics to be properly supervised at the design stage, a major problem remains: how to give the code of ethics legal effect while balancing many conflicting interests.
