The Human Machine Interface


How Will Humans And Machines Interact And Evolve?

Creating a machine that attains or outstrips human intelligence is presently beyond technology. Throughout the intelligent machine life cycle, from design and development to testing and distribution, we must ensure safe use now that machines are acting as teachers, coaches, and companions. As they take on unprecedented roles in many aspects of life, tough questions arise about personal agency, autonomy, privacy, identity, authenticity, and responsibility: who are we, and what do we want to be?

Introduction

Are you able to remember everything you have learned in your life? Is it even possible? The answer is probably “no”; it certainly is for me! That is where today’s machines play a magnificent role. Unlike humans, they never forget: they retain every data input and pattern established in their systems. Machines also process massive amounts of information at speed. In jobs like crime analysis and disease diagnosis, they can accomplish in minutes what would take people days or even weeks of gathering the relevant information for evaluation. Researchers now use machines to identify unfamiliar objects almost instantly. Delegating such time-consuming tasks to technology frees people up for other important activities.

Machine memory for large volumes of data allows machines to identify patterns and make inferences that most humans would not discover alone. Until now, however, machine capabilities have only gone so far, with human experience and context needed to realise the potential of Big Data. Artificial Intelligence (AI) has used what has already been learned; it has been unable to create new knowledge and make cognitive progress, in contrast to humans, who can bring their context, life experiences, and genetic heritage to solving problems.

Consider the context of articles used for carrying, called “bags.” A machine, because it lacks real experience of life patterns, would not be able to identify all the relevant uses. Bags for shopping, holiday travel, school books, camping equipment, and laptop transportation demand different approaches to design because of their diversity of use. Human insight is needed to evolve the frame of reference and keep machines abreast of the changing context, correcting errors as they occur. Reflect again on bags: one might assume that once a machine understands the vocabulary and associations of bags, it would consider connected items like handles, straps, or wheels, but this is not the case. Humans are needed to handle such concerns, along with crisis management, as machines cannot direct operations.

Airplanes provide another example. Most now operate on autopilot, but when something unexpected happens the captain is there to troubleshoot. While the machine informs and supports the problem-solving process, such as flagging a mechanical error, the human pilot has to judge and manage the situation. The machine cannot do this without supervision and the human’s context, experience, and genetic abilities.

Thus, traditional design is a master-servant relationship between humans and machines, with humans controlling what the machine will do, when, and how, through an interface and predefined instructions. However, rapid technological advances now make it possible for machines to reach a level of intelligence at which systems execute tasks and missions without direction, attaining the status of self-governing agents. This marks a shift toward a symbiotic relationship between humans and technology.

Merging Of Machine Capability And Human Consciousness

Harvard Medical School has investigated why a powerful melanoma drug stopped helping patients after a few months, using a human-computer collaboration (Prabhakar, 2017), and new approaches to understanding complexity are now being pursued in other domains using this model. Scientists have engaged for centuries with ideas and “what-ifs,” but now the intellectual partner is a machine that builds, stores, computes, and iterates on hundreds of equations and connections. The combination of insights from researchers and computers does not just record correlations (“when you see this, you will see that”) but also reveals the intermediate steps and cause-and-effect links: the how and why of interactions, not merely the what. This enables the leap from Big Data to deeper understanding, and the distinction between human and machine contributions becomes almost indiscernible.

Another kind of human-machine collaboration is seen at the University of Utah in America (Prabhakar, 2017). Doug Fleenor lost both hands in an accident, but his arm has a chip in it that communicates with a computer. Professor Greg Clark, a Utah scientist, asked him to reach out and touch the image of a wooden door on the computer monitor. As Doug guided his virtual hand across the virtual door plank, he literally, biologically, and neurologically felt the wooden surface. New software and fine electrical connections between an embedded chip and the nerves running up his arm to the brain meant he experienced a synthesized sensation of touch and texture indistinguishable from a real tactile occurrence. Doug had not felt anything with his hands in the 25 years since his accident, so this was remarkable. Adaptive signal processing, sensitive neural interfaces, machine reasoning, and complex systems modeling are integrating the power of digital systems with the human capacity to experience insights and apply intuition. This heralds a combined evolutionary path.

Is society ready for such a prospect? Many are anxious about the impact of AI and robotics on employment and the economy (Sage & Matteucci, 2019). Pew studies (2017, 2018, 2019) found that people are “more worried than enthusiastic” about the integration of biology and technology, such as brain chip implants and engineered blood. Can humans and machines, then, work and evolve together?

Can Humans And Machines Exist In Harmony?

It was John Durkin’s (2003) article, “Man and Machine,” that accelerated the debate on coexistence. He was writing about AI and dealt more with emotions of fear and distrust than with the likelihood of coexistence or what form it might take. The film A.I. Artificial Intelligence was used as an illustration, pointing out that humans respond to machines as if they were one of them. David, the film’s main character, accepts his rejection by people and experiences human emotions (or emulations of them). This raises questions about what rights intelligent beings should have and what ethical standards should be developed for AI. David is visually indistinguishable from a human child, so what qualities differentiate humans from machines?

Humans distinguish themselves from the rest of nature by language and intelligence. The ability to reason through language positions them as superior in the rankings of life. This intelligence is often seen as synonymous with sentience (feeling, perceiving, or experiencing subjectively), which is a valued attribute. People think it is fine to chop down plants and kill seemingly unintelligent flies and bugs, but wrong to do this to dogs or dolphins, which are viewed as having greater abilities. However, humans do not think it immoral to slaughter sentient animals for food, as their superior needs dominate.

What is human intelligence? What do human brains have that machines cannot replicate? A brain is a composition of chemicals and biological matter with an unmatched ability to process information for survival. Studies have mapped the brain regions that are active when we experience fear, pleasure, and other feelings. These emotions were once regarded as the hidden soul of a person but are now visible as electro-chemical reactions. If it were possible to isolate the chemical components and find electronic analogs, then machines could experience such emotions. One would need to find the set of operating limits the human brain follows and then mimic them in electronic form to create such an AI. David, in the film A.I., is such a machine. Programming feelings and emotions into AI, along with humanoid bodies, blurs the distinction between man and machine. This is already beginning to happen.

Omron Automation, in Japan, has developed Forpheus to showcase how technology works with people. It can read body language to gauge an opponent’s ability in a table tennis game and offers advice and encouragement, aiming to understand mood and playing ability well enough to predict the next shot. Forpheus has played against humans in Las Vegas. It is among several devices showing how robots can become more human-like by acquiring emotional intelligence and empathy. Honda, the Japanese firm, has launched a new robotics program, “Empower, Experience, Empathy,” including its 3E-A18 robot, which shows compassion to humans using various facial expressions.

The French company Blue Frog Robotics makes a companion social robot, Buddy, who asks for a caress and gets mad if poked in the eye! Qihan Technology’s Sanbot and SoftBank Robotics’ Pepper are being humanized by teaching them to read and react to people’s emotional states. In Italy, I have seen Pepper supporting students with language practice and have also witnessed robots able to feed people needing this help. These robots can interpret a smile, a frown, tone of voice, the lexical field you use, and non-verbal language such as a head angle. Key applications are supporting education, improving sports skills, and providing help for the disabled. Europole has initiated an excellent teacher course in Education Robotics, and a team is using robots in activities to prevent a rise in school bullying, with great success (Cobello & Milli, 2020). There are now robot receptionists and restaurant waiters. It looks like they are here to stay (Sage & Matteucci, 2019)!

Developing emotional intelligence in robots is challenging: it is not just about technology but also psychology and trust. In Japan, these robots act as companions for old people, but in some ways it is disarming to see a machine consoling a desolate, crying person. Professor Juan Romero, in the preface of How World Events Are Changing Education (2020), tells the story of a Japanese man marrying a robot, with a reception for relations and friends after the ceremony. Who counts as the robot’s relatives and friends? Will the human-machine couple try for children, one wonders? There are robot teachers, sports coaches, and now wives. Whatever next?

Human-Machine Conflicts

History shows that when human societies encounter other intelligent societies there have been conflicts. For example, the meeting of European culture with Native Americans, New Zealand Maoris, and Australian Aborigines proved a disaster for these groups. Though the physical form of the people was similar, their ways of life were very different, and each was unknown to the other until contact was made. The Europeans dominated and exploited the native groups. Interaction between intelligent societies is not the same as humans creating machine intelligence, but it demonstrates how humans behave.

Conflicts between human groups have many causes: religious and ideological differences and fights over land and resources are major reasons for warfare. Clashes between humans and machine intelligence are harder to predict. If machine intelligences were able to form a functioning societal group, they would need resources just as humans do. Land, materials, and energy are necessary for both and could become a source of conflict. Much depends on whether machine intelligence forms societies and seeks status and needs equal to those of humans. David, in the A.I. film, pursues this, but humans did not accept him as their equal. It is possible that human values will conflict with the emergence of human-like AI.

Different Ways To Co-Exist

When Durkin (2003) speaks about coexistence, he suggests humans are reliant on technology and machine intelligence for survival. He says that humans will be unable to turn off their intelligent machines because they depend on them for reliable assistance with routine work, which means the machines are in effective control. Humans have developed machines to automate tasks and free people from doing them: the repetitive, time-consuming activities that are dull, dirty, or dangerous. An example is an email filter, software which sorts through the mail and makes decisions based on logic and reasoning. AI can be subservient to human intelligence in varying degrees; it is possible to program software to be intelligent but still subservient, and so develop AI that remains under human control.
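The email-filter example can be sketched as a simple rule-based classifier. This is a minimal, hypothetical illustration (the rules, keywords, and addresses are invented for the example; real filters combine many more signals and learned statistics), but it shows the principle of software making routine decisions on our behalf:

```python
# A minimal, hypothetical rule-based email filter: software applying
# human-written logic to sort mail without the human reading each message.
def filter_email(message):
    """Return 'spam', 'priority', or 'inbox' for a message dict."""
    subject = message.get("subject", "").lower()
    sender = message.get("sender", "").lower()

    # Rule 1: crude keyword check for likely junk mail.
    if any(word in subject for word in ("winner", "free money", "act now")):
        return "spam"
    # Rule 2: mail from known contacts is prioritised.
    if sender in {"boss@example.com", "family@example.com"}:
        return "priority"
    # Default: everything else goes to the ordinary inbox.
    return "inbox"

messages = [
    {"sender": "boss@example.com", "subject": "Quarterly report"},
    {"sender": "promo@shop.example", "subject": "You are a WINNER"},
    {"sender": "colleague@example.org", "subject": "Lunch?"},
]
for m in messages:
    print(filter_email(m))  # priority, spam, inbox
```

The software here is intelligent only within the limits its human author set, which is precisely the "subservient" mode of coexistence the text describes.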

Another way would be equality, with humans and machine intelligence coexisting as partners. However, if AI continues to develop to the stage where it matches human intelligence, there will come a time when it seeks to serve its own interests. At that point, humans and machines could come into conflict. A war between people and intelligent machines would be humanity’s greatest test of survival, and the result could be a third form of coexistence, in which humans are the subservient ones.

However, humans are the architects of machine intelligence, so it is possible to create software with specifications that protect humans from potential harm. The science fiction writer Isaac Asimov created laws in his books for robots to follow, but these have little bearing on actual AI construction. Scientists who have analyzed the possibility of programming rigid instructions into AI conclude that it is difficult, because of the complexity of reducing the environment to the binary definitions such laws require. Yet behavioral laws of this kind would be necessary to prevent conflict (Clarke, 1994; Grand, 2004).

Muller (2020) examines the ethics of machine existence. He discusses the idea that any intelligent species will, at some point in its development, discover something that brings about its own demise. Such a “great filter” would help to explain the Fermi Paradox: why there is no sign of life in the known universe despite the probability of it emerging. Muller concludes it would be bad news if the “great filter” lies ahead of us rather than being a hurdle that Earth has already passed.

Humans are likely to accept intelligent machines whatever their problems. When you see people on public transport or walking the streets, they frequently have a smartphone glued to their ear and are shouting out their messages for all and sundry to hear. This is just one of today’s irritating issues that assail us as we go about our lives. The functions that machines perform for us are necessary to our high standard of living. Automating routine, time-consuming tasks leaves us room for more meaningful, interesting activities. AI in humanoid form is already common across the world, acting as an intelligent assistant in different spheres of life.

However, machines have substantial limits, notably the lack of human consciousness (Faggin, 2019). Consciousness refers to individual awareness of unique thoughts, memories, feelings, sensations, and environments: essentially, an awareness of yourself and the world around you. This mindfulness is subjective and unique to you. Professor Federico Faggin, who led the design of the first commercial microprocessor and pioneered touch-screen technology, and is widely regarded as a genius of his field, thinks it will not be possible to invest machines with full consciousness, so AI machines should be restricted to ensure a functional relationship. Human and machine intelligence should therefore coexist under specific conditions and rules defined by people in global agreement. Human nature has a good and a bad side, and nations wishing to dominate others may overstep the mark and produce machines that demand rights and freedoms. Whether human or machine will dominate is currently unknowable.

Review

Machines are learning to learn, and humans are attempting to teach them. In doing so, under what conditions does a machine outperform a person, and by how much? How do we design an algorithm’s learning program? What tests must it pass to be trusted? Machine outcomes must be accurate, and the human influence, with its frailties, cannot be underrated. Machine Learning developers encode numerous biases, implicit and often unintentional, into the algorithms they develop. We saw this in England’s school-leaving exams during the 2020 pandemic, when students could not physically take them. An algorithm computed their grades, and around 40% of students received grades that did not match their achievement profiles, disrupting their future plans.

Therefore, undesirable human values and tendencies persist. At the national, international, and criminal level, there are opportunities to exploit algorithms, or to create vulnerabilities in them, for wicked outcomes. Without appropriate controls in place, correlations learned from data could reinforce deep-seated inequalities in society. We have seen this in algorithms selecting students for higher education courses, which have in some instances been shown to be biased against women or people of color. A diverse data set helps, but this alone will not address the limitations of current technology. These limitations mean that algorithms can also learn inaccurate correlations resulting from unfortunate events or intentional tampering. When human values and societal ethics are at stake, it is vital to ensure the right balance between Machine Learning and human teaching.
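How learned correlations reinforce inequality can be shown with a toy, entirely synthetic example (the data and the naive "model" below are invented for illustration and bear no relation to any real admissions system): a model that simply estimates admission rates from historically biased decisions reproduces the bias, because the prejudice lives in the data, not the code.

```python
from collections import defaultdict

# Toy, synthetic historical admissions records: (group, qualified, admitted).
# Group B candidates were admitted less often even when equally qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# A naive "model": estimate the chance of admission per group
# directly from past decisions.
counts = defaultdict(lambda: [0, 0])  # group -> [admitted, total]
for group, _qualified, admitted in history:
    counts[group][1] += 1
    if admitted:
        counts[group][0] += 1

rates = {g: adm / tot for g, (adm, tot) in counts.items()}
# Group A scores higher than group B despite identical qualification rates:
# the model has learned the historical bias as if it were a real signal.
print(rates)
```

Any system trained this way would rank new group B applicants lower for no legitimate reason, which is exactly the failure mode the paragraph describes.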

As AI advances, there are concerns that powerful machines are being created that will pursue undesirable goals with catastrophic results. Professor Stuart Russell (2019), the AI expert, ponders a super-intelligent system that learns to stop climate change by reducing the human population, since science shows that human activity is the major cause of the Earth’s warming. Such thinking shows the importance of a theoretical foundation for correctly specifying outcomes.

However, some researchers think that encoding world knowledge into a Machine Learning algorithm is doomed, arguing that closer mimicry of human brain structure is required. Deep learning algorithms contain layers of artificial neurons, with interconnections analogous to brain synapses, and can have millions of parameters. Human brains, however, have 100 billion neurons with 100 trillion synaptic connections. If we could devise an algorithm as complex as this, its inner workings would be almost impossible to understand. Could we ever reliably teach, test, and trust such a system?
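The scale gap is easy to quantify. The layer sizes below are illustrative (not any particular published network), but the parameter formula for a fully connected layer (weights plus biases) is standard, and the brain figures are those quoted above:

```python
# Parameters of a fully connected network: each layer of n_out neurons fed by
# n_in inputs has n_in * n_out weights plus n_out biases.
def mlp_parameters(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# An illustrative deep network: roughly eight million parameters.
params = mlp_parameters([784, 2048, 2048, 1024, 10])
print(f"network parameters: {params:,}")

# The brain figure quoted in the text, for comparison.
synapses = 100 * 10**12  # 100 trillion synaptic connections
print(f"synapses per network parameter: {synapses // params:,}")
```

Even this generously sized toy network falls short of the quoted synapse count by a factor of more than ten million, which gives a sense of how far "mimicking brain structure" would have to go.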

References:

  • Clarke, R. (1994) Asimov’s Laws of Robotics: Implications for Information Technology – 2. IEEE Computer. Vol. 27, Issue 1, Jan. 1994: 57-66.
  • Cobello, S. & Milli, E. (2020) Chapter 15. Social Aspects of Robotics. R. Sage & R Matteucci (eds) How World Events are Changing Education. (in press)
  • Durkin, J. (2003) Man & Machine: I wonder if we can coexist. AI & Soc. 17:383-390. Springer-Verlag London Ltd.
  • Faggin, F. (2019) Chapter 1.The Fundamental Differences between Artificial and Human Intelligence. R. Sage & R. Matteucci (eds) The Robots are Here: Learning to live with them. Rotterdam: Sense International Press.
  • Grand, S. (2004) Moving AI Out of Its Infancy: Changing Our Preconceptions. IEEE Intelligent Systems and Their Applications. Vol. 19, Issue 6, Nov.-Dec. 2004: 74-77.
  • Muller, V. (2020) Ethics of Artificial Intelligence and Robotics. Stanford Encyclopedia of Philosophy. 30 April. plato.stanford.edu/entries/ethics-ai/
  • Pew Research Center (2017/2018/2019) Automation in Everyday Life. 
  • Prabhakar, A. (2017) The Merging of Human and Machine is Happening. Wired Technology, January. Wired.co.uk
  • Russell, S. (2019) Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.