What’s so dangerous about people working with AI?
Technology is driven by lazy people! Most of us have used this line at one time or another to excuse slacking off or lying flat. From the steam engine of the industrial revolution to the computers of the digital revolution, technological progress has indeed given humans ever more room to take it easy. Will AI, the most promising technology platform of the next generation, make humans lazier still? It seems so, and that is not good news. According to a new study published in the journal Frontiers in Robotics and AI, humans really do slack off when working with AI and machines. Teamwork can be both a blessing and a curse, according to Cymek, the study's first author. In the age of AI, then, the greatest crisis facing humans may not be replacement by machines, but laziness to the point of degradation.
1. Machine assistants make humans let their guard down
With a helper as capable as a machine at hand, humans become less vigilant. Researchers at the Technical University of Berlin in Germany provided 42 participants with blurry images of circuit boards and asked them to check for defects. Half of the participants were told that the boards they would be working on had already been inspected by a robot called Panda, which had marked the defects it found.
During the experiment, the robot Panda did in fact detect 94.8% of the defects. All participants saw the same 320 scanned circuit-board images, and when the researchers took a closer look at error rates, they found that participants working with Panda caught fewer defects later in the task, after they had seen Panda successfully flag many defects. On the surface, the two groups looked alike: participants in both inspected almost the entire surface of the board, spent time searching, and rated their own effort as high. Yet those who worked with the robot found an average of 3.3 defects, while those who completed the task alone found an average of 4.23.
This suggests that participants were less dedicated to inspecting the boards when working with a robot partner, the study said: participants seemed to keep up the outward effort of inspection, but performed it with less mental effort and less attention to the information they sampled. In other words, once they were told the robot had already inspected part of the work, and once they had experienced how reliable it was, they found fewer defects. Subconsciously, they assumed Panda was unlikely to miss anything, producing what the researchers call a social loafing effect. The implications are particularly important for industries that rely on strict quality control. The authors warn that even a brief lapse of human attention, possibly born of over-reliance on robotic accuracy, could jeopardize safety.
Loss of motivation tends to be greater over longer shifts, when tasks become routine and the work environment offers little performance monitoring and feedback, notes researcher Onnasch. This can harm work outcomes in manufacturing in general, and in safety-critical areas where double-checking is common in particular. The test did have limitations, of course. The sample was not very large, and social loafing is hard to simulate in the lab because participants know they are being watched. The main limitation is the lab setting, Cymek explains: to understand the magnitude of the problem of motivation loss in human-robot interaction, researchers need to leave the lab and test their hypotheses in real work environments, with experienced workers who routinely work with robots.
2. The crisis of human-robot cooperation has been unfolding for a long time
In fact, the degradation caused by human-robot cooperation has long been occurring in the real world, outside the lab. In autonomous driving, there is a phenomenon similar to social loafing called automation complacency: drivers grow distracted because they trust the automated assistance.
In March 2018, in the US state of Arizona, an Uber self-driving test car with a safety driver on board struck and killed a woman pushing a bicycle across the road. Police analysis found that had the safety driver been watching the road, the vehicle could have been stopped 12.8 meters short of the victim, avoiding the tragedy.
Tesla, too, is a frequent target of scrutiny from US media and regulators, mostly over accidents involving Autopilot. A typical scenario: a Tesla driver falls asleep, or plays a game, while using the Autopilot feature, and is involved in a fatal crash.
In the current AI frenzy, the prophecy of machines replacing humans is edging closer to reality. One camp believes machines will serve humans; the other believes humans will accidentally create something evil.
In the medical field, Watson, an AI system developed by IBM, was reported to have given unsafe treatment advice to cancer patients. A paper this year noted that generative AI can already pass all three parts of the U.S. medical licensing exam. A natural extension of the hypothesis: if patients are one day seen first by an AI, with human doctors acting only as gatekeepers who check its output, will those doctors in turn fall prey to social loafing and automation complacency? The authors of the study above note that combining human and robotic capabilities clearly offers many opportunities, but unintended group effects in human-robot teams deserve consideration too: when humans and robots work on a task together, human teammates can lose motivation, making effects such as social loafing more likely.
There are also concerns that AI could weaken human thinking and creativity, erode relationships, and detach people from reality altogether. Inflection, a star generative-AI startup in Silicon Valley, has launched Pi, a chatbot designed to be a friendly, supportive companion. Its founders describe Pi as a tool to help people cope with loneliness, one that can serve as a confidant. Critics counter that it lets people escape from reality instead of interacting with real human beings.

The relationship between people and tools has now reached a new level. Every tool ever created has, in a sense, made humans lazier: the robot vacuum spares people from cleaning the house, the cell phone from memorizing phone numbers. The difference with AI is that more of the thinking and choosing is handed over to the machine, which is essentially a black box; this is closer to ceding cognitive autonomy itself. When a person hands driving decisions entirely to an automated system, or medical diagnosis to an AI, the potential cost may be of a very different order from no longer being able to remember a phone number.
Joseph Weizenbaum, the computer scientist who built ELIZA, the first chatbot in history, likened science to an addictive drug that becomes a chronic poison as the dose increases, warning that once computers are introduced into certain complex human activities, there may be no turning back. When humans hand the power to think and judge over to machines, and treat the machine's output as the reference, the demons of social loafing and automation complacency may lurk nearby, turning into a chronic poison as the tasks repeat.