The manufacturing landscape of Industry 4.0 marks a shift from automated manufacturing to smart manufacturing. One expectation of Industry 4.0 is meeting customer demands for product changes in low-volume production without wasting time on tasks such as reconfiguring assembly lines. Smart manufacturing will be realized through the Internet of Things, where each participating component has its own known IP address. A smart manufacturing system must therefore not only produce products in small batches to meet customer needs, but also offer better predictive maintenance, robust product design, and adaptive production. For smart robotic factories to work in the context of Industry 4.0 and the IoT, robots will perform most of the manufacturing work, while human workers remain in the work area in supervisory roles or in jobs for which robots have not been trained. This constant presence of humans in or near robot work areas is changing the old practice of fencing people out of robot workspaces, and instead requires robots and humans to coexist and collaborate safely.
In such settings, robots share the same workspace as humans and carry out industrial activities such as raw-material handling, assembly, and the transfer of industrial products. The traditional approach gives humans only limited access to the robot: safety controls prevent workers from entering the robot’s work area, and any entry brings the machine to a complete stop and triggers an interrupt-and-reset procedure, extending production time. A newer approach has been proposed in its place: safe human-robot collaboration (HRC) without any fences. Achieving this requires additional safety and protection measures implemented through collaborative-robot cyber-physical systems (CPS), which must ensure safety while increasing productivity according to the degree of human-robot interaction. The design approach for such systems combines safety and security concerns, much like the design of an industrial facility that considers both aspects. The following figure shows several application types of collaborative robots.

A collaborative-robot cyber-physical system is an intelligent system in which computational and physical subsystems are integrated to control and sense the changing states of real-world variables. The success of such a CPS depends on reliable and safe sensor-network and communication technology. The CPS platform continues to evolve its architecture to bridge the digital-physical divide and remove the boundaries between key technologies, in particular electronics, computing, communications, sensing, actuation, embedded systems, and sensor networks. The CPS model mainly comprises three components: the human component, the physical component, and the computational component.
These three modules are integrated into one system. As enabling technologies develop, interactions between the three components multiply, and they are connected by different technologies; for example, human position tracking and safety-distance parameters are important considerations for worker safety in a robotic CPS. Robotic systems are highly automated systems that remove the boundaries between elements and connect them through interaction. Various human-machine interaction technologies build on human vision, hearing, and touch: a robotic CPS can use vision systems to detect, track, and recognize human gestures, and the robot can also be commanded by human audio signals. Different types of sensors and actuators enable many different modes of interaction among the three components.
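To make the interaction between the components concrete, the following minimal sketch wires a tracked human position and a robot position into a computational safety-distance check. All names, the 1.5 m threshold, and the positions are illustrative assumptions, not anything specified in the text:

```python
import math
from dataclasses import dataclass

# Minimal sketch of the three CPS components interacting: a (hypothetical)
# tracked human position, the physical robot's position, and a computational
# safety monitor comparing their separation against a safety distance.

@dataclass
class Position:
    x: float
    y: float
    z: float

    def distance_to(self, other: "Position") -> float:
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))

class SafetyMonitor:
    """Computational component: checks the human-robot separation."""

    def __init__(self, safety_distance_m: float = 1.5):
        self.safety_distance_m = safety_distance_m  # assumed threshold

    def is_safe(self, human: Position, robot: Position) -> bool:
        return human.distance_to(robot) >= self.safety_distance_m

# Usage: human tracked at (2, 0, 0), robot at the origin -> 2.0 m apart.
monitor = SafetyMonitor(safety_distance_m=1.5)
print(monitor.is_safe(Position(2, 0, 0), Position(0, 0, 0)))  # True
```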
In human-robot collaboration, various sensors are applied in the CPS to ensure safety. The traditional method is manual guidance, or reducing the robot’s speed as required. Most such methods are open-loop; the achievable level of human-robot cooperation depends on the risk assessment of the application site, and the approach is limited to small robots. The second safety method designates a work area covered by sensors such as laser scanners or proximity sensors: the robot must stop whenever a person enters the work area. This is a sensor-based closed-loop system, but it hardly achieves genuine human-robot collaboration, as shown in the figure below. A third approach is speed and separation monitoring through vision-based systems or other suitable technologies: if a worker enters a hazardous area, the robot slows down or even stops. With a variety of integrated sensors and sensor-fusion technology, this approach can achieve a high level of human-robot collaboration, but a failure of the monitoring function introduces risk. The last method is force monitoring using force sensors: the robot’s speed and acceleration are reduced according to the force permitted on the body part that might be struck, and this permitted force varies from one body part to another. This approach provides the highest level of human-robot collaboration, but it also requires integrating multiple sensor types, fusing their data, and performing a challenging risk assessment for the case in which the monitoring function fails.

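The third approach, speed and separation monitoring, can be illustrated with a small sketch. The stop distance, full-speed distance, and linear scaling law below are illustrative assumptions, not values from any standard:

```python
def scaled_speed(distance_m: float,
                 stop_dist_m: float = 0.5,
                 full_speed_dist_m: float = 2.0,
                 max_speed: float = 1.0) -> float:
    """Speed-and-separation monitoring sketch: the robot stops inside the
    stop zone, runs at full speed beyond full_speed_dist_m, and scales
    linearly in between. All thresholds are assumed placeholders."""
    if distance_m <= stop_dist_m:
        return 0.0  # protective stop
    if distance_m >= full_speed_dist_m:
        return max_speed
    return max_speed * (distance_m - stop_dist_m) / (full_speed_dist_m - stop_dist_m)

# Worker at 1.25 m -> robot runs at half speed; at 0.4 m -> protective stop.
print(scaled_speed(1.25))  # 0.5
print(scaled_speed(0.4))   # 0.0
```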
From these different collaboration technologies and their corresponding levels of human-robot collaboration, researchers have formalized a scoring scheme and measurement criteria for human-robot collaboration. One measure is the number of sensors installed in the CPS. A second is the data rate of a single sensor, or of a group of sensors of the same type, since the response time of the overall system depends on the latency of each sensor. The overall delay time of the system is therefore a key performance indicator: it determines whether the robot can initiate safety protocols in time to deal with any danger, and a larger delay degrades the robot’s reaction time and thereby lowers the achievable level of human-robot collaboration. The following figure is a level diagram of human-robot collaboration safety.

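The latency KPI can be made concrete with a rough back-of-the-envelope sketch. All sensor rates, processing delays, and the worker’s approach speed below are assumed placeholder figures, chosen only to show how the end-to-end delay compares against the available reaction budget:

```python
# Sketch of the latency KPI: the end-to-end delay from sensing to robot
# reaction must stay below the time available to react. All figures are
# illustrative placeholders, not measured values.

sensor_latency_s = {           # per sensor type (data rate -> latency)
    "laser_scanner": 1 / 25,   # 25 Hz scan rate
    "stereo_camera": 1 / 30,   # 30 fps
    "imu": 1 / 200,            # 200 Hz
}

processing_latency_s = 0.020   # perception + fusion (assumed)
actuation_latency_s = 0.050    # time for the controller to begin braking

# Worst-case path: the slowest sensor dominates the sensing stage.
total_latency_s = (max(sensor_latency_s.values())
                   + processing_latency_s + actuation_latency_s)

# Time budget: worker approaching at 1.6 m/s with 0.5 m of clearance.
reaction_budget_s = 0.5 / 1.6

print(f"total latency {total_latency_s*1000:.0f} ms, "
      f"budget {reaction_budget_s*1000:.0f} ms, "
      f"safe: {total_latency_s < reaction_budget_s}")
```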
Therefore, to assess the level of human-robot collaboration achievable in a CPS, metrics must be calculated for the given collaborative environment. When applying the various sensors, the positions of personnel and robots in the CPS must be established, along with the corresponding application scenarios. For example, vision sensors used for location information in combination with other sensors must operate under adequate lighting conditions. Additionally, for faster and more accurate robot responses, communication must be fast, preferably over short-range, secure wireless networks, and the system as a whole must comply with the relevant safety standards. For speed and separation monitoring, an inertial measurement unit (IMU) is used in addition to the basic area- and position-monitoring sensor system. Likewise, a force-monitoring human-robot collaboration system requires basic area and position monitoring in addition to its force sensors. In force monitoring, geometrically appropriate tactile sensors of different types, with shock-absorbing properties, can be mounted on the robot body or in its joints for safe collision detection and touch-based interaction, and force sensors with different measurement ranges can be used to enforce the force limits of individual body parts.
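A hedged sketch of per-body-part force limiting follows. The force values below are illustrative placeholders only; real limits for each body region must be taken from the applicable standard (e.g., ISO/TS 15066):

```python
# Sketch of per-body-part force limiting: the permissible contact force
# depends on which body region could be struck. These numbers are
# illustrative placeholders, NOT values from any standard.

FORCE_LIMIT_N = {
    "hand": 140.0,
    "forearm": 160.0,
    "chest": 140.0,
    "skull": 65.0,   # head contact is the most restrictive case
}

def allowed_speed(body_part: str, measured_force_n: float,
                  current_speed: float) -> float:
    """Reduce speed proportionally as measured contact force nears the limit."""
    limit = FORCE_LIMIT_N[body_part]
    if measured_force_n >= limit:
        return 0.0                      # protective stop
    return current_speed * (1.0 - measured_force_n / limit)

print(allowed_speed("forearm", 80.0, 0.25))  # half the force budget -> half speed
```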
Beyond the sensor technology used in any specific human-robot collaboration solution, there are many industrial scenarios for which a general solution or guideline can be established. These scenarios range from a single robot to multiple robots working with multiple humans in one collaborative system. Human position monitoring may use inertial sensors, vision, radar, or any hybrid approach. Hybrid methods, which combine two or more techniques, can achieve high accuracy because positioning information from multiple separate sensor systems cooperates and compensates for individual errors. Moreover, in a given situation some sensor technologies may fail where others remain practical; a camera, for example, stops functioning behind a visual barrier, in which case sensors based on other technologies keep the system operational.
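One way to realize such a hybrid method is inverse-variance weighting of two position estimates, falling back to the remaining sensor when one drops out. The sensor pairing (camera plus radar) and the noise figures in this sketch are assumptions for illustration:

```python
from typing import Optional, Tuple

# Sketch of hybrid positioning: fuse camera and radar estimates by
# inverse-variance weighting, and fall back to radar alone when the
# camera is occluded. Sensor names and variances are assumptions.

def fuse(camera: Optional[Tuple[float, float]],
         radar: Tuple[float, float],
         camera_var: float = 0.01,
         radar_var: float = 0.09) -> Tuple[float, float]:
    if camera is None:                 # visual barrier: camera unavailable
        return radar
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    total = w_cam + w_rad
    return tuple((w_cam * c + w_rad * r) / total
                 for c, r in zip(camera, radar))

print(fuse((1.00, 2.00), (1.10, 2.10)))  # camera dominates (lower variance)
print(fuse(None, (1.10, 2.10)))          # occlusion: radar keeps the system running
```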
As robots gradually move into open workspaces, more and more of them rely on visual information processing. For example, researchers have combined multi-sensor information fusion to collect work-cell data from stereo cameras, systematically detect the people and robots in the environment, and generate dynamic danger zones based on body position and trajectory. Other research builds a depth-space method on depth sensors: distances between the robot and both static and moving obstacles are estimated, and these real-time distance measurements, together with obstacle velocity estimates, feed a repulsive-vector-based controller as a collision-avoidance technique. There are also studies based on behavior prediction from image data, which use RGB input and standardized activity-description vectors to eliminate the need for timing or time-series information, and then predict human activity to avoid collisions. To address the object occlusion that can occur during visual acquisition, some researchers have developed distributed distance sensing mounted on the robot body itself and optimized the placement of these on-body sensors, which reduces, to a certain extent, the occurrence of collision events in unstructured environments.
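The repulsive-vector idea can be sketched as follows; the gain, influence radius, and point-obstacle simplification are assumptions for illustration and not taken from the cited work:

```python
import numpy as np

# Sketch of a repulsive-vector controller: the closer an obstacle is to
# the robot, the stronger the velocity pushing the robot away from it.
# Gains and ranges are illustrative assumptions.

def repulsive_velocity(robot_pos: np.ndarray, obstacle_pos: np.ndarray,
                       v_max: float = 0.5,
                       influence_radius: float = 1.0) -> np.ndarray:
    """Return a Cartesian velocity pointing away from the obstacle, rising
    from 0 (at the influence radius) to v_max (at contact)."""
    offset = robot_pos - obstacle_pos
    distance = np.linalg.norm(offset)
    if distance >= influence_radius or distance == 0.0:
        return np.zeros(3)
    magnitude = v_max * (1.0 - distance / influence_radius)
    return magnitude * offset / distance

# Obstacle 0.4 m away along x: the robot is pushed away at 0.3 m/s.
print(repulsive_velocity(np.array([0.4, 0.0, 0.0]), np.zeros(3)))
```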
Robot skin can be classified by its sensing principle and design technology (resistive, capacitive, piezoelectric, acoustic, and so on), and different designs emphasize different aspects of tactile information processing, as shown in the figure below. Robot skin is an interesting and useful development because it provides rich, direct feedback, enabling robotic systems to recognize objects through multiple points of contact. 1). Skin-based optical sensors: the general idea is to develop a multimodal sensor that provides both tactile and visual information. With this in mind, some researchers have proposed covering the sensor’s skin surface with an opaque material to block outside light from entering the sensor. However, opaque materials limit the information the visual sensor can provide, restricting it to tactile information alone. To address this, researchers have proposed a prototype consisting of a transparent skin, cameras, and colored markers that provides high-resolution contact-force measurement together with close-up vision. 2). Skin-based soft sensors: conventional flexible sensors face the technical problem of declining sensitivity; for example, aging and mechanical stress can cause hysteresis and reduce sensor sensitivity. To address this challenge, a magnetorheological tactile sensor has been proposed, consisting of flexible upper and lower elastomers: deformation of the elastic sheet changes the magnetic flux, which in turn shapes the spatial response of the sensor.

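As a rough illustration of how a robotic system might consume skin data, the following sketch polls an array of taxels (tactile elements) and flags contacts above a threshold; the threshold, the normalized readings, and the reaction are all hypothetical:

```python
# Sketch of collision detection with a tactile skin: an array of taxels is
# polled, and any reading above a contact threshold triggers a protective
# reaction. Threshold and readings are illustrative assumptions.

TAXEL_CONTACT_THRESHOLD = 0.8  # normalized taxel output treated as contact

def detect_contact(taxel_readings: list) -> list:
    """Return the indices of taxels reporting contact."""
    return [i for i, value in enumerate(taxel_readings)
            if value >= TAXEL_CONTACT_THRESHOLD]

readings = [0.02, 0.05, 0.91, 0.87, 0.04]   # contact on taxels 2 and 3
contacts = detect_contact(readings)
if contacts:
    print(f"contact at taxels {contacts}: issue protective stop")
```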
Safe interaction is a basic requirement for integrating robots into the daily life of the public, and within human-robot collaboration research, the study and application of sensing technology is of great significance. To realize “Made in China 2025” and build a new industrial system and a blueprint for the smart home, research on human-robot collaboration safety and sensor technology must continue to advance both the related disciplines and industrial technology.