1 Introduction

Intelligent unmanned autonomous systems are man-made systems that can carry out operations or management by means of advanced technologies without human intervention. Since ancient times, humans have created countless kinds of unmanned systems, and their technological level has gradually increased with the growth of human knowledge. The recent remarkable advances in artificial intelligence (AI) have taken unmanned autonomous systems to a more advanced level (Pan, 2016). There is therefore a need for an extended and detailed discussion of the development trends in intelligent unmanned autonomous systems.

Compared with traditional autonomous systems, the scope of unmanned autonomous systems has expanded greatly. Various types of intelligent unmanned autonomous systems are emerging, and their impact on society and human life will be significant. Systems that may develop into intelligent unmanned autonomous systems now or in the near future include unmanned vehicles, unmanned aerial vehicles, service robots, space robots, marine robots, and unmanned workshops/intelligent plants.

Intelligent unmanned autonomous systems are complex systems created by the fusion of technologies from mechanics, control, computer science, communication, and materials. AI is undoubtedly one of the key technologies for the development of intelligent unmanned autonomous systems. Autonomy and intelligence are the two most important features of intelligent unmanned systems. To realize and continuously improve these two features, the most effective approach is normally to apply AI technologies such as image recognition, human-machine interaction, intelligent decision making, reasoning, and learning. It is the development of these AI technologies that has enabled humans to create unmanned systems with far higher levels of autonomy and intelligence, in some respects approaching human levels.

In this paper, we introduce the development trends in intelligent unmanned autonomous systems by summarizing the main achievements in several areas. Sections 2 to 8 introduce the trends in the development of AI technology applications for intelligent unmanned autonomous systems, unmanned vehicles, unmanned aerial vehicles, service robots, space robots, marine robots, and unmanned workshops/intelligent plants. This forms the basis for an overall description of the current trends in the development of intelligent unmanned autonomous systems.

2 Trends in the development of AI technology applications for intelligent unmanned autonomous systems

In recent decades, AI and machine learning have developed rapidly in computer vision, acoustics, and other learning problem domains, especially since the emergence of deep learning (LeCun et al., 2015). Many impressive unmanned autonomous applications have arisen thanks to more advanced models and the improved computing capabilities of hardware. For example, unmanned ground and aerial vehicles and medical robotics have developed remarkably owing to continuing progress in AI and machine learning. In particular, deep learning has proved to have outstanding learning capacity in highly complex tasks. Modern computing devices such as graphics processing units (GPUs) (Chetlur et al., 2014) and computation frameworks such as Caffe (Jia et al., 2014), Theano (Theano Development Team, 2016), and TensorFlow (Abadi et al., 2016) have helped designers and engineers build novel and robust unmanned autonomous systems.

Machine learning supports unmanned autonomous systems in two ways, perception and control, mirroring how humans interact with the outside world: information is first received and analyzed, and actions are then taken in response. Sensory modalities such as vision, acoustics, and touch provide information from the outside world, and models are needed to transform this information into different levels of abstraction that describe the environment. Once the information has been obtained, unmanned systems can learn to control their actions using reinforcement learning mechanisms (Sutton and Barto, 1998), evaluating rewards from the environment with which they interact and then choosing the best policy. These methods make it possible to create end-to-end systems that learn a specified task from the collected data.
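
To make this two-part structure concrete, the following minimal Python sketch shows a perceive-then-act loop: raw sensor frames are compressed into an abstract state, and a simple policy maps that state to a discrete action. The placeholder perception and policy functions here are our own illustrative assumptions, not any system cited above.

```python
# A minimal sketch of the two roles described above: a perception model turns
# raw sensor data into an abstract state, and a learned policy maps that state
# to a control action; both functions are placeholders (assumptions), not any
# specific published system.
import numpy as np

def perceive(sensor_frame):
    """Perception: compress raw readings into a small abstract state vector."""
    return np.array([sensor_frame.mean(), sensor_frame.std()])

def policy(state, weights):
    """Control: map the abstract state to one of two discrete actions."""
    return int(state @ weights > 0.0)

def control_loop(frames, weights):
    actions = []
    for frame in frames:                         # receive information ...
        state = perceive(frame)                  # ... analyse it ...
        actions.append(policy(state, weights))   # ... then act on it
    return actions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((3, 64, 64))             # three toy sensor frames
    print(control_loop(frames, weights=np.array([1.0, -1.0])))
```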

In vision, abstractions can include object detection (Girshick et al., 2014; Girshick, 2015; Ren et al., 2015), classification (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; Szegedy et al., 2015), and semantic understanding (Huang et al., 2013) using convolutional neural networks (LeCun and Bengio, 1995). Inspired by the hierarchical architecture of the human visual cortex (Hubel and Wiesel, 1962), architectures with multiple convolution-pooling layers have been proposed and are being used in different machine learning tasks. For vision tasks (Fig. 1), convolution layers compute a feature map by convolving local windows with kernels, and pooling layers compress the feature map by reducing each local window to a single value, its maximum activation or its average, thus forming a hierarchical pyramid structure in which higher layers represent higher levels of abstraction. Convolutional neural networks exploit local structure and share weights, which can dramatically reduce the over-fitting that occurs in fully connected networks.

Fig. 1  Convolution neural network architecture and principles
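
As a concrete illustration of the convolution-pooling pattern just described, the following NumPy sketch implements one convolution layer (with an arbitrary 2x2 kernel chosen purely for illustration) followed by a ReLU and max pooling. It is a minimal toy example, not the architecture of any of the cited networks.

```python
# A minimal NumPy sketch of the convolution-pooling pattern: a kernel is slid
# over local windows to produce a feature map, which max pooling then
# compresses so that higher layers see coarser, more abstract representations.
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Keep the maximum activation of each non-overlapping local window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))               # toy single-channel input
    edge_kernel = np.array([[1.0, -1.0],     # hypothetical 2x2 kernel
                            [1.0, -1.0]])
    fmap = np.maximum(conv2d(image, edge_kernel), 0.0)   # convolution + ReLU
    print(max_pool(fmap, size=2).shape)       # (3, 3) pooled feature map
```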

For sequence data in acoustics (Sak et al., 2014) and language (Mikolov et al., 2010; Vinyals et al., 2015), models with recurrent structures have brought significant improvements in state-of-the-art performance. Recurrent neural networks (Funahashi and Nakamura, 1993) introduce chain-like loop structures, as shown in Figs. 2a and 2b, in which F(X, H) defines the mapping from sequence inputs X and hidden states H to sequential outputs. Simple recurrent neural network (RNN) architectures have problems with long-term dependencies, while sometimes only a few previous memories are needed. Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) models solve this problem by introducing gate functions that control the flow of information. This approach has been quite successful in tasks involving language models (Sutskever et al., 2014) and speech recognition (Graves et al., 2013). Meanwhile, breakthroughs in image captioning (Vinyals et al., 2015) have succeeded in transforming images into the language domain by combining convolutional neural networks (CNNs) with recurrent models. Inspired by research on visual perception (Rensink, 2000), recurrent attention models (Mnih et al., 2014) further contribute to tasks involving machine translation (Luong et al., 2015) and image captioning (Xu et al., 2015).

Fig. 2  A basic diagram of recurrent neural networks in rolled (a) and unrolled (b) form. F(X, H) defines the inner network, within which models such as LSTM and attention are implemented
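
The following minimal NumPy sketch illustrates the recurrent pattern of Fig. 2: a single inner mapping F(X, H) is applied at each time step, carrying a hidden state forward. The tanh cell and random weights are illustrative assumptions; in an LSTM, gate functions would replace this simple cell.

```python
# A minimal NumPy sketch of the recurrent pattern: one inner mapping F(X, H)
# is applied repeatedly along the sequence, carrying a hidden state forward.
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """F(X, H): map the current input and previous hidden state to a new state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

def run_rnn(xs, hidden_size):
    rng = np.random.default_rng(1)
    input_size = xs.shape[1]
    W_xh = rng.normal(scale=0.1, size=(input_size, hidden_size))
    W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    b_h = np.zeros(hidden_size)
    h = np.zeros(hidden_size)
    states = []
    for x_t in xs:                      # "unrolling" the loop over time steps
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        states.append(h)
    return np.stack(states)             # one hidden state per time step

if __name__ == "__main__":
    sequence = np.random.default_rng(2).random((5, 3))   # 5 steps, 3 features
    print(run_rnn(sequence, hidden_size=4).shape)        # (5, 4)
```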

Unlike the models described above, deep reinforcement learning attempts to learn how to interact with the environment (Sutton and Barto, 1998). It involves an environment Σ, a set of actions A, states S, and a value function V, with the aim of learning a policy π(s, a) for making sequential decisions that yield a larger cumulative reward. The objective is the Q-function Q(s, a), which defines the utility of taking an action in a certain state; it is usually optimized by dynamic programming, Monte Carlo, or temporal-difference methods. The first deep reinforcement learning model was the deep Q-network (DQN) (Mnih et al., 2013), which casts the learning problem as Q-learning using a neural network called the ‘Q-network’. DQN learns the parameters of Q(s, a) by fitting targets based on the maximum expected utility, which may be biased in some stochastic environments and hence result in overestimation. The double Q-network (van Hasselt et al., 2015) reduces overestimation by combining double Q-learning with deep models, and thus can be used to approximate large-scale functions. The deep deterministic policy gradient (DDPG) method (Lillicrap et al., 2015) improves the robustness of gradient estimation for deep continuous control models, and experimental results demonstrate its convergence speed and robustness. Research applying deep neural networks to reinforcement learning has led to applications in different domains, such as the introduction of game theory into deep models (Heinrich and Silver, 2016), the processing of high-dimensional visual information and active perception problems (Oh et al., 2016), and engineering frameworks like OpenAI (Brockman et al., 2016; O’Shea and Clancy, 2016).
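
The temporal-difference idea behind Q(s, a) can be illustrated with a tabular Q-learning sketch on a toy one-dimensional chain environment. The environment, rewards, and hyper-parameters below are illustrative assumptions, not the DQN or DDPG systems cited above.

```python
# A minimal tabular Q-learning sketch of the temporal-difference update behind
# Q(s, a): the agent walks along a 6-state chain and receives reward 1 at the
# right end; all numbers are illustrative assumptions.
import numpy as np

N_STATES, ACTIONS = 6, (0, 1)          # move left (0) or right (1) on a chain
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Environment dynamics: reaching the right end yields reward 1."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200):
    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy policy; ties broken randomly so early episodes explore
            best = np.flatnonzero(Q[s] == Q[s].max())
            a = rng.integers(len(ACTIONS)) if rng.random() < EPSILON else int(rng.choice(best))
            s2, r, done = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else GAMMA * Q[s2].max())
            Q[s, a] += ALPHA * (target - Q[s, a])
            s = s2
    return Q

if __name__ == "__main__":
    Q = train()
    # greedy action for each non-terminal state (1 = move right)
    print(Q.argmax(axis=1)[:-1])
```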

Many remarkable applications have appeared along with developments in research on unmanned autonomous systems. Innovations in unmanned ground/aerial vehicles for business and security uses have surprised the research community and are now coming into real-world use. For example, Google's unmanned cars are already being driven on roads in California, while Tesla and other manufacturers' products are undergoing testing. Unmanned aerial vehicles (UAVs) are also frequently used in search and rescue and battlefield environments for different purposes. AI algorithms are applied inside these systems on a large scale for purposes including vision, radio/radar signal recognition, and trajectory planning (Guizzo, 2011). These inventions provide economic benefits and help save lives. In addition, advances in deep reinforcement learning have brought games into a new era as humans start to pay attention to their robot competitors: AlphaGo beat the famous Korean player Lee Sedol with a final score of 4:1 in Go, considered one of the most complex games in human history.

3 Trends in unmanned vehicle development

Unmanned vehicles (UVs) have received significant attention from both academia and industry over the last decade. UVs are typical complex systems involving many technical fields across disciplines such as cognitive science, AI, robotics, and vehicle engineering. They are widely regarded as a universal experimental platform for verifying visual, auditory, cognitive, and AI technologies (Montemerlo et al., 2008). The development of UVs can not only improve driving safety and the efficiency of current transportation systems, but also play a significant role in other applications such as unmanned military combat platforms, polar exploration, nuclear leak detection, and operations in other extreme environments.

In the early 1950s, the American company Barrett Electronics developed the world’s first automatically guided vehicle system. From 2004 to 2007, the American Defense Advanced Research Projects Agency (DARPA) organized three UV challenges, which promoted the rapid development of UV technologies (Bacha et al., 2008; Montemerlo et al., 2008; Urmson et al., 2008).

In China, the National University of Defense Technology developed the Hongqi CA7460 autonomous driving car, which reached a speed of 130 km/h with its autopilot and a maximum speed of up to 170 km/h on the highway, and could also overtake other vehicles on the road (Huang et al., 2010). Tsinghua University, Xi'an Jiaotong University, the Hefei Institutes of Physical Science of the Chinese Academy of Sciences, and other research institutes have also developed their own UVs (Zhao et al., 2012; Ma et al., 2015). From 2008 to 2015, the National Natural Science Foundation of China organized seven China Smart Car Future Challenges against the background of road traffic needs (Huang et al., 2014). In 2014, the General Reserve Department of the People's Liberation Army (PLA) organized an unmanned ground vehicle challenge for off-road environments (Shi and Liu, 2014). The success of these challenges has played a significant role in promoting the development of UVs in China.

With the development of UVs, many derivative technologies have found real applications. For example, the tactical UVs of the United States Marine Corps (USMC) can execute missions such as reconnaissance, nuclear, biological, and chemical (NBC) detection, barrier breaching, and direct anti-sniper fire in any weather and in complex terrain. Carnegie Mellon University has developed a new kind of UV, the 'Crusher', which can drive in complex environments. Since the beginning of the wars in Iraq and Afghanistan, about 8000 unmanned ground vehicles of various types have been involved in the missions 'Operation Enduring Freedom' and 'Operation Iraqi Freedom'. By September 2010, these unmanned ground vehicles had performed 125 000 tasks, including suspicious target identification, road clearing, and the positioning and removal of improvised explosive devices (IEDs). The U.S. Army, Navy, and Marine Corps explosive ordnance disposal teams have used unmanned ground vehicles to detect and destroy more than 11 000 IEDs.

Since 2010, the development of UVs has entered a new phase, as many automobile manufacturers and IT companies have switched their attention to this field. Mercedes-Benz, BMW, Volkswagen, Ford, and independent prototype companies have launched new R&D programs for UVs. Google's UVs (Markoff, 2010), the representative models, are already legally on the road in California, Nevada, Florida, and Michigan, USA. On December 22, 2014, Google officially announced the completion of its first fully functional UV prototype and started official road testing in 2015; since then, its vehicles have been tested over 1.4 million miles. Tesla's UVs, with firmware upgraded wirelessly to version 7.1.1, have accumulated 780 million miles of test data and can collect one million miles of data every 10 h. Mobileye, an Israeli manufacturer of intelligent driving technology and equipment, announced early in 2013 that its equipment would be able to drive a car automatically on the road in 2016, and that its C2-270 intelligent traffic warning system, a successful application of the company's products, would be launched with its product upgrade. Apple also started an internal development program called 'Titan'.

Chinese companies have also been attracted to the boom in UVs. The Chinese search engine giant Baidu has released its first UV-related project. In cooperation with the Hefei Institutes of Physical Science, the Guangzhou Automobile Group has developed a renewable energy UV. Other native Chinese automobile manufacturers such as BYD, Yutong, and SAIC are also actively exploring the development and industrialization of UV technologies.

Despite the progress in UVs, there are still considerable problems that need to be solved, including situational awareness in real-time environments, intelligent decision making, high-speed motion control, precision driving maps, unmanned system evaluation and assessment methods, and system reliability.

4 Trends in unmanned aerial vehicle development

4.1 Overview of unmanned aerial vehicles

An unmanned aerial vehicle (UAV), commonly known as a drone, is an unmanned aircraft system (Wikipedia, 2016a). Therefore, it is a typical kind of advanced autonomous unmanned system. In general, UAVs can be used to collect data and perform monitoring, surveillance, investigation, and inspection (Nagaty et al., 2013). According to their different areas of applications, UAVs can be divided into two major categories, civilian and military (Valavanis, 2007).

Military UAVs, a kind of weapon, are used mainly for surveillance, reconnaissance, electronic countermeasures, and attack and damage assessment in battles. Compared with military uses, civilian UAVs have a wider range of application including environmental monitoring, resource exploration, agricultural surveying, traffic control, weather forecasting, aerial photography, disaster search and rescue, and transmission line and railway line inspections.

4.2 Status of military unmanned aerial vehicles

UAVs were first introduced by the U.S. military during World War I (1917) (OSD, 2002). Military requirements gave birth to a variety of UAVs. Many of them were involved in wars such as World War II, the Vietnam War, the Middle East conflicts, and the Kosovo War, in which they played important roles (Wikipedia, 2016b). These wars promoted the rapid development of UAV technologies. So far, the most advanced and well-known military UAVs include the X-47B, Predator, Global Hawk, and Fire Scout, which are already capable of autonomous takeoff and landing and of following autonomous flight routes. Some of them can partly adapt to flight faults or condition variations. However, according to the 'Unmanned Aircraft System Roadmap 2005–2030' published by the U.S. Department of Defense in 2005, the current autonomy level of military UAVs is lower than level 3 (OSD, 2005); they do not yet have autonomous capabilities for route planning, decision making, coordination, and cooperation. Compared with Western countries, military UAV technology development started late in China but is now in a stage of rapid growth, and considerable achievements have been made in recent years (Hsu et al., 2013; Chase et al., 2015).

4.3 Status of civilian unmanned aerial vehicles

Military UAVs are technically more advanced than civilian UAVs, except with respect to autonomy. With improvements in UAV policies, civilian UAV technologies and industrial applications are growing rapidly (Canis, 2015). Currently, civilian UAV applications focus mainly on agricultural plant protection, aerial photography, and power line inspections. Some investment organizations have predicted that sales of civilian UAVs will maintain an annual growth rate of 50% or more over the next few years.

Civilian UAVs commonly fall into two categories: fixed-wing types (Chao et al., 2010) and rotary-wing types (Kendoul, 2012). As most aerial work requires low-altitude, low-speed operation, rotary-wing UAVs are more popular in the civilian field. With the development of technologies such as communications, sensors, and embedded systems, the autonomy of civilian UAVs has improved significantly. Advanced civilian UAVs can already not only take off, land, and fly along routes autonomously, but also detect and avoid obstacles in real time. In addition, some of them can fly in formation and cooperate with each other autonomously (Wang et al., 2007). Therefore, when it comes to autonomous abilities, civilian UAVs have outperformed military UAVs in some respects.

4.4 Trends in unmanned aerial vehicle development

With progress in all kinds of technologies, the future development of UAVs shows a trend towards diversification. However, as an advanced autonomous unmanned system, the UAV is destined to evolve in the direction of low manual intervention, high autonomy, and high intellectualization, no matter whether it is for military or civilian use. The predicted trends in UAV development until 2030 are shown in Fig. 3.

Fig. 3  Predicted trends in the development of unmanned aerial vehicles

Three main features underpinning these trends are as follows:

  1. Control systems

    The autonomous control level of UAV control systems can be divided into several grades. For example, in 2005 the U.S. Department of Defense divided the autonomous control of military UAVs into 10 levels (OSD, 2005). Generally, such systems can be divided into three levels: remote control, automatic control, and autonomous control. Currently, most UAVs have reached the level of automatic control; in other words, altitude, speed, position, and flight path can be controlled automatically (Kendoul, 2012). However, all of these controlled behaviors are pre-programmed and deterministic, and do not demonstrate the autonomy of a UAV. With the development of sensor technologies and improvements in embedded computing capacity, the autonomous control capability of UAVs will improve significantly in the future. When collision risk increases or mission conditions change during flight, the UAV will be able to control its flight state autonomously instead of mechanically following a global flight path, and when the abnormal condition disappears, it will return to its original flight path (Fang et al., 2017). Future UAVs at the autonomous control level will be characterized mainly by the ability to handle flight uncertainties, and their safety and flexibility will be significantly improved.

  2. Human-machine relationship

    Changing human-machine relationships (Hoc, 2000) is another trend and area of future development for UAVs (Gupta et al., 2013). In the early period, UAVs all operated in man-in-the-loop mode (Wikipedia, 2016c), which means that a UAV could not operate without continuous human operation and intervention. Currently, human-machine interaction with UAVs is gradually turning toward man-on-the-loop mode, in which UAVs execute tasks according to preset programs while people only monitor whether the status is normal. With enhancements in hardware and software reliability, manual intervention in UAV systems will be reduced further in the future. People will need only to act as commanders assigning tasks to UAVs, and will no longer have to monitor and control them in real time (Harris, 2012). We call this kind of operation man-off-the-loop mode. A UAV at this level must have a high level of safety and reliability.

  3. Intellectualization

    AI is a key technology for improving the autonomous performance of future UAV systems. The intellectualization of UAVs is occurring mainly in terms of autonomous flight path planning (Rathbun et al., 2002; Tisdale et al., 2009), autonomous decision making for tasks (Ren et al., 2010), and autonomous air fleet collaboration (Merino et al., 2006; Maza et al., 2010). Among these abilities, autonomous path planning is the first intelligent trend in UAVs. At present, most paths or trajectories tracked by UAVs are preset by humans, with low efficiency and flexibility. Future UAVs should be able to plan their flight paths autonomously according to their specific mission and the corresponding constraint conditions, and to adjust the flight path autonomously when those constraints change (a minimal planning sketch is given after this list). The second intelligent trend in UAVs is the capability for mission understanding and decomposition: when faced with a complex mission, they will not need people to assign tasks or make decisions, but will complete the mission autonomously. The advanced intelligent UAV stage will involve swarm intelligence. A team of UAVs may be composed of many homogeneous or heterogeneous UAVs, which should be able to cooperate autonomously and resolve conflicts to maximize group performance. Thus, a feature of future intellectualized UAVs will be the ability to complete complicated tasks effectively by autonomous cooperation (Valavanis and Vachtsevanos, 2014).

    With progress in science, technology, and policy, future UAV systems will become truly advanced autonomous unmanned systems. In particular, it is predicted that UAVs will reach an autonomous level of 7 or 8 within the 10-level system of the Unmanned Aircraft System Roadmap 2005–2030 published by the U.S. Defense Department, and will be widely used in many civilian applications by 2020. By 2030, the autonomy level will be further increased to 9 or 10, and the coverage of UAV applications in aerospace industry and some other industries will reach 50%.
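
As a minimal illustration of constraint-aware path planning and re-planning (referenced in the list above), the following sketch runs A* search on a small grid with no-fly cells. The grid, the blocked cells, and the Manhattan heuristic are illustrative assumptions, not the planners of the cited works.

```python
# A minimal sketch of constraint-aware path planning on a grid, illustrating
# the idea of re-planning when mission conditions change; cells marked 1 are
# treated as no-fly zones.
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid with unit step costs and a Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    cost, parent = {start: 0}, {start: None}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_cost = cost[cur] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt], parent[nxt] = new_cost, cur
                    heuristic = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                    heapq.heappush(frontier, (new_cost + heuristic, nxt))
    return None   # no feasible route under the current constraints

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],    # initial no-fly zone
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 3)))      # nominal route
    grid[2][1] = 1                          # mission conditions change
    print(astar(grid, (0, 0), (2, 3)))      # autonomously re-planned route
```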

5 Trends in service robot development

Robotics involves machinery, information, materials, intelligent control, and biomedicine. Not only does robotics technology itself have high added value and a wide scope of application, but it has also become an important platform from which technology radiates into other fields. It is of great significance in enhancing national defense strength, improving emergency preparedness, promoting overall economic development, and improving people's living standards.

In recent years, popular service robot products have continued to emerge in both domestic and foreign markets. In social communication services, research has focused on applications for helping the elderly and the disabled, housekeeping, medical care, education, entertainment, national defense, aviation, and transportation. There are three main areas of development for service robots: intelligent materials and soft robots, AI technology and chips for perception and control, and human-computer interaction and security technology.

5.1 Intelligent materials and soft robots

Bionics and intelligent materials provide the technical support needed to realize robot functions; service robots mimic the structures, materials, and other features of biological systems. At present, the main intelligent materials are shape memory alloys (SMAs), ionic polymer-metal composites (IPMCs), and silicone. The application of intelligent materials makes soft robots safe, stable, and resistant to oil, corrosion, and electromagnetic interference. Given Imaging, an Israeli company, developed the PillCam capsule robot as an alternative to traditional, painful endoscopy. The German robotics company Festo developed a trunk-like robot with many pneumatic, muscle-like links, which is flexible enough to perform all kinds of precise actions. At the University of California, Berkeley, USA, Takei (Shepherd et al., 2011) made electronic skin from silicon that can sense pressures of 0–15 kPa. At Harvard University, USA, George Whitesides led a research team (Yim et al., 2007; Martinez et al., 2012) that achieved a breakthrough in soft robots, using novel soft structures that allow a device to grasp objects and walk in a bionic manner.

5.2 AI technology and chips for perception and control technology

Sensing technology is the main way in which a machine obtains information from the outside world; it includes machine vision, hearing, touch, and taste, as well as electromyography (EMG), brain cognition, pattern recognition, and natural language processing. AI improves the ability of a robot to simulate human activities and to learn human knowledge. In 2016, Google launched Google Home (Wang, 2016), which can receive voice commands to control home appliances. The AI assistant software Siri, used in the iOS system, includes question answering and chat functions. In the USA, the iRobot company and the Massachusetts Institute of Technology jointly developed an automatic intelligent floor cleaner, the Roomba robot (Hong et al., 2014), which uses a navigation beacon and visual simultaneous localization and mapping technology to clean indoor spaces autonomously. In 2013, at Macworld, Apple launched a smart toy called the Anki Overdrive (Feng, 2013); this toy car uses AI algorithms to manipulate a robotic automobile body to drive itself and compete on a special track. The American MQ-9 Reaper unmanned aerial vehicle (Zhang, 2016) is equipped with electro-optical devices, infrared systems, low-light-level television, and synthetic aperture radar. DJI-Innovations, based in Shenzhen City, China, developed the Phantom 4, with sensing, automatic obstacle avoidance, and professional aerial photography capabilities. Google's UV navigates using a camera, radar sensors, and a laser range finder. In 2016, Baidu built China's first UV operations area in Wuhu City, Anhui Province. In the same year, China's first embedded neural processing unit (NPU) chip was created and applied in the world's first embedded video processing chip.

5.3 Human-computer interaction and security technology

As service robots become more closely involved in human life and interaction, security technology is gaining wider attention. The DEKA Arm, funded by DARPA, was the first assistive robotic arm to obtain U.S. FDA approval; it has a neural interface that translates neural activity of the cerebral cortex into control signals for manipulating an auxiliary device (Kuiken et al., 2009; Rebsamen et al., 2010). The integrated bed-chair robot (Hu et al., 2013) developed by the Robotics Institute of Beihang University, China, is able to provide care and support for the elderly, greatly reducing the burden on nursing staff. The American company Intuitive Surgical created the da Vinci Xi surgical system, which is dedicated to minimally invasive surgery. In 2016, Boston Dynamics introduced SpotMini, a family service robot; assisted by many sensors, it can walk freely and load a dishwasher in the laboratory by means of a mechanical arm.

6 Trends in space robot development

Space robots are one of the main means of providing autonomous on-orbit service. In the past 20 years, the leading space powers have carried out a great deal of fruitful research into autonomous on-orbit service. A series of ground tests, on-orbit tests, and applications have shown that autonomous on-orbit service is a feasible technique, and it has attracted wide attention in research and development (Sullivan and Akin, 2001; Long et al., 2007; Flores-Abad et al., 2013).

Generally, autonomous on-orbit service is performed mainly by space robots. According to the number of space robots carrying out the task, autonomous on-orbit service can be categorized into two types: on-orbit service using a single fully functional space robot, and on-orbit service using multiple space robots with relatively simple functions.

6.1 Current research status of space robots

6.1.1 USA

The USA conducted research into on-orbit servicing early and has taken a leading position internationally. Twelve projects have been conducted, six of which proceeded to on-orbit demonstration. Currently, there are three ongoing projects: FREND (Akin and Bowden, 2003; Obermark et al., 2007; Debus and Dougherty, 2009), a robotic refueling mission (Kandaswamy et al., 2014), and the Phoenix program. The FREND and Phoenix programs are aimed mainly at providing autonomous on-orbit service to GEO satellites, while the robotic refueling mission is applied to the International Space Station and has practical functions. The development of American space robots has undergone a complete technological progression from vision measurement, fly-around inspection, and rendezvous and docking to autonomous capture, aimed at on-orbit servicing of high-orbit, non-cooperative targets. Through the research and on-orbit verification of these projects, the USA has made good progress in space manipulation, vision measurement of cooperative objects, fly-around, rendezvous, and docking.

6.1.2 Germany

Germany attaches great importance to the study of space robots and automation. There have been six projects on space robots so far (Hirzinger et al., 1994; Settelmeyer et al., 1997; Cusumano et al., 2004; Albu-Schaffer et al., 2006; Landzettel et al., 2006; Preusche et al., 2006), three of which conducted on-orbit demonstrations. At present, two representative projects are underway: the DEOS project and the OLEV project. OLEV is aimed at providing service for GEO satellites, while DEOS targets mainly the technical verification of servicing low-orbit, non-cooperative objects. German on-orbit service has progressed from in-cabin robots (with ground-based teleoperation verification) and the technical verification of extravehicular robotic joints to the study of free-flying space robots. Germany also has comprehensive research on, and applications of, teleoperation technology.

6.1.3 Japan

On-orbit service technology in Japan is relatively mature and has a high international standing. Japan has launched three projects related to space robots (Masanori et al., 1998; Oda et al., 1999; Sato and Wakabayashi, 2001), all of which have gone through on-orbit demonstration and verification; notably, the ETS-VII project performed the first experiment in autonomous grasping. Japanese on-orbit service development has achieved a great leap forward, evolving from a robot arm to a free-flying robot. Through the demonstration of its on-orbit projects, Japan has mastered the technologies of the space robot arm, rendezvous and docking, and space teleoperation, thereby contributing significantly to the development of space technology.

6.1.4 Canada

The projects conducted on space robots in Canada serve mainly the Shuttle Remote Manipulator System (SRMS) of the space shuttle and the Mobile Servicing System (MSS) of the space station (Taylor and Ramakrishnan, 1992; Zimpfer and Spehar, 1996; Stieber et al., 1999). The primary function of the SRMS is to capture and release satellites and to act as auxiliary equipment. The MSS consists of a mobile base, the Space Station Remote Manipulator System, and the Special Purpose Dexterous Manipulator, and its primary function is to assist with docking and the transportation of cargo. Canadian studies of large-scale space robotic arm technology have progressed from basic arm techniques and delicate operational arms to dexterous robot hands, accumulating abundant experience in design, manufacture, and application.

6.2 Trends in space robot development

Space robots are typical intelligent unmanned autonomous systems. The likely future trends in their development can be described as follows.

6.2.1 Requirements

  1. In the future, there will be a strong demand for space robots in the fields of space station maintenance, on-orbit service for satellites, and on-orbit assembly of large-scale spacecraft.

  2. The operation of space robots will be more concerned with small-scale, generalized, and accurate operation.

6.2.2 Mechanical structure

  1. Looking back on their history, space robots have followed a development route from single-arm robots, to dual-arm robots, then multi-arm robots. Thus, future space robots will become multi-armed and increasingly complicated.

  2. Given the diversity of tasks and environments, appropriate reconfigurable and compliant robots will be applied to each workspace.

6.2.3 Manipulation by the end effector

  1. Generalized multi-fingered robot hands and customized replaceable tool sets are two major trends in the design of the end effector.

  2. There will be a strong demand for various kinds of sensing approaches, which will determine the manipulation capacity and intelligence of the robots.

6.2.4 Dynamics and control

  1. With the increasing complexity of robotic systems, more attention will be paid to the cooperative control of multi-arm, multi-robot systems.

  2. To ensure the safety of astronauts and machines during the human-machine collaborative process, security will become an important aspect of the design of space robots.

  3. With the increased capacity for sensing and information processing, more emphasis will be placed on human-computer interaction, which will gradually evolve into semi-autonomous, and finally fully autonomous control.

6.2.5 Human-machine interaction

  1. To make full use of the intelligence of robots, the advantages of human-in-the-loop control should be explored. A robot control system should be compatible with various human-machine interaction approaches and multi-modal interactions.

  2. Naturalness and flexibility are the characteristics of the new generation of human-computer interaction methods. This will promote human-oriented, flexible human-machine interaction methods such as voice, wearable equipment, and EMG.

6.2.6 Modeling and experimentation

Because of the lower cost, experiments with space robots will be carried out on the ground under 1g gravity. Industrial robots will be used to verify key robotic techniques, taking into account the equivalence between space and ground robots.

In space exploration, future research will focus on multi-robot coordinated control with autonomous decision making for on-orbit operation, deep learning for space robots, teleoperation techniques with deep immersion for coping with large time delays, and autonomous recognition and reconstruction techniques for next-generation modular and replaceable intelligent aerospace systems. These techniques will provide invaluable support for constructing autonomously operating unmanned scientific research stations on the lunar surface.

7 Trends in marine robot development

The trends in development for next-generation marine robots will be determined and influenced by application demands and related technical advances. In this paper, we will describe the trends in the development of marine robots in terms of the platform and intelligence.

7.1 Trends in marine robot platform development

Given the advances in marine robots, their applications in various missions relating to ocean exploration and exploitation are booming. However, harsh ocean environments bring great challenges, and marine robot platforms need to be reliable enough to perform their tasks safely. With the advances in general robotic techniques, the techniques relating to marine robots are becoming increasingly mature, and robot reliability is improving. The following four development trends can be identified for marine robots.

7.1.1 Long endurance marine robots

A typical application for marine robots is to observe the ocean and collect all kinds of scientific data. This usually requires the robots to be able to survey the ocean over large spatial scales and long temporal scales. Several types of long endurance marine robot platforms have developed rapidly in recent years, and the design of propeller-free drive modes is a hot topic. Underwater gliders, developed recently, use a buoyancy engine to adjust their buoyancy and wings to generate lift, enabling a gliding motion through the ocean. Wave gliders are also undergoing rapid development; unlike underwater gliders, they use surface waves to drive their motion, giving them even greater endurance. Recently, mobile ocean sensor networks composed of multiple underwater gliders have received significant attention and have been used in a number of ocean observation missions around the world. Aside from decreasing onboard power consumption, new energy harvesting techniques such as thermal engines are being developed to further increase the endurance of marine robots. In the future, with advances in power supply technology, marine robots will have longer operational endurance, based partly on the use of environmental energy such as solar, current, wave, and biological energy.

7.1.2 Hybrid marine robots

Marine environments are very complex, and marine robot missions are varied; no single type of marine robot can accomplish all kinds of missions. Each type of marine robot platform has its specific field of application and its limitations. Therefore, hybrid marine robots, which combine the features and capabilities of different types of robots, have become a new development trend. The Nereus, a hybrid marine robot developed by the Woods Hole Oceanographic Institution, MA, USA, was used to explore the Mariana Trench. It is a hybrid remotely operated vehicle (ROV) that combines the advantages of an ROV and an autonomous underwater vehicle (AUV) by changing its operating mode: in addition to its AUV mode, it can carry out light intervention tasks with its manipulator through an optical fibre tether. In China, the Shenyang Institute of Automation of the Chinese Academy of Sciences has also developed a hybrid marine robot for polar exploration. The Arctic ARV (autonomous and remotely operated underwater vehicle) can move under the sea ice as an AUV according to its mission program; when something of interest is found, it can be switched to ROV mode and operated remotely through an optical fibre. Thus, in one dive, the Arctic ARV can execute a task in a hybrid manner. Recently, in addition to hybrid AUV/ROVs, other marine robots combining an unmanned surface vehicle (USV) and an AUV, a UAV and an AUV, and gliders and AUVs have been developed. In the near future, more types of hybrid marine robots will be developed to meet the requirements of ocean surveys.

7.1.3 Fine intervention marine robots

A number of missions such as underwater intervention or construction require marine robots to perform complicated and fine tasks in complex underwater structures. These require marine robot platforms to be able to resist various types of disturbances and have good manoeuvrability. Some advanced techniques such as dexterous fingers with force and tactile sensing, which have been employed by other field robotic systems, will be integrated into marine robots to make them ‘skilled workers’.

7.1.4 Biomimetic marine robots

The development of biomimetic marine robots that imitate the behavior or mechanisms of marine animals has always been a trend in the development of marine robots. A variety of biomimetic marine robot platforms, such as robotic fish, crabs, snakes, and turtles, have been developed. However, most have not been put into practical use because their capabilities do not yet satisfy the requirements of practical applications. In the future, biomimetic marine robots will be employed in many practical applications, following advances in new materials, new energy, and new sensors.

7.2 Trends in marine robot intelligence development

Generally, the autonomous performance of a robot depends on cognition, control, and swarm intelligence. This holds true for marine robots. In Fig. 4, these three evaluation metrics are further divided into several levels according to the development of marine robots and the history of AI.

Fig. 4  Evaluation metrics for marine robots (SLAM: simultaneous localization and mapping)

Scientists and engineers working on marine robots have focused on autonomous control capabilities in recent decades, and great progress has been made along the 'autonomous control' axis (Fig. 4). Though manned submarines were proposed by Bourne in 1578 and brought into service by van Drebbel in 1620, it was soon recognized that unmanned marine robots would be more appropriate for many underwater tasks. The first ROV project was launched by the U.S. Navy in 1958, with the goal of building an underwater salvage device that could be controlled through a tether cable. The first AUV, SPURV, was developed by the Applied Physics Laboratory, University of Washington, USA in 1957, to study diffusion and the acoustic transmission of submarines. Though the AUV was proposed at almost the same time as the ROV, progress in its level of autonomy has largely been blocked by the limitations of AI, control technology, and sensing. As a result, ROVs and AUVs coexist at sea but work in different situations: an ROV is appropriate for local and precise field operations such as underwater engineering, while an AUV is usually more appropriate for large-scale survey tasks such as long-range search and detection. The relative independence of AUVs and ROVs will persist for a long time, until essential breakthroughs occur in autonomous cognition and control.

The autonomous environmental cognition ability of marine robots can be ranked according to the following six levels: basic data collection and mechanical collision avoidance, object classification, recognition, simultaneous localization and mapping (SLAM), inference, and semantic understanding. Almost all marine robots, whether ROVs or AUVs, are equipped with several kinds of sensors for collecting environmental data, such as forward-looking sonar, side-scan sonar, and altimeters. However, not all are capable of extracting valuable information from the data. It is reported that the REMUS and Bluefin vehicles adopted by the U.S. Navy are able to avoid possible collisions and to recognize specific objects, yet even for typical mine detection tasks there are still many problems to be solved. In the future, marine robots should be able to infer the existence of unknown objects from other known environmental information and prior knowledge.

Swarm intelligence depends on communication networks. On the ground or in the air, wireless communication networks reduce swarm problems to optimization problems such as formation and cooperation. The situation is different for marine robots because of the rapid signal degradation in acoustic communications. The ranking standards listed along the 'swarm intelligence' axis in Fig. 4 are common to other field robots; for example, the problems of formation control, task planning, cooperation, task re-planning, and cooperative exploration apply to both ground vehicles and unmanned aerial vehicles. 'Duty' means that the marine robots understand the tasks, allocate them to each member autonomously, and solve the problem by themselves. As discussed above, the key problem for marine robots is acoustic signal degradation and delay; even formation control and cooperation under such weak communication conditions are still at the academic research stage.

The hope is that a marine robot’s actions will be as swift as those of a fish, and its intelligence comparable to that of a human. We are sure that marine robots will achieve practical status in the near future with advances in control, cognition, and swarm intelligence.

8 Trends in the development of unmanned workshops/intelligent plants

In the past 30 years, China’s industrialization has made remarkable progress and contributed greatly to global economic growth. Because the industrialization process has been accompanied by progress in informatization (Fig. 5), it is neither feasible nor necessary for China to follow a traditional development pattern, i.e., realizing industrialization first and then informatization. China should grasp the tremendous historic opportunity brought about by the rapid development in information and communication technology (ICT). Two historical processes (informatization and industrialization) are progressing together in China.

Fig. 5  The process of industrialization and technology development in different parts of the world

With the progress in world trade and globalization and the development of ICT and industrial technology, manufacturing patterns and technology are facing a turning point. Many developed or developing countries have published their national strategies supporting their economic transformation, including: (1) integration of Industrialization & Informatization (iI&I) and Manufacturing 2025 in China; (2) Industry 4.0 for Germany; (3) re-industrialization and industrial Internet for the USA.

Faced with the current complicated international and domestic economic situations and trends, iI&I with smart manufacturing is a strategy critical for the survival and long-term sustainability of Chinese enterprises. The iI&I of Chinese enterprises has its own characteristics, and in-depth exploration and practice should be undertaken in light of the current status and shortcomings of China's industrialization and ICT applications. To support this transformation, standardization is an important part of China's manufacturing and technology development strategy, and includes several activities: (1) introducing and translating ISO/IEC standards into Chinese, (2) developing sets of technical standards, (3) developing standard frameworks for industrial enterprises, and (4) developing management architectures and related management standards.

To identify development trends in smart manufacturing, classify and position all related standards, and describe the relationships among standards clusters, the reference models from three reports are introduced below.

As shown in Fig. 6a, NIST describes a smart manufacturing ecosystem (Lu et al., 2016) based on the ARC Advisory Group's model for collaborative manufacturing management (ARC Advisory Group, 2002) and the ISA-95 Enterprise-Control System Integration hierarchical model (Barkmeyer, 1996). The reference architecture model for Industry 4.0 is shown in Fig. 6b (DIN, 2016). To realize the Chinese Manufacturing 2025 national strategy, the Ministry of Industry and Information Technology of China (MIIT) and the Standardization Administration of China (SAC) published a joint report entitled 'National Smart Manufacturing Standards Architecture Construction Guidance'. According to this report and its Smart Manufacturing Standardization Reference Model of China (MIIT and SAC, 2015) (Fig. 6c), the unmanned workshop/intelligent plant will become the most important carrier for realizing this strategy. In each plant, all processes are expected to be operated by computer-controlled robots, computer numerical control machining equipment, unmanned transport trucks, and automated warehouse equipment.

Fig. 6  Smart manufacturing reference architectures: (a) smart manufacturing ecosystem of NIST (Lu et al., 2016); (b) reference architecture model for Industry 4.0 (DIN, 2016); (c) smart manufacturing standardization reference model of China (MIIT and SAC, 2015)

Although the three reports share some common ideas and similar concepts and elements, it is necessary to develop a general reference model for smart manufacturing standardization:

  1. A generalized reference model is needed to link these reference models together to realize interoperation among them.

  2. In these reference models, standards are located in every dimension. Developing and using these standards covers two or three dimensions, which have not been discussed in detail, especially in the NIST report.

  3. There are different viewpoints for standards development and implementation, so combining them is a significant challenge.

  4. For a manufacturing company, it is necessary to accept and apply a standard framework as a whole to support its smart manufacturing program. Thus, a system is required to describe standard clusters.

The biggest changes to the factory of the future will also come from information technology. The unmanned workshop/intelligent plant will strengthen information management and services by using Internet of Things and monitoring technologies, improving the controllability of production processes, reducing human intervention on the production line, and introducing reasonable planning and scheduling. At the same time, intelligent instruments, systems, and other technologies will continue to appear with continuing developments in industry and technology, such as computer-aided design. Simulation technologies will reduce the time and cost of bringing new products to market, and advanced robotics will make automation cheaper and more flexible.

From the above discussion, we have created a high-level architecture for unmanned workshops/intelligent plants (Fig. 7). The components of this architecture are identified as man-machine fusion, hybrid virtual-reality techniques, and distribution and centralization, and they define the common terminology used throughout this section. The proposed theoretical model can be divided into four spaces: the device-level manufacturing space, the unit-level manufacturing space, the cross-layer manufacturing space, and the cross-domain production cyberspace. The focus of the technologies in each space reflects the important problems to be solved for unmanned workshops/intelligent plants in that space.

Fig. 7  Unmanned workshop/intelligent plant hierarchy

9 Conclusions

In this paper, we have described the trends in the development of intelligent unmanned autonomous systems with regard to seven aspects: technology applications of AI for intelligent unmanned autonomous systems, unmanned vehicles, unmanned aerial vehicles, service robots, space robots, marine robots, and unmanned workshops/intelligent plants. We hope these trends and predictions will be realized in the near future. The world will be changed for the better and human life will be improved by means of intelligent unmanned autonomous systems.