Robots and Other Cognitive Systems: Challenges and European Responses
- Cite this article as:
- Kroes, N. Philos. Technol. (2011) 24: 355. doi:10.1007/s13347-011-0037-4
Robots have come a long way since the Czech writer Karel Čapek first used this term, some 90 years ago, to denote rather frightening creatures, not unlike golems or Frankenstein's monster, yet workers all the same. Today, more than ever, robots continue to fascinate: they take over activities which humans find too dangerous or impossible. Recent examples include the use of robots at Japan's Fukushima nuclear power plant and in the recovery of the flight recorder of Air France's Rio de Janeiro–Paris flight, which went down in deep water in 2009. Robots go to war and deactivate mines. And they increasingly come into our homes as children's toys, almost like family pets!
In just a few years, technological progress in this area has been tremendous, and Europe is one of the leaders in both the research and its industrial application. Yet this is just the beginning of robot history, as many challenges remain to be addressed. Refining and improving the mechanics of robots and their sensory capacities (including ones that living organisms do not possess) has always been a major concern for engineers. Reducing the amount of human intervention in the operation of these machines has been another persistent trend, leading, for instance, to numerically controlled machine tools. Ultimately, however, this means more than merely automating the completion of a task according to preset rules. It means that, within certain limits, machines ought to be able to take “decisions” autonomously, independently of external (e.g., remote) control, on how to proceed with a given task should new conditions arise unexpectedly. Think, for instance, of a roving robot that is supposed to retrieve an object from a distant place but encounters an unexpected obstacle on its way.
Ease of use, safety, and partial autonomy are essential if robotic devices are to leave the shop floor and strictly controlled environments and become truly useful and helpful for people, including those with special needs. Applications could include steering a wheelchair, driving a car, guiding a blind person, performing precision surgery, operating a leg amputee's prosthesis, or taking over many of our everyday chores.
None of these machines is expected to solve chess conundrums or any other classical artificial intelligence problem. But they should have their wits about them if, for instance, they need to recognise a certain object viewed from a different angle or under different lighting conditions. Other systems will need to understand their users’ intentions and what they are saying in plain natural language. All of them will have to understand, to a greater or lesser degree, the aspects and features of their environment. We may, for instance, want robots to “know”, or be able to “learn”, what they can do with certain objects of our world: what the handle of a mug is for, or a dishwasher, or the curb of the pavement along a busy street…
Cognitive machines and systems are still far from being as intelligent as humans or animals, or as conscious of what they are doing. Engineers have a lot to learn to catch up with the solutions that natural evolution has developed over billions of years.
Considerable research efforts, taking new, multidisciplinary approaches, are needed to significantly advance the engineering of the machines and systems described above. From the very start, the European Union's Framework Programmes for Research and Technological Development have acknowledged the potential of robotics and cognitive systems research for increasing the productivity of human labour and creating useful new products and services.
Cognitive systems were one of the key challenges in the Information Society Technologies chapter of the Sixth Framework Programme, which ran from 2002 to 2006. In the current EU research programme (FP7), the scope has been broadened to cognitive systems and robotics and given even more weight in the Information and Communication Technologies (ICT) programme.1 More lines of relevant basic research have been opened up in the Future and Emerging Technologies (FET) part of this programme.2
Currently, about 100 research grants falling within the remit of the FP7-ICT Cognitive Systems and Robotics challenge have been, or will shortly be, awarded to consortia of European researchers. Funded projects address both general issues related to endowing artificial systems with cognitive capabilities and issues specifically related to the design of all kinds of robots.
RoboCom, one of those robotics projects, is amongst the six finalists competing for the chance to become a “FET” flagship.3 It proposes an ecology of sentient machines that will validate our understanding of the general design principles underlying biological bodies and brains, establishing positive feedback between science and engineering.
These projects add to Europe's knowledge and prowess in building robotic systems which are ever safer, more robust, more efficient, easier to use, and, where needed, more autonomous. The projects also develop the features needed for these systems to be used in a wide range of scenarios: industrial, service and medical robotics; land, undersea or space exploration; logistics; maintenance and repair; search and rescue; environmental monitoring and control; physical and cognitive assistance to disabled and elderly people; and many more.
But some questions remain. We cannot and must not curb scientific curiosity but we should ask: are there general principles that might guide public funding of research and the use of its results beyond innovation and competitiveness?
Take, for instance, the concept of an autonomous machine. This could be a self-driving road vehicle, which may become a reality sooner rather than later given the current speed of technological advancement. There are also various examples of autonomous military vehicles operating on land, at sea or in the air. Who is responsible for their actions? Who is liable in case of damage? Can such machines be considered to operate of their own accord?
The answer is a firm “no”. Machines are designed, built and programmed so that they can render services. They are always owned and controlled by people. Machines—no matter how sophisticated—are as “ethical” as the people who design, build, programme and use them.
We humans, jointly and individually, have to take full responsibility for what we are doing, good or bad, constructive or destructive, through our own inventions and creations, to each other and our world at large.
Bertolt Brecht, in “The Life of Galilei”, had the great scientist say: “I maintain that the only goal of science is to alleviate the drudgery of human life.” Sound advice indeed! We will continue to fund research whose results help create better living conditions for everyone on this planet and research that helps us to better understand ourselves and the world we live in. Both go hand in hand—and robots should take their fair share in this ICT landscape.
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.