The sociotechnical system of a robot, including its software, matters. Context is central to analyzing both the ethical implications of that software for the public and the ethical responsibilities of its developers. Concepts that may be morally neutral in isolation, such as mass production, information storage, information acquisition, connectivity, ownership, and learning, can have a collectively positive or negative ethical impact on a world with robots. Since robots are a type of artificial agent (AA), we start from Floridi’s claim that the actions of AAs can be sources of moral or immoral outcomes. Because AAs are in essence multi-agent systems, we apply Floridi’s Distributed Morality (DM). In this paper, we analyze proprietary and open source licensing schemes as a policy component of DM and show how different software licensing schemes work to “aggregate good actions” and “fragment evil actions” with respect to the uses and features of robots now and in the future. We also argue that open source licensing schemes are more appropriate than proprietary software licenses for robot software that incorporates the results of automated learning algorithms.
Keywords
- Multiagent System
- Humanoid Robot
- Proprietary Software
- Software License
- License Scheme