The development of the Internet has gone through roughly three phases. In the first phase, which lasted until the mid-nineties, the Internet was primarily used for client-server applications such as e-mail, FTP and telnet. These applications were mostly used by researchers; people without a background in computer science had little or no idea of the potential of the Internet, and only a few of them used it in their daily lives. At that time the emerging protocol for managing the Internet was SNMP. It should be noted, however, that outside the Internet community other researchers were also working on alternative approaches, such as CMIP/CMIS (ISO-OSI), TMN (ITU), CMOL (IEEE) and TL1 (Bellcore).

In the mid-nineties the introduction of the World Wide Web marked the beginning of the second phase. The simple user interface offered by browsers like Mosaic and Netscape allowed non-technicians as well to use the Internet and search for information. Companies started to build websites, and soon the use of the Internet exploded. Although SNMP continued to play an important role in the management of the Internet, it turned out that web technologies could also be used for that purpose. In the second half of the nineties the management research community therefore investigated approaches using technologies like HTTP and HTML; nowadays many Internet services as well as consumer network devices can be managed using these technologies. Around the year 2000 the interest of the research community moved towards more advanced XML-based techniques; examples are NETCONF and Web Services. Like SNMP, these techniques rely on client-server architectures and allow for centralized forms of management (manager-agent approaches).

The third phase can be characterized as the Peer-to-Peer (P2P) phase. Using P2P technologies, Internet users can voluntarily share computer resources such as storage space, processing power, and bandwidth. An important characteristic of P2P systems is that there is little or no central control; each system can act as a client as well as a server. Initially, P2P systems were used for file-sharing purposes; examples of such systems are Kazaa and BitTorrent. Nowadays, P2P technologies are also used for many other kinds of applications, including VoIP (Skype), video distribution (GhostShare) and decentralized auctions (PeerMart).

From a management perspective, P2P systems pose completely different challenges than traditional client-server systems. P2P systems contain hardly any centralized management components, and most distributed management functions are performed automatically; examples of such functions include resource discovery, security and NAT traversal.

P2P technologies are, in turn, also interesting for managing traditional client-server applications. Compared to traditional manager-agent technologies, P2P technologies promise better scalability, improved reliability, and lower operational costs.

This special issue of the Journal of Network and Systems Management focuses on the management of P2P systems and on the use of P2P technologies for managing traditional systems. For this special JNSM issue we received 30 high-quality submissions, each of which was reviewed by at least three reviewers. After a thorough review process, six papers were accepted for publication, corresponding to an acceptance rate of 20%.

The first paper, written by Toufik Ahmed and Mubashar Mushtaq (University of Bordeaux), is titled “P2P Object-based Adaptive Multimedia Streaming”. The paper proposes an efficient mechanism for transmitting real-time multimedia content over P2P networks. In this mechanism, audio-visual quality adaptation is based on four steps: (1) selecting appropriate peers, (2) organizing these peers into an overlay network, (3) managing peers that enter or leave the network, and (4) managing QoS.
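As a rough illustration of step (1), the sketch below ranks candidate peers by a combined bandwidth/latency score and keeps the best ones. The peer attributes and scoring weights are our own illustrative assumptions, not the mechanism actually proposed by the authors.

```python
# Illustrative sketch only: a simple peer-selection step of the kind used in
# P2P streaming overlays. Attributes and weights are assumptions, not the
# authors' algorithm.
from dataclasses import dataclass

@dataclass
class Peer:
    peer_id: str
    bandwidth_kbps: int   # measured upload capacity
    rtt_ms: float         # round-trip time to this peer

def select_peers(candidates: list[Peer], k: int) -> list[Peer]:
    """Rank candidate peers by a bandwidth/latency score and keep the top k."""
    def score(p: Peer) -> float:
        # Favor high bandwidth, penalize high delay (weights are arbitrary).
        return p.bandwidth_kbps / (1.0 + p.rtt_ms)
    return sorted(candidates, key=score, reverse=True)[:k]

peers = [Peer("a", 2000, 40.0), Peer("b", 800, 10.0), Peer("c", 4000, 120.0)]
print([p.peer_id for p in select_peers(peers, k=2)])  # ['b', 'a']
```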

The second paper, written by Georgios Exarchakos and Nick Antonopoulos (University of Surrey), is titled “Resource Sharing Architecture for Cooperative Heterogeneous P2P Overlays”. The paper proposes an unstructured P2P overlay for sharing resources between underutilized and overloaded networks: it aims to satisfy the excessive resource demands of some networks by using free resources from others. It presents a Capacity Sharing Overlay Architecture and shows, via a number of simulations, its ability to provide capacity to the underlying networks, even in the presence of high node failure rates.

The third paper, written by Samir Ghamri Doudane (University of Paris 6) and Nazim Agoulmine (University of Evry), is entitled “Enhanced DHT-based P2P Architecture for Effective Resource Discovery and Management”. It discusses the limited scalability, query expressiveness and flexibility of current P2P systems, and proposes a novel system supporting advanced multi-keyword queries. With this approach, a substantial reduction in bandwidth usage and improved load balancing should be achievable. As an example, the paper describes a service discovery and management application.
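A common way to support multi-keyword queries on top of a DHT is to index each object under the hash of every keyword and to answer a query by intersecting per-keyword lookups. The sketch below shows only this generic technique, with an in-memory dictionary standing in for the DHT; it is not the enhanced architecture proposed in the paper.

```python
# Illustrative sketch: multi-keyword lookup on top of a DHT-like key-value
# store. A plain dict stands in for the DHT; this is the generic technique,
# not the paper's architecture.
import hashlib

dht: dict[str, set[str]] = {}  # DHT key -> set of object identifiers

def dht_key(keyword: str) -> str:
    return hashlib.sha1(keyword.lower().encode()).hexdigest()

def publish(obj_id: str, keywords: list[str]) -> None:
    # Index the object under the hash of each of its keywords.
    for kw in keywords:
        dht.setdefault(dht_key(kw), set()).add(obj_id)

def query(keywords: list[str]) -> set[str]:
    # Answer a multi-keyword query by intersecting per-keyword lookups.
    results = [dht.get(dht_key(kw), set()) for kw in keywords]
    return set.intersection(*results) if results else set()

publish("printer-42", ["printer", "color", "building-A"])
publish("printer-17", ["printer", "building-B"])
print(query(["printer", "building-A"]))  # {'printer-42'}
```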

The authors of the fourth paper are Giancarlo Ruffo and Rossano Schifanella (University of Torino); their paper is titled “FairPeers: Efficient Profit Sharing in Fair Peer-to-Peer Market Places”. FairPeers is a P2P framework that supports straightforward digital rights management and a fair economic model: the distributors of a resource are paid as well as its authors. One of the most distinctive features of the proposed system is a lottery scheme that enables efficient multi-source downloading without requiring many accounting operations. The paper includes a security analysis and describes a prototype implementation.
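The general idea behind lottery-based payment schemes is that, instead of accounting for every downloaded chunk, a downloader issues tickets that win with a small probability p and then pay out value/p, so the expected payment is unchanged while most transfers require no accounting at all. The sketch below illustrates that generic principle under assumed parameters; it is not necessarily the exact scheme used by FairPeers.

```python
# Illustrative sketch of lottery-style micropayments: each chunk carries a
# ticket that wins with probability p and then pays value/p, so the expected
# payment per chunk is still `value`. Parameters are assumptions; this is the
# generic principle, not necessarily FairPeers' scheme.
import random

CHUNK_VALUE = 0.001   # amount owed per downloaded chunk
WIN_PROB = 0.01       # probability that a ticket wins

def settle_chunk(rng: random.Random) -> float:
    """Return the payment actually made for one chunk."""
    if rng.random() < WIN_PROB:
        return CHUNK_VALUE / WIN_PROB  # rare, larger payment
    return 0.0                         # usually: no accounting at all

rng = random.Random(42)
paid = sum(settle_chunk(rng) for _ in range(100_000))
print(round(paid, 2), "vs expected", 100_000 * CHUNK_VALUE)  # close to 100.0
```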

The fifth paper is written by Xiaoxiang Leng, Jun Bi, Miao Zhang, and Jianping Wu (Tsinghua University) and is entitled “Connecting IPvX Networks over IPvY with a P2P Method”. It addresses the transition from IPv4 to IPv6. The proposed approach allows IPvX (IPv4 or IPv6) islands inside an IPvY (IPv6 or IPv4) network to communicate with each other, as well as with the native IPvX network. For this purpose, the paper proposes a new method, PS64, which establishes direct tunnels between the IPvX islands. To propagate the required information, a P2P network is maintained among the edge gateways of these islands.
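Conceptually, each edge gateway needs a mapping from remote island prefixes to the IPvY addresses of their gateways, so that packets can be encapsulated and tunneled directly. The sketch below illustrates this bookkeeping for the IPv6-over-IPv4 case, with a plain table standing in for the information propagated over the P2P network; the addresses and the simplified flow are illustrative assumptions, not the PS64 protocol itself.

```python
# Illustrative sketch of the bookkeeping behind prefix-to-gateway tunneling.
# A plain dict stands in for the information propagated over the P2P network
# of edge gateways; this is not the PS64 protocol itself.
import ipaddress

# IPv6 island prefix -> IPv4 address of that island's edge gateway.
tunnel_table = {
    ipaddress.ip_network("2001:db8:a::/48"): "192.0.2.10",
    ipaddress.ip_network("2001:db8:b::/48"): "198.51.100.7",
}

def tunnel_endpoint(dst: str) -> str | None:
    """Return the IPv4 tunnel endpoint for an IPv6 destination, if any."""
    addr = ipaddress.ip_address(dst)
    for prefix, gateway in tunnel_table.items():
        if addr in prefix:
            return gateway  # encapsulate the IPv6 packet toward this gateway
    return None  # no island matches: forward natively or drop

print(tunnel_endpoint("2001:db8:a::1"))  # 192.0.2.10
```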

The final paper, written by Luca Deri (ntop.org), is entitled “High-Speed Dynamic Packet Filtering”. The paper focuses on the detection of P2P traffic on Gigabit networks. On such networks it is important to filter only those packets that are relevant to a given task, while ignoring the others. Popular packet filtering technologies such as BPF enable users to specify complex filters, but not to run many filters simultaneously, which makes it difficult to detect P2P traffic with these traditional techniques. The paper describes the design and implementation of a new dynamic packet filtering solution that allows users to specify several IP filters simultaneously, with almost no packet loss even on highly loaded Gigabit links.
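The contrast with BPF can be made concrete: instead of compiling one monolithic filter expression, a dynamic scheme keeps many simple flow filters in a hash table and tests each packet with a single lookup, so filters can be added and removed at runtime. The sketch below shows only this lookup idea; the field names are our assumptions, and it is not the paper's kernel-level implementation.

```python
# Illustrative sketch: matching a packet against many simple 5-tuple filters
# with one hash lookup, instead of one large compiled BPF expression.
# This shows the general idea only, not the paper's implementation.
from typing import NamedTuple

class FiveTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    proto: int
    src_port: int
    dst_port: int

active_filters: set[FiveTuple] = set()

def add_filter(f: FiveTuple) -> None:
    active_filters.add(f)         # filters can be added/removed at runtime

def matches(pkt: FiveTuple) -> bool:
    return pkt in active_filters  # O(1) lookup regardless of filter count

add_filter(FiveTuple("10.0.0.1", "10.0.0.2", 6, 4662, 80))
print(matches(FiveTuple("10.0.0.1", "10.0.0.2", 6, 4662, 80)))  # True
```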

We would like to thank the authors for submitting their papers to this special issue of JNSM, and the reviewers for spending their time and effort. As usual, Lisandro Zambenedetti Granville did a great job supporting the JEMS paper submission system. Finally, we would also like to acknowledge the support of the EC IST-EMANICS Network of Excellence (#26854).