Seamlessly selecting the best copy from internet-wide replicated web servers

  • Yair Amir
  • Alec Peterson
  • David Shaw
Contributed Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1499)

Abstract

The explosion of the web has led to a situation where the majority of Internet traffic is web related. Today, practically all popular web sites are served from single locations. This necessitates frequent long-distance network transfers of data (potentially repeatedly), which results in high response times for users and wastes available network bandwidth. Moreover, it commonly creates a single point of failure between the web site and its Internet provider. This paper presents a new approach to web replication, where each of the replicas resides in a different part of the network and the browser is automatically and transparently directed to the “best” server. Implementing this architecture for popular web sites will result in better response times and higher availability for these sites. Equally important, this architecture can potentially eliminate a significant fraction of the traffic on the Internet, freeing bandwidth for other uses.
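The paper's actual selection mechanism is only summarized in this abstract. As an illustrative sketch (not the authors' implementation), one simple proximity heuristic for picking the "best" replica is to measure TCP connect latency to each candidate and direct the client to the fastest responder; the replica addresses and port below are hypothetical.

```python
import socket
import time

# Hypothetical replica addresses (example/documentation IPs, not real servers)
REPLICAS = ["198.51.100.10", "203.0.113.20", "192.0.2.30"]

def connect_rtt(host, port=80, timeout=2.0):
    """Measure TCP connect time to host:port in seconds; None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def best_replica(replicas, port=80):
    """Return the reachable replica with the lowest connect latency, or None."""
    timed = [(rtt, h) for h in replicas
             if (rtt := connect_rtt(h, port)) is not None]
    return min(timed)[1] if timed else None
```

In the architecture the abstract describes, this kind of decision would be made transparently on the server side (e.g., when resolving the site's name), so the browser needs no modification; the sketch above only shows the latency-comparison idea in isolation.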

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Yair Amir (1)
  • Alec Peterson (1)
  • David Shaw (1)
  1. Department of Computer Science, Johns Hopkins University, USA