A Brief History of the Internet and the Web

The pre-history of the Internet can be traced to a mathematical theory proposed in 1961 by Leonard Kleinrock, who wrote his doctoral dissertation at the Massachusetts Institute of Technology on queueing theory, the mathematical basis of packet-switching, which is employed to direct messages in large networks.  In 1964, computer networking pioneer Paul Baran proposed the idea of breaking messages up into small units, known as packets, that can be transmitted at variable rates, and developed the concept of packet-switching in military networks.  The work of Kleinrock and Baran established the conceptual and mathematical framework that allowed the Internet to develop.
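
The core idea, splitting a message into small, individually numbered units that can be sent separately and reassembled on arrival, can be illustrated with a short sketch.  The Python snippet below is a simplified illustration only, not an actual network protocol; the 16-character packet size and the dictionary representation are arbitrary choices for demonstration.

    import random

    def packetize(message: str, packet_size: int = 16) -> list[dict]:
        # Split the message into fixed-size pieces, each tagged with a
        # sequence number so the receiver can put them back in order.
        return [
            {"seq": i, "data": message[start:start + packet_size]}
            for i, start in enumerate(range(0, len(message), packet_size))
        ]

    def reassemble(packets: list[dict]) -> str:
        # Sort by sequence number and concatenate the payloads.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    message = "Messages are broken into packets and reassembled on arrival."
    packets = packetize(message)
    random.shuffle(packets)  # simulate packets arriving out of order
    assert reassemble(packets) == message

The sequence numbers are what allow the packets to travel independently, at variable rates and along different routes, yet still be reconstructed into the original message at the destination.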

 

The first precursor to the present-day Internet was ARPAnet, conceived in 1967 by the Advanced Research Projects Agency (ARPA), a research and development agency of the United States Department of Defense.  The first operational node was placed on ARPAnet in 1969.  The nodes on the network were initially connected through wires and cables.  However, wireless technology was an active area of research and development, and in 1970 the ALOHAnet packet radio network, linking universities on the Hawaiian islands, was established.  ARPAnet, by then consisting of 15 nodes, was unveiled to the public and demonstrated in 1972.  The first e-mail program was also introduced during this time.

 

In 1974, Vinton Cerf and Robert Kahn published a proposal for an architecture for interconnecting networks, known as the Transmission Control Protocol (TCP).  TCP is a protocol for transferring packets through a network, and is therefore needed to deliver messages from applications, broken up into packets, from the sender (the source of the message) to the receiver (the destination).  Their protocol embodied the basic internetworking principles that define the present-day Internet: minimalism, autonomy, simplicity in interconnecting networks, routers that move packets from source to destination through intermediate nodes, and decentralized control.

 

In terms of Internet architecture, a substantial innovation came in 1976, when Xerox PARC, the research and development division of Xerox, introduced Ethernet, a networking technology that became the basis of small, localized networks, known as local area networks, or LANs.  LANs are common in most organizations, and primarily serve the needs of the organization that operates them.  For example, an accounts payable or human resources network provides databases and management services to its respective department.  LANs can be connected together to form larger LANs within an organization, and may also be connected to non-local networks, or even the Internet, for external access.  In the late 1970s, a precursor to Asynchronous Transfer Mode (ATM), which switches fixed-length packets, was introduced.

 

By 1979, ARPAnet had grown to include 200 nodes.  The TCP protocol proposed by Cerf and Kahn is foundational to the Internet architecture.  Because of the increasing interconnection between networks, and the “Internet” that was beginning to emerge from these interconnections, the Internet Protocol, or IP, used for transmitting messages across network boundaries, was developed.  Recall that messages from applications are decomposed into smaller units known as packets.  When packets are relayed across networks, additional source and destination information, or routing information, must be appended to them; the resulting unit is called a datagram.  IP defines the format and addressing of these datagrams.  Nodes on the Internet are identified by numeric addresses, known as IP addresses.  Because IP complements TCP in internetworked applications, the combination of these two protocols is called TCP/IP.  TCP/IP was first deployed in 1983.
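
TCP/IP remains the substrate of virtually all Internet communication today.  As a minimal sketch of these protocols in use, the following Python snippet opens a TCP connection and lets the operating system's TCP/IP stack handle packetization, datagram addressing, and reliable delivery; the host example.com and port 80 are placeholders for any reachable server.

    import socket

    # create_connection() resolves the name to a numeric IP address and
    # opens a TCP connection; the operating system's TCP/IP stack handles
    # packetization, datagram addressing, routing, and reliable delivery.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        ip_address, port = conn.getpeername()
        print(f"Connected to {ip_address} on port {port}")

The numeric address printed here is the IP address of the destination node, exactly the kind of identifier described above.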

 

The decade of the 1980s also witnessed the inception of many services that are commonly associated with the Internet as it is known today.  In 1982, the Simple Mail Transfer Protocol, or SMTP, was defined, providing the standard communication protocol for e-mail messages.  In 1983, the Domain Name System, or DNS, was defined for converting simple, human-readable names to numeric IP addresses, and vice versa.  The File Transfer Protocol (FTP), the protocol for transferring data and files between a client system (the system making a request) and a server system (the system providing the file or handling the client’s request), was defined in 1985.  In 1988, an improvement was made to TCP to address network congestion.  In the 1980s, new national networks, including CSnet (the Computer Science Network), BITnet (a cooperative university network in the United States), and NSFnet (a network of the National Science Foundation), were established.  NSFnet, which went online in 1986, was the first large-scale networking system in which multiple independently operated networks were interconnected through the deployment of emerging Internet technologies.  At that point in time, the growing Internet consisted of 100,000 hosts.
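
DNS lookups are directly accessible from most programming languages.  As a brief illustration, the Python standard library exposes the system's resolver; example.com below is simply a placeholder domain.

    import socket

    # Forward lookup: human-readable name -> numeric IP address.
    ip = socket.gethostbyname("example.com")
    print(f"example.com resolves to {ip}")

    # Reverse lookup: numeric IP address -> name (not every address has one).
    try:
        name, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip} maps back to {name}")
    except socket.herror:
        print(f"No reverse DNS entry for {ip}")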

 

In the early years of the 1990s, ARPAnet was decommissioned, or withdrawn from service.  In 1991, NSFnet began to allow commercial use; that network was itself decommissioned in 1995.  Nonetheless, major advances in the accessibility of the Internet were seen in the decade of the 1990s.  The idea of Web hypertext is based upon earlier proposals by Vannevar Bush in 1945 and by Ted Nelson, who coined the terms “hypertext” and “hypermedia” in 1963.  Hypertext is a software-based system that links topics together, and that links topics with related information and other media, such as graphics, in such a way that these references are immediately accessible to the user.  It is one of the most crucial features underlying the accessibility of the World Wide Web, and is a core component of the Hypertext Markup Language (HTML), used to express Web pages.  The links themselves are known as hyperlinks.  Hypertext is particularly relevant to the digital humanities, as it allows “non-linear” reading of texts.  Nelson imagined hypertext as “…a computer filing system which would store and deliver the great body of human literature, in all its historical versions and with all its messy interconnections, acknowledging authorship, ownership, quotation and linkage” (Barnet, 2010).
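
In HTML, a hyperlink is expressed as an anchor element whose href attribute names the target resource.  As a small sketch of how such links are machine-readable, the following Python snippet uses the standard library's HTML parser to list every link target in a fragment of HTML; the fragment and its URLs are invented for the example.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect the target of every <a href="..."> hyperlink."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    # An invented HTML fragment containing two hyperlinks.
    fragment = (
        '<p>See <a href="https://example.com/bush1945">Bush (1945)</a> and '
        '<a href="https://example.com/nelson1963">Nelson (1963)</a>.</p>'
    )

    extractor = LinkExtractor()
    extractor.feed(fragment)
    print(extractor.links)

It is this explicit, machine-readable linking that makes the non-linear reading described above possible.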

 

The concept of the World Wide Web, or “Web”, merits clarification.  First, the Web is not synonymous with the Internet; it is one of many services provided by the Internet.  Some of these other services include e-mail (SMTP) and file transfer (FTP), mentioned earlier, as well as VoIP and Telnet.  The Web is an interface to the Internet: a system for identifying documents and other Internet resources through an identification scheme known as the Uniform Resource Locator, or URL.  URLs are familiar to users as the identifying names of websites, such as www.somewebsitename.ca, and may be linked through hyperlinks, as explained above.  Resources on the Web are delivered to users through the Hypertext Transfer Protocol (HTTP).  Users access these resources through specialized applications known as web browsers, or simply browsers, which operate on a variety of computational devices, including desktop and laptop computers, mobile phones, and larger computer systems.
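
The same request/response exchange that a browser performs can be scripted directly.  As a minimal sketch, the Python snippet below retrieves a resource identified by a URL over HTTP; https://example.com/ stands in for any real address.

    from urllib.request import urlopen

    # Request the resource named by the URL; the server replies over HTTP
    # with status information, headers, and the page content (usually HTML).
    with urlopen("https://example.com/") as response:
        print(response.status)                       # e.g. 200 for success
        print(response.headers.get("Content-Type"))  # media type of the resource
        html = response.read().decode("utf-8")
        print(html[:200])                            # first 200 characters of the page

A browser performs exactly this exchange, then renders the returned HTML, follows its hyperlinks on request, and fetches any embedded resources.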

 

British computer scientist Tim Berners-Lee and his colleagues at the European Laboratory for Particle Physics (CERN) are generally credited with “inventing” the World Wide Web in 1989.  The Web was constructed from various networking projects and the open innovation principles at CERN (Dutton, 2013).  Hypertext was a key enabling technology for the new interface.  The Web itself became part of the public consciousness around 1994, following the introduction of the Mosaic browser, whose developers went on to create Netscape.  The Web became increasingly commercialized in the late 1990s.  The late 1990s and early 2000s brought further major innovations, including instant messaging and peer-to-peer (P2P) file sharing.  Also during this period, network security, including the protection of Internet resources and personal user information, was thrust into the foreground.  Links between major networks were running at speeds on the order of gigabits (1 billion bits) per second (Gbps).  By the early years of the 21st century, there were an estimated 50 million hosts, with over 100 million users.

 

In the years following 2005, there were approximately 750 million hosts on the Internet.  This period also saw the introduction of Internet-connected smartphones and tablets, small devices that connect wirelessly to the Internet and with which interaction is primarily through touch screens.  Access to broadband, a data transmission mechanism that transports multiple signals simultaneously, improved markedly during these years, and high-speed wireless access became more commonplace.  Social media and social networks introduced during this period drastically changed how social interaction takes place.  Finally, cloud computing, in which computational resources and data are provided to users on demand through networks, without the user having to explicitly manage these resources, became prominent.  Commercial activity conducted over the Internet, known as e-commerce, as well as institutions and other enterprises, began to make increasing use of cloud services.

 

The decade from 2010 to the present day is characterized by further innovations, with a major focus on accessibility.  Ubiquitous computing embeds computational devices in systems that are not explicitly computational, and in common everyday objects, with availability at all times and in all places.  Sensor networks, which collect data from remote locations and transmit them to computer systems for further processing and analysis, became crucially important in environmental science, meteorology, public safety, and a host of other application areas.  Sensor networks used for health and medical monitoring, in which human beings are non-invasively instrumented with sensors for heart rate, blood pressure, temperature, and other vital signs, have become an active area of scientific endeavor known as smart health.  The smart city is a related concept, wherein vehicular traffic, pedestrian traffic, and other data in urban areas are collected to manage resources efficiently and cost-effectively, and to study ways to improve services and their delivery.  Additionally, improvements and enhancements of Internet technology facilitated remote and virtual education, as well as virtual laboratories and studios.

