Alex Preston / bookshelf

Introduction to Networking: How the Internet Works by Charles Severance

Packets and Routers

Highlight [8]: The idea of breaking a message into packets was pioneered in the 1960s, but it was not widely used until the 1980s because it required more computing power and more sophisticated networking software.

Highlight [8]: As networks moved away from the store-and-forward approach, they started to include special-purpose computers that specialized in moving packets. These were initially called “Interface Message Processors” or “IMPs” because they acted as the interface between general-purpose computers and the rest of the network. Later these computers dedicated to communications were called “routers” because their purpose was to route the packets they received towards their ultimate destination.

The Link Layer

Highlight [16]: The Link layer needs to solve two basic problems when dealing with these shared local area networks. The first problem is how to encode and send data across the link. If the link is wireless, engineers must agree on which radio frequencies are to be used to transmit data and how the digital data is to be encoded in the radio signal. For wired connections, they must agree on what voltage to use on the wire and how fast to send the bits across the wire. For Link layer technologies that use fiber optics, they must agree on the frequencies of light to be used and how fast to send the data.

Coordination in Other Link Layers

Highlight [32]: The token approach is best suited when using a link medium such as a satellite link or an undersea fiber optic link where it might take too long or be too costly to detect a collision. The CSMA/CD (listen-try) approach is best suited when the medium is inexpensive and shorter distance, and there are a lot of stations sharing the medium that only send data in short bursts. So that is why WiFi (and CSMA/CD) is so effective for providing network access in a coffee shop, home, or room in a school.
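
The "listen-try" idea can be sketched in a few lines of Python. This is a deliberately toy model (the backoff timers and collision detection of real CSMA/CD are omitted, and `medium_idle` is an invented stand-in for what the station hears on each listen):

```python
def listen_try_send(medium_idle, max_listens=10):
    """Toy 'listen-try' loop: keep listening until the shared
    medium sounds idle, then transmit. medium_idle is a list of
    booleans, True when no other station is talking."""
    for attempt, idle in enumerate(medium_idle[:max_listens], start=1):
        if idle:
            return attempt  # medium was free; frame sent on this listen
    return None             # medium stayed busy; give up

listen_try_send([False, False, True])  # busy, busy, then free → 3
```

A real station would also wait a random backoff interval after hearing a busy medium, so that many waiting stations do not all transmit at the same instant.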

Getting an IP Address

Highlight [48]: This ability for your computer to get a different IP address when it is moved from one network to another uses a protocol called “Dynamic Host Configuration Protocol” (or DHCP for short).

A Different Kind of Address Reuse

Highlight [49]: Addresses that start with “192.168.” are called “non-routable” addresses. This means that they will never be used as real addresses that will route data across the core of the network. They can be used within a single local network, but not used on the global network.

Highlight [49]: So then how is it that your computer gets an address like “” on your home network and it works perfectly well on the overall Internet? This is because your home router/gateway/base station is doing something we call “Network Address Translation”, or “NAT”.
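
A quick way to see which addresses fall in these non-routable ranges is Python's standard `ipaddress` module (my own illustration, not from the book — `is_private` covers 192.168.0.0/16 along with the other RFC 1918 ranges):

```python
import ipaddress

# RFC 1918 private (non-routable) ranges include 192.168.0.0/16,
# 10.0.0.0/8, and 172.16.0.0/12; is_private reports all of them.
for addr in ["192.168.1.10", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "non-routable" if ip.is_private else "globally routable")
```

The first two print as non-routable; 8.8.8.8 is a globally routable address.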

Global IP Address Allocation

Highlight [50]: At the top level of IP address allocations are five Regional Internet Registries (RIRs). Each of the five registries allocates IP addresses for a major geographic area. Between the five registries, every location in the world can be allocated a network number. The five registries are North America (ARIN), South and Central America (LACNIC), Europe (RIPE NCC), Asia-Pacific (APNIC), and Africa (AFRINIC).

Allocating Domain Names

Highlight [60]: At the top of the domain name hierarchy is an organization called the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN chooses the top-level domains (TLDs) like .com, .edu, and .org and assigns those to other organizations to manage. Recently a new set of TLDs like .club and .help have been made available.

Packet Headers

Highlight [66]: The IP header holds the source and destination Internet Protocol (IP) addresses as well as the Time to Live (TTL) for the packet. The IP header is set on the source computer and is unchanged (other than the TTL) as the packet moves through the various routers on its journey.
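
The TTL and the two addresses sit at fixed offsets in the IPv4 header (TTL at byte 8, source address at bytes 12–15, destination at 16–19), so they can be pulled out of raw bytes directly. A small sketch, with a hand-built sample header rather than a captured packet:

```python
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Pull TTL, source, and destination out of a 20-byte IPv4 header."""
    ttl = raw[8]                        # hop count, decremented per router
    src = socket.inet_ntoa(raw[12:16])  # set once by the source computer
    dst = socket.inet_ntoa(raw[16:20])
    return {"ttl": ttl, "src": src, "dst": dst}

# Hand-built sample header: TTL 64, 192.168.1.2 -> 93.184.216.34
sample = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 6, 0, 0,
                192, 168, 1, 2, 93, 184, 216, 34])
print(parse_ipv4_header(sample))
```

Each router decrements the TTL byte by one and discards the packet if it reaches zero, which is why the TTL is the one field that changes along the journey.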

Packet Reassembly and Retransmission

Highlight [68]: The combination of the receiving computer acknowledging received data, not allowing the transmitting computer to get too far ahead (window size), and the receiving computer requesting the sending computer to “back up and restart” when it appears that data has been lost creates a relatively simple method to reliably send large messages or files across a network.
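
The "back up and restart" part can be sketched as a toy loop: the sender keeps resending from the first unacknowledged packet until it gets through. This is my own much-simplified illustration (no window size, cumulative acknowledgment only, and `lost` is an invented loss model listing which transmission attempts the network drops):

```python
def reliable_send(packets, lost):
    """Toy retransmission loop: resend from the first unacked packet
    whenever a transmission is lost. lost is a set of
    (packet_index, attempt_number) pairs the network drops."""
    received, next_needed, attempts = [], 0, {}
    while next_needed < len(packets):
        i = next_needed
        attempts[i] = attempts.get(i, 0) + 1
        if (i, attempts[i]) in lost:
            continue  # dropped; sender backs up and resends packet i
        received.append(packets[i])
        next_needed += 1  # cumulative acknowledgment advances
    return received

reliable_send(["a", "b", "c"], lost={(1, 1)})  # "b" lost once, then resent
```

The call returns the full message in order even though one transmission was lost, which is the whole point of the mechanism.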

Exploring the HTTP Protocol

Highlight [78]: The “telnet” application was first developed in 1968, according to one of the earliest standards for the Internet. Telnet is a simple application. Run telnet from the command line (or terminal) and type the following command: telnet 80
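
What you type into that telnet session is a raw HTTP request. Here is the same request built in Python (the host name and path are placeholders of my own, not from the book; sending is left as a comment so the sketch runs offline):

```python
# The lines a telnet user would type after connecting on port 80.
host = "www.example.com"  # placeholder host
request = (
    "GET /index.html HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"  # blank line tells the server the request is complete
)
print(request)
# socket.create_connection((host, 80)).sendall(request.encode())
# would actually send it, after `import socket`.
```

The trailing blank line matters: the server waits for it before answering, which is why a telnet session seems to hang until you press Enter twice.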

Highlight [81]: The status codes for HTTP are grouped into ranges: 2XX codes indicate success, 3XX codes are for redirecting, 4XX codes indicate that the client application did something wrong, and 5XX codes indicate that the server did something wrong.
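
Because the grouping is by the hundreds digit, classifying a code is a one-line lookup. A small sketch (the function name is my own):

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to the meaning of its range."""
    return {
        2: "success",
        3: "redirect",
        4: "client error",
        5: "server error",
    }.get(code // 100, "other")

for code in (200, 301, 404, 503):
    print(code, status_class(code))
```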

Flow Control

Highlight [84]: A key thing to notice in this picture is that the transport layers do not keep the packets for the entire file. They only retain packets that are “in transit” and unacknowledged. Once packets are acknowledged and delivered to the destination application, there is no reason for either the source or destination Transport layer to hold on to the packets.

Highlight [85]: This ability to start and stop the sending application to make sure we send data as quickly as possible without sending it so fast that it clogs up the Internet is called “flow control”. The applications are not responsible for flow control; they just try to send or receive data as quickly as possible, and the two transport layers start and stop the applications as needed based on the speed and reliability of the network.


Highlight [86]: The entire purpose of the lower three layers (Transport, Internetwork, and Link) is to make it so that applications running in the Application layer can focus on the application problem that needs to be solved and leave virtually all of the complexity of moving data across a network to be handled by the lower layers of the network model.

Encrypting and Decrypting Data

Highlight [92]: The concept of protecting information so it cannot be read while it is being transported over an insecure medium is thousands of years old. The leaders in Roman armies sent coded messages to each other using a code called the “Caesar Cipher”. The simplest version of this approach is to take each of the characters of the actual message (we call this “plain text”) and shift each character a fixed distance down the alphabet to produce the scrambled message or “ciphertext”.
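
The fixed-distance shift is easy to write out; shifting by the negative distance decrypts. A short sketch:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter a fixed distance down the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

cipher = caesar("attack at dawn", 3)  # → "dwwdfn dw gdzq"
plain = caesar(cipher, -3)            # shifting back recovers the plain text
```

With only 25 possible shifts, trying them all by hand defeats the cipher in minutes, which is exactly the weakness the next highlight alludes to.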

Highlight [92]: The Caesar Cipher is very simple to defeat, but it was used to protect important messages until about 150 years ago. Modern encryption techniques are far more sophisticated than a simple character shift, but all encryption systems depend on some kind of a secret key that both parties are aware of so they can decrypt received data.

Two Kinds of Secrets

Highlight [94]: We call the encryption key the “public” key because it can be widely shared. We call the decryption key the “private” key because it never leaves the computer where it was created. Another name for asymmetric keys is public/private keys.
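The public/private split can be shown with toy RSA arithmetic. This is purely an illustration with tiny primes chosen by me; real keys are hundreds of digits long, and you should never roll your own cryptography:

```python
# Toy RSA to show the asymmetric-key idea, not real cryptography.
p, q = 61, 53
n = p * q                           # modulus, part of both keys
e = 17                              # public exponent: share freely
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: never leaves this machine

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)   # only the private key holder can decrypt
print(recovered)  # → 42
```

The encryption step uses only the widely shared pair (e, n); undoing it requires d, which is computed from the factors of n that are kept secret.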

Secure Sockets Layer (SSL)

Highlight [94]: Since network engineers decided to add security nearly 20 years after the Internet protocols were developed, it was important not to break any existing Internet protocols or architecture. Their solution was to add an optional partial layer between the Transport layer and the Application layer. They called this partial layer the Secure Sockets Layer (SSL) or Transport Layer Security (TLS).

The OSI Model

Highlight [103]: The OSI model has seven layers instead of the four layers of the TCP/IP model. Starting at the bottom (nearest the physical connections) of the OSI model, the layers are: (1) Physical, (2) Data Link, (3) Network, (4) Transport, (5) Session, (6) Presentation, and (7) Application. We will look at each layer in the OSI model in turn, starting with the Physical layer.

Wrap Up

Highlight [111]: It has been said that building the Internet solved the world’s most complex engineering problem to date. The design and engineering of the Internet started well over 50 years ago. It has been continuously improving and evolving over the past 50 years and will continue to evolve in the future.

Highlight [111]: The Internet is so complex that it is never fully operational. The Internet is less about being “perfect” and more about adapting to problems, outages, errors, lost data, and many other unforeseen problems. The Internet is designed to be flexible and adapt to whatever problems are encountered.

Highlight [112]: The Internetwork Protocol (IP) layer is how data is routed across a series of hops to get quickly and efficiently from one of a billion source computers to any of a billion destination computers. The IP layer dynamically adjusts and reroutes data based on network load, link performance, or network outages.

Highlight [112]: The Transport layer compensates for any imperfections in the IP or Link layers. The Transport layer makes sure that any lost packets are retransmitted and packets that arrive out of order are put back into order before being passed on to the receiving application. The Transport layer also acts as flow control between the sending and receiving applications to make sure that data is moved quickly when the network is fast and the links are not overloaded, and to slow the transfer of data when using slower or heavily loaded links.