Which services are provided by the Internet layer of the TCP/IP protocol suite?

MCSA/MCSE 70-291: Reviewing TCP/IP Basics

Deborah Littlejohn Shinder, ... Laura Hunter, in MCSA/MCSE (Exam 70-291) Study Guide, 2003

Layer 2: Internet

The TCP/IP suite has four core protocols that work at the Internet layer, which maps to the Network layer of the OSI model. The Internet layer is responsible for packaging, addressing, and routing the data. The four core protocols used in the TCP/IP suite are:

The Internet Protocol (IP)

The Internet Control Message Protocol (ICMP)

The Internet Group Management Protocol (IGMP)

The Address Resolution Protocol (ARP)

Internet Protocol

The Internet Protocol (IP) is probably the best known of the TCP/IP protocols. Many people, especially those who have even a passing familiarity with computer technology, have heard or used the term IP address. Later in this chapter, we’ll take an in-depth look at how the IP protocol works and you’ll learn the intricacies of IP addressing.

With regard to the TCP/IP architecture, IP is a routable protocol (meaning it can be sent across networks) that handles addressing, routing, and the process of putting data into or taking data out of packets. IP is considered connectionless because it does not establish a session with a remote computer before sending data. Units of data sent via connectionless methods are called datagrams. An IP packet can be lost, delayed, duplicated, or delivered out of sequence, and IP itself makes no attempt to recover from these errors. Recovery is the responsibility of higher-layer protocols, including Transport layer protocols such as TCP.

IP packets contain data that include:

Source IP address: The IP address of the source of the datagram.

Destination IP address: The IP address of the destination of the datagram.

Identification: Identifies a specific IP datagram, as well as all fragments of that datagram if it becomes fragmented.

Protocol: Indicates the upper-layer protocol to which the receiving IP should pass the packet's payload.

Checksum: A simple method of error control; a mathematical calculation that verifies the integrity of the IP header.

Time-to-Live (TTL): Designates the number of router hops the datagram can traverse before it is discarded. This prevents datagrams from circling endlessly on the network.
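As a rough sketch of how these fields sit in the wire format, the following Python snippet unpacks the fixed 20-byte IPv4 header and computes the header checksum. The sample header bytes (addresses, TTL, protocol number) are hypothetical values for illustration, not taken from this chapter.

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header into the fields discussed above."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "identification": ident,          # ties fragments of one datagram together
        "ttl": ttl,                       # hop budget before the datagram is discarded
        "protocol": proto,                # 1 = ICMP, 6 = TCP, 17 = UDP
        "checksum": checksum,             # covers the header only, not the payload
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

def header_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    total = sum(struct.unpack("!10H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A hypothetical header: TTL 64, protocol 17 (UDP), 192.168.0.1 -> 192.168.0.2,
# with the checksum field initially zero as it is during computation.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 1234, 0, 64, 17, 0,
                  bytes([192, 168, 0, 1]), bytes([192, 168, 0, 2]))
fields = parse_ipv4_header(hdr)
```

A receiver verifies integrity by recomputing the checksum over the header with the stored checksum in place; a valid header yields zero.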

Internet Control Message Protocol

The Internet Control Message Protocol (ICMP) is not as well known as its famous cousin, IP. It is responsible for handling errors related to IP packets that cannot be delivered. For instance, if a packet cannot be delivered, a message called Destination Unreachable is sent back to the sending device so it will know that there was an undelivered message. The Destination Unreachable message has several subtypes of messages that can be sent back to the host to help pinpoint the problem. For instance, Network Unreachable and Port Unreachable are two examples of Destination Unreachable messages that may be returned to help the host determine the nature of the problem.

If you have ever used the Ping utility (discussed at the end of this chapter) and received an error, it was ICMP that was responsible for returning the error. In addition to announcing errors, ICMP also announces network congestion (source quench messages) and timeouts (which occur when the TTL field on a packet reaches zero).
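To make the message types concrete, here is a sketch in Python of the ICMP packets involved: the Echo Request that Ping sends, and a decoder for the Destination Unreachable subtypes. The type and code numbers follow RFC 792; the identifiers and payload are illustrative.

```python
import struct

ICMP_ECHO_REQUEST = 8
ICMP_DEST_UNREACHABLE = 3
# RFC 792 codes for the Destination Unreachable subtypes described above.
UNREACHABLE_CODES = {
    0: "Network Unreachable",
    1: "Host Unreachable",
    2: "Protocol Unreachable",
    3: "Port Unreachable",
}

def icmp_checksum(data: bytes) -> int:
    """One's-complement checksum over the whole ICMP message."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Assemble the ICMP Echo Request packet that the Ping utility sends."""
    header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 0, ident, seq)
    chk = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, chk, ident, seq) + payload

def describe_error(packet: bytes) -> str:
    """Translate a returned ICMP error into the messages discussed above."""
    msg_type, code = packet[0], packet[1]
    if msg_type == ICMP_DEST_UNREACHABLE:
        return UNREACHABLE_CODES.get(code, "Destination Unreachable")
    return "Not a Destination Unreachable message"
```

Sending such a packet requires a raw socket and elevated privileges, which is why Ping is usually a system utility rather than application code.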

Note

For more information about ICMP, see RFC 792 at www.freesoft.org/CIE/RFC/792/index.htm, which defines the specifications for this protocol.

Internet Group Management Protocol

The Internet Group Management Protocol (IGMP) manages host membership in multicast groups. IP multicast groups are groups of devices (typically called hosts) that listen for and receive traffic addressed to a specific, shared multicast IP address. Essentially, IP multicast traffic is sent to a specific MAC address but processed by multiple IP hosts. (As you’ll recall from our earlier discussion, each NIC has a unique MAC address, but multicast MAC addresses use a special 24-bit prefix to identify them as such.) IGMP runs on the router, which handles the distribution of multicast packets (often, multicast routing is not enabled on the router by default and must be configured).

Multicasting makes it easy for a server to send the same content to multiple computers simultaneously. IP addresses in a specific range (called Class D addresses) are reserved for multicast assignment. The IGMP protocol allows for different types of messages, used to join multicast groups and to send multicast messages.

A unicast message is sent directly to a single host, whereas a multicast is sent to all members of a particular group. Both utilize connectionless datagrams and are transported via the User Datagram Protocol (UDP) that we’ll discuss in the Host-to-Host Transport Layer section. A multicast is sent to a group of hosts known as an IP multicast group or host group. The hosts in this group listen for IP traffic sent to a specific IP multicast address. IP multicasts are more efficient than broadcasts because the data is received only by computers listening to a specific address. A range of IP addresses, Class D addresses, is reserved for multicast addresses. Windows Server 2003 supports multicast addresses and, by default, is configured to support both the sending and receiving of IP multicast traffic.
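The 24-bit multicast MAC prefix and the Class D range mentioned above can be sketched as follows. This assumes the standard IPv4-to-Ethernet multicast mapping (prefix 01:00:5e plus the low 23 bits of the group address), which the text alludes to but does not spell out.

```python
import ipaddress

def is_class_d(addr: str) -> bool:
    """Class D (multicast) addresses occupy 224.0.0.0 through 239.255.255.255."""
    first_octet = int(addr.split(".")[0])
    return 224 <= first_octet <= 239

def multicast_mac(addr: str) -> str:
    """Map an IPv4 multicast group address onto its Ethernet MAC address.

    The MAC begins with the 24-bit multicast prefix 01:00:5e; the low
    23 bits of the group address fill the remainder, so group members
    can filter the frame at the NIC before IP processing.
    """
    packed = int(ipaddress.IPv4Address(addr))
    low23 = packed & 0x7FFFFF
    octets = [0x01, 0x00, 0x5E,
              (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join("%02x" % o for o in octets)
```

Because only 23 of the 28 significant group-address bits survive the mapping, multiple Class D addresses can share one multicast MAC; IP performs the final filtering.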

Note

For more information about IGMP, see RFC 1112 at www.cis.ohio-state.edu/cgibin/rfc/rfc1112.html, which defines the specifications for IP multicasting.

Exam Warning

Although their acronyms are very similar and they function at the same layer of the networking models, ICMP and IGMP perform very different functions, so be sure you don’t get them confused on the test.

Address Resolution Protocol

The Address Resolution Protocol (ARP) is the last of the four core TCP/IP protocols that work at the Internet layer. As we’ve discussed, each NIC has a unique MAC address. Each NIC also is assigned an IP address that is unique to the network on which it resides. When a packet is sent on a TCP/IP network, the packet headers include a destination IP address (along with other information). The IP address must be translated into a specific MAC address in order for the data to reach its intended recipient. Without ARP, computers must send broadcast messages each time an IP address needs to be matched to a MAC address.

ARP is responsible for maintaining the mappings of IP addresses to MAC addresses. These mappings are stored in the ARP cache, so if the same IP address needs to be matched to a MAC address again, the mapping can be found in the cache; it's not necessary to repeat the discovery process.

The protocol includes four different types of messages: ARP request, ARP reply, RARP request, and RARP reply. RARP refers to the Reverse Address Resolution Protocol, which resolves addresses in the opposite direction (MAC address to IP address). These messages are used to discover the MAC addresses that correspond to specific IP addresses (and vice versa). When the MAC address is correlated to the specific IP address, the data can be sent to the proper host.
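A toy model of the cache-then-discover behavior described above might look like the following. The resolver callback and the address values are hypothetical stand-ins for the real broadcast request/reply exchange, which is handled by the operating system, not by application code.

```python
class ArpCache:
    """A toy model of ARP resolution: consult the cache first, and only
    fall back to the (simulated) broadcast discovery on a miss."""

    def __init__(self, resolver):
        self._cache = {}            # IP address -> MAC address mappings
        self._resolver = resolver   # stands in for the ARP request/reply exchange
        self.requests_sent = 0

    def resolve(self, ip: str) -> str:
        if ip not in self._cache:
            self.requests_sent += 1              # simulated ARP request broadcast
            self._cache[ip] = self._resolver(ip)  # simulated ARP reply
        return self._cache[ip]

# Hypothetical network segment: a lookup table plays the role of the replies.
segment = {"192.168.0.10": "aa:bb:cc:dd:ee:01",
           "192.168.0.11": "aa:bb:cc:dd:ee:02"}
cache = ArpCache(segment.__getitem__)
```

Repeating a lookup is answered from the cache, which is exactly the broadcast traffic ARP caching avoids.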

ARP was originally designed for DEC/Intel/Xerox 10Mbps Ethernet networks, but is now used with other types of IP-based networks as well.

These are the four primary protocols involved in TCP/IP at the Internet layer, which is responsible for addressing, packaging, and routing packets of data. As we move up the protocol stack, we will examine the Transport layer.

Note

For more information about ARP and RARP, see RFCs 826 and 903 at www.networksorcery.com/enp/rfc/rfc826.txt and www.networksorcery.com/enp/rfc/rfc903.txt.


URL: https://www.sciencedirect.com/science/article/pii/B978193183692050007X

An introduction to delay and disruption tolerant networks (DTNs)

Joel J.P.C. Rodrigues, Vasco N.G.J. Soares, in Advances in Delay-Tolerant Networks (DTNs) (Second Edition), 2021

1.1 Introduction

The Internet Protocol (IP) suite, commonly known as TCP/IP (the well-known Transmission Control Protocol/Internet Protocol), makes implicit assumptions of continuous, bi-directional end-to-end paths, short round-trip times, high transmission reliability, and symmetric data rates (Socolofsky and Kale, 1991). However, a wide range of emerging networks (outside the Internet) usually referred to as opportunistic networks, intermittently connected networks, or episodic networks violate these assumptions. These networks fall into the general category of delay/disruption-tolerant networks (DTNs) (Cerf et al., 2007). DTNs experience any combination of the following: sparse connectivity, frequent partitioning, intermittent connectivity, large or variable delays, asymmetric data rates, and low transmission reliability. More importantly, an end-to-end connection cannot be assumed to be available in these networks. Table 1.1 summarizes the main differences between traditional networks (Internet) and DTN networks.

Table 1.1. Main differences between the assumptions of traditional and delay-tolerant networks.

Property                   Traditional (Internet-like)   DTN
End-to-end connectivity    Continuous                    Frequent disconnections
Propagation delay          Short                         Long
Transmission reliability   High                          Low
Link data rate             Symmetric                     Asymmetric

The TCP/IP stack does not properly handle such connectivity challenges. Firstly, the performance of TCP is severely limited by high latency and moderate to high loss rates. Secondly, the performance of the network layer is affected by the loss of fragments. Furthermore, the high latency also causes traditional routing protocols to incorrectly label links as nonoperational. This motivated the proposal of a new network architecture that was designed to enable communication under stressed and unreliable conditions.

The work on Interplanetary Internet Architecture, later generalized to the DTN architecture, began in the late 1990s (Burleigh et al., 2003). DTN is a network research topic focused on the design, construction, performance evaluation, and application of architectures, services, and protocols that intend to enable data communication among heterogeneous networks in extreme environments (Cerf et al., 2007; Scott and Burleigh, 2007; Fall and Farrell, 2008; Fall, 2003). To answer these challenges the DTN Research Group (DTNRG) (2002), which was chartered as part of the Internet Research Task Force (IRTF) (2013), proposed an architecture (i.e., RFC 4838) (Cerf et al., 2007) and a communication protocol (i.e., RFC 5050) (Scott and Burleigh, 2007) for DTNs.

This chapter provides an introduction to delay and disruption tolerant networks and is organized as follows. Section 1.2 reviews the DTN architecture and its key concepts. Next, application scenarios for these networks are presented in Section 1.3. The most relevant well-known routing protocols for DTN-based networks are discussed in Section 1.4. Finally, Section 1.5 concludes the chapter with a summary of the review.


URL: https://www.sciencedirect.com/science/article/pii/B9780081027936000011

Mobile Network and Transport Layer

Vijay K. Garg, in Wireless Communications & Networking, 2007

14.4 TCP/IP Suite

The TCP/IP suite (Figure 14.8) occupies the middle five layers of the 7-layer open system interconnection (OSI) model (see Figure 14.9) [30]. The TCP/IP layering scheme combines several of the OSI layers. From an implementation standpoint, the TCP/IP stack encapsulates the network layer (OSI layer 3) and the transport layer (OSI layer 4). The physical layer and data-link layer (OSI layers 1 and 2, respectively) and the application layer (OSI layer 7) at the top can be considered non-TCP/IP-specific. TCP/IP can be adapted to many different physical media types.


Figure 14.8. TCP/IP protocol suite.


Figure 14.9. A comparison of the OSI model and TCP/IP protocol layers.

IP is the basic protocol. This protocol operates at the network layer (layer 3) in the OSI model and is responsible for encapsulating all upper-layer transport and application protocols. The IP network layer incorporates the necessary elements for addressing and subnetting (dividing the network into subnets), which enables TCP/IP packets to be routed across the network to their destinations. At a parallel level, ARP serves as a helper protocol, mapping physical-layer addresses (typically referred to as MAC-layer addresses) to network-layer (IP) addresses.

There are two transport layer protocols above IP: UDP and TCP. These transport protocols provide delivery services. UDP is a connectionless delivery transport protocol and is used for message-based traffic where sessions are unnecessary. TCP is a connection-oriented protocol that employs sessions for ongoing data exchange. File Transfer Protocol (FTP) and Telnet are examples of applications that use TCP sessions for their transport. TCP also provides reliability by having all packets acknowledged and sequenced. If data is dropped or arrives out of sequence, the stack's TCP layer will retransmit and resequence. UDP is an unreliable service and has no such provisions. Applications such as the Simple Mail Transfer Protocol (SMTP) and the Hypertext Transfer Protocol (HTTP) use transport protocols to encapsulate their information and/or connections. To enable similar applications to talk to one another, TCP/IP has what are called "well-known port numbers." These ports are used as sub-addresses within packets to identify exactly which service or protocol a packet is destined for on a particular host.
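As a small illustration of well-known ports and the two transport services, the sketch below demultiplexes by destination port and creates one socket of each type. The port table is a small subset of the IANA assignments; the unregistered port used in testing is invented.

```python
import socket

# A few well-known port numbers (IANA-assigned).
WELL_KNOWN_PORTS = {
    20: "FTP data", 21: "FTP control", 23: "Telnet",
    25: "SMTP", 53: "DNS", 80: "HTTP",
}

def demultiplex(dst_port: int) -> str:
    """Identify the service a segment is destined for on a host, by port number."""
    return WELL_KNOWN_PORTS.get(dst_port, "unregistered/ephemeral")

# One socket of each transport type: SOCK_STREAM rides on TCP
# (connection-oriented), SOCK_DGRAM on UDP (connectionless).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tcp_sock.close()
udp_sock.close()
```

The socket type, not the port number, selects the transport protocol; the port merely names the service at the destination host.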

TCP/IP serves as a conduit to and from devices, enabling the sharing, monitoring, or control of those devices. A TCP/IP stack can have a tremendous effect on a device's memory resources and CPU utilization. Interactions with other parts of the system may be highly undesirable and unpredictable. Problems in TCP/IP stacks can render a system inoperable.


URL: https://www.sciencedirect.com/science/article/pii/B978012373580550048X

Standards and Protocols in Data Communications

William Shay, in Encyclopedia of Information Systems, 2003

XX. Simple Mail Transfer Protocol

The standard mail protocol in the TCP/IP suite (Internet) is the SMTP. It runs above TCP/IP and below any local mail service. Its primary responsibility is to make sure mail is transferred between different hosts. By contrast, the local service is responsible for distributing mail to specific recipients.

Figure 11 shows the interaction between local mail, SMTP, and TCP. When a user sends mail, the local mail facility determines whether the address is local or requires a connection to a remote site. In the latter case, the local mail facility stores the mail (much as you would put a letter in a mailbox), where it waits for the client SMTP. When the client SMTP delivers the mail, it first calls the TCP to establish a connection with the remote site. When the connection is made, the client and server SMTPs exchange packets and eventually deliver the mail. At the remote end the local mail facility gets the mail and delivers it to the intended recipient.
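The client/server exchange described above can be sketched as the sequence of commands a client SMTP issues once TCP has established the connection. The host and mailbox names below are purely illustrative, and a real session also processes the server's numeric reply codes, which are omitted here.

```python
def smtp_client_commands(sender: str, recipient: str, body: str,
                         client_host: str = "client.example") -> list:
    """Generate the client-side command sequence of an SMTP mail delivery.

    After TCP establishes the connection, the client SMTP issues these
    commands in order; the server answers each with a numeric reply code.
    """
    return [
        "HELO " + client_host,
        "MAIL FROM:<%s>" % sender,
        "RCPT TO:<%s>" % recipient,
        "DATA",
        body + "\r\n.",   # a line containing only a dot terminates the message
        "QUIT",
    ]

# Hypothetical delivery from one host's mail facility to another.
cmds = smtp_client_commands("alice@example.org", "bob@example.net", "Hello Bob")
```

The local mail facility at the remote end then takes over, distributing the accepted message to the intended recipient's mailbox.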


Figure 11. The SMTP.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404001684

Networking Support

Dan C. Marinescu, in Cloud Computing, 2013

7.14 History notes

The Internet is a global network based on the Internet Protocol Suite (TCP/IP); its origins can be traced back to 1965, when Ivan Sutherland, the head of the Information Processing Technology Office (IPTO) at the Advanced Research Projects Agency (ARPA), encouraged Lawrence Roberts, who had previously worked at MIT's Lincoln Laboratory, to become the chief scientist at IPTO and to initiate a networking project based on packet switching rather than circuit switching.

In the early 1960s, Leonard Kleinrock at the University of California at Los Angeles (UCLA) developed the theoretical foundations of packet networks and, in the early 1970s, the theoretical foundations of hierarchical routing in packet-switching networks. Kleinrock published the first paper on packet-switching theory in 1961 and the first book on the subject in 1964.

In August 1968 DARPA released a request for quotation (RFQ) for the development of packet switches called interface message processors (IMPs). A group from Bolt Beranek and Newman (BBN) won the contract. Several researchers and their teams including Robert Kahn from BBN, Lawrence Roberts from DARPA, Howard Frank from Network Analysis Corporation, and Leonard Kleinrock from UCLA, played a major role in the overall ARPANET architectural design. The idea of open-architecture networking was first introduced by Kahn in 1972, and his collaboration with Vint Cerf from Stanford led to the design of TCP/IP. Three groups, one at Stanford, one at BBN, and one at UCLA, won the DARPA contract to implement TCP/IP.

In 1969 BBN installed the first IMP at UCLA. The first two nodes the ARPANET interconnected were the Network Measurement Center at UCLA’s School of Engineering and Applied Science and SRI International in Menlo Park, California. Two more nodes were added at UC Santa Barbara and the University of Utah. By the end of 1971 there were 15 sites interconnected by ARPANET.

Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, and other local area network technologies, such as token-passing rings, allowed personal computers and workstations to be connected to the Internet in the 1980s. As the number of Internet hosts increased, it was no longer feasible to have a single table of all hosts and their addresses. The Domain Name System (DNS), invented by Paul Mockapetris of USC/ISI, provided a scalable, distributed mechanism for resolving hierarchical host names into Internet addresses.

UC Berkeley, with support from DARPA, rewrote the TCP/IP code developed at BBN and incorporated it into the Unix BSD system. In 1985 Dennis Jennings started the NSFNET program at NSF to support the general research and academic communities.


URL: https://www.sciencedirect.com/science/article/pii/B9780124046276000075

Secure Communication in Distributed Manufacturing Systems

István Mezgár, Zoltán Kincses, in Agile Manufacturing: The 21st Century Competitive Strategy, 2001

Transmission Control Protocol/Internet Protocol – TCP/IP

TCP/IP comprises two interrelated protocols that are part of the Internet protocol suite. TCP operates at the OSI Transport Layer; it breaks data into packets and controls host-to-host transmissions over packet-switched communication networks.

Internet protocol (IP) was designed for use in interconnected systems of packet-switched computer communication networks. IP operates on the OSI Network Layer and routes packets. The Internet protocol provides for transmitting blocks of data called datagrams from sources to destinations, where sources and destinations are hosts identified by fixed-length addresses. The Internet protocol also provides for fragmentation and reassembly of long datagrams, if necessary, for transmission through small-packet networks.
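A minimal sketch of the fragmentation and reassembly just described, under simplifying assumptions: real IP ties fragments together with the Identification field and measures offsets in 8-byte units, which this toy version ignores.

```python
def fragment(payload: bytes, mtu_payload: int) -> list:
    """Split a long datagram's payload for transmission through a
    small-packet network, recording each fragment's byte offset and a
    more-fragments flag."""
    frags = []
    for off in range(0, len(payload), mtu_payload):
        chunk = payload[off:off + mtu_payload]
        more = off + mtu_payload < len(payload)
        frags.append({"offset": off, "more_fragments": more, "data": chunk})
    return frags

def reassemble(frags: list) -> bytes:
    """Rebuild the original payload, tolerating out-of-order arrival."""
    ordered = sorted(frags, key=lambda f: f["offset"])
    assert not ordered[-1]["more_fragments"]   # the final fragment is present
    return b"".join(f["data"] for f in ordered)
```

Since each fragment is routed independently, the destination must be prepared to reassemble fragments that arrive in any order.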


URL: https://www.sciencedirect.com/science/article/pii/B9780080435671500188

Overview of the Internet

Anthony Steed, Manuel Fradinho Oliveira, in Networked Graphics, 2010

3.7 Summary

We’ve given a brief summary of the different layers of the IP suite. There are shelves full of books that cover this in more depth, from high-level application protocols through to router and switch configuration. We have focused on pulling out a few characteristics that will shape the way we implement NVEs.

Application layer protocols need to balance efficiency and compactness against readability. Although it is tempting to design messages to be as small as possible, ASCII strings are commonly used for header-like information as they are human readable.

At each layer of the TCP/IP stack, there is header information to add and to provide that layer with the necessary information to handle the data it contains. This encapsulation in layers is an important property of the stack, and means that very different host types and software types can interoperate.
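The layered encapsulation can be sketched with placeholder headers. The tags below are only markers for where each layer's header would go; real headers carry the fields discussed elsewhere in these excerpts.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in toy headers layer by layer, as the stack
    does on the way down to the wire."""
    segment = b"[TCP]" + app_data   # transport layer adds ports, sequence numbers
    packet = b"[IP]" + segment      # internet layer adds addresses, TTL
    frame = b"[ETH]" + packet       # link layer adds MAC addresses
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Strip the headers in reverse order on the way up the stack."""
    for tag in (b"[ETH]", b"[IP]", b"[TCP]"):
        assert frame.startswith(tag)
        frame = frame[len(tag):]
    return frame
```

Because each layer reads only its own header, very different host and software types can interoperate, which is the property the paragraph above emphasizes.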

UDP is an unreliable connectionless service, whereas TCP is a reliable connection-oriented service. The choice between the two is not as simple as just deciding whether or not reliability is needed. Reliability can be added on top of UDP: after all, TCP uses IP just as UDP does. However, we pointed out that retransmission is not desirable in situations where the sender is sending rapidly changing data, such as positions.

TCP provides flow control and congestion control so that the application's bandwidth can be managed. It also avoids fragmentation of packets, so it can be more efficient at transmitting large messages, fully utilizing the available bandwidth. Implementing this over UDP would be onerous.

IP provides the packet-routing layer of the Internet. Routers provide the basic forwarding mechanism, but there is also a “back-channel”, ICMP, that can be useful to get information about the reachability of hosts.

There are various link technologies in use, from mobile through wireless to fiber. There is a balance to be struck between cost and availability, though latency will be a particularly important feature for NVEs.

Multicast and QoS support might be available under certain situations.

We’ve also introduced a small suite of tools that can help us explore how the network is working. Wireshark, Ping, traceroute/tracert, nslookup and ipconfig/ifconfig are all essential tools to help understand and debug network applications.


URL: https://www.sciencedirect.com/science/article/pii/B9780123744234000033

Network Traffic Classification and Demand Prediction

Mikhail Dashevskiy, Zhiyuan Luo, in Conformal Prediction for Reliable Machine Learning, 2014

12.1 Introduction

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol (IP) suite to serve billions of users all over the world. Various network applications utilize the Internet or other network hardware infrastructure to perform useful functions. Network applications often use a client-server architecture, where the client and server are two computers connected to the network. The server is programmed to provide some service to the client. For example, in the World Wide Web (WWW) the client computer runs a Web client program like Firefox or Internet Explorer, and the server runs a Web server program like Apache or Internet Information Server where the shared data would be stored and accessed.

The Internet is based on packet-switching technology. The information exchanged between computers is divided into small data chunks called packets, whose handling is controlled by various protocols. Each packet has a header and a payload: the header carries the information that helps the packet reach its destination, such as the sender's IP address, while the payload carries the data being exchanged. Each packet is routed independently, and transmission resources such as link bandwidth and buffer space are allocated as needed by packets. The principal goals of packet switching are to optimize utilization of available link capacity, minimize response times, and increase the robustness of communication.

In a typical network, the traffic through the network is heterogeneous and consists of flows from multiple applications and utilities. Typically, a stream of packets is generated when a user visits a website or sends an email. A traffic flow is uniquely identified by the four-tuple {source IP address, source port number, destination IP address, destination port number}. Two widely used transport layer protocols are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The main difference between them is that TCP is connection-oriented: a connection is established before data can be exchanged. UDP, on the other hand, is connectionless.
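A sketch of grouping captured packets into flows by that four-tuple follows; the packet records below are hypothetical stand-ins for the port and address fields carried in real headers.

```python
from collections import defaultdict

def group_into_flows(packets: list) -> dict:
    """Group packet records by the four-tuple that identifies a traffic flow."""
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])
        flows[key].append(pkt)
    return flows

# Hypothetical capture: two packets belong to one web flow, one to a DNS flow.
packets = [
    {"src_ip": "10.0.0.5", "src_port": 51000,
     "dst_ip": "93.184.216.34", "dst_port": 80, "size": 1500},
    {"src_ip": "10.0.0.5", "src_port": 51000,
     "dst_ip": "93.184.216.34", "dst_port": 80, "size": 400},
    {"src_ip": "10.0.0.5", "src_port": 53111,
     "dst_ip": "8.8.8.8", "dst_port": 53, "size": 76},
]
flows = group_into_flows(packets)
```

Per-flow statistics (packet counts, sizes, inter-arrival times) computed from such groupings are the typical features fed to traffic classifiers.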

Many different network applications are running on the Internet. The Internet traffic is a huge mixture and thousands of different applications generate lots of different traffic. In addition to “traditional” applications (e.g., email and file transfer), new Internet applications such as multimedia streaming, blogs, Internet telephony, games, and peer-to-peer (P2P) file sharing have become popular [219]. Therefore, there are different packet sending and arrival patterns due to interaction between the sender and the receiver and data transmission behavior. Many of these applications are unique and have their own requirements with respect to network parameters such as bandwidth, delay, and jitter. Loss-tolerant applications such as video conferencing and interactive games can tolerate some amount of data loss. On the other hand, elastic applications such as email, file transfer, and Web transfer can make use of as much, or as little bandwidth as happens to be available. Elastic Internet applications have the greatest share in the traffic transported over the Internet today. Traffic classification can be defined as methods of classifying traffic data based on features passively observed in the traffic, according to specific classification goals. Quality of Service (QoS) is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

The predictability of network traffic is of significant interest in many domains. For example, it can be used to improve the QoS mechanisms as well as congestion and resource control by adapting the network parameters to traffic characteristics. Dynamic resource allocation with the support of traffic prediction can efficiently utilize the network resources and support QoS. One of the key requirements for dynamic resource allocation framework is to predict traffic in the next control time interval based on historical data and online measurements of traffic characteristics over appropriate timescales. Machine learning algorithms are capable of observing and identifying patterns within the statistical variations of a monitored parameter such as resource consumption. They can then make appropriate predictions concerning future resource demands using this past behavior. There are two main approaches to predictions used in dynamic resource allocation: indirectly predicting traffic behavior descriptors and directly forecasting resource consumption.

In the indirect traffic prediction approach, it is assumed that there is an underlying stochastic model of the network traffic. Time-series modeling is typically used to build such a model from a given traffic trace. First, we want to determine the likely values of the parameters associated with the model. Then a set of possible models may be selected, and parameter values are determined for each model. Finally, diagnostic checking is carried out to establish how well the estimated model conforms to the observed data. A wide range of time series models has been developed to represent short-range and long-range dependent behavior in network traffic. However, it is still an open problem regarding how to fit an appropriate model to the given traffic trace. In addition, long execution times are associated with the model selection process.

The direct traffic prediction approach is more fundamental in nature and more challenging than indirect traffic descriptor prediction. This is because we can easily derive any statistical traffic descriptors from the concrete traffic volume, but not vice versa. Different learning and prediction approaches, including conventional statistical methods and machine learning approaches such as neural networks, have been applied to dynamic resource reservation problems [61]. Despite the reported success of these methods in asynchronous transfer mode and wireless networks, the learning and prediction techniques used can only provide simple predictions; that is, the algorithms make predictions without saying how reliable these predictions are. The reliability of a method is often determined by measuring general accuracy across independent test sets. For learning and prediction algorithms, if we make no prior assumptions about the probability distribution of the data, other than that it is identically and independently distributed (i.i.d.), there is no formal confidence relationship between the accuracy of the prediction made with the test data and the prediction associated with a new and unknown case.

Network behavior can change considerably over time and space. Learning and prediction should ideally be adaptive and provide confidence information. In this chapter, we apply conformal predictions to enhance the learning algorithms for network traffic classification and demand prediction problems. The novelty of conformal predictions is that they can learn and predict simultaneously, continually improving their performance as they make each new prediction and ascertain how accurate it is. Conformal predictors not only give predictions, but also provide additional information about reliability with their outputs. Note that in the case of regression the predictions output by such algorithms are intervals where the true value is supposed to lie.

The remainder of this chapter is structured as follows. Section 12.2 discusses the application of conformal predictions to the problem of network traffic classification. Section 12.3 considers the application of conformal predictions to network demand prediction and presents a way of constructing reliable prediction intervals (i.e., intervals that include point predictions) by using conformal predictors. Section 12.4 shows experimental results of conformal prediction on public network traffic datasets. Finally, Section 12.5 presents conclusions.


URL: https://www.sciencedirect.com/science/article/pii/B9780123985378000122

Internet Security

Jesse Walker, in Computer and Information Security Handbook, 2009

Publisher Summary

This chapter illustrates how cryptography is used on the Internet to secure protocols and reviews the architecture of the Internet protocol suite, since even what security means is a function of the underlying system architecture. It also reviews the Dolev-Yao model, which describes the threats to which network communications are exposed. In particular, all levels of network protocols are completely exposed to eavesdropping and manipulation by an attacker, so using cryptography properly is a first-class requirement for deriving any benefit from its use. Effective security mechanisms for protecting session-oriented and session-establishment protocols are different, although they can share many cryptographic primitives. Cryptography can be very successful at protecting messages on the Internet, but doing so requires preexisting, long-lived relationships. How to build secure open communities is still an open problem; it is probably intractable because a solution would imply the elimination of conflict between human beings who do not know each other.


URL: https://www.sciencedirect.com/science/article/pii/B9780123743541000078

Internet Security

Jesse Walker, in Network and System Security (Second Edition), 2014

5 Summary

This chapter examined how cryptography is used on the Internet to secure protocols. It reviewed the architecture of the Internet protocol suite, for even the meaning of security is a function of the underlying system architecture. Next it reviewed the Dolev-Yao model, which describes the threats to which network communications are exposed. In particular, all levels of network protocols are completely exposed to eavesdropping and manipulation by an attacker, so using cryptography properly is a first-class requirement to derive any benefit from its use. We learned that effective security mechanisms to protect session-oriented and session establishment protocols are different, although they can share many cryptographic primitives. Cryptography can be very successful in protecting messages on the Internet, but doing so requires preexisting, long-lived relationships. How to build secure open communities is still an open problem; it is probably intractable because a solution would imply the elimination of conflict between human beings who do not know each other.

Finally, let’s move on to the real interactive part of this chapter: review questions/exercises, hands-on projects, case projects, and optional team case project. The answers and/or solutions by chapter can be found in the online Instructor’s Solutions Manual.


URL: https://www.sciencedirect.com/science/article/pii/B9780124166899000071

What is a service provided by the Internet layer in the TCP/IP model?

DNS: The Domain Name System (DNS) is the name service provided by the Internet for TCP/IP networks. DNS maps host names to IP addresses and also serves as a database for mail administration.
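As a sketch, Python's standard socket module exposes the system resolver for exactly this host-name-to-address mapping:

```python
import socket

def lookup(hostname: str) -> str:
    """Ask the system resolver (ultimately DNS) for an IPv4 address."""
    return socket.gethostbyname(hostname)

# "localhost" resolves locally, without contacting a remote DNS server.
addr = lookup("localhost")
```

Resolving a public name such as www.example.com works the same way but requires network access and a reachable DNS server.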

Which services are provided by the Internet layer of the TCP/IP protocol suite (choose three)?

DNS, DHCP, and FTP are all application layer protocols in the TCP/IP protocol suite.

What are the services provided by the TCP protocol?

Following are some of the services offered by the Transmission Control Protocol (TCP) to the processes at the application layer:

Stream delivery service

Sending and receiving buffers

Bytes and segments

Full-duplex service

Connection-oriented service

Reliable service

What are the layers of the TCP/IP protocol suite?

The TCP/IP suite of protocols can be understood in terms of layers (or levels). This figure depicts the layers of the TCP/IP protocol. From the top, they are the Application Layer, Transport Layer, Network Layer, Network Interface Layer, and Hardware.