400-101 Free Demo Questions and Answers

Author: Richard Koons

Question: 1

Which two options are causes of out-of-order packets? (Choose two.)

A. a routing loop

B. a router in the packet flow path that is intermittently dropping packets

C. high latency

D. packets in a flow traversing multiple paths through the network

E. some packets in a flow being process-switched and others being interrupt-switched on a transit router

Answer: D, E

Explanation:

In traditional packet forwarding systems, different paths through the network have varying latencies, which can cause packets of the same flow to arrive out of order, eventually resulting in far lower performance for the network application. Likewise, if some packets in a flow are process-switched (the slow path through the router's CPU) while others are interrupt-switched (the much faster path), the faster-switched packets can overtake earlier ones on a transit router, resulting in out-of-order delivery. The other options would cause packet drops or added latency, but not out-of-order packets.

Question: 2

A TCP/IP host is able to transmit small amounts of data (typically less than 1500 bytes), but attempts to transmit larger amounts of data hang and then time out. What is the cause of this problem?

A. A link is flapping between two intermediate devices.

B. The processor of an intermediate router is averaging 90 percent utilization.

C. A port on the switch that is connected to the TCP/IP host is duplicating traffic and sending it to a port that has a sniffer attached.

D. There is a PMTUD failure in the network path.

Answer: D

Explanation:

Sometimes, over some IP paths, a TCP/IP node can send small amounts of data (typically less than 1500 bytes) with no difficulty, but transmission attempts with larger amounts of data hang, then time out. Often this is observed as a unidirectional problem in that large data transfers succeed in one direction but fail in the other direction. This problem is likely caused by the TCP MSS value, PMTUD failure, different LAN media types, or defective links.
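
To make the failure mode concrete, below is a minimal Python sketch (hypothetical per-hop MTU values, no real sockets or ICMP) of how PMTUD normally converges on the path MTU, and why a black-holed "fragmentation needed" message produces exactly the symptom described: small transfers succeed while large ones hang.

```python
# Minimal model of Path MTU Discovery (assumed/simplified values, not real socket code).
# Each hop has an MTU; the sender marks packets Don't Fragment (DF) and lowers its
# path MTU whenever a hop returns ICMP Type 3 Code 4 ("fragmentation needed, DF set").

PATH_MTUS = [1500, 1400, 1300]          # hypothetical per-hop MTUs along the path

def send_with_pmtud(payload_len, icmp_blocked=False, ip_tcp_overhead=40):
    pmtu = 1500                          # sender starts from its own interface MTU
    while True:
        packet_len = min(payload_len + ip_tcp_overhead, pmtu)
        bottleneck = min(PATH_MTUS)
        if packet_len <= bottleneck:
            return f"delivered {packet_len}-byte packet (PMTU={pmtu})"
        if icmp_blocked:
            # The oversized DF packet is silently dropped and no ICMP comes back:
            # small transfers (that fit in one small packet) work, large ones hang.
            return "black hole: packet dropped, no ICMP -> transfer hangs and times out"
        # Normal PMTUD: ICMP Type 3 Code 4 reports the smaller MTU, sender retries smaller.
        pmtu = bottleneck

print(send_with_pmtud(200))                       # small payload succeeds either way
print(send_with_pmtud(8000))                      # PMTUD shrinks the packets, succeeds
print(send_with_pmtud(8000, icmp_blocked=True))   # classic PMTUD failure symptom
```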

Reference: http://www.cisco.com/c/en/us/support/docs/additional-legacy-protocols/ms-windows-networking/13709-38.html

Question: 3

Refer to the exhibit.

ICMP Echo requests from host A are not reaching the intended destination on host B. What is the problem?

A. The ICMP payload is malformed.

B. The ICMP Identifier (BE) is invalid.

C. The negotiation of the connection failed.

D. The packet is dropped at the next hop.

E. The link is congested.

Answer: D

Explanation:

Here we see that the Time to Live (TTL) value of the packet is one, so it will be forwarded to the next hop router, but then dropped because the TTL value will be 0 at the next hop.

Question: 4

Refer to the exhibit.

Which statement is true?

A. It is impossible for the destination interface to equal the source interface.

B. NAT on a stick is performed on interface Et0/0.

C. There is a potential routing loop.

D. This output represents a UDP flow or a TCP flow.

Answer: C

Explanation:

In this example we see that the source interface and destination interface are the same (Et0/0). Typically this is seen when there is a routing loop for the destination IP address.

Question: 5

Which three conditions can cause excessive unicast flooding? (Choose three.)

A. Asymmetric routing

B. Repeated TCNs

C. The use of HSRP

D. Frames sent to FFFF.FFFF.FFFF

E. MAC forwarding table overflow

F. The use of Unicast Reverse Path Forwarding

Answer: A, B, E

Explanation:

Causes of Flooding

The root cause of flooding is that the destination MAC address of the packet is not in the L2 forwarding table of the switch. In this case the packet is flooded out of all forwarding ports in its VLAN (except the port it was received on). The case studies below describe the most common reasons for the destination MAC address not being known to the switch.

Cause 1: Asymmetric Routing

With asymmetric routing, return traffic takes a different Layer 2 path, so a switch may stop seeing frames sourced from a particular MAC address; that entry ages out of the forwarding table while the upstream router's ARP entry is still valid, and traffic toward that MAC address is then flooded. Large amounts of flooded traffic might saturate low-bandwidth links, causing network performance issues or a complete connectivity outage for devices connected across such low-bandwidth links.

Cause 2: Spanning-Tree Protocol Topology Changes

Another common issue caused by flooding is Spanning-Tree Protocol (STP) Topology Change Notification (TCN). TCN is designed to correct forwarding tables after the forwarding topology has changed. This is necessary to avoid a connectivity outage, as after a topology change some destinations previously accessible via particular ports might become accessible via different ports. TCN operates by shortening the forwarding table aging time, such that if the address is not relearned, it will age out and flooding will occur.

TCNs are triggered by a port that is transitioning to or from the forwarding state. After the TCN, even if the particular destination MAC address has aged out, flooding should not happen for long in most cases since the address will be relearned. The issue might arise when TCNs are occurring repeatedly with short intervals. The switches will constantly be fast-aging their forwarding tables so flooding will be nearly constant.

Normally, a TCN is rare in a well-configured network. When a port on a switch goes up or down, there is eventually a TCN once the STP state of the port changes to or from forwarding. When the port is flapping, repetitive TCNs and flooding occur.

Cause 3: Forwarding Table Overflow

Another possible cause of flooding can be overflow of the switch forwarding table. In this case, new addresses cannot be learned and packets destined to such addresses are flooded until some space becomes available in the forwarding table. New addresses will then be learned. This is possible but rare, since most modern switches have large enough forwarding tables to accommodate MAC addresses for most designs.

Forwarding table exhaustion can also be caused by an attack on the network where one host starts generating frames each sourced with different MAC address. This will tie up all the forwarding table resources. Once the forwarding tables become saturated, other traffic will be flooded because new learning cannot occur. This kind of attack can be detected by examining the switch forwarding table. Most of the MAC addresses will point to the same port or group of ports. Such attacks can be prevented by limiting the number of MAC addresses learned on untrusted ports by using the port security feature.

Reference: http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6000-series-switches/23563-143.html#causes

Question: 6

Which congestion-avoidance or congestion-management technique can cause global synchronization?

A. Tail drop

B. Random early detection

C. Weighted random early detection

D. Weighted fair queuing

Answer: A

Explanation:

Tail Drop

Tail drop treats all traffic equally and does not differentiate between classes of service. Queues fill during periods of congestion. When the output queue is full and tail drop is in effect, packets are dropped until the congestion is eliminated and the queue is no longer full.

Weighted Random Early Detection

WRED avoids the global synchronization problems that occur when tail drop is used as the congestion avoidance mechanism on the router. Global synchronization occurs as waves of congestion crest, only to be followed by troughs during which the transmission link is not fully utilized. Global synchronization of TCP hosts, for example, can occur because packets are dropped all at once. Global synchronization manifests when multiple TCP hosts reduce their transmission rates in response to packet dropping and then increase their transmission rates once again when the congestion is reduced.
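
For illustration, the following Python snippet computes the standard RED drop-probability curve that (W)RED applies between its minimum and maximum thresholds; the threshold and maximum-probability values here are made up for the example.

```python
# Illustrative RED drop-probability curve (parameter values are made up).
# Below min_th nothing is dropped, above max_th everything is (tail-drop-like),
# and in between the drop probability ramps up so TCP flows back off gradually
# instead of all at once, which is what prevents global synchronization.

def red_drop_probability(avg_queue_depth, min_th=20, max_th=40, max_p=0.10):
    if avg_queue_depth < min_th:
        return 0.0
    if avg_queue_depth >= max_th:
        return 1.0
    return max_p * (avg_queue_depth - min_th) / (max_th - min_th)

for depth in (10, 20, 30, 40, 50):
    print(depth, red_drop_probability(depth))
```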

Reference: http://www.cisco.com/c/en/us/td/docs/ios/12_2/qos/configuration/guide/fqos_c/qcfconav.html#wp1002048

Question: 7

Which two options are reasons for TCP starvation? (Choose two.)

A. The use of tail drop

B. The use of WRED

C. Mixing TCP and UDP traffic in the same traffic class

D. The use of TCP congestion control

Answer: C, D

Explanation:

It is a general best practice to not mix TCP-based traffic with UDP-based traffic (especially Streaming-Video) within a single service-provider class because of the behaviors of these protocols during periods of congestion. Specifically, TCP transmitters throttle back flows when drops are detected. Although some UDP applications have application-level windowing, flow control, and retransmission capabilities, most UDP transmitters are completely oblivious to drops and, thus, never lower transmission rates because of dropping.

When TCP flows are combined with UDP flows within a single service-provider class and the class experiences congestion, TCP flows continually lower their transmission rates, potentially giving up their bandwidth to UDP flows that are oblivious to drops. This effect is called TCP starvation/UDP dominance.

TCP starvation/UDP dominance likely occurs if (TCP-based) Mission-Critical Data is assigned to the same service-provider class as (UDP-based) Streaming-Video and the class experiences sustained congestion. Even if WRED or other TCP congestion control mechanisms are enabled on the service-provider class, the same behavior would be observed because WRED (for the most part) manages congestion only on TCP-based flows.

Reference: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/QoS-SRND-Book/VPNQoS.html

Question: 8

Refer to the exhibit.

While troubleshooting high CPU utilization of a Cisco Catalyst 4500 Series Switch, you notice the error message that is shown in the exhibit in the log file.

What can be the cause of this issue, and how can it be prevented?

A. The hardware routing table is full. Redistribute from BGP into IGP.

B. The software routing table is full. Redistribute from BGP into IGP.

C. The hardware routing table is full. Reduce the number of routes in the routing table.

D. The software routing table is full. Reduce the number of routes in the routing table.

Answer: C

Explanation:

L3HWFORWARDING-2

Error Message C4K_L3HWFORWARDING-2-FWDCAMFULL:L3 routing table is full. Switching to software forwarding.

The hardware routing table is full; forwarding takes place in the software instead. The switch performance might be degraded.

Recommended Action: Reduce the size of the routing table. Enter the ip cef command to return to hardware forwarding.

Reference: http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst4500/12-2/31sg/system/message/message/emsg.html

Question: 9

Refer to the exhibit.

Which two are causes of output queue drops on FastEthernet0/0? (Choose two.)

A. an oversubscribed input service policy on FastEthernet0/0

B. a duplex mismatch on FastEthernet0/0

C. a bad cable connected to FastEthernet0/0

D. an oversubscribed output service policy on FastEthernet0/0

E. The router trying to send more than 100 Mb/s out of FastEthernet0/0

Answer: D, E

Explanation:

Output drops are caused by a congested interface. For example, the traffic rate on the outgoing interface cannot accept all packets that should be sent out, or a service policy is applied that is oversubscribed. The ultimate solution to resolve the problem is to increase the line speed. However, there are ways to prevent, decrease, or control output drops when you do not want to increase the line speed. You can prevent output drops only if output drops are a consequence of short bursts of data. If output drops are caused by a constant high-rate flow, you cannot prevent the drops. However, you can control them.

Reference: http://www.cisco.com/c/en/us/support/docs/routers/10000-series-routers/6343-queue-drops.html

Question: 10

Refer to the exhibit.

Which statement about the output is true?

A. The flow is an HTTPS connection to the router, which is initiated by 144.254.10.206.

B. The flow is an HTTP connection to the router, which is initiated by 144.254.10.206.

C. The flow is an HTTPS connection that is initiated by the router and that goes to 144.254.10.206.

D. The flow is an HTTP connection that is initiated by the router and that goes to 144.254.10.206.

Answer: A

Explanation:

We can see that the connection is initiated by the source IP address shown, 144.254.10.206. We also see that the destination port (DstP) shows 01BB, which is hexadecimal for 443 decimal. SSL/HTTPS uses TCP port 443.
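
The hex-to-decimal conversion is easy to verify, for example with a couple of lines of Python:

```python
# Quick check of the NetFlow DstP field: 0x01BB in hex is 443 decimal (HTTPS),
# while an HTTP flow would show 0x0050 (port 80).
print(int("01BB", 16))   # 443 -> HTTPS
print(hex(80))           # 0x50 -> what an HTTP flow would show
```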

Question: 11

What is the cause of ignores and overruns on an interface, when the overall traffic rate of the interface is low?

A. a hardware failure of the interface

B. a software bug

C. a bad cable

D. microbursts of traffic

Answer: D

Explanation:

Micro-bursting is a phenomenon where rapid bursts of data packets are sent in quick succession, leading to periods of full line-rate transmission that can overflow packet buffers of the network stack, both in network endpoints and routers and switches inside the network.

Symptoms of microbursts manifest in the form of ignores and/or overruns (also accumulated in the "input error" counter within the show interface output). This indicates that the receive ring and its corresponding packet buffers are being overwhelmed by bursts of data arriving over extremely short periods of time (microseconds). You will not see this as sustained traffic in the show interface "input rate" counter, because that counter averages bits per second over 5 minutes by default, far too long an interval to reveal microbursts. Think of a three-lane highway merging into a single lane at rush hour: the burst cannot exceed the total available bandwidth (the single lane), but it can saturate it for a period of time.
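
The arithmetic below (with illustrative numbers) shows why a rate counter averaged over the default 5-minute interval cannot reveal a microburst that nonetheless overruns the receive buffers:

```python
# Why the averaged "input rate" counter hides microbursts (illustrative numbers).
# A 5 ms burst at full 1 Gb/s line rate can overrun a small receive buffer, yet it
# barely moves a rate that is averaged over the default 5-minute interval.

line_rate_bps = 1_000_000_000      # 1 Gb/s interface
burst_seconds = 0.005              # 5 ms microburst at line rate
interval_secs = 300                # default 5-minute averaging window

burst_bits   = line_rate_bps * burst_seconds
averaged_bps = burst_bits / interval_secs

print(f"bits in the burst      : {burst_bits:,.0f}")
print(f"5-minute averaged rate : {averaged_bps:,.0f} bps "
      f"({averaged_bps / line_rate_bps:.4%} of line rate)")
```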

Reference: http://ccieordie.com/?tag=micro-burst

Question: 12

Refer to the exhibit.

Which statement about the debug behavior of the device is true?

A. The device debugs all IP events for 172.16.129.4.

B. The device sends all debugging information for 172.16.129.4.

C. The device sends only NTP debugging information to 172.16.129.4.

D. The device sends debugging information every five seconds.

Answer: A

Explanation:

This is an example of a conditional debug, where a single condition is specified: IP address 172.16.129.4. All IP events for that address will therefore be shown in the debug output.

Question: 13

Which statement about MSS is true?

A. It is negotiated between sender and receiver.

B. It is sent in all TCP packets.

C. It is 20 bytes lower than MTU by default.

D. It is sent in SYN packets.

E. It is 28 bytes lower than MTU by default.

Answer: D

Explanation:

The maximum segment size (MSS) is a parameter of the Options field of the TCP header that specifies the largest amount of data, in octets, that a computer or communications device can receive in a single TCP segment. It does not count the TCP header or the IP header. The IP datagram containing a TCP segment may be self-contained within a single packet, or it may be reconstructed from several fragmented pieces; either way, the MSS limit applies to the total amount of data contained in the final, reconstructed TCP segment.

The default TCP Maximum Segment Size is 536. Where a host wishes to set the maximum segment size to a value other than the default, the maximum segment size is specified as a TCP option, initially in the TCP SYN packet during the TCP handshake. The value cannot be changed after the connection is established.
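
As a quick illustration of the usual IPv4 relationship (MSS = MTU minus 20 bytes of IP header minus 20 bytes of TCP header, i.e. 40 bytes lower than the MTU), here is a small Python helper; the 1492 example assumes a PPPoE-style reduced MTU.

```python
# Typical IPv4 relationship between MTU and the advertised MSS:
# MSS = MTU - IP header (20 bytes) - TCP header (20 bytes), i.e. 40 bytes lower,
# which is why a 1500-byte Ethernet MTU normally yields an MSS of 1460.

def default_mss(mtu, ip_header=20, tcp_header=20):
    return mtu - ip_header - tcp_header

print(default_mss(1500))   # 1460
print(default_mss(1492))   # 1452, e.g. behind PPPoE
```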

Reference: http://en.wikipedia.org/wiki/Maximum_segment_size

Question: 14

Which two methods change the IP MTU value for an interface? (Choose two.)

A. Configure the default MTU.

B. Configure the IP system MTU.

C. Configure the interface MTU.

D. Configure the interface IP MTU.

Answer: C, D

Explanation:

An IOS device configured for IP+MPLS routing uses three different Maximum Transmission Unit (MTU) values:

The hardware MTU configured with the mtu interface configuration command

The IP MTU configured with the ip mtu interface configuration command

The MPLS MTU configured with the mpls mtu interface configuration command

The hardware MTU specifies the maximum packet length the interface can support … or at least that's the theory behind it. In reality, longer packets can be sent (assuming the hardware interface chipset doesn't complain); therefore you can configure MPLS MTU to be larger than the interface MTU and still have a working network. Oversized packets might not be received correctly if the interface uses fixed-length buffers; platforms with scatter/gather architecture (also called particle buffers) usually survive incoming oversized packets.

IP MTU is used to determine whether an IP packet forwarded through an interface has to be fragmented. It has to be lower than or equal to the hardware MTU (and this limitation is enforced). If it equals the HW MTU, its value does not appear in the running configuration and it tracks changes in the HW MTU. For example, if you configure ip mtu 1300 on a serial interface, it will appear in the running configuration as long as the hardware MTU is not equal to 1300 (and will not change as the HW MTU changes). However, as soon as mtu 1300 is configured, the ip mtu 1300 command disappears from the configuration and the IP MTU once again tracks the HW MTU.
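
The following toy Python model (not IOS code, just a sketch of the behavior described above) captures the two rules: the IP MTU may never exceed the hardware MTU, and once the two are equal the IP MTU simply tracks later changes to the hardware MTU.

```python
# Toy model of the behaviour described above (not IOS code): the IP MTU can never
# exceed the hardware MTU, and once the two are equal the IP MTU simply tracks
# any later change to the hardware MTU.

class Interface:
    def __init__(self, hw_mtu=1500):
        self.hw_mtu = hw_mtu
        self._ip_mtu = None              # None = "not explicitly configured"

    @property
    def ip_mtu(self):
        return self._ip_mtu if self._ip_mtu is not None else self.hw_mtu

    def set_ip_mtu(self, value):
        if value > self.hw_mtu:
            raise ValueError("ip mtu must be lower than or equal to the hardware mtu")
        # An explicit value equal to the hardware MTU is treated as "tracking".
        self._ip_mtu = None if value == self.hw_mtu else value

    def set_hw_mtu(self, value):
        self.hw_mtu = value
        if self._ip_mtu is not None and self._ip_mtu >= value:
            self._ip_mtu = None          # collapses back to tracking the hardware MTU

intf = Interface(hw_mtu=1500)
intf.set_ip_mtu(1300)
print(intf.ip_mtu)        # 1300, explicitly configured
intf.set_hw_mtu(1300)
print(intf.ip_mtu)        # 1300, but now tracking the hardware MTU
intf.set_hw_mtu(1200)
print(intf.ip_mtu)        # 1200, follows the hardware MTU again
```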

Reference: http://blog.ipspace.net/2007/10/tale-of-three-mtus.html

Question: 15

Which implementation can cause packet loss when the network includes asymmetric routing paths?

A. the use of ECMP routing

B. the use of penultimate hop popping

C. the use of Unicast RPF

D. disabling Cisco Express Forwarding

Answer: C

Explanation:

When administrators use Unicast RPF in strict mode, the packet must be received on the interface that the router would use to forward the return packet. Unicast RPF configured in strict mode may drop legitimate traffic that is received on an interface that was not the router's choice for sending return traffic. Dropping this legitimate traffic could occur when asymmetric routing paths are present in the network.

Reference: http://www.cisco.com/web/about/security/intelligence/unicast-rpf.html

Question: 16

Which two mechanisms can be used to eliminate Cisco Express Forwarding polarization? (Choose two.)

A. alternating cost links

B. the unique-ID/universal-ID algorithm

C. Cisco Express Forwarding antipolarization

D. different hashing inputs at each layer of the network

Answer: B, D

Explanation:

This document describes how Cisco Express Forwarding (CEF) polarization can cause suboptimal use of redundant paths to a destination network. CEF polarization is the effect when a hash algorithm chooses a particular path and the redundant paths remain completely unused.

How to Avoid CEF Polarization

Alternate between default (SIP and DIP) and full (SIP + DIP + Layer4 ports) hashing inputs configuration at each layer of the network.

Alternate between an even and odd number of ECMP links at each layer of the network.

The CEF load-balancing does not depend on how the protocol routes are inserted in the routing table; therefore, OSPF routes exhibit the same behavior as EIGRP. In a hierarchical network where there are several routers performing load-sharing in a row, they all use the same algorithm to load-share.

The hash algorithm load-balances this way by default:

  1: 1
  2: 7-8
  3: 1-1-1
  4: 1-1-1-2
  5: 1-1-1-1-1
  6: 1-2-2-2-2-2
  7: 1-1-1-1-1-1-1
  8: 1-1-1-2-2-2-2-2

The number before the colon represents the number of equal-cost paths. The numbers after the colon represent the relative share of traffic that is forwarded on each path.

This means that:

For two equal cost paths, load-sharing is 46.666%-53.333%, not 50%-50%.

For three equal cost paths, load-sharing is 33.33%-33.33%-33.33% (as expected).

For four equal cost paths, load-sharing is 20%-20%-20%-40% and not 25%-25%-25%-25%.

This illustrates that, when there is an even number of ECMP links, the traffic is not evenly load-balanced.
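
The percentages quoted above follow directly from the default per-path shares listed earlier; a short Python loop reproduces them:

```python
# The percentages quoted above are just the default per-path shares normalized.
# The key is the number of equal-cost paths; the values are the relative share
# of traffic each path receives with the default hash distribution.

default_shares = {
    2: [7, 8],
    3: [1, 1, 1],
    4: [1, 1, 1, 2],
    5: [1, 1, 1, 1, 1],
    6: [1, 2, 2, 2, 2, 2],
    7: [1, 1, 1, 1, 1, 1, 1],
    8: [1, 1, 1, 2, 2, 2, 2, 2],
}

for paths, shares in default_shares.items():
    total = sum(shares)
    pct = ["{:.2f}%".format(100 * s / total) for s in shares]
    print(paths, "paths:", " - ".join(pct))
```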

Cisco IOS introduced a concept called unique-ID/universal-ID, which helps avoid CEF polarization. This algorithm, called the universal algorithm (the default in current Cisco IOS versions), adds a 32-bit router-specific value to the hash function (called the universal ID; this is a randomly generated value at switch boot-up that can also be manually controlled). This seeds the hash function on each router with a unique ID, which ensures that the same source/destination pair hashes to a different value on different routers along the path. This process provides better network-wide load sharing and circumvents the polarization issue. The unique-ID concept does not work for an even number of equal-cost paths due to a hardware limitation, but it works perfectly for an odd number of equal-cost paths. In order to overcome this problem, Cisco IOS adds one link to the hardware adjacency table when there is an even number of equal-cost paths in order to make the system believe that there is an odd number of equal-cost links.
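
The real CEF hash is platform specific, but the principle of the universal ID can be illustrated with a small Python sketch (CRC32 stands in for the actual hash, and the router IDs and addresses are hypothetical): without a per-router seed every router maps a given source/destination pair to the same path index, which is polarization; with a unique seed the choices differ per router.

```python
# Illustration of CEF polarization and the universal-ID fix. The real CEF hash is
# platform specific; this sketch only shows the principle: without a per-router
# seed, every router picks the same path index for a given source/destination
# pair (polarization), while seeding the hash with a router-unique value spreads
# the choices out along the path.

import zlib

def path_index(src, dst, n_paths, universal_id=0):
    key = f"{universal_id}:{src}:{dst}".encode()
    return zlib.crc32(key) % n_paths

flows = [("10.1.1.1", "10.2.2.2"), ("10.1.1.3", "10.2.2.9"), ("10.1.1.7", "10.2.2.5")]

# Same (unseeded) hash on every router: each of the three routers along the path
# makes exactly the same choice for every flow, so some links stay unused.
print("no seed    :", [[path_index(s, d, 2) for s, d in flows] for _ in range(3)])

# Per-router universal ID: each router makes an independent choice for the same flow.
print("with seeds :", [[path_index(s, d, 2, seed) for s, d in flows] for seed in (11, 22, 33)])
```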

Reference: http://www.cisco.com/c/en/us/support/docs/ip/express-forwarding-cef/116376-technote-cef-00.html

Question: 17

Which two mechanisms provide Cisco IOS XE Software with control plane and data plane separation? (Choose two.)

A. Forwarding and Feature Manager

B. Forwarding Engine Driver

C. Forwarding Performance Management

D. Forwarding Information Base

Answer: A, B

Explanation:

Control Plane and Data Plane Separation

IOS XE makes it possible to build drivers for new Data Plane ASICs outside the IOS instance and have them program to a set of standard APIs, which in turn enforces Control Plane and Data Plane processing separation.

IOS XE accomplishes Control Plane / Data Plane separation through the introduction of the Forwarding and Feature Manager (FFM) and its standard interface to the Forwarding Engine Driver (FED). FFM provides a set of APIs to Control Plane processes. In turn, the FFM programs the Data Plane via the FED and maintains forwarding state for the system. The FED is the instantiation of the hardware driver for the Data Plane and is provided by the platform.

Reference: http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-xe-3sg/QA_C67-622903.html

Question: 18

Refer to the exhibit.

What is the PHB class on this flow?

A. EF

B. none

C. AF21

D. CS4

Answer: D

Explanation:

This command shows the TOS value in hex, which is 80 in this case. The following chart shows some common DSCP/PHB Class values:

Service            DSCP value  TOS value  Juniper alias  TOS hexadecimal  DSCP - TOS binary
Premium IP         46          184        ef             B8               101110 - 101110xx
LBE                8           32         cs1            20               001000 - 001000xx
DWS                32          128        cs4            80               100000 - 100000xx
Network control    48          192        cs6            c0               110000 - 110000xx
Network control 2  56          224        cs7            e0               111000 - 111000xx
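
The lookup can also be done directly from the TOS byte in the exhibit: the DSCP is the top six bits of the TOS byte, so 0x80 yields DSCP 32, which is CS4. A small Python helper (its name table only covers the values listed above) confirms this:

```python
# Converting the TOS byte shown in the exhibit (0x80) to its DSCP / PHB name.
# DSCP is the top six bits of the TOS byte; class selectors are DSCP values that
# are multiples of 8. The name table below only covers the values listed above.

PHB_NAMES = {46: "EF", 8: "CS1", 32: "CS4", 48: "CS6", 56: "CS7"}

def tos_to_phb(tos_byte):
    dscp = tos_byte >> 2                      # drop the two low-order (ECN) bits
    return dscp, PHB_NAMES.get(dscp, f"DSCP {dscp}")

print(tos_to_phb(0x80))   # (32, 'CS4')  -> matches the answer
print(tos_to_phb(0xB8))   # (46, 'EF')
```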

Reference: http://www.tucny.com/Home/dscp-tos

Question: 19

Refer to the exhibit.

What kind of load balancing is done on this router?

A. per-packet load balancing

B. per-flow load balancing

C. per-label load balancing

D. star round-robin load balancing

Answer: A

Explanation:

Here we can see that for the same traffic source/destination pair of 10.0.0.1 to 14.0.0.2 there were a total of 100 packets (shown by the second entry, without the *), and that the packets were distributed evenly across the three different outgoing interfaces (34, 33, and 33 packets, respectively).

Question: 20

What is the most efficient way to confirm whether microbursts of traffic are occurring?

A. Monitor the output traffic rate using the show interface command.

B. Monitor the output traffic rate using the show controllers command.

C. Check the CPU utilization of the router.

D. Sniff the traffic and plot the packet rate over time.

Answer: D

Explanation:

Micro-bursting is a phenomenon where rapid bursts of data packets are sent in quick succession, leading to periods of full line-rate transmission that can overflow packet buffers of the network stack, both in network endpoints and routers and switches inside the network.

In order to troubleshoot microbursts, you need a packet sniffer that can capture traffic over a long period of time and allow you to analyze it in the form of a graph which displays the saturation points (packet rate during microbursts versus total available bandwidth). You can eventually trace it to the source causing the bursts (e.g. stock trading applications).
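
As a sketch of that approach, the Python below assumes you already have per-packet timestamps and frame sizes exported from your sniffer (the capture data here is made up); it bins traffic into 1 ms buckets and flags any bucket that approaches line rate even though the long-term average is low.

```python
# Sketch of the "sniff and plot" approach: given per-packet capture timestamps and
# frame sizes (exported from whatever sniffer you use; the data below is made up),
# bin the traffic into 1 ms buckets and flag buckets that approach line rate even
# though the long-term average looks low.

from collections import defaultdict

def find_microbursts(packets, line_rate_bps=1_000_000_000, bin_ms=1, threshold=0.8):
    bins = defaultdict(int)                       # bits observed per bin
    for ts, frame_len in packets:
        bins[int(ts * 1000 / bin_ms)] += frame_len * 8
    bin_capacity = line_rate_bps * bin_ms / 1000  # bits a full-rate bin could carry
    return [(b, bits) for b, bits in sorted(bins.items()) if bits >= threshold * bin_capacity]

# Hypothetical capture: light background traffic plus one dense ~1 ms burst of
# 1500-byte frames at t = 0.5 s that momentarily saturates a 1 Gb/s link.
capture = [(0.100 + i * 0.0005, 1500) for i in range(20)]      # background traffic
capture += [(0.500 + i * 0.000012, 1500) for i in range(80)]   # ~1 Gb/s burst

print(find_microbursts(capture))
```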

Reference: Adam, Paul (2014-07-12). All-in-One CCIE V5 Written Exam Guide (Kindle Locations 989-994). Kindle Edition.

Question: 21

What is a cause for unicast flooding?

A. Unicast flooding occurs when multicast traffic arrives on a Layer 2 switch that has directly connected multicast receivers.

B. When PIM snooping is not enabled, unicast flooding occurs on the switch that interconnects the PIM-enabled routers.

C. A man-in-the-middle attack can cause the ARP cache of an end host to have the wrong MAC address. Instead of having the MAC address of the default gateway, it has a MAC address of the man-in-the-middle. This causes all traffic to be unicast flooded through the man-in-the-middle, which can then sniff all packets.

D. Forwarding table overflow prevents new MAC addresses from being learned, and packets destined to those MAC addresses are flooded until space becomes available in the forwarding table.

Answer: D

Explanation:

Causes of Flooding

The root cause of flooding is that the destination MAC address of the packet is not in the L2 forwarding table of the switch. In this case the packet is flooded out of all forwarding ports in its VLAN (except the port it was received on). The case studies below describe the most common reasons for the destination MAC address not being known to the switch.

Cause 1: Asymmetric Routing

Large amounts of flooded traffic might saturate low-bandwidth links, causing network performance issues or a complete connectivity outage for devices connected across such low-bandwidth links.

Cause 2: Spanning-Tree Protocol Topology Changes

Another common issue caused by flooding is Spanning-Tree Protocol (STP) Topology Change Notification (TCN). TCN is designed to correct forwarding tables after the forwarding topology has changed. This is necessary to avoid a connectivity outage, as after a topology change some destinations previously accessible via particular ports might become accessible via different ports. TCN operates by shortening the forwarding table aging time, such that if the address is not relearned, it will age out and flooding will occur.

Cause 3: Forwarding Table Overflow

Another possible cause of flooding can be overflow of the switch forwarding table. In this case, new addresses cannot be learned and packets destined to such addresses are flooded until some space becomes available in the forwarding table. New addresses will then be learned. This is possible but rare, since most modern switches have large enough forwarding tables to accommodate MAC addresses for most designs.

Reference:

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6000-series-switches/23563-143.html

Question: 22

Refer to the exhibit.

Video Source S is sending interactive video traffic to Video Receiver R. Router R1 has multiple routing table entries for destination R. Which load-balancing mechanism on R1 can cause out-of-order video traffic to be received by destination R?

A. per-flow load balancing on R1 for destination R

B. per-source-destination pair load balancing on R1 for destination R

C. CEF load balancing on R1 for destination R

D. per-packet load balancing on R1 for destination R

Answer: D

Explanation:

Per-packet load balancing guarantees an equal load across all links; however, the packets may arrive out of order at the destination because differential delay may exist within the network.

Reference: http://www.cisco.com/en/US/products/hw/modules/ps2033/prod_technical_reference09186a00800afeb7.html

Question: 23

What is Nagle's algorithm used for?

A. To increase the latency

B. To calculate the best path in distance vector routing protocols

C. To calculate the best path in link state routing protocols

D. To resolve issues caused by poorly implemented TCP flow control.

Answer: D

Explanation:

Silly window syndrome is a problem in computer networking caused by poorly implemented TCP flow control. A serious problem can arise in the sliding window operation when the sending application program creates data slowly, the receiving application program consumes data slowly, or both. If a server with this problem is unable to process all incoming data, it requests that its clients reduce the amount of data they send at a time (the window setting on a TCP packet). If the server continues to be unable to process all incoming data, the window becomes smaller and smaller, sometimes to the point that the data transmitted is smaller than the packet header, making data transmission extremely inefficient. The name of this problem refers to the window size shrinking to a "silly" value. The syndrome is created when there is no synchronization between the sender and receiver regarding the capacity of the data flow or the size of the packets. When the silly window syndrome is created by the sender, Nagle's algorithm is the cure. Nagle's solution requires that the sender send the first segment even if it is a small one, and then wait until an ACK is received or a maximum-sized segment (MSS) has accumulated.
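
A heavily condensed Python model of the sender-side rule (the class and method names are illustrative, not a real TCP stack) shows the behavior: a write goes out immediately when nothing is unacknowledged or a full MSS has accumulated; otherwise the data is buffered and coalesced until an ACK arrives.

```python
# Simplified model of Nagle's sender-side rule (heavily condensed): a full
# MSS-sized segment goes out immediately; small data is sent only when nothing
# is unacknowledged, otherwise it is buffered and coalesced until an ACK arrives.
# This prevents a stream of tiny "silly window" segments.

class NagleSender:
    def __init__(self, mss=1460):
        self.mss = mss
        self.buffer = b""
        self.unacked = False

    def write(self, data):
        """Application hands data to TCP; returns the segment sent now, if any."""
        self.buffer += data
        if len(self.buffer) >= self.mss or not self.unacked:
            return self._send()
        return None                     # held back: small segment while data is in flight

    def ack_received(self):
        """ACK for outstanding data: flush whatever has accumulated."""
        self.unacked = False
        return self._send() if self.buffer else None

    def _send(self):
        segment, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
        self.unacked = True
        return segment

s = NagleSender()
print(len(s.write(b"x" * 10)))    # 10: first small write goes out (nothing unacked)
print(s.write(b"x" * 10))         # None: buffered, a segment is still unacknowledged
print(s.write(b"x" * 10))         # None: still coalescing
print(len(s.ack_received()))      # 20: ACK arrives, coalesced data is flushed
```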

Reference: http://en.wikipedia.org/wiki/Silly_window_syndrome

Question: 24

Which statement is true regarding the UDP checksum?

A. It is used for congestion control.

B. It cannot be all zeros.

C. It is used by some Internet worms to hide their propagation.

D. It is computed based on the IP pseudo-header.

Answer: D

Explanation:

The method used to compute the checksum is defined in RFC 768:

"Checksum is the 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets."

In other words, all 16-bit words are summed using one's complement arithmetic. Add the 16-bit values up. Each time a carry-out (17th bit) is produced, swing that bit around and add it back into the least significant bit. The sum is then one's complemented to yield the value of the UDP checksum field.

If the checksum calculation results in the value zero (all 16 bits 0) it should be sent as the one's complement (all 1s).
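
For reference, here is a direct Python implementation of that calculation for IPv4, following the RFC 768 text quoted above; the addresses and ports are example values.

```python
# One's-complement UDP checksum over the IPv4 pseudo-header, following the RFC 768
# text quoted above. The pseudo-header is: source IP, destination IP, a zero byte,
# the protocol number (17 for UDP), and the UDP length.

import socket
import struct

def ones_complement_sum(data):
    if len(data) % 2:                     # pad with a zero octet to a 16-bit boundary
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(src_ip, dst_ip, udp_header_and_data):
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 17, len(udp_header_and_data)))
    checksum = ~ones_complement_sum(pseudo + udp_header_and_data) & 0xFFFF
    return checksum or 0xFFFF             # an all-zero result is transmitted as all ones

# 8-byte UDP header (checksum field set to zero for the calculation) plus payload.
udp = struct.pack("!HHHH", 12345, 53, 8 + 4, 0) + b"test"
print(hex(udp_checksum("192.0.2.1", "192.0.2.2", udp)))
```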

Reference: http://en.wikipedia.org/wiki/User_Datagram_Protocol

Question: 25

Which statement describes the purpose of the Payload Type field in the RTP header?

A. It identifies the signaling protocol.

B. It identifies the codec.

C. It identifies the port numbers for RTP.

D. It identifies the port numbers for RTCP.

Answer: B

Explanation:

PT, Payload Type. 7 bits: Identifies the format of the RTP payload and determines its interpretation by the application. A profile specifies a default static mapping of payload type codes to payload formats. Additional payload type codes may be defined dynamically through non-RTP means. An RTP sender emits a single RTP payload type at any given time; this field is not intended for multiplexing separate media streams. A full list of codecs and their payload type values can be found at the link below:
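
As an illustration, the payload type can be read from the second byte of the RTP header with a couple of lines of Python; the small name table below covers only a few well-known static assignments from the RTP/AVP profile.

```python
# The payload type is the low seven bits of the second byte of the RTP header
# (the high bit of that byte is the marker bit). The name table lists a few of
# the well-known static assignments from the RTP/AVP profile.

STATIC_PAYLOAD_TYPES = {0: "PCMU (G.711 u-law)", 8: "PCMA (G.711 A-law)", 18: "G.729"}

def rtp_payload_type(rtp_header):
    marker = bool(rtp_header[1] & 0x80)
    pt = rtp_header[1] & 0x7F
    return pt, STATIC_PAYLOAD_TYPES.get(pt, "dynamic/other"), marker

# First two bytes of a hypothetical RTP header: version 2, then marker=0, PT=0 (PCMU).
print(rtp_payload_type(bytes([0x80, 0x00, 0x12, 0x34])))
```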

Reference: http://www.networksorcery.com/enp/protocol/rtp.htm