
Tuesday, June 30, 2015

What makes a network IPv6 capable

To declare a network capable of IPv6 connectivity, certain infrastructure components have to be in place beforehand.

Operating System: The operating system (the client) has to be IPv6 enabled. This could be in the form of a dual-stack client or an IPv6-only client, so that it can have an IPv6 address assigned to it.
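To make this concrete, here is a quick sketch (my own illustration, not something any particular OS mandates) that checks whether the local stack is IPv6 enabled, using Python's standard socket module:

```python
import socket

# A quick check that the local OS stack is IPv6 enabled: socket.has_ipv6
# reports build-time support, and actually binding an AF_INET6 socket to
# the IPv6 loopback confirms the running system supports it too.
def os_is_ipv6_enabled() -> bool:
    if not socket.has_ipv6:
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::1", 0))  # IPv6 loopback, ephemeral port
        return True
    except OSError:
        return False

print("IPv6 enabled:", os_is_ipv6_enabled())
```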

Dynamic Host Configuration Protocol (DHCPv6) server: Nowadays almost every network is managed by DHCP to assign IP addresses, though it is not a mandatory component. For the network to be IPv6 enabled and managed in a stateful manner, there has to be a DHCPv6 server to autoconfigure the hosts' IPv6 addresses. The alternative is stateless address autoconfiguration (SLAAC), which comes with some security risks but requires no DHCPv6 server.
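Hosts learn which of the two modes to use from the M (Managed) and O (Other configuration) flag bits of the ICMPv6 Router Advertisement (RFC 4861). A minimal decoding sketch, assuming you already have the raw ICMPv6 message bytes in hand:

```python
# An illustrative decoder for the M and O flags of a raw ICMPv6 Router
# Advertisement (type 134); the flags byte sits at offset 5 (RFC 4861).
def addressing_mode(icmpv6_ra: bytes) -> str:
    if icmpv6_ra[0] != 134:
        raise ValueError("not a Router Advertisement")
    flags = icmpv6_ra[5]
    managed = bool(flags & 0x80)  # M: get addresses from a stateful DHCPv6 server
    other = bool(flags & 0x40)    # O: get other config (e.g. DNS) from DHCPv6
    if managed:
        return "stateful: addresses assigned by a DHCPv6 server"
    if other:
        return "SLAAC for addresses, DHCPv6 for other configuration"
    return "pure SLAAC: no DHCPv6 server involved"

# Example: type=134, code=0, zero checksum, hop limit 64, M flag set.
print(addressing_mode(bytes([134, 0, 0, 0, 64, 0x80])))
```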

Router: The router has to be able to recognize and process IPv6 packets. The other option is tunneling, which doesn't make the network IPv6 enabled but is just a workaround.

Domain Name System: The DNS has to be capable of resolving names to IPv6 (AAAA) addresses so that the source host can reach the destination IPv6 hosts.
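From a client's point of view this boils down to AAAA lookups. A small sketch using Python's standard resolver; example.com is only a placeholder name:

```python
import socket

# Resolving a hostname to its IPv6 addresses amounts to an AAAA lookup.
# getaddrinfo raises socket.gaierror if the name has no AAAA records.
def resolve_ipv6(hostname: str) -> list[str]:
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
    return sorted({info[4][0] for info in infos})

print(resolve_ipv6("example.com"))
```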

There are a few other types of devices that are sometimes placed in the network, such as NAT devices, proxy servers and firewalls. If these devices are present in the network (and they probably are), then all of them should also be IPv6 aware so that end-to-end connectivity can take place.

There’s a hard way to make the IPv4 network works for IPv6 communication, which is through tunneling. In that way the IPv6 packets are transmitted by encapsulating into IPv4 packets. But this is a complicated way to achieve IPv6 connectivity with high cost of configuration and performance

Flow control & Congestion control: The two most important features of TCP that keep the Internet alive

Flow control is the mechanism by which the sender and receiver sync up the data rate between them so as not to overwhelm the receiver, in the case where the receiver has less capacity than the sender.

Congestion control, on the other hand, is the sender trying to figure out what the network is able to handle. It is a mechanism at the sender's end to detect congestion through data loss on the transmission link and adjust the throttle accordingly to be most efficient.

Both flow and congestion control are necessary to effectively transmit data from sender to receiver. Without flow control, the sender would overwhelm the receiver's buffer: data would be discarded, and the sender would be forced to continuously retransmit the unacknowledged data. This would tremendously impact TCP throughput and the performance of the network link. Similarly, congestion control helps the sender determine whether the data being sent over the network can actually be delivered to the receiver. There could be a situation where both the sender and receiver are perfectly able to handle a higher data rate, but if the link in between isn't capable enough, a lot of bandwidth would be wasted just retransmitting the data lost on the link. This would effectively make the data transmission slower than the true capacity of the link. Though both are necessary to achieve optimal performance, they are essentially two different things:
  • Flow control is between the sender and the receiver, whereas congestion control is between the sender and the network
  • Flow control is dictated mostly by the receiver through negotiation, whereas congestion control is dictated by the sender
  • Flow control syncs up the data transmission between sender and receiver, whereas congestion control syncs up the data transmission between the sender and the network link
  • Flow control is end to end, but congestion control is not; it resides at the sender's end alone

Implementation of flow control: TCP uses the sliding window model to implement flow control. This is achieved through the window size advertised by the receiver. The receiver communicates its buffer size during connection establishment and can change it at any time during the life cycle of the connection. The sender sets its Sender Window Size (SWS) to the Receiver Window Size (RWS), which ensures that the sender isn't sending more data than the receiver can buffer before acknowledging it.
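A toy model of the sender's side of this rule (my own sketch, not actual TCP code): the sender never lets the number of unacknowledged bytes in flight exceed the receiver's advertised window.

```python
# Sender-side flow control in miniature: in-flight bytes never exceed
# the receiver's advertised window.
class FlowControlledSender:
    def __init__(self, advertised_window: int):
        self.rwnd = advertised_window  # receiver's advertised window, in bytes
        self.in_flight = 0             # bytes sent but not yet acknowledged

    def can_send(self, nbytes: int) -> bool:
        return self.in_flight + nbytes <= self.rwnd

    def on_send(self, nbytes: int) -> None:
        assert self.can_send(nbytes), "would overrun the receiver's buffer"
        self.in_flight += nbytes

    def on_ack(self, acked: int, new_window: int) -> None:
        # Each ACK frees buffer space and may carry an updated window.
        self.in_flight -= acked
        self.rwnd = new_window

sender = FlowControlledSender(advertised_window=4096)
sender.on_send(3000)
print(sender.can_send(2000))          # False: only 1096 bytes of window left
sender.on_ack(3000, new_window=8192)
print(sender.can_send(2000))          # True: window freed and enlarged
```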

Implementation of congestion control: TCP probes the network, starting with a small amount of data, to arrive at an optimal transmission rate within the sliding window model. TCP uses a new variable in the sliding window, called the congestion window, to control the rate at which bytes are streamed. In conjunction with the sliding window's advertised window, this congestion window determines the maximum allowed window, which is the minimum of those two windows. Unlike the advertised window, the congestion window is determined by the sender, based on the network link's ability to carry the data. Data loss is used as an indication of congestion on the link, and the congestion window is set accordingly; TCP assumes the network is otherwise reliable (wireless links are handled differently, though).
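In code form, the rule from the paragraph above is just a minimum of the two windows:

```python
# The sender's allowed window is the minimum of the advertised window
# (flow control) and the congestion window (congestion control).
def effective_window(advertised_window: int, congestion_window: int) -> int:
    return min(advertised_window, congestion_window)

print(effective_window(65535, 14600))  # here the congestion window is the limit
```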

There are various techniques used to implement congestion control: slow start, Additive Increase/Multiplicative Decrease (AIMD), fast retransmit, fast recovery, etc. In AIMD, TCP starts streaming bytes at a minimum rate and increases the rate in an additive fashion. Another approach is to use slow start, beginning with a small window and increasing the rate exponentially up to the congestion threshold level. After that it falls back to additive increase until congestion is sensed, which triggers TCP to sharply decrease the rate and also reset the congestion threshold to a lower value (depending on the implementation). This continues throughout the life cycle of the connection, keeping the sender in sync with the network link's ability to handle the transmission.
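Here is a simplified trace of that behavior, counting the congestion window in segments rather than bytes; the loss round and the threshold values are made up for illustration:

```python
# Slow start plus AIMD in miniature: exponential growth up to ssthresh,
# additive increase afterwards, and a sharp cut when a loss is sensed.
def simulate(loss_rounds: set[int], total_rounds: int = 12) -> None:
    cwnd, ssthresh = 1, 8
    for rtt in range(total_rounds):
        print(f"RTT {rtt:2d}: cwnd={cwnd:2d} ssthresh={ssthresh}")
        if rtt in loss_rounds:
            ssthresh = max(cwnd // 2, 2)    # multiplicative decrease
            cwnd = 1                        # fall back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: additive

simulate(loss_rounds={6})
```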

Monday, June 29, 2015

Basic concepts of Collision and Broadcast Domains in Computer Networking

Collision Domain is the group of computer devices connected to each other in a topology where every packet transmitted over the network has the potential to collide on the network link. An 802.3 (Ethernet) network uses the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) method to send data through a collision domain. In an 802.3 network, only one device is supposed to send a frame over the network at a time, and only when it finds that no other device is using the link, i.e. the network is free. A collision happens when two (or more) devices sense the network as free at the same time and start sending frames; the frames then collide.
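When a collision does happen, 802.3 stations retry after a truncated binary exponential backoff: after the nth collision a station waits a random number of slot times drawn from [0, 2^min(n, 10) - 1]. A small sketch of that timing (the 51.2 us slot time is the classic 10 Mb/s Ethernet value of 512 bit times):

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mb/s Ethernet slot time, microseconds

# Truncated binary exponential backoff: the random range doubles with
# each collision, capped at 2**10 slots.
def backoff_delay_us(collision_count: int) -> float:
    k = min(collision_count, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME_US

for n in range(1, 5):
    print(f"after collision {n}: wait {backoff_delay_us(n):.1f} us")
```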

Broadcast Domain is the concept where every device connected to a network is able to reach all other devices with a single message, sent using a special kind of messaging known as a broadcast message. In an 802.3 network, by default, every device is part of the broadcast domain, as they listen to every data frame sent over that network. In the broadcast domain, every Network Interface Card (NIC) receives every frame transmitted but discards all except the ones addressed to itself. The exception is the broadcast message, which is accepted by every NIC. Thus, in a broadcast domain, any device can reach every other device at any point in time.
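The filtering rule a NIC applies is simple enough to state in a few lines (ignoring multicast and promiscuous mode for simplicity; the MAC addresses below are made up):

```python
# A NIC keeps frames addressed to its own MAC and frames addressed to
# the Ethernet broadcast address, and discards everything else.
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

def nic_accepts(frame_dst_mac: str, own_mac: str) -> bool:
    dst = frame_dst_mac.lower()
    return dst == own_mac.lower() or dst == BROADCAST_MAC

print(nic_accepts("ff:ff:ff:ff:ff:ff", "00:16:3e:12:34:56"))  # True: broadcast
print(nic_accepts("00:16:3e:ab:cd:ef", "00:16:3e:12:34:56"))  # False: not ours
```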

Ethernet Hub is a dumb device that forwards every Ethernet frame to all of its other ports, thus creating one large collision domain out of the devices connected to its ports. It also creates a single broadcast domain for the connected devices. Essentially, it's like a bus topology where all the devices are connected to one thick cable.

Ethernet Switch creates multiple collision domains, determined by the number of ports it has, i.e. the devices connected to a single port form a single collision domain. When a device sends a frame to another device connected to the switch, the switch forwards the frame only to the port to which the destination device is connected. In this way, if multiple devices hang off one port (as in a star-of-stars topology), those devices form one collision domain. On the other hand, all the devices connected to a switch form a single broadcast domain, i.e. one device can still reach all the devices connected to the switch using a single broadcast message. So it can be said that an Ethernet switch creates a single broadcast domain while breaking it into multiple collision domains.
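The mechanism behind this behavior is the switch's MAC learning table. A minimal sketch (my own illustration, with shortened MAC addresses): known unicast frames go out a single port, while broadcasts and unknown destinations are flooded to every other port.

```python
# A minimal learning switch: it learns which port each source MAC lives
# on, confines known unicast traffic to one port, and floods broadcasts
# (one broadcast domain across all ports).
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}  # MAC address -> port number

    def handle_frame(self, src: str, dst: str, in_port: int) -> list[int]:
        self.mac_table[src] = in_port  # learn/refresh where src lives
        if dst == "ff:ff:ff:ff:ff:ff" or dst not in self.mac_table:
            # Broadcast, or unknown unicast: flood to all other ports.
            return [p for p in range(self.num_ports) if p != in_port]
        return [self.mac_table[dst]]   # known unicast: one port only

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("aa:aa", "ff:ff:ff:ff:ff:ff", in_port=0))  # flood: [1, 2, 3]
print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))              # learned: [0]
```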

Router breaks both the collision domain and the broadcast domain down to the port level. That means every network connected to one of its ports forms its own collision domain and its own broadcast domain. The purpose of a router is to connect multiple networks, so each port of a router bounds a single collision domain as well as a single broadcast domain for that network; the network connected to the router decides its own internal details. All broadcast packets are dropped at the router.
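A worked count for a hypothetical topology makes the difference between the devices concrete. Say a router has 2 ports, each feeding one switch, with 4 hosts plugged directly into each switch:

```python
# Counting domains for a made-up topology: router (2 ports), one switch
# per router port, 4 hosts directly attached to each switch.
router_ports = 2
hosts_per_switch = 4

# Each switch host port is its own collision domain, plus the
# switch-to-router uplink is one more on each side.
collision_domains = router_ports * (hosts_per_switch + 1)

# The router splits broadcast domains per port; each switch keeps its
# whole side as a single broadcast domain.
broadcast_domains = router_ports

print(collision_domains, broadcast_domains)  # 10 collision, 2 broadcast
```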



