Saturday 3 November 2012

Congestion Control Techniques and Quality of Service


Introduction: In a computer network, data is the main thing transferred across the network, so it should flow smoothly from one node to another. The rate at which data is transferred between nodes per unit time therefore matters a lot. A network can carry data only up to a certain capacity; if the data offered to the network exceeds that capacity, an extra burden falls on the network, traffic builds up, and congestion occurs. Congestion is a major issue for the quality of service: improving one means improving the other, and adopting a better technique to improve quality of service goes hand in hand with congestion avoidance.
   Data Traffic: Congestion control and quality of service both take data traffic into account and try to keep it from overwhelming the network. The average data rate is the number of bits sent per unit of time, while the peak data rate is the maximum rate at which traffic enters the network; the peak rate indicates the bandwidth the network needs in order to handle the traffic without changing the data flow. A traffic profile can be constant bit rate, variable bit rate, or bursty. In a constant-bit-rate profile the average data rate is the same as the peak data rate, whereas in a variable-bit-rate profile the average data rate differs from the peak data rate.
   Congestion: Congestion is the main issue in a packet-switched network, where each router keeps queues at each of its interfaces. If the number of packets sent into the network is greater than the number it can handle, congestion may occur. At each interface of a router there are two queues: incoming packets are stored in the input queue, processed to find the outgoing interface using the routing table, and then placed in the output queue of that interface. To keep congestion under control, the packet arrival rate at the input queue and the packet departure rate from the output queue should be balanced with the router's packet-processing rate. Load is therefore the key factor affecting both the delay and the throughput of the network: when the load rises above the network's capacity, delay increases sharply and throughput decreases. For this reason, various congestion control techniques are used, both for congestion avoidance and for congestion removal.
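The relationship between load, queue build-up and packet loss described above can be sketched with a toy simulation (the function name and the per-tick model are illustrative assumptions, not from any real router):

```python
def simulate_load(arrival_rate, service_rate, queue_limit, ticks):
    """Toy router model: packets arrive and are served in fixed-size
    ticks. When the arrival rate exceeds the service rate, the queue
    grows until it hits its limit and further packets are dropped."""
    queue = 0
    dropped = 0
    for _ in range(ticks):
        queue += arrival_rate                 # packets arriving this tick
        if queue > queue_limit:
            dropped += queue - queue_limit    # congestion: tail drop
            queue = queue_limit
        queue = max(0, queue - service_rate)  # packets processed this tick
    return queue, dropped

# A load below capacity causes no loss; a load above it does.
print(simulate_load(2, 3, queue_limit=10, ticks=10))  # no congestion
print(simulate_load(5, 3, queue_limit=10, ticks=10))  # queue fills, drops begin
```

Running the second case shows the sharp behaviour the text describes: once the queue limit is reached, every extra packet per tick is lost.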
Congestion control techniques are:
             

   Open-loop Congestion Control: Open-loop policies prevent congestion before it happens. Here, the source or the destination acts to keep congestion from occurring in the network. These policies are:
    Retransmission Policy: Data is retransmitted when a particular packet is corrupted or lost in the network, but repeated retransmission itself adds load. The retransmission policy should therefore be designed so that it prevents congestion rather than causing it; TCP, for example, uses such a retransmission policy.
   Window Policy: Under this policy, only the affected part of the data is retransmitted, not the entire window, to avoid retransmitting duplicates of data that was received correctly. For example, if the timer for a packet times out, a Selective Repeat window resends only the lost packets, which is time-efficient. A Go-Back-N window, by contrast, resends all packets from the lost one onward, which creates duplicate packets and wastes time.
   Acknowledgement Policy: The number of acknowledgements sent by the receiver to the sender also affects congestion, because an acknowledgement is itself a data packet, and more acknowledgements put more load on the network. There should be a policy under which acknowledgements are sent only when they are needed.
   Discarding Policy: Here the router discards packets when congestion seems about to happen, but only the less sensitive packets, so the quality of the data is not noticeably affected.
   Admission Control: Under this policy, a router admits only traffic that will not lead to congestion. The resource requirements of a data flow are checked before it is admitted into the network; if there is a possibility of congestion, the flow is refused.

   Closed-loop Congestion Control: This controls congestion after it happens, using several different techniques:
   Backpressure: This is node-to-node congestion control that starts at a congested node and propagates in the direction of the sender. The congested node stops receiving data from its upstream node; that node in turn stops receiving data from its own upstream node, and so on, until the sender is informed. The diagram shows the backpressure technique:


    Here the third node is congested: it stops accepting data from the second node and tells it to slow down. The second node in turn stops accepting data from the first node, and so on, until the source is told to slow down. Backpressure is used only in virtual-circuit networks such as X.25, where each node knows its upstream node; it cannot be used in a datagram network, where a router has no information about the upstream node.
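The hop-by-hop propagation of the slow-down request can be sketched as follows (the function and the message format are purely illustrative):

```python
def backpressure(path, congested_index):
    """Backpressure sketch: starting at the congested node, each node
    tells its immediate upstream neighbour to slow down, and the request
    propagates hop by hop back towards the source."""
    messages = []
    for i in range(congested_index, 0, -1):
        messages.append(f"{path[i]} -> {path[i - 1]}: slow down")
    return messages

# The third node (index 3) is congested; the request walks back to the source.
for msg in backpressure(["source", "node1", "node2", "node3"], 3):
    print(msg)
```

Note that each message goes only one hop, which is why the technique needs virtual circuits: every node must know exactly who its upstream neighbour is.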



Choke Packet: The congested node informs the sender directly by sending a packet called a choke packet. The intermediate nodes take no action; only the sender is told to slow down. The diagram shows the choke packet technique:

   Here the congested third node directly informs the source to slow down by sending a choke packet; the intermediate nodes are not disturbed.
Implicit Signaling: The source itself checks for congestion in the network by making a guess. For example, if acknowledgements do not arrive properly, the source assumes that the network is congested somewhere and slows down.
Explicit Signaling: Here the congested node signals the source or the destination by setting a bit in a data packet, rather than by sending a separate packet. The signal can go in the forward or the backward direction. In backward signaling, a bit is set in a packet travelling opposite to the direction of the congestion, i.e. towards the sender, which then slows down its rate of transferring packets across the network.
   In forward signaling, a bit is set in a packet travelling in the direction of the congestion, i.e. towards the receiver, which then slows down its acknowledgements to ease the congestion.

    Different protocols use different congestion control techniques.
 Explicit signaling is the better technique, because it can control congestion wherever it is needed, in either the same or the opposite direction to the packets. The signal can also convey a limited data rate, indicating how many packets the source may send.

Quality of Service: Quality of service means that the services provided by the network are adequate for the user's requirements. The quality of service of a network depends on the following characteristics.
·         Reliability: Data transfer in the network should be reliable; if the network is unreliable, packets or acknowledgements may be lost.
·         Delay: Delay is another issue in electronic communication; the network should deliver packets with as little delay as possible.
·         Jitter: Jitter is the variation in delay among packets belonging to the same flow. A constant delay, even a fairly large one, is not a big issue, whereas varying delays between packets of the same flow are a real problem in data communication.
·         Bandwidth: Bandwidth is also a quality-of-service concern; how much bandwidth is needed for data transfer depends on the application.

Many techniques are used to improve the quality of service. The commonly used ones are given below.

Techniques to improve QoS:
·         Scheduling
·         Traffic Shaping
·         Resource Reservation
·         Admission Control
Scheduling: In a network, packets accumulate and are queued in the memory buffers of routers and switches. Different techniques are used to schedule these packets. The most common way to arrange packets is first-in-first-out (FIFO), but other methods can be used to prioritize packets or to share the bandwidth fairly. The scheduling techniques are:
·         First-in-first-out (FIFO)
·         Priority Queuing (PQ)
·         Weighted Fair Queuing (WFQ)

FIFO: The FIFO discipline is the basic first-in, first-out queuing method, and it is very simple: the first packet in the queue is the first packet served. In FIFO, all packets are treated in the same way: they are placed in a single queue and served in the same order they were placed. When the queue becomes full, congestion occurs and new incoming packets are dropped. Packets from different flows arrive at the router, which multiplexes them into the same FIFO queue in their arrival order.
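A minimal sketch of this tail-drop FIFO behaviour (the class name and capacity model are illustrative assumptions):

```python
from collections import deque

class FIFOQueue:
    """Tail-drop FIFO: packets are served in arrival order, and
    new packets are dropped once the queue is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1        # congestion: drop the newcomer
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Serve the oldest packet, or None if the queue is empty."""
        return self.queue.popleft() if self.queue else None

# Five packets arrive at a queue that holds only three.
q = FIFOQueue(capacity=3)
for p in range(5):
    q.enqueue(p)
print(q.dropped)    # two packets were tail-dropped
print(q.dequeue())  # the first packet in is the first packet out
```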
  


PQ: PQ is priority queuing, in which each packet is assigned to a priority class and multiple queues are used. Queues are serviced at different levels of priority, and higher-priority queues are serviced first. Flows arrive on the router's different interfaces, and packets are placed in one of the queues according to their classification. Packets are scheduled from a given queue only when all higher-priority queues are empty, and within each priority queue they are scheduled in FIFO order. In case of congestion, packets are dropped from the lower-priority queues. In the example here, three priority queues are used, and packets from the highest-priority queue are served first, in FIFO order.

        

 The main benefit of PQ is that packets of different classes are managed in different queues, so a certain class of traffic can be handled differently from another. But if the amount of highest-priority traffic is excessive, the lower-priority queues may get no service until the highest-priority traffic has been served completely. During this time, the queues allocated to lower-priority traffic may overflow, so the lower-priority traffic may experience a large delay or, in the worst case, resource starvation.
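Strict priority scheduling can be sketched like this (the class name and queue-per-class layout are illustrative assumptions):

```python
from collections import deque

class PriorityQueuing:
    """Strict priority queuing: queue 0 has the highest priority,
    and a lower-priority queue is served only when every
    higher-priority queue is empty."""
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Scan from highest to lowest priority; FIFO within each queue.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty

pq = PriorityQueuing(num_classes=3)
pq.enqueue("low", 2)
pq.enqueue("high", 0)
pq.enqueue("mid", 1)
print(pq.dequeue())  # the high-priority packet is served first
```

The starvation problem described above is visible in the `dequeue` loop: as long as queue 0 keeps receiving packets, queues 1 and 2 are never reached.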


Weighted Fair Queuing (WFQ): In this technique, packets are assigned to classes as in priority queuing, but each queue is also given a weight, with higher-priority queues getting larger weights. Suppose three queues have weights 3, 2 and 1 respectively: then in each round three packets are taken from the highest-priority queue, two from the medium-priority queue, and one from the low-priority queue. This technique prevents resource starvation. WFQ does, however, have a complex algorithm, requiring per-service-class state and iterative checking of all that state on each packet arrival and departure.
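The round-based service described above can be sketched as a simplified weighted round-robin approximation of WFQ (real WFQ computes per-packet finish times; the class name and round model here are illustrative assumptions):

```python
from collections import deque

class WeightedFairQueuing:
    """Weighted round-robin sketch of WFQ: in each round, up to
    `weight` packets are taken from each class's queue, so every
    class gets some service and none is starved."""
    def __init__(self, weights):
        self.weights = weights
        self.queues = [deque() for _ in weights]

    def enqueue(self, packet, cls):
        self.queues[cls].append(packet)

    def schedule_round(self):
        """Serve one full round and return the packets in service order."""
        served = []
        for q, w in zip(self.queues, self.weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
        return served

# Weights 3, 2, 1 as in the example above.
wfq = WeightedFairQueuing([3, 2, 1])
for i in range(4):
    wfq.enqueue(f"A{i}", 0)   # highest-priority class
    wfq.enqueue(f"B{i}", 1)   # medium-priority class
wfq.enqueue("C0", 2)          # low-priority class
print(wfq.schedule_round())   # 3 A's, 2 B's, 1 C per round
```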

As far as scheduling is concerned, the best method is weighted fair queuing. Even though its algorithm is complex, it allows every flow to use the resources, and a burst in one flow does not affect the other data flows.

Traffic Shaping: Traffic shaping controls the volume of traffic entering the network and the rate at which packets are transmitted. In this way, the flow of packets is smoothed to match a configured traffic profile. Traffic shaping includes two techniques: leaky bucket and token bucket.

Leaky Bucket: The leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate, dropping packets if the bucket is full. Packets flowing into the bucket are paced out steadily at a specified rate; a leaky bucket filter therefore never sends a burst of packets, only a constant stream at that rate.
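A minimal leaky bucket sketch (the class name, tick model and packet-counted bucket are illustrative assumptions; real implementations usually count bytes):

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket shaper: arriving packets queue in the bucket
    (and are dropped when it is full), then leave at a fixed rate
    regardless of how bursty the arrivals were."""
    def __init__(self, capacity, rate):
        self.capacity = capacity   # bucket size, in packets
        self.rate = rate           # packets released per tick
        self.bucket = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.bucket) >= self.capacity:
            self.dropped += 1      # bucket overflow
        else:
            self.bucket.append(packet)

    def tick(self):
        """Release at most `rate` packets this tick."""
        out = []
        for _ in range(self.rate):
            if self.bucket:
                out.append(self.bucket.popleft())
        return out

# A burst of five packets hits a bucket of size three, draining two per tick.
lb = LeakyBucket(capacity=3, rate=2)
for p in range(5):
    lb.arrive(p)
print(lb.dropped)  # the burst overflowed the bucket
print(lb.tick())   # output is smoothed to the fixed rate
```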
 


Token Bucket: The token bucket allows a host or router to accumulate credit for the future in the form of tokens. A token bucket holds the tokens; one token is added to it at every clock tick, and sending one data cell consumes one token. If the bucket is full of tokens, a burst of data can be transferred; if it is empty, the host or router cannot send data at all.
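The credit-accumulation idea can be sketched as follows (the class name and the one-token-per-cell accounting are illustrative assumptions):

```python
class TokenBucket:
    """Token bucket: one token is added per clock tick, up to
    `capacity`; each data cell sent consumes one token, so saved-up
    tokens allow a burst after an idle period."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = 0

    def tick(self):
        """One clock tick: add a token, never exceeding the capacity."""
        self.tokens = min(self.capacity, self.tokens + 1)

    def try_send(self, n_cells):
        """Send a burst of n_cells if enough tokens have accumulated."""
        if self.tokens >= n_cells:
            self.tokens -= n_cells
            return True
        return False

# Three idle ticks accumulate three tokens, permitting a burst of two cells.
tb = TokenBucket(capacity=5)
for _ in range(3):
    tb.tick()
print(tb.try_send(2))  # burst allowed: enough credit saved
print(tb.try_send(2))  # refused: only one token left
```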


The token bucket is better for bursty data, since it allows bursts at a regulated maximum speed, whereas the leaky bucket enforces a fixed flow. With a leaky bucket, if there is no traffic for a certain period of time, the unused bandwidth cannot be used for later packets; a token bucket filter does make that possible. Hence the token bucket is better than the leaky bucket.
