Introduction to Error Control Coding-I


Welcome to the course on coding theory. I am Adrish Banerjee from IIT Kanpur. Today we are going to give a basic introduction to what coding theory is all about. As an introduction, we will discuss what coding theory is, and we will illustrate with a very simple example how error correcting codes can be used for error detection and error correction.

Before I start my lecture, let me mention the books we are going to use for this course. We are going to follow the second edition of Error Control Coding by Lin and Costello as our textbook. There are also some very nice books you can use as references: The Theory of Error-Correcting Codes by MacWilliams and Sloane is a very nice book on block codes; you could also follow Blahut's Algebraic Codes for Data Transmission; Error Correction Coding by Todd Moon, which also gives a very nice introduction to error correcting codes; or the book by Huffman and Pless called Fundamentals of Error-Correcting Codes.

Communication basically involves three steps. The first is encoding the message: you have a message source that you want to represent efficiently. Consider, for example, a speech signal. To transmit a speech signal you first have to convert the analog signal into a digital signal, and then you need to get rid of the useless redundancy, because we want to transmit only the useful information. A source inherently has a lot of redundancy, and when we represent a source we would like to do so efficiently, in as few bits as possible. So the first step in any communication is encoding the message. The second step, once you have represented your source, is transmission of the message through the communication channel.
Finally, once the receiver has received the message, it has to decode it to find out what information was transmitted. So broadly there are three steps involved in communication: encoding, transmission, and finally decoding. Information theory gives us the fundamental limits on both ends: the best compression we can achieve when representing a source, and the maximum transmission rate possible over a communication channel.

Let us spend some time on the transmission medium. The medium over which we send our data is known as the channel, and here I have illustrated two very simple channel models. The first is the binary symmetric channel. It has binary input, 0 and 1, and binary output, 0 and 1. With probability 1 − ε, whatever you transmit is received correctly at the receiver: if you transmit 0, with probability 1 − ε you receive 0, and similarly if you transmit 1, with probability 1 − ε you receive 1. The crossover probability, the probability of error, is ε. Since the channel is symmetric and has binary input and binary output, it is known as the binary symmetric channel (BSC).

Another channel, which is very commonly used to model packet data networks, is the binary erasure channel (BEC). Again the inputs are binary, 0 and 1, and at the output you either receive the transmitted bit correctly or the bit is erased; we denote an erased bit by the symbol Δ. So with probability 1 − δ you receive the bit correctly, and with probability δ the bit is erased or lost.
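These two channel models are easy to simulate. Here is a minimal sketch in Python (my own illustration; the function and variable names are mine, not from the lecture): each bit is flipped with probability ε for the BSC, and erased with probability δ for the BEC.

```python
import random

def bsc(bits, eps, rng=random):
    """Binary symmetric channel: flip each bit with crossover probability eps."""
    return [b ^ 1 if rng.random() < eps else b for b in bits]

def bec(bits, delta, rng=random):
    """Binary erasure channel: erase each bit (marked None) with probability delta."""
    return [None if rng.random() < delta else b for b in bits]

random.seed(1)
sent = [0, 1, 1, 0, 1, 0, 0, 1]
print(bsc(sent, 0.1))   # a few bits may be flipped
print(bec(sent, 0.1))   # a few bits may be erased (shown as None)
```

Note that the BSC output is always a valid bit (you cannot tell which bits were flipped), whereas the BEC marks exactly which positions were lost; this difference is what makes erasures easier to handle than errors.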
In his landmark paper of 1948, Shannon introduced the concept of channel capacity: the maximum rate at which we can communicate over a communication link. Channel capacity is defined as the maximum amount of information that can be conveyed from the input to the output of a channel. In his channel coding theorem, Shannon also proved that there exist channel coding schemes that can achieve an arbitrarily low probability of error as long as the transmission rate is below the channel capacity. In other words, Shannon showed that good channel codes exist: as long as we transmit at a rate below capacity, we can achieve an arbitrarily low probability of error. For example, if the capacity of a particular link is 2 gigabits per second, then we should be able to communicate at any rate up to 2 gigabits per second over this link and still achieve a very low probability of error at the decoder.

However, in this theorem Shannon did not specify how to design such codes with rates close to capacity, and that is where error control coding comes into the picture. The goal of coding theory, or error control coding, is to design error correcting codes with rates as close to capacity as possible that achieve an arbitrarily low probability of error.

So how do we design an error correcting code? An error correcting code is designed by adding some redundant bits to the message bits. The message bits we call information bits, and the additional redundant bits we add are known as parity bits.
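The lecture does not write out the capacity formulas themselves, but for the two channels introduced above they are standard results: for the BSC, C = 1 − H(ε), where H is the binary entropy function, and for the BEC, C = 1 − δ, both in bits per channel use. A small sketch (my own, in Python):

```python
from math import log2

def h2(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bsc(eps):
    """Capacity of the binary symmetric channel, in bits per channel use."""
    return 1.0 - h2(eps)

def capacity_bec(delta):
    """Capacity of the binary erasure channel, in bits per channel use."""
    return 1.0 - delta

print(capacity_bsc(0.0))    # 1.0 -- noiseless channel
print(capacity_bsc(0.5))    # 0.0 -- output carries no information about the input
print(capacity_bec(0.25))   # 0.75
```

Shannon's theorem then says that reliable communication is possible at any rate below these values, and impossible above them.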
An error correcting code, then, is designed by properly adding some redundant bits to the message bits; this coded message is sent over the communication link, and the redundant bits are used to detect and correct errors. Error correcting codes have a wide range of applications in digital communication and storage; let me list a few. When we send a signal over a communication link, it gets corrupted by noise, fading, and interference, and to combat the effect of all these we use error correcting codes to correct the errors. Similarly, in digital storage systems we want to correct errors caused by storage defects, dust particles, and radiation, and we use error correcting codes there as well.

Let us now take a very simple example and illustrate how we can use an error correcting code to detect and correct errors. The example I am going to show you is what is known as a repetition code. The rate of a code is defined as the ratio of the number of information bits to the number of coded bits; so when I say a rate 1/2 code, I mean there is one information bit (one message bit) for every two coded bits. In a binary repetition code we simply repeat the information bit. A binary rate 1/2 repetition code looks like this: for 0 we transmit 0 0, and for 1 we transmit 1 1. Similarly, for a rate 1/3 repetition code, for 0 we transmit 0 0 0 and for 1 we transmit 1 1 1. You can see that in the rate 1/2 case we are adding one additional redundant bit, and in the rate 1/3 case we are adding two additional redundant bits. How we make use of these redundant bits for error detection and error correction will be explained next.

Let us take this example: say I want to transmit the bits 0 0 1 1 0 1 using a rate 1/2 repetition code.
What will my coded bits be? Each 0 is encoded as 0 0 and each 1 as 1 1, so the coded sequence is 0 0, 0 0, 1 1, 1 1, 0 0, 1 1. Now suppose this sequence is transmitted over a binary symmetric channel and there is a single error: the first bit, which was transmitted as 0, is received as 1. How can we use the code to detect this error? Since it is a rate 1/2 code, for each information bit we send two coded bits, so at the receiver we look at two bits at a time. The first pair is 1 0. Since this is a repetition code, we expect both bits of a pair to be the same, but here the first bit is 1 and the second is 0, which means there has been a transmission error. So we are able to detect the single error precisely because the bits were encoded with the repetition code and the two bits of a pair should agree. We know there is an error in this pair, but we do not know which of the two bits is wrong. Looking at the remaining pairs: 0 0 is decoded as 0, 1 1 as 1, 1 1 as 1, 0 0 as 0, and 1 1 as 1; there is no ambiguity in any of them. So using one additional redundant bit per information bit we are able to detect a single error.

Now let us look at an example with a double error. Say the first and second bits are received in error, so what we receive is 1 1, 0 0, 1 1, 1 1, 0 0, 1 1; the first two received bits are in error. Let us see whether we can detect this using the rate 1/2 repetition code.
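As a sketch of the procedure just described (my own illustration in Python; the function names are made up, not from the lecture), here are the rate 1/2 repetition encoder and the pair-wise check, run on both the single-error and the double-error received sequences:

```python
def encode_rate_half(bits):
    """Rate-1/2 repetition encoder: repeat each information bit once."""
    coded = []
    for b in bits:
        coded += [b, b]
    return coded

def decode_rate_half(received):
    """Look at two bits at a time; a mismatched pair means a detected
    (but uncorrectable) error, which we mark with None."""
    decoded = []
    for i in range(0, len(received), 2):
        pair = received[i:i + 2]
        decoded.append(pair[0] if pair[0] == pair[1] else None)
    return decoded

message = [0, 0, 1, 1, 0, 1]
coded = encode_rate_half(message)       # [0,0, 0,0, 1,1, 1,1, 0,0, 1,1]

single_err = [1] + coded[1:]            # flip only the first bit
print(decode_rate_half(single_err))     # [None, 0, 1, 1, 0, 1] -> error detected

double_err = [1, 1] + coded[2:]         # flip the first two bits
print(decode_rate_half(double_err))     # [1, 0, 1, 1, 0, 1] -> error NOT detected
```

Notice that the double-error case decodes the first pair to 1 with no alarm: the error pattern has turned the codeword 0 0 into the other valid codeword 1 1.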
We follow the same logic for decoding: we look at two bits at a time. The first two bits are 1 1, and since the bits of the pair agree, we decode them as 1. But what was transmitted was 0, so this is a case of an undetected error: even though both bits were received in error, the decoder cannot detect it. This happens when the error pattern transforms one codeword into another valid codeword; since 1 1 is the valid codeword for 1, the decoder cannot detect the error. So the rate 1/2 repetition code can detect a single error, but it cannot detect double errors.

Now let us see whether it can correct any errors. Go back to the single-error example, where we received the pair 1 0. We were able to detect that there was an error, but can we correct it? No, we cannot. Over a binary symmetric channel it is equally likely that the first bit was flipped from 0 to 1 as that the second bit was flipped from 1 to 0, so we cannot tell which bit is wrong. This rate 1/2 repetition code cannot correct any errors; it can only detect single errors.

Now let us look at another example, this time a rate 1/3 repetition code. What does a rate 1/3 repetition code mean? Again we are considering a binary code: for each information bit we add two parity bits by repeating the same bit, so 0 is encoded as 0 0 0 and 1 is encoded as 1 1 1. We again consider the same example of transmitting 0 0 1 1 0 1, this time encoding it with the rate 1/3 repetition code: each 0 is encoded as 0 0 0 and each 1 as 1 1 1, so the transmitted sequence is 0 0 0, 0 0 0, 1 1 1, 1 1 1, 0 0 0, 1 1 1.
Now let us again look at what happens when the received sequence contains errors, as we did for the rate 1/2 repetition code. First the single-error scenario: say the first bit is received in error, so instead of 0 we receive 1. Can the rate 1/3 repetition code detect this single error? Since it is a rate 1/3 code, for each information bit we send three coded bits, so at the decoder we look at three bits at a time. The first three bits are 1 0 0. Since we are using a repetition code, we expect all three bits to be the same, but here they are not; that means there has been a transmission error. So we are able to detect a single error using the rate 1/3 repetition code. Looking at the other triples, 0 0 0, 1 1 1, 1 1 1, 0 0 0, 1 1 1, there is no error in any of them. So just as the rate 1/2 repetition code could detect a single error, the rate 1/3 code can also detect a single error.

Now the double-error scenario: suppose the first two bits are received in error, so the first triple is 1 1 0 and the rest of the sequence is intact. Can we detect this double error? Again we look at three bits at a time. The first three bits are 1 1 0, and we can see there is an error, because the triple should have been either 0 0 0 or 1 1 1, but we received 1 1 0. So using the rate 1/3 repetition code we can detect double errors as well, which we could not do with the rate 1/2 repetition code.

Now let us look at the error-correcting capability of this code. Go back to the single-error situation, where one bit got flipped and we received 1 0 0.
Can we correct single errors? In this case the answer is yes. Look at the three bits 1 0 0: two of the bits are 0 and one is 1. The transmitted codeword could only have been 0 0 0 or 1 1 1, and it is more likely that a single 0 was flipped to 1 than that two 1s were flipped to 0. So using majority logic, since the majority of the bits are 0, we decode this triple as 0. The rate 1/3 repetition code can therefore correct a single error, which was not possible with the rate 1/2 repetition code. Can it correct double errors? No: given 1 1 0, the decoder will think that one bit got flipped from 1 to 0 and will decode the triple as 1, so it cannot correct double errors.

To summarize, the rate 1/2 repetition code can detect a single error but cannot correct it, and it cannot detect double errors; whereas the rate 1/3 repetition code can detect single errors, correct single errors, and detect (but not correct) double errors. Why is this code better than the first one? It certainly has better error detecting and correcting capability than the rate 1/2 code. As we will discuss in subsequent lectures, this has to do with the distance separation between the codewords: here we are using two redundant bits per information bit, whereas before we were using one, and the error detecting and correcting capability of a code depends on its distance properties. We will talk about this in subsequent lectures.

To close, I think this quotation by Solomon Golomb rightly captures what error correcting codes are all about:

A message with content and clarity
has gotten to be quite a rarity.
To combat the terror
of serious error,
use bits of appropriate parity.
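As a final illustration (again my own sketch in Python, with made-up function names, not from the lecture), here is the rate 1/3 repetition scheme with the majority-logic decoding described above, run on the single-error and double-error cases:

```python
def encode_rate_third(bits):
    """Rate-1/3 repetition encoder: send each information bit three times."""
    coded = []
    for b in bits:
        coded += [b, b, b]
    return coded

def decode_rate_third(received):
    """Majority-logic decoder: each triple is decoded to its majority bit.
    A mixed triple also reveals that a transmission error occurred."""
    decoded, error_seen = [], []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        ones = sum(triple)
        decoded.append(1 if ones >= 2 else 0)       # majority vote
        error_seen.append(ones not in (0, 3))       # mixed triple => error detected
    return decoded, error_seen

message = [0, 0, 1, 1, 0, 1]
coded = encode_rate_third(message)

single_err = [1] + coded[1:]            # flip the first bit only
print(decode_rate_third(single_err))    # ([0, 0, 1, 1, 0, 1], ...) -> corrected

double_err = [1, 1] + coded[2:]         # flip the first two bits
print(decode_rate_third(double_err))    # first bit miscorrected to 1, but detected
```

The single error is both detected and corrected, while the double error is detected (the triple 1 1 0 is not a codeword) but miscorrected by the majority vote, exactly as argued above.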
Thank you.
