Case Study Paper Rosen
The case study states that the easiest way to detect an error when a bit string is being transmitted is to add a parity check bit at the end of the string (Rosen, 2007). A parity check bit, sometimes referred to as a check bit, is an extra bit that records whether the number of bits in the string with a value of one is even or odd; it is chosen so that the extended string always contains an even number of 1s. These check bits are the simplest error detecting codes. This means that if the number of 1s in a received bit string is odd, there must have been an error in the transmission of the codeword.
However, if there are two errors in an extended string, the errors cannot be detected, because the number of 1s in the extended string will still be even.
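As a minimal sketch of this idea (the function names and example strings here are my own, not taken from Rosen), the following Python fragment appends an even-parity check bit and shows why a single flipped bit is caught while two flipped bits slip through:

    def add_parity_bit(bits):
        # Append a check bit so the total number of 1s is even.
        return bits + [sum(bits) % 2]

    def has_error(extended_bits):
        # An odd number of 1s means at least one bit was corrupted.
        return sum(extended_bits) % 2 == 1

    message = [1, 0, 1, 1]
    codeword = add_parity_bit(message)        # [1, 0, 1, 1, 1]

    one_error = codeword[:]
    one_error[0] ^= 1                         # flip one bit
    print(has_error(one_error))               # True  -> error detected

    two_errors = codeword[:]
    two_errors[0] ^= 1
    two_errors[1] ^= 1                        # flip two bits
    print(has_error(two_errors))              # False -> error goes undetected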
The next piece of coding theory that we will discuss is error correcting codes. The section above showed us that by adding a parity check bit to our string we can detect errors in the transmission of a codeword. This section makes the situation even better, because we are going to add even more redundancy. This will enable us not only to detect errors but also to correct them.
A triple repetition code simply sends each bit of the message three times, and the receiver decodes each group of three by majority vote, as sketched below. To make this idea more precise, we need a notion of the distance between codewords and of how likely errors are relative to that distance.
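A minimal illustration of repetition coding in Python (my own example, not taken from Rosen), where each bit is repeated three times and a single corrupted copy is outvoted:

    def encode_repetition(bits):
        # Send each bit three times.
        return [b for b in bits for _ in range(3)]

    def decode_repetition(received):
        # Take a majority vote over each group of three received bits.
        decoded = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            decoded.append(1 if sum(group) >= 2 else 0)
        return decoded

    sent = encode_repetition([1, 0])      # [1, 1, 1, 0, 0, 0]
    received = [1, 0, 1, 0, 0, 0]         # one copy of the first bit flipped
    print(decode_repetition(received))    # [1, 0] -> the error was corrected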
Hamming distance is what we need in order to relate the probability of an error to the distance between codewords. This measure is what Richard Hamming used in his fundamental work in coding theory (Rosen, 2007). The text shows us that the Hamming distance between two strings equals the number of positions in which their individual bits differ, that is, the number of bit changes needed to turn one string into the other.
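As a short sketch of this definition (the function name is my own), the Hamming distance between two equal-length bit strings can be computed by counting the positions where they differ:

    def hamming_distance(x, y):
        # Number of positions in which two equal-length strings differ.
        if len(x) != len(y):
            raise ValueError("strings must have the same length")
        return sum(1 for a, b in zip(x, y) if a != b)

    print(hamming_distance("10101", "10011"))   # 2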
The Hamming distance can also be used for decoding. For instance, when a codeword x from a code C is transmitted, some bit string y is received.
If there are no errors, then we know that y is the same as x (Rosen, 2007). The next step is to decode y. To do this, you take the codeword with the minimum Hamming distance from the received string y, provided that codeword is unique. With this nearest neighbor decoding method, if there are not many errors in the transmission, the decoded codeword should be x, the codeword that was sent.
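A hedged sketch of nearest neighbor decoding (the code C and the received string here are invented purely for illustration):

    def hamming_distance(x, y):
        # Number of positions in which two equal-length strings differ.
        return sum(1 for a, b in zip(x, y) if a != b)

    def nearest_neighbor_decode(received, code):
        # Pick the codeword closest to the received string in Hamming distance.
        return min(code, key=lambda c: hamming_distance(received, c))

    C = ["00000", "11111"]                       # a five-fold repetition code
    print(nearest_neighbor_decode("10110", C))   # "11111" (distance 2 versus 3)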
The next section to discuss is what is known as perfect codes.
This is the part that makes error correction possible. In order for correction to take place, you need a large distance between codewords. While doing this limits the number of codewords that are available, it lets you guarantee a minimum distance within the binary code. The next portion deals with generator matrices. This section generalizes the concept of a parity check bit that we discussed earlier in this paper.
In order to generalize it we need to add more than one check bit (Rosen, 2007).
The text showed that a message x1 x2 ... xk is encoded as x1 x2 ... xk x_(k+1), where the check bit is x_(k+1) = (x1 + x2 + ... + xk) mod 2 (Rosen, 2007).
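For example, under this rule the four-bit message 1011 would be encoded as 10111, since the check bit is (1 + 0 + 1 + 1) mod 2 = 1; this matches the add_parity_bit sketch given earlier.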
In order to generalize this, we take the last n-k bits and make sure that they are parity check bits, which are computed from the k bits of the message. The parity check matrix lets us see what linear relationships the components of a codeword must satisfy. We can therefore use the parity check matrix to decide whether a given vector is a codeword or not.
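The following sketch illustrates this with a small, assumed example (the matrices below describe a simple [6, 3] binary code of my own choosing, not one given in the case study): a generator matrix G appends three check bits to a three-bit message, and the parity check matrix H flags whether a received vector satisfies the required linear relationships modulo 2.

    # A small [6, 3] binary code, chosen only for illustration.
    G = [[1, 0, 0, 1, 1, 0],    # generator matrix: identity block plus check bits
         [0, 1, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1]]
    H = [[1, 1, 0, 1, 0, 0],    # parity check matrix for the same code
         [1, 0, 1, 0, 1, 0],
         [0, 1, 1, 0, 0, 1]]

    def encode(message, generator):
        # Codeword = message * G, with all arithmetic modulo 2.
        n = len(generator[0])
        return [sum(message[i] * generator[i][j] for i in range(len(message))) % 2
                for j in range(n)]

    def is_codeword(vector, check):
        # A vector is a codeword exactly when H * v^T = 0 (mod 2).
        return all(sum(row[j] * vector[j] for j in range(len(vector))) % 2 == 0
                   for row in check)

    c = encode([1, 0, 1], G)     # [1, 0, 1, 1, 0, 1]
    print(is_codeword(c, H))     # True
    c[2] ^= 1                    # corrupt one bit
    print(is_codeword(c, H))     # False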
However, it is important to remember that the parity check matrix can also be used in decoding algorithms. The parity check matrix is also used in defining the Hamming codes, which we will discuss in the next portion of this paper.
Hamming codes are a family of linear codes that aid in error correction. These codes also provide the kind of generalization we discussed as a necessity above. Unlike the simple parity code, Hamming codes enable you to detect up to two-bit errors and also to correct one-bit errors.
They can correct single-bit errors without the risk of leaving those errors undetected. Hamming codes are, as we discussed above, perfect codes, because they achieve the highest possible rate for codes with their block length and minimum distance.
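As a rough illustration of how such a code corrects a single flipped bit (the matrix layout and helper names below are my own choices, not taken from the case study), the sketch uses the (7, 4) Hamming code whose parity check matrix has the binary numbers 1 through 7 as its columns, so the syndrome of a corrupted word points directly at the flipped position:

    # Parity check matrix of the (7, 4) Hamming code: column j is j in binary.
    H = [[0, 0, 0, 1, 1, 1, 1],
         [0, 1, 1, 0, 0, 1, 1],
         [1, 0, 1, 0, 1, 0, 1]]

    def syndrome(word):
        # H * word^T modulo 2, read as a binary number.
        bits = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
        return bits[0] * 4 + bits[1] * 2 + bits[2]

    def correct_single_error(word):
        # A nonzero syndrome equals the 1-based position of the flipped bit.
        pos = syndrome(word)
        fixed = word[:]
        if pos:
            fixed[pos - 1] ^= 1
        return fixed

    codeword = [1, 1, 1, 0, 0, 0, 0]          # a valid codeword: its syndrome is 0
    received = codeword[:]
    received[4] ^= 1                          # corrupt bit 5 during "transmission"
    print(syndrome(received))                 # 5 -> error located at position 5
    print(correct_single_error(received) == codeword)   # True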
There are certain limitations with Hamming codes because of their limited redundancy; the extra bits added to the data only allow them to detect and correct errors when the error rate is low. A huge area of interest in today's technological environment is cloud computing.
The major risk or concern for those moving to cloud computing is the security of their data. Those currently set up with in-house backup and storage solutions are responsible for, and can control, their own security. Using a graph, as discussed by my instructor Gary Page, would allow you to see which areas could be a potential point of failure, as the sketch below illustrates.
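As a loose sketch of that idea (the graph, the node names, and the traversal are entirely hypothetical and only meant to illustrate the point), a simple breadth-first search shows every vertex an intruder could reach from one compromised node:

    from collections import deque

    # A hypothetical network graph: each node lists the nodes reachable from it.
    network = {
        "storage-root": ["backup-server", "web-server"],
        "backup-server": ["archive"],
        "web-server": ["database"],
        "archive": [],
        "database": [],
    }

    def reachable_from(graph, start):
        # Breadth-first search over every vertex reachable from 'start'.
        seen = {start}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    print(reachable_from(network, "storage-root"))
    # {'storage-root', 'backup-server', 'web-server', 'archive', 'database'}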
This would allow you to see that when a certain root node is compromised, the intruder would then be able to access all of the vertices attached to it.

Reference

Rosen, K. (2007). Discrete Mathematics and Its Applications (6th ed.). New York, NY: McGraw-Hill.