Unit-4
Digital Receiver
Q1) Apply Shannon’s binary encoding algorithm to the following set of messages and obtain the code efficiency and redundancy.
m1 | m2 | m3 | m4 | m5 |
1/8 | 1/16 | 3/16 | 1/4 | 3/8 |
A1) Step 1:
The symbols are arranged in order of non-increasing probability, as below:
m5 | m4 | m3 | m1 | m2 |
3/8 | 1/4 | 3/16 | 1/8 | 1/16 |
P1 | P2 | P3 | P4 | P5 |
Step 2:
The following sequences αi are computed:
α1 = 0
α2 = P1 + α1 = 3/8 + 0 = 0.375
α3 = P2 + α2 = 1/4 + 0.375 = 0.625
α4 = P3 + α3 = 3/16 + 0.625 = 0.8125
α5 = P4 + α4 = 1/8 + 0.8125 = 0.9375
α6 = P5 + α5 = 1/16 + 0.9375 = 1
Step 3:
The smallest integer value of li is found by using
2^li ≥ 1/Pi
For i = 1: 2^l1 ≥ 1/P1 = 8/3 = 2.66. The smallest value of l1 which satisfies the inequality is 2, therefore l1 = 2.
For i = 2: 2^l2 ≥ 1/P2 = 4. Therefore l2 = 2.
For i = 3: 2^l3 ≥ 1/P3 = 16/3 = 5.33. Therefore l3 = 3.
For i = 4: 2^l4 ≥ 1/P4 = 8. Therefore l4 = 3.
For i = 5: 2^l5 ≥ 1/P5 = 16. Therefore l5 = 4.
Step 4:
The decimal numbers αi are expanded in binary form up to li places, as given below.
α1 = 0, so (α1)2 = (0.00)2
α2 = 0.375:
0.375 × 2 = 0.750 with carry 0
0.75 × 2 = 0.50 with carry 1
0.50 × 2 = 0.0 with carry 1
Therefore
(0.375)10 = (0.011)2
α3 = 0.625:
0.625 × 2 = 0.25 with carry 1
0.25 × 2 = 0.5 with carry 0
0.5 × 2 = 0.0 with carry 1
(0.625)10 = (0.101)2
α4 = 0.8125:
0.8125 × 2 = 0.625 with carry 1
0.625 × 2 = 0.25 with carry 1
0.25 × 2 = 0.5 with carry 0
0.5 × 2 = 0.0 with carry 1
(0.8125)10 = (0.1101)2
α5 = 0.9375:
0.9375 × 2 = 0.875 with carry 1
0.875 × 2 = 0.75 with carry 1
0.75 × 2 = 0.5 with carry 1
0.5 × 2 = 0.0 with carry 1
(0.9375)10 = (0.1111)2
(α6 = 1 is not expanded, since only α1 … α5 yield code words.)
Step 5:
The binary point is removed and the first li binary places are taken as the code word. Here S1, …, S5 denote the reordered symbols m5, m4, m3, m1, m2.
α1 = (0.00)2 and l1 = 2: code for S1 ---- 00
α2 = (0.011)2 and l2 = 2: code for S2 ---- 01
α3 = (0.101)2 and l3 = 3: code for S3 ---- 101
α4 = (0.1101)2 and l4 = 3: code for S4 ---- 110
α5 = (0.1111)2 and l5 = 4: code for S5 ---- 1111
The average length is computed by using
L = Σ Pi li
= 3/8 × 2 + 1/4 × 2 + 3/16 × 3 + 1/8 × 3 + 1/16 × 4
L = 2.4375 bits/msg symbol
The entropy is given by
H(s) = Σ Pi log(1/Pi)
= 3/8 log(8/3) + 1/4 log 4 + 3/16 log(16/3) + 1/8 log 8 + 1/16 log 16
H(s) = 2.1085 bits/msg symbol.
Code efficiency is given by
ηc = H(s)/L = 2.1085/2.4375 = 0.865
Percentage code efficiency = 86.5%
Code redundancy
Rηc = 1 − ηc = 0.135 or 13.5%.
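The result can be cross-checked with a short Python sketch of Steps 1–5. This is only an illustration; the variable names and the truncating binary-expansion loop are my own, not part of the standard procedure.

```python
from math import ceil, log2

# Shannon's binary encoding for the source of Q1.
# Probabilities are already in non-increasing order (Step 1).
symbols = ["m5", "m4", "m3", "m1", "m2"]
probs = [3/8, 1/4, 3/16, 1/8, 1/16]

alpha = 0.0                         # cumulative probability alpha_i (Step 2)
codes = {}
for sym, p in zip(symbols, probs):
    li = ceil(log2(1 / p))          # smallest li with 2^li >= 1/Pi (Step 3)
    bits, frac = "", alpha          # expand alpha to li binary places (Step 4)
    for _ in range(li):
        frac *= 2
        bits += "1" if frac >= 1 else "0"
        frac -= int(frac)
    codes[sym] = bits               # binary point removed (Step 5)
    alpha += p

L = sum(p * len(codes[s]) for s, p in zip(symbols, probs))
H = sum(p * log2(1 / p) for p in probs)
print(codes)   # {'m5': '00', 'm4': '01', 'm3': '101', 'm1': '110', 'm2': '1111'}
print(f"L = {L}, H = {H:.4f}, efficiency = {H / L:.3f}")   # 2.4375, 2.1085, 0.865
```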
Q2) Given the messages X1, X2, X3, X4, X5 and X6 with respective probabilities 0.4, 0.2, 0.2, 0.1, 0.07 and 0.03, construct a binary code by applying the Huffman encoding procedure. Determine the efficiency and redundancy of the code formed.
A2) The Huffman code is listed below:
Source Symbols | Pi | Code | Length (li) |
X1 | 0.4 | 1 | 1 |
X2 | 0.2 | 01 | 2 |
X3 | 0.2 | 000 | 3 |
X4 | 0.1 | 0010 | 4 |
X5 | 0.07 | 00110 | 5 |
X6 | 0.03 | 00111 | 5 |
Now average length L = Σ Pi li
L = 0.4 × 1 + 0.2 × 2 + 0.2 × 3 + 0.1 × 4 + 0.07 × 5 + 0.03 × 5
L = 2.3 bits/msg symbol.
Entropy H(s) = Σ Pi log(1/Pi)
= 0.4 log(1/0.4) + 2 × 0.2 log(1/0.2) + 0.1 log(1/0.1) + 0.07 log(1/0.07) + 0.03 log(1/0.03)
H(s) = 2.209 bits/msg symbol.
Code efficiency ηc = H(s)/L = 2.209/2.3 = 96.04%.
Code redundancy Rηc = 1 − ηc = 3.96%.
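For reference, here is a minimal Huffman sketch in Python; the heap layout and tie-breaking rule are my own choices, so the individual code words may differ from the table above, but any optimal tie-breaking yields the same average length L = 2.3.

```python
import heapq
from math import log2

probs = {"X1": 0.4, "X2": 0.2, "X3": 0.2, "X4": 0.1, "X5": 0.07, "X6": 0.03}

# Heap of (group probability, tie-break counter, {symbol: partial code word}).
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)                      # two least probable groups
    p1, _, c1 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in c0.items()}         # prefix one group with 0,
    merged.update({s: "1" + c for s, c in c1.items()})   # the other with 1
    heapq.heappush(heap, (p0 + p1, counter, merged))
    counter += 1

codes = heap[0][2]
L = sum(probs[s] * len(c) for s, c in codes.items())
H = sum(p * log2(1 / p) for p in probs.values())
print(codes)
print(f"L = {L:.2f}, efficiency = {H / L:.2%}")          # L = 2.30, about 96%
```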
Q3) Given are the messages X1, X2, X3, X4, X5, X6 with respective probabilities 0.4, 0.2, 0.2, 0.1, 0.07 and 0.03. Construct a binary code by applying the Shannon-Fano encoding procedure. Determine the code efficiency and redundancy of the code.
A3) Applying the Shannon-Fano encoding procedure gives code lengths of 1, 3, 3, 3, 4 and 4 for X1 … X6.
Average length L is given by
L = Σ Pi li
= 0.4 × 1 + 0.2 × 3 + 0.2 × 3 + 0.1 × 3 + 0.07 × 4 + 0.03 × 4
L = 2.3 bits/msg symbol.
Entropy
H(s) = Σ Pi log(1/Pi)
= 0.4 log(1/0.4) + 2 × 0.2 log(1/0.2) + 0.1 log(1/0.1) + 0.07 log(1/0.07) + 0.03 log(1/0.03)
H(s) = 2.209 bits/msg symbol.
Code efficiency ηc = H(s)/L = 2.209/2.3 = 96.04%
Code redundancy Rηc = 1 − ηc = 3.96%
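The same figures can be reproduced with a recursive Shannon-Fano split in Python (an illustrative sketch, not from the text). When the two groups can be balanced equally well in more than one way, different split choices are possible: the hand solution above uses lengths 1, 3, 3, 3, 4, 4, while this greedy version produces 1, 2, 3, 4, 5, 5; both give the same L = 2.3.

```python
def shannon_fano(items):
    """items: list of (symbol, prob) pairs sorted by non-increasing prob."""
    if len(items) == 1:
        return {items[0][0]: ""}
    total = sum(p for _, p in items)
    running, best_diff, split = 0.0, float("inf"), 1
    for k in range(1, len(items)):            # find the most balanced split point
        running += items[k - 1][1]
        diff = abs(running - (total - running))
        if diff < best_diff:
            best_diff, split = diff, k
    codes = {s: "0" + c for s, c in shannon_fano(items[:split]).items()}
    codes.update({s: "1" + c for s, c in shannon_fano(items[split:]).items()})
    return codes

items = [("X1", 0.4), ("X2", 0.2), ("X3", 0.2),
         ("X4", 0.1), ("X5", 0.07), ("X6", 0.03)]
codes = shannon_fano(items)
L = sum(p * len(codes[s]) for s, p in items)
print(codes, f"L = {L:.2f}")                  # L = 2.30
```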
Q4) You are given four messages X1, X2, X3 and X4 with respective probabilities 0.1, 0.2, 0.3 and 0.4. Construct a binary Huffman code, and determine its efficiency, redundancy, and the probabilities of 0’s and 1’s in the code.
A4) Average length L = Σ Pi li
= 0.4 × 1 + 0.3 × 2 + 0.2 × 3 + 0.1 × 3
L = 1.9 bits/msg symbol.
Entropy H(s) = Σ Pi log(1/Pi)
= 0.4 log(1/0.4) + 0.3 log(1/0.3) + 0.2 log(1/0.2) + 0.1 log(1/0.1)
H(s) = 1.846 bits/msg symbol.
Code efficiency ηc = H(s) / L = 1.846/1.9 = 97.15%
Code redundancy Rηc = 1 − ηc = 2.85%
The code tree gives the code words X4 → 1, X3 → 01, X2 → 001 and X1 → 000.
The probabilities of 0’s and 1’s in the code are found by using
P(0) = (1/L) Σ (number of 0’s in code for Xi) × Pi
= (1/1.9) [3 × 0.1 + 2 × 0.2 + 1 × 0.3 + 0 × 0.4]
P(0) = 0.5263
P(1) = (1/L) Σ (number of 1’s in code for Xi) × Pi
= (1/1.9) [0 × 0.1 + 1 × 0.2 + 1 × 0.3 + 1 × 0.4]
P(1) = 0.4737
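A quick numeric check in Python, using the code words read off the code tree above:

```python
codes = {"X1": "000", "X2": "001", "X3": "01", "X4": "1"}   # from the code tree
probs = {"X1": 0.1, "X2": 0.2, "X3": 0.3, "X4": 0.4}

L = sum(probs[s] * len(c) for s, c in codes.items())        # average code length
p0 = sum(c.count("0") * probs[s] for s, c in codes.items()) / L
p1 = sum(c.count("1") * probs[s] for s, c in codes.items()) / L
print(f"L = {L:.1f}, P(0) = {p0:.4f}, P(1) = {p1:.4f}")     # 1.9, 0.5263, 0.4737
```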
Q5) Construct a Shannon-Fano ternary code for the following ensemble and find the code efficiency and redundancy. Also draw the corresponding code tree.
S = { S1,S2,S3,S4,S5,S6,S7}
P={ 0.3,0.3,0.12,0.12,0.06,0.06,0.04} with
X= {0,1,2}
A5) Dividing the symbols into three groups of nearly equal probability at each stage gives code lengths of 1, 1, 2, 2, 3, 3 and 3 for S1 … S7.
The average length of the ternary code is given by
L = Σ Pi li -----------------(1)
= 0.3 × 1 + 0.3 × 1 + 0.12 × 2 + 0.12 × 2 + 0.06 × 3 + 0.06 × 3 + 0.04 × 3
L = 1.56 trinits/msg symbol.
Entropy in bits/msg symbol is given by
H(s) = Σ Pi log(1/Pi)
= 2 × 0.3 log(1/0.3) + 2 × 0.12 log(1/0.12) + 2 × 0.06 log(1/0.06) + 0.04 log(1/0.04)
= 2.4491 bits/msg symbol.
Now entropy in r-ary units per message symbol is given by
Hr(s) = H(s)/log2 r
With r = 3,
H3(s) = H(s)/log2 3 = 2.4491/1.585 = 1.5452 ternary units/msg symbol.
The ternary code efficiency is given by
ηc = H3(s)/L = 1.5452/1.56 = 0.9905
ηc = 99.05%
Code redundancy Rηc = 1 − ηc = 0.95%.
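A short Python check of the ternary figures; the probabilities and the code lengths 1, 1, 2, 2, 3, 3, 3 are taken from the working above, everything else is illustrative.

```python
from math import log2

probs = [0.3, 0.3, 0.12, 0.12, 0.06, 0.06, 0.04]
lengths = [1, 1, 2, 2, 3, 3, 3]                  # ternary code word lengths

L = sum(p * l for p, l in zip(probs, lengths))   # average length in trinits
H = sum(p * log2(1 / p) for p in probs)          # entropy in bits/symbol
H3 = H / log2(3)                                 # entropy in ternary units
print(f"L = {L:.2f}, H3 = {H3:.4f}, efficiency = {H3 / L:.2%}")   # ~99.05%
```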
Q6) Explain RLE Algorithm.
A6) RLE is probably the easiest compression algorithm there is. It replaces sequences of the same data values within a file by a count number and a single value.
Suppose the following string of data (17 bytes) has to be compressed:
ABBBBBBBBBCDEEEEF
Using RLE compression, the compressed file takes up 10 bytes and could look like this:
A *9B C D *4E F
The repetitive strings of data are replaced by a control character (*) followed by the number of repeated characters and the repetitive character itself.
The control character is not fixed; it can differ from implementation to implementation.
If the control character itself appears in the file, it has to be escaped, which costs one extra character.
RLE encoding is only effective for sequences of 4 or more repeating characters, since three characters are used to encode a run; coding a run of only two repeating characters would actually increase the file size.
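The scheme described above fits in a few lines of Python. The ‘*’ control character and the threshold of four repetitions follow the description; the exact output format is an assumption for illustration.

```python
from itertools import groupby

def rle_encode(data: str, control: str = "*") -> str:
    """Replace each run of 4+ identical characters by control + count + char."""
    out = []
    for char, group in groupby(data):
        run = len(list(group))
        if run >= 4:
            out.append(f"{control}{run}{char}")   # e.g. nine B's -> '*9B'
        else:
            out.append(char * run)                # short runs are copied as-is
    return "".join(out)

print(rle_encode("ABBBBBBBBBCDEEEEF"))            # -> A*9BCD*4EF (10 bytes)
```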
Q7) Explain FHSS.
A7) FHSS (Frequency Hopping Spread Spectrum) is a technique in which users change their transmission frequency from one value to another in a specified time interval, hence the name frequency hopping. For example, a frequency is allotted to sender 1 for a particular period of time. After a while, sender 1 hops to another frequency and sender 2 uses the first frequency, which was previously used by sender 1. This is called frequency reuse.
The frequencies of the data are hopped from one to another in order to provide secure transmission. The amount of time spent on each frequency hop is called the dwell time.
Fig: FHSS
Q8) Compare FHSS and DSSS.
A8)
FHSS | DSSS |
Multiple frequencies are used | A single frequency is used |
Hard to find the user’s frequency at any instant of time | The user’s frequency, once allotted, always stays the same |
Frequency reuse is allowed | Frequency reuse is not allowed |
The sender need not wait | The sender has to wait if the spectrum is busy |
The power strength of the signal is high | The power strength of the signal is low |
The signal is stronger and penetrates obstacles better | The signal is weaker compared with FHSS |
Largely unaffected by narrowband interference | More easily affected by interference |
It is cheaper | It is more expensive |
It is the more commonly used technique | This technique is less frequently used |
Q9) Explain OFDM.
A9) OFDM (Orthogonal Frequency Division Multiplexing) is a multicarrier modulation technique in which a high-rate data stream is divided into many parallel lower-rate streams, each transmitted on one of a set of closely spaced, mutually orthogonal subcarriers. Because the subcarriers are orthogonal, their spectra can overlap without interfering with one another, which gives high spectral efficiency. The long symbol duration of each low-rate stream, usually combined with a cyclic prefix, makes OFDM robust against multipath fading and inter-symbol interference. Modulation and demodulation are implemented efficiently using the IFFT and FFT, which is why OFDM is used in systems such as Wi-Fi, LTE and digital broadcasting.
Q10) Explain Shannon’s encoding algorithm.
A10) The following steps describe Shannon’s procedure for generating binary codes.
Step 1:
List the source symbols in order of non-increasing probabilities:
S = {S1, S2, S3, …, Sq} with P = {P1, P2, P3, …, Pq}, where
P1 ≥ P2 ≥ P3 ≥ … ≥ Pq
Step 2:
Compute the sequences
α1 = 0
α2 = P1 + α1
α3 = P2 + P1 = P2 + α2
α4 = P3 + P2 + P1 = P3 + α3
…..
αq+1 = Pq + Pq−1 + … + P2 + P1 = Pq + αq
Step 3:
Determine the smallest integer value of li using the inequality
2^li ≥ 1/Pi for all i = 1, 2, …, q.
Step 4:
Expand the decimal numbers αi in binary form up to li places, neglecting any expansion beyond li places.
Step 5:
Remove the binary point to get the desired code.