Copyright © 2013 IJECCE, All right reserved
On E-Capacity Region of the Wiretap Channel
Nasrin Afshar
Institute for Informatics and Automation Problems of the National Academy of Sciences of Armenia, Yerevan, Armenia
Abstract — We study the E-capacity region of the wiretap channel. A random coding bound for the E-capacity region is obtained. When E tends to 0, the proposed inner bound converges to the capacity region obtained by Csiszár and Körner.
Keywords – E-Capacity, Random Coding Bound, Method of Types.
I. INTRODUCTION

The wiretap channel was introduced by Wyner in 1975 [12]. The purpose of studying the wiretap channel is to maximize the rate of reliable communication from the source to the legitimate receiver, while the wiretapper learns as little as possible about the source output. Wyner determined the achievable rate-equivocation region when both the main and the wiretap channels are discrete memoryless. Later, Csiszár and Körner generalized Wyner's wiretap channel [2]; the model is depicted in Fig. 1. The capacity region of this model of the wiretap channel was obtained in [2].
The E-capacity function C(E) for a discrete memoryless channel (DMC) was first introduced in 1967 [5]. The function C(E), which is inverse to Shannon's reliability function E(R), gives the optimal dependence of the rate R on a given reliability E [5]–[8]. Furthermore, the concept of E-capacity generalizes Shannon's channel capacity. In this paper, we study the E-capacity region of the generalized wiretap channel. A random coding bound is derived for the E-capacity region. When E → 0, the inner estimate proposed in this paper coincides with the capacity region of the wiretap channel obtained in [2].
II. NOTATIONS AND PROBLEM STATEMENT

Let capital letters represent random variables (RVs), and let their specific realizations be denoted by the corresponding lower case letters. Random vectors of dimension N are denoted by bold-face letters. For any finite set $\mathcal{X}$, the cardinality of $\mathcal{X}$ is denoted by $|\mathcal{X}|$.
We investigate a DMC with a finite input alphabet $\mathcal{X}$ and a finite output alphabet $\mathcal{Y}$. Let $\mathcal{Z}$ be the output alphabet of the wiretapper. The wiretap channel is defined by the pair $(W_{Y|X}, W_{Z|X})$ of conditional probability distributions $W_{Y|X}: \mathcal{X} \to \mathcal{Y}$ and $W_{Z|X}: \mathcal{X} \to \mathcal{Z}$, where
$$W_{Y|X}^N(\mathbf{y}|\mathbf{x}) \triangleq \prod_{n=1}^{N} W_{Y|X}(y_n|x_n), \qquad W_{Z|X}^N(\mathbf{z}|\mathbf{x}) \triangleq \prod_{n=1}^{N} W_{Z|X}(z_n|x_n).$$
Fig. 1. The model of the generalized wiretap channel.
The vector $\mathbf{x} = (x_1, \dots, x_N) \in \mathcal{X}^N$ is the input codeword, and $\mathbf{y} = (y_1, \dots, y_N) \in \mathcal{Y}^N$ and $\mathbf{z} = (z_1, \dots, z_N) \in \mathcal{Z}^N$ are the output vectors of length N.
Let $\mathcal{M}_N$ be the message set; a message $m \in \mathcal{M}_N$ is communicated reliably to the legitimate receiver at rate R. The message set is split into two parts: one part, denoted by $\mathcal{M}_N$, can be decoded by both the legitimate receiver and the wiretapper at rate $R_0$; the remaining part, denoted by $\mathcal{L}_N$, can be decoded only by the legitimate receiver at rate $R_1$ and must be kept as secret as possible from the wiretapper. The level of ignorance is measured by the equivocation rate $R_e$, which is the uncertainty of the wiretapper with respect to the message $l \in \mathcal{L}_N$.
Let $\mathcal{U}_0$ and $\mathcal{U}_1$ be some finite sets, and let $M, L, U_0, U_1, X, Y$ and $Z$ be RVs with values in $\mathcal{M}_N$, $\mathcal{L}_N$, $\mathcal{U}_0$, $\mathcal{U}_1$, $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, correspondingly. The notation $U_0 \to U_1 \to X \to (Y, Z)$ means that these RVs form a Markov chain in this order.
A stochastic encoder with block length N is specified by a matrix of conditional probabilities $f(\mathbf{x}|m, l)$, where $\mathbf{x} \in \mathcal{X}^N$, $m \in \mathcal{M}_N$, $l \in \mathcal{L}_N$, and $\sum_{\mathbf{x} \in \mathcal{X}^N} f(\mathbf{x}|m, l) = 1$. A code is a triple of mappings $(f, g_1, g_2)$, where $f: \mathcal{M}_N \times \mathcal{L}_N \to \mathcal{X}^N$ is a stochastic encoder and $g_1: \mathcal{Y}^N \to \mathcal{M}_N \times \mathcal{L}_N$ and $g_2: \mathcal{Z}^N \to \mathcal{M}_N$ are deterministic decoders.
The maximal probabilities of erroneous transmission of the pair of messages $(m, l) \in \mathcal{M}_N \times \mathcal{L}_N$ over the channels $W_{Y|X}$ and $W_{Z|X}$ using a code $(f, g_1, g_2)$ are defined, respectively, as
$$e_1(f, g_1, W_{Y|X}) \triangleq \max_{m, l} \sum_{\mathbf{x} \in \mathcal{X}^N} f(\mathbf{x}|m, l)\, W_{Y|X}^N\big(\mathcal{Y}^N \setminus g_1^{-1}(m, l)\,\big|\,\mathbf{x}\big),$$
$$e_2(f, g_2, W_{Z|X}) \triangleq \max_{m, l} \sum_{\mathbf{x} \in \mathcal{X}^N} f(\mathbf{x}|m, l)\, W_{Z|X}^N\big(\mathcal{Z}^N \setminus g_2^{-1}(m)\,\big|\,\mathbf{x}\big).$$
The level of ignorance of the eavesdropper with respect to the private message is measured by the equivocation rate $(1/N)\, H(L|\mathbf{Z})$.
A code $(f, g_1, g_2)$ is characterized by the rates $(R_0, R_1, R_e)$, where $R_0$ and $R_1$ are the transmission rates for $\mathcal{M}_N$ and $\mathcal{L}_N$, respectively:
$$R_0 = \frac{1}{N} \log |\mathcal{M}_N|, \qquad R_1 = \frac{1}{N} \log |\mathcal{L}_N|.$$
Let $E = (E_1, E_2)$, $E_i > 0$, $i = 1, 2$, be given. A rate-equivocation triple $(R_0, R_1, R_e)$ is called E-achievable for the wiretap channel iff
$$\lim_{N \to \infty} \frac{1}{N} \log |\mathcal{M}_N| = R_0, \qquad \lim_{N \to \infty} \frac{1}{N} \log |\mathcal{L}_N| = R_1, \tag{1}$$
the maximal probabilities of error decrease exponentially with reliabilities $E_1$ and $E_2$, respectively,
$$e_1(f, g_1, W_{Y|X}) \le \exp\{-N E_1\}, \tag{2}$$
$$e_2(f, g_2, W_{Z|X}) \le \exp\{-N E_2\}, \tag{3}$$
and the equivocation rate satisfies
$$\lim_{N \to \infty} \frac{1}{N} H(L|\mathbf{Z}) \ge R_e. \tag{4}$$
The E-capacity region $\mathcal{C}_W(E)$ for maximal error probabilities is defined as the set of all E-achievable triples $(R_0, R_1, R_e)$.
III. RANDOM CODING BOUND FOR E-CAPACITY REGION OF THE WIRETAP CHANNEL

Let $P_0$ be the PD of RV $U_0$, let $P_{1|0}$ be the conditional PD of RV $U_1$ for a given value $u_0$, and let $P_{X|1}$ be the conditional PD of RV $X$ for a given value $u_1$. Define $P = P_0 \circ P_{1|0} \circ P_{X|1}$. Let $V_{Y|X} \triangleq \{V(y|x):\, x \in \mathcal{X},\, y \in \mathcal{Y}\}$ be a conditional PD of RV $Y$ for a given value $x$, and $V_{Z|X} \triangleq \{V(z|x):\, x \in \mathcal{X},\, z \in \mathcal{Z}\}$ a conditional PD of RV $Z$ for a given value $x$. For the finite sets $\mathcal{X}$ and $\mathcal{Y}$, the set of all conditional PDs $V_{Y|X}$ is denoted by $\mathcal{P}(\mathcal{Y}|\mathcal{X})$. The conditional entropy of RV $Y$ relative to RV $X$ is denoted by $H_{P,V}(Y|X)$, and $D(V_{Y|X} \| W_{Y|X} | P)$ denotes the divergence between conditional PDs $V_{Y|X}, W_{Y|X} \in \mathcal{P}(\mathcal{Y}|\mathcal{X})$ given PD $P$.

Some definitions from the method of types follow (see [3], [10]). Let $N(u_0|\mathbf{u}_0)$ be the number of occurrences of the symbol $u_0$ in the vector $\mathbf{u}_0 \in \mathcal{U}_0^N$, and let $N(u_0, x|\mathbf{u}_0, \mathbf{x})$ be the number of repetitions of the pair $(u_0, x)$ in the pair of vectors $(\mathbf{u}_0, \mathbf{x}) \in \mathcal{U}_0^N \times \mathcal{X}^N$. The type of a vector $\mathbf{u}_0 \in \mathcal{U}_0^N$ is the PD $P_0$ on $\mathcal{U}_0$,
$$P_0 \triangleq \big\{P_0(u_0) = N(u_0|\mathbf{u}_0)/N,\; u_0 \in \mathcal{U}_0\big\}.$$
The set of all $\mathbf{u}_0 \in \mathcal{U}_0^N$ of type $P_0$ is denoted by $\mathcal{T}_{P_0}^N(U_0)$, and the set of all $\mathbf{x} \in \mathcal{X}^N$ of conditional type $P_{X|0}$ with respect to $\mathbf{u}_0 \in \mathcal{T}_{P_0}^N(U_0)$ is denoted by $\mathcal{T}_{P_0, P_{X|0}}^N(X|\mathbf{u}_0)$. The set of all $\mathbf{y}$ of conditional type $V_{Y|X}$ is denoted by $\mathcal{T}_{P,V}^N(Y|\mathbf{u}_0, \mathbf{x})$ and is called the $V_{Y|X}$-shell of $\mathbf{u}_0$ and $\mathbf{x}$. The set of all conditional types of $\mathbf{y} \in \mathcal{Y}^N$ given $\mathbf{x} \in \mathcal{T}_{P_0, P_{X|0}}^N(X|\mathbf{u}_0)$ is denoted by $\mathcal{V}_N(\mathcal{Y}, P)$.
If $V_{Y|X}$ and $V_{Z|X}$ are the conditional types of, respectively, $\mathbf{y}$ and $\mathbf{z}$ given $\mathbf{x} \in \mathcal{T}_P^N(X)$, then for $\mathbf{y} \in \mathcal{T}_{P,V}^N(Y|\mathbf{x})$ and for $\mathbf{z} \in \mathcal{T}_{P,V}^N(Z|\mathbf{x})$ we have
$$W_{Y|X}^N(\mathbf{y}|\mathbf{x}) = \exp\big\{-N\big[D(V_{Y|X} \| W_{Y|X} | P) + H_{P,V}(Y|X)\big]\big\}, \tag{5}$$
$$W_{Z|X}^N(\mathbf{z}|\mathbf{x}) = \exp\big\{-N\big[D(V_{Z|X} \| W_{Z|X} | P) + H_{P,V}(Z|X)\big]\big\}. \tag{6}$$
To formulate the problem we consider the joint PDs
$$P_0 \circ P_{1|0} \circ P_{X|1} \circ V_{Y|X} \triangleq \big\{P_0(u_0)\, P_{1|0}(u_1|u_0)\, P_{X|1}(x|u_1)\, V_{Y|X}(y|x)\big\},$$
$$P_0 \circ P_{1|0} \circ P_{X|1} \circ V_{Z|X} \triangleq \big\{P_0(u_0)\, P_{1|0}(u_1|u_0)\, P_{X|1}(x|u_1)\, V_{Z|X}(z|x)\big\}. \tag{7}$$
Let $U_0 \to U_1 \to X \to (Y, Z)$ be a Markov chain. We define the following functions, which appear in our inner estimates of the E-capacity region:
$$R_0^*(P, E_1, E_2) \triangleq \min\Big\{ \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(U_0 \wedge Y) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 \big|^+,$$
$$\min_{V_{Z|X}:\, D(V_{Z|X} \| W_{Z|X} | P) \le E_2} \big| I_{P,V}(U_0 \wedge Z) + D(V_{Z|X} \| W_{Z|X} | P) - E_2 \big|^+ \Big\}, \tag{8}$$
$$R_1^*(P, E_1) \triangleq \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(U_1 \wedge Y | U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 \big|^+, \tag{9}$$
$$R_e^*(P, E_1) \triangleq \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(U_1 \wedge Y | U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 \big|^+ - I_P(U_1 \wedge Z | U_0). \tag{10}$$
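The minima in (8)–(10) are optimizations over conditional PDs $V$ within a divergence ball around $W$. For intuition, here is a brute-force grid sketch of the minimum in (9) for a toy binary channel, with $U_1 = X$ and $U_0$ trivial so that the conditional mutual information reduces to $I_{P,V}(X \wedge Y)$; the channel, input PD and value of E are illustrative assumptions, not anything from the paper:

```python
import math

# Toy channel, input PD and reliability; all values are assumptions.
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P = {0: 0.5, 1: 0.5}
E = 0.05

def D(V):   # conditional divergence D(V || W | P)
    return sum(P[a] * V[a][b] * math.log(V[a][b] / W[a][b])
               for a in P for b in (0, 1) if V[a][b] > 0)

def I(V):   # mutual information I_{P,V}(X ^ Y)
    Py = {b: sum(P[a] * V[a][b] for a in P) for b in (0, 1)}
    return sum(P[a] * V[a][b] * math.log(V[a][b] / Py[b])
               for a in P for b in (0, 1) if V[a][b] > 0)

# Grid over binary conditional PDs V; keep those inside the divergence
# ball D(V||W|P) <= E, minimizing |I + D - E|^+ (the |.|^+ is the max with 0).
grid = [k / 100 for k in range(1, 100)]
best = min(max(I(V) + D(V) - E, 0.0)
           for v0 in grid for v1 in grid
           for V in [{0: {0: v0, 1: 1 - v0}, 1: {0: v1, 1: 1 - v1}}]
           if D(V) <= E)
```

As E shrinks, the ball collapses onto $V = W$ and the minimum approaches $I_{P,W}(X \wedge Y)$, which is how the bound recovers the capacity expression in the limit.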
Theorem 1. For $E_1 > 0$, $E_2 > 0$, an inner bound for the E-capacity region of the generalized wiretap channel is
$$\mathcal{R}_W^*(E) \triangleq \big\{(R, R_e):\ \text{for } U_0 \to U_1 \to X \to (Y, Z),$$
$$0 \le R \le R_0^*(P, E_1, E_2) + R_1^*(P, E_1), \tag{11}$$
$$0 \le R_e \le R_e^*(P, E_1)\big\}. \tag{12}$$
Corollary 1. When E → 0, the E-achievable region $\mathcal{R}_W^*(E)$ converges to the capacity region $\mathcal{C}$.

Proof of Theorem 1. We shall prove that each rate pair $(R, R_e)$ in the region specified in Theorem 1 is E-achievable. Let us consider two new regions, $\mathcal{R}_1^*(E)$ and $\mathcal{R}_2^*(E)$; the region $\mathcal{R}_2^*(E)$ is obtained by replacing $U_1$ by $X$ in (8)–(10).
$$\mathcal{R}_1^*(E) \triangleq \big\{(R_0, R_1, R_e):\ \text{for } U_0 \to U_1 \to X \to (Y, Z),$$
$$0 \le R_1 \le R_1^*(P, E_1),\quad 0 \le R_0 \le R_0^*(P, E_1, E_2),\quad 0 \le R_e \le R_e^*(P, E_1)\big\},$$
$$\mathcal{R}_2^*(E) \triangleq \big\{(R, R_e):\ \text{for } U_0 \to X \to (Y, Z),$$
$$R = R_0 + R_1, \tag{13}$$
$$0 \le R_1 \le R_1^*(P, E_1), \tag{14}$$
$$0 \le R_0 \le R_0^*(P, E_1, E_2), \tag{15}$$
$$0 \le R_e \le R_e^*(P, E_1)\big\}. \tag{16}$$
Lemma 1. If the region $\mathcal{R}_1^*(E)$ is E-achievable, then $\mathcal{R}_W^*(E)$ is E-achievable.

Lemma 2. If the region $\mathcal{R}_2^*(E)$ is E-achievable, then $\mathcal{R}_1^*(E)$ is E-achievable.

Proof: The proof of Lemma 2 is similar to the proof of Lemma 4 in [2].
So, to prove Theorem 1 it suffices to prove that $\mathcal{R}_2^*(E)$ is E-achievable. We shall prove that for any $\varepsilon > 0$, for any $E_1 > 0$, $E_2 > 0$, and for sufficiently large N there exists a code with
$$|\mathcal{M}_N| = \exp\Big\{N \min\Big\{ \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(U_0 \wedge Y) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon \big|^+,$$
$$\min_{V_{Z|X}:\, D(V_{Z|X} \| W_{Z|X} | P) \le E_2} \big| I_{P,V}(U_0 \wedge Z) + D(V_{Z|X} \| W_{Z|X} | P) - E_2 - \varepsilon \big|^+ \Big\}\Big\}, \tag{17}$$
$$|\mathcal{L}_N| = \exp\Big\{N \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y | U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon \big|^+ \Big\}, \tag{18}$$
such that
$$\frac{1}{N} \log |\mathcal{M}_N|\, |\mathcal{L}_N| \ge R_0 + R_1 - \varepsilon, \qquad \frac{1}{N} \log |\mathcal{M}_N| \ge R_0 - \varepsilon,$$
the maximal probabilities of error satisfy (2) and (3), and the equivocation rate satisfies (4). The proof is given in two steps. In Step 1 the existence of a code with certain properties is proved; the estimation of the equivocation rate is presented in Step 2.
Step 1:

Lemma 3. If $U_0 \to X \to (Y, Z)$, then for every N > 0 and for any $E_1 > 0$, $E_2 > 0$ there exist a codebook $\{\mathbf{x}_{m,a,b}\} \subset \mathcal{X}^N$, where m, a, b run over the index sets $\mathcal{M}_N$, $\mathcal{A}$, $\mathcal{B}$, a stochastic encoder f and decoders $g_1$, $g_2$ with the following properties.

a) The sets $\mathcal{A} \triangleq \{1, \dots, |\mathcal{A}|\}$ and $\mathcal{B} \triangleq \{1, \dots, |\mathcal{B}|\}$ are defined by
$$|\mathcal{A}| \triangleq \exp\Big\{N\Big[ \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y | U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - \big( I_P(X \wedge Z | U_0) - \varepsilon/4 \big) \Big]\Big\},$$
$$|\mathcal{B}| \triangleq \exp\big\{N \big( I_P(X \wedge Z | U_0) - \varepsilon/4 \big)\big\}. \tag{19}$$

b) $e_1(f, g_1, W_{Y|X}) \le \exp\{-N E_1\}$ and $e_2(f, g_2, W_{Z|X}) \le \exp\{-N E_2\}$.
Proof of Lemma 3. The code construction is the same as the code generation in [1]. $|\mathcal{M}_N|$ vectors $\mathbf{u}_0$ are drawn uniformly and independently from $\mathcal{T}_{P_0}^N(U_0)$. Let $P_{X|0}$ be a conditional type of $\mathbf{x} \in \mathcal{X}^N$ for given $\mathbf{u}_0$. For each $\mathbf{u}_0$, generate $|\mathcal{A}| \times |\mathcal{B}|$ codewords drawn uniformly and independently from $\mathcal{T}_{P_0, P_{X|0}}^N(X|\mathbf{u}_0)$, where $|\mathcal{A}|$ and $|\mathcal{B}|$ satisfy (19). Denote the codewords by $\mathbf{x}_{m,a,b}$, where $m \in \mathcal{M}_N$, $a \in \mathcal{A}$, $b \in \mathcal{B}$, and define the codebook
$$\mathcal{C} = \{\mathbf{x}_{m,a,b}\}_{m \in \mathcal{M}_N,\, a \in \mathcal{A},\, b \in \mathcal{B}}. \tag{20}$$
Let $\mathcal{J}$ be a set such that
$$\frac{1}{N} \log |\mathcal{J}| = R_1 - \frac{1}{N} \log |\mathcal{A}|.$$
The function $\varphi: \mathcal{B} \to \mathcal{J}$ partitions $\mathcal{B}$ into $|\mathcal{J}|$ subsets $\varphi^{-1}(j)$ of nearly equal size. Then the encoder $f: \mathcal{M}_N \times \mathcal{L}_N \to \mathcal{C}$ is defined by
$$f(m, l) = \mathbf{x}_{m,a,b},$$
where b is chosen uniformly from the set $\varphi^{-1}(l)$.
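The partition-and-bin step above can be sketched in a few lines. In this toy sketch, the sizes, the modular partition rule, and the helper names are illustrative assumptions, not the paper's construction; what it shows is only the mechanism of drawing b uniformly from the bin indexed by the secret message:

```python
import random

# phi partitions B into |J| nearly equal bins (here via b mod |J|).
B_size, J_size = 12, 4
phi = {b: b % J_size for b in range(B_size)}                    # phi: B -> J
bins = {j: [b for b in range(B_size) if phi[b] == j]
        for j in range(J_size)}

def encode(m, l, a):
    """Pick the codeword index (m, a, b) with b uniform over phi^{-1}(l)."""
    b = random.choice(bins[l])
    return (m, a, b)

m, a, b = encode(m=0, l=2, a=1)
assert phi[b] == 2      # applying phi to b recovers the secret index l
```

The randomness in b is what buys secrecy: many codewords carry the same l, so observing the channel output pins down b far less tightly than it pins down the transmitted word.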
Thus, for the pair of messages $(m, l)$, the index triple $(m, a, b)$ is chosen by the encoding scheme and the vector $\mathbf{x}_{m,a,b} \in \mathcal{C}$ is sent through the channels. The channel outputs $\mathbf{y}$ and $\mathbf{z}$ are then decoded as follows.
We apply the divergence-minimization decoding rule for $g_1$ and $g_2$. Suppose that the codeword $f(m, l) = \mathbf{x}_{m,a,b}$ is transmitted, $(m', l')$ is decoded at the legitimate receiver, and $m''$ is decoded at the eavesdropper. Each $\mathbf{y}$ is first decoded to $(m', a', b')$ by
$$(m', a', b') = \operatorname*{argmin}_{V'_{Y|X}:\ \mathbf{y} \in \mathcal{T}_{P,V'}^N(Y|\mathbf{u}_0(m'),\, \mathbf{x}_{m',a',b'})} D(V'_{Y|X} \| W_{Y|X} | P);$$
then, applying the function $\varphi(b') = j'$, decoder $g_1$ finds $(m', l')$. Each $\mathbf{z}$ is decoded to the $m''$ such that, for some $a''$, $b''$,
$$m'' = \operatorname*{argmin}_{V'_{Z|X}:\ \mathbf{z} \in \mathcal{T}_{P,V'}^N(Z|\mathbf{u}_0(m''),\, \mathbf{x}_{m'',a'',b''})} D(V'_{Z|X} \| W_{Z|X} | P).$$
Decoder $g_1$ makes an error if the pair of messages $(m, l)$ is transmitted but there exists $(m', l') \ne (m, l)$ such that, for some $V'_{Y|X}$,
$$\mathbf{y} \in \mathcal{T}_{P,V}^N(Y|\mathbf{u}_0(m), \mathbf{x}_{m,a,b}) \cap \mathcal{T}_{P,V'}^N(Y|\mathbf{u}_0(m'), \mathbf{x}_{m',a',b'}) \quad \text{and}$$
$$D(V'_{Y|X} \| W_{Y|X} | P) \le D(V_{Y|X} \| W_{Y|X} | P). \tag{21}$$
Decoder $g_2$ makes an error if the message m is transmitted but there exists $m'' \ne m$ such that, for some $V'_{Z|X}$ and some $(a'', b'')$,
$$\mathbf{z} \in \mathcal{T}_{P,V}^N(Z|\mathbf{u}_0(m), \mathbf{x}_{m,a,b}) \cap \mathcal{T}_{P,V'}^N(Z|\mathbf{u}_0(m''), \mathbf{x}_{m'',a'',b''}) \quad \text{and}$$
$$D(V'_{Z|X} \| W_{Z|X} | P) \le D(V_{Z|X} \| W_{Z|X} | P). \tag{22}$$
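A minimal numerical sketch of the minimum-divergence rule (toy channel and a two-codeword codebook, all values assumed for illustration): each candidate codeword induces a conditional type V on the received word, and the decoder picks the candidate whose V is closest to W in the conditional divergence D(V‖W|P):

```python
import math
from collections import Counter

# Toy channel W(y|x); the numbers are illustrative assumptions.
W = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

def cond_divergence(xw, yw):
    """D(V || W | P) for the conditional type V of yw given xw."""
    N = len(xw)
    counts = Counter(xw)
    pair = Counter(zip(xw, yw))
    d = 0.0
    for (a, b), n in pair.items():
        V = n / counts[a]                      # conditional type V(b|a)
        d += (counts[a] / N) * V * math.log(V / W[a][b])
    return d

codebook = {0: (0, 0, 0, 0, 1, 1), 1: (1, 1, 1, 1, 0, 0)}
y = (0, 0, 0, 1, 1, 1)
decoded = min(codebook, key=lambda m: cond_divergence(codebook[m], y))
```

Here the received word differs from codeword 0 in one position and from codeword 1 in five, so the divergence of the induced type from W is far smaller for candidate 0, and the rule selects it.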
We define the following sets of received vectors leading to decoding errors at the legitimate and eavesdropping receivers. If $m' \ne m$, the set of all received vectors at the legitimate receiver which can lead to an error is
$$\mathcal{B}_1(V_{Y|X}, V'_{Y|X}) \triangleq \mathcal{T}_{P,V}^N(Y|\mathbf{u}_0(m), \mathbf{x}_{m,a,b}) \cap \bigcup_{m' \ne m,\, a', b'} \mathcal{T}_{P,V'}^N(Y|\mathbf{u}_0(m'), \mathbf{x}_{m',a',b'}); \tag{23}$$
if $m' = m$ but $l' \ne l$, the corresponding set is
$$\mathcal{B}_2(V_{Y|X}, V'_{Y|X}) \triangleq \mathcal{T}_{P,V}^N(Y|\mathbf{u}_0(m), \mathbf{x}_{m,a,b}) \cap \bigcup_{l' \ne l,\, a', b'} \mathcal{T}_{P,V'}^N(Y|\mathbf{u}_0(m), \mathbf{x}_{m,a',b'}); \tag{24}$$
and if the eavesdropper decodes $m'' \ne m$, the set of all received vectors at the eavesdropping receiver which can lead to an error is
$$\mathcal{B}_3(V_{Z|X}, V'_{Z|X}) \triangleq \mathcal{T}_{P,V}^N(Z|\mathbf{u}_0(m), \mathbf{x}_{m,a,b}) \cap \bigcup_{m'' \ne m,\, a'', b''} \mathcal{T}_{P,V'}^N(Z|\mathbf{u}_0(m''), \mathbf{x}_{m'',a'',b''}). \tag{25}$$
The proof that the probabilities of decoding error at the legitimate and eavesdropping receivers decrease exponentially with exponents $E_1$ and $E_2$, respectively, is similar to the proof of $e_1(f, g_1, W_{Y|X}) \le \exp\{-N E_1\}$ and $e_2(f, g_2, W_{Z|X}) \le \exp\{-N E_2\}$ in Lemma 4 of [1]. Thus the proof of Lemma 3 is complete.
Step 2: Estimation of the equivocation rate.

We now estimate the equivocation rate of the message at the eavesdropper. Let $\mathbf{X}$ be the input RV of the channel corresponding to the random messages M, L and the encoder f. The possible values of $\mathbf{X}$ are the codewords $\mathbf{x}_{m,a,b}$. Then
$$H(L|\mathbf{Z}) \ge H(L|\mathbf{Z}, M) = H(L, \mathbf{Z}|M) - H(\mathbf{Z}|M)$$
$$= H(L, \mathbf{Z}, \mathbf{X}|M) - H(\mathbf{X}|M, L, \mathbf{Z}) - H(\mathbf{Z}|M)$$
$$= H(L, \mathbf{X}|M) + H(\mathbf{Z}|M, L, \mathbf{X}) - H(\mathbf{X}|M, L, \mathbf{Z}) - H(\mathbf{Z}|M)$$
$$\ge H(\mathbf{X}|M) + H(\mathbf{Z}|\mathbf{X}) - H(\mathbf{X}|M, L, \mathbf{Z}) - H(\mathbf{Z}|M). \tag{26}$$
We bound the terms in the last line separately.
Given $M = m$, $\mathbf{X}$ has $|\mathcal{A}| \times |\mathcal{B}|$ possible values. According to the definition of the encoding mapping we have
$$\frac{\Pr\{\mathbf{X} = \mathbf{x} \,|\, M = m\}}{\Pr\{\mathbf{X} = \mathbf{x}' \,|\, M = m\}} \le 2,$$
for all $\mathbf{x}, \mathbf{x}' \in \{\mathbf{x}_{m,a,b}\}_{a,b}$. Then
$$-\sum_{\mathbf{x} \in \mathcal{X}^N} \Pr\{\mathbf{X} = \mathbf{x}|M = m\} \log \Pr\{\mathbf{X} = \mathbf{x}|M = m\} \ge -\big(1 + \log \Pr\{\mathbf{X} = \mathbf{x}'|M = m\}\big).$$
Thus, by the definition of entropy, for given m we have
$$H(\mathbf{X}|M = m) \ge -1 - \log \Pr\{\mathbf{X} = \mathbf{x}'|M = m\} \overset{(a)}{=} -1 - \log \frac{1}{|\mathcal{A}| \times |\mathcal{B}|} = -1 + \log |\mathcal{A}| + \log |\mathcal{B}|,$$
where (a) follows from the uniform distribution of the codewords. Hence, for given m, we obtain
$$\frac{1}{N} H(\mathbf{X}|M = m) \ge \frac{1}{N}\big(\log|\mathcal{A}| + \log|\mathcal{B}|\big) - \frac{1}{N} = \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y|U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - \frac{1}{N}. \tag{27}$$
For the second term in (26), since
$$\frac{1}{N} H(\mathbf{Z}|\mathbf{X}) = \frac{1}{N} \sum_{\mathbf{x}} \Pr\{\mathbf{X} = \mathbf{x}\}\, H(\mathbf{Z}|\mathbf{X} = \mathbf{x}) = \sum_{\mathbf{x}} \Pr\{\mathbf{X} = \mathbf{x}\} \sum_{x \in \mathcal{X}} \frac{N(x|\mathbf{x})}{N} \Big( -\sum_{z} W_{Z|X}(z|x) \log W_{Z|X}(z|x) \Big) = \sum_{\mathbf{x}} \Pr\{\mathbf{X} = \mathbf{x}\}\, H_{P,W}(Z|X),$$
where $N(x|\mathbf{x})$ denotes the number of symbols in $\mathbf{x}$ that equal x, we have
$$\frac{1}{N} H(\mathbf{Z}|\mathbf{X}) = H_{P,W}(Z|X). \tag{28}$$
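Identity (28) rests on the channel being memoryless: the entropy of the output given a fixed input word decomposes letter by letter and depends on the word only through its type. A small numerical check, with an arbitrary made-up channel and input word:

```python
import math
from collections import Counter

# Toy channel W(z|x); the numbers are illustrative assumptions.
W = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
x = (0, 1, 1, 0, 1)
N = len(x)

def h(dist):
    """Entropy of a single-letter distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

per_letter = sum(h(W[a]) for a in x) / N          # (1/N) sum_n H(Z_n | x_n)
P = {a: c / N for a, c in Counter(x).items()}     # type of x
via_type = sum(P[a] * h(W[a]) for a in P)         # H_{P,W}(Z|X)
assert abs(per_letter - via_type) < 1e-12
```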
The third term in (26), $H(\mathbf{X}|M, L, \mathbf{Z})$, can be upper bounded by Fano's inequality as follows. Let us define
$$\hat{\mathbf{x}}(m, l, \mathbf{z}) \triangleq \begin{cases} \mathbf{x}_{m,a,b}, & \text{if there is a unique } b \text{ such that } (\mathbf{z}, \mathbf{x}_{m,a,b}) \in \mathcal{T}_{\varepsilon}^N(Z, X|\mathbf{u}_0), \\ \text{arbitrary}, & \text{otherwise}, \end{cases}$$
where a is determined by l according to the encoding scheme given in Lemma 3. It is easy to prove that
$$\Pr\big\{\mathbf{X} \ne \hat{\mathbf{x}}(M, L, \mathbf{Z})\big\} \le \varepsilon_N, \tag{29}$$
where $\varepsilon_N \to 0$ as $N \to \infty$.
Thus, according to (29) and using Fano's inequality, we obtain
$$\frac{1}{N} H(\mathbf{X}|M, L, \mathbf{Z}) \le \frac{1}{N}\Big(1 + \Pr\big\{\mathbf{X} \ne \hat{\mathbf{x}}(M, L, \mathbf{Z})\big\} \log\big(\exp\{N R_0\}\, |\mathcal{A}| \times |\mathcal{B}|\big)\Big) \le \frac{1}{N}\Big(1 + \varepsilon_N \log\big(\exp\{N R_0\}\, |\mathcal{A}| \times |\mathcal{B}|\big)\Big). \tag{30}$$
Finally, to bound the fourth term in (26), $H(\mathbf{Z}|M)$, we define an RV $\tilde{\mathbf{Z}}$ with values in $\mathcal{T}_{P,V}^N(Z|\mathbf{u}_0)$ for $V_{Z|X}$ such that $D(V_{Z|X} \| W_{Z|X} | P) \le E_2$:
$$\tilde{\mathbf{z}} \triangleq \begin{cases} \mathbf{z}, & \text{if } \mathbf{z} \in \mathcal{T}_{P,V}^N(Z|\mathbf{u}_0), \\ \text{arbitrary}, & \text{otherwise}. \end{cases}$$
We observe that
$$H(\mathbf{Z}|M) \le H(\mathbf{Z}, \tilde{\mathbf{Z}}|M) = H(\mathbf{Z}|\tilde{\mathbf{Z}}, M) + H(\tilde{\mathbf{Z}}|M). \tag{31}$$
The first term in (31) can be estimated by Fano's inequality as follows:
$$\frac{1}{N} H(\mathbf{Z}|\tilde{\mathbf{Z}}, M) = \frac{1}{N} \sum_{m} \Pr\{M = m\}\, H(\mathbf{Z}|\tilde{\mathbf{Z}}, M = m)$$
$$\le \frac{1}{N} \sum_{m} \Pr\{M = m\} \big(1 + \Pr\{\mathbf{Z} \ne \tilde{\mathbf{Z}} | M = m\} \log |\mathcal{Z}|^N\big)$$
$$= \frac{1}{N} + \sum_{m} \Pr\{M = m\}\, \Pr\big\{\mathbf{z} \notin \mathcal{T}_{P,V}^N(Z|\mathbf{u}_0) \,\big|\, M = m\big\} \log |\mathcal{Z}|$$
$$\le 1/N + \exp\{-N E_2\} \log |\mathcal{Z}|, \tag{32}$$
and for the second term in (31),
$$\frac{1}{N} H(\tilde{\mathbf{Z}}|M) = \frac{1}{N} \sum_{m} \Pr\{M = m\}\, H(\tilde{\mathbf{Z}}|M = m) \le \sum_{m} \Pr\{M = m\}\, H_{P,V}(Z|U_0) = H_{P,V}(Z|U_0). \tag{33}$$
From (32) and (33) we obtain
$$\frac{1}{N} H(\mathbf{Z}|M) \le 1/N + \exp\{-N E_2\} \log |\mathcal{Z}| + H_{P,V}(Z|U_0). \tag{34}$$
By substituting (27), (28), (30) and (34) into (26), we obtain
$$\lim_{N \to \infty} \frac{1}{N} H(L|\mathbf{Z}) \ge \lim_{N \to \infty} \Big[ \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y|U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - \frac{2}{N}$$
$$+ H_{P,W}(Z|X) - H_{P,V}(Z|U_0) - \exp\{-N E_2\} \log|\mathcal{Z}| - \frac{1}{N}\big(1 + \varepsilon_N \log\big(\exp\{N R_0\}\, |\mathcal{A}| \times |\mathcal{B}|\big)\big) \Big]$$
$$= \lim_{N \to \infty} \Big[ \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y|U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - \frac{2}{N}$$
$$- I_P(X \wedge Z|U_0) - \exp\{-N E_2\} \log|\mathcal{Z}| - \frac{1}{N}\big(1 + \varepsilon_N \log\big(\exp\{N R_0\}\, |\mathcal{A}| \times |\mathcal{B}|\big)\big) \Big].$$
Since the inequality is valid for N large enough, we conclude
$$\lim_{N \to \infty} \frac{1}{N} H(L|\mathbf{Z}) \ge \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y|U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - I_P(X \wedge Z|U_0).$$
Thus, from the definition of $R_e$, for N large enough the following inequality holds:
$$R_e \le \min_{V_{Y|X}:\, D(V_{Y|X} \| W_{Y|X} | P) \le E_1} \big| I_{P,V}(X \wedge Y|U_0) + D(V_{Y|X} \| W_{Y|X} | P) - E_1 - \varepsilon/4 \big|^+ - I_P(X \wedge Z|U_0). \tag{35}$$
Hence the region $\mathcal{R}_2^*(E)$ is E-achievable due to (35) and Lemma 3. The proof of Theorem 1 is complete.
IV. CONCLUSION

In this paper, a random coding bound for the E-capacity region of the wiretap channel is derived. When E → 0, the random coding bound coincides with the capacity region of the wiretap channel obtained by Csiszár and Körner.
ACKNOWLEDGMENT

The author would like to thank Professor Evgueni Haroutunian for his helpful suggestions and comments.
REFERENCES

[1] N. Afshar, E. Haroutunian and M. Haroutunian, "Random coding bound for E-capacity region of the broadcast channel with confidential messages," Journal of Communication in Information and Systems, vol. 12, no. 2, 2012, pp. 131-154.
[2] I. Csiszár and J. Körner, "Broadcast channels with confidential messages," IEEE Transactions on Information Theory, vol. IT-24, no. 3, 1978, pp. 339-348.
[3] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, New York, Wiley, 1981.
[4] A. El Gamal and Y.-H. Kim, Network Information Theory, Cambridge University Press, New York, 2011.
[5] E. A. Haroutunian, "Upper estimate of transmission rate for memoryless channel with countable number of output signals under given error probability exponent," (in Russian) 3rd All-Union Conference on Theory of Information Transmission and Coding, Uzhgorod, Publishing House of the Uzbek Academy of Sciences, 1967, pp. 83-86.
[6] E. A. Haroutunian, "Estimates of the error exponent for the semi-continuous memoryless channel," (in Russian) Problemy Peredachi Informatsii, vol. 4, 1968, pp. 37-48.
[7] E. A. Haroutunian, "Combinatorial method of construction of the upper bound for E-capacity," (in Russian) Mezhvuz. Sbornik Nauchnykh Trudov, Matematika, Yerevan, vol. 1, 1982, pp. 213-220.
[8] E. A. Haroutunian and B. Belbashir, "Lower bound of the optimal transmission rate depending on given error probability exponent for discrete memoryless channel and for asymmetric broadcast channel," (in Russian) Abstracts of Papers of the 6th International Symposium on Information Theory, Tashkent, USSR, 1984, pp. 19-21.
[9] E. A. Haroutunian, "On bounds for E-capacity of DMC," IEEE Transactions on Information Theory, vol. IT-53, no. 11, 2007, pp. 4210-4220.
[10] E. A. Haroutunian, M. E. Haroutunian and A. N. Harutyunyan, "Reliability criteria in information theory and in statistical hypothesis testing," Foundations and Trends in Communications and Information Theory, vol. 4, nos. 2-3, 2008.
[11] Y. Liang, H. V. Poor and S. Shamai, "Information theoretic security," Foundations and Trends in Communications and Information Theory, vol. 5, nos. 4-5, 2009.
[12] A. D. Wyner, "The wire-tap channel," Bell System Technical Journal, vol. 54, no. 8, 1975, pp. 1355-1387.