Source-channel separation in networks

One of the important architectural insights from information theory is Shannon's source-channel separation theorem. For point-to-point channels, the separation theorem shows that one can compress a source into bits and then communicate those bits with an independently designed channel code, with a purely digital interface between the two stages; and that such an architecture is (asymptotically in block size) optimal. The importance of this result is that one can ‘layer’ the architecture, separating the data compression into bits from the ‘physical layer’ coding that combats noise. The optimality of this attractive architecture is known to break down in networks, for example over broadcast channels or multiple access channels. Nonetheless, this architecture is the basis for network layering in many current network architectures. We have therefore been studying the ‘cost’ of separation, that is, how much is lost by insisting on a separated architecture. We have also studied special situations where one can exhibit explicit optimal (hybrid) source-channel coding strategies.
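For reference, the classical point-to-point statement can be written compactly (this is the standard Shannon result; the notation below is ours): for a memoryless source with rate-distortion function R(D) and a memoryless channel of capacity C, used κ times per source symbol,

```latex
% Point-to-point source-channel separation (Shannon):
% distortion D is achievable by a separate source code and channel code
% whenever R(D) < kappa * C, and no scheme (separated or not) can do
% better than the converse boundary R(D) <= kappa * C.
\[
  R(D) < \kappa\, C \;\Longrightarrow\; D \text{ is achievable},
  \qquad
  R(D) > \kappa\, C \;\Longrightarrow\; D \text{ is not achievable}.
\]
```

The converse holds for arbitrary (joint) source-channel codes, which is precisely why separation loses nothing in the point-to-point setting.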

For lossy source coding in general communication networks we have shown that the separation approach is optimal in two general scenarios, and approximately optimal in a third. These results are established without explicitly characterizing the achievable distortions, or the rate-distortion regions of a separation approach. Such implicit characterizations of properties (first demonstrated in a related problem by Koetter, Effros and Medard, 2009) could provide a new tool for gaining insight into network information theory problems.

The first general scenario, where we demonstrate optimality of separation, is when the memoryless sources at the source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in the network are synchronized, orthogonal and memoryless. For discrete networks, this result generalizes the result of Koetter, Effros and Medard (2009) for message transmission over noisy graphs, where they demonstrated a separation of network coding and channel coding. In our problem, the extracted pure network source-coding problem, owing to the network connectivity, reveals the importance of interaction in such network data compression problems. We believe this motivates a distinct research direction into interactive network source coding, which has not received a great deal of attention in the literature.

The second general scenario, where we demonstrate optimality of separation, is when the memoryless sources are mutually independent, each of which is to be reconstructed at only one destination within a certain distortion, while the channels are general, including finite-memory multi-user channels such as multiple access, broadcast, interference and relay channels.

The third general scenario, where we demonstrate approximate optimality of separation, is a relaxed version of the second. Here we allow each independent source to be reconstructed at multiple destinations, with different distortion requirements. The network is still assumed to be general, i.e., it may include broadcast, multiple access, interference and finite memory, but we restrict our attention to distortion measures that are difference distortion measures. For this restricted class of distortion measures we demonstrate that the loss from separation is bounded by a constant number of bits. For the important special case of quadratic distortion measures, this constant is shown to be universal across all required distortions, and is upper bounded by 0.5 bits per user requiring the same source.
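To give a concrete sense of what a half-bit rate constant means under quadratic distortion (our illustration, not a step in the proof of the result above): for a Gaussian source of variance σ² the rate-distortion function is

```latex
% Gaussian source, quadratic (mean-squared error) distortion:
\[
  R(D) = \frac{1}{2}\log_2\frac{\sigma^2}{D},
  \qquad 0 < D \le \sigma^2,
\]
% so an extra half bit of rate buys exactly a factor of two in distortion:
\[
  R(D) + \tfrac{1}{2} = \frac{1}{2}\log_2\frac{\sigma^2}{D/2} = R\!\left(\tfrac{D}{2}\right).
\]
```

In other words, for this source a 0.5-bit rate penalty corresponds to at most a factor-of-two gap in achieved mean-squared distortion.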

The above work generalizes previous work on separation over Gaussian broadcast channels, where approximate optimality of separation was established. For a special case of bivariate Gaussian sources over Gaussian broadcast channels, we have established the optimality of a hybrid analog-digital source-channel coding scheme, which we believe is the first such example.
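A simple sanity check of why analog (uncoded) schemes can compete with separation in the Gaussian setting, which is the intuition underlying such hybrid schemes: in the matched point-to-point case (unit bandwidth ratio), uncoded transmission of a Gaussian source over an AWGN channel achieves exactly the distortion that separation promises. The short script below verifies this well-known identity numerically; the function names and parameter values are ours.

```python
import math

def awgn_capacity(snr):
    """Capacity of an AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def gaussian_rate_distortion(var, d):
    """Rate-distortion function of a Gaussian source under quadratic distortion."""
    return 0.5 * math.log2(var / d)

# Uncoded ("analog") transmission of a unit-variance Gaussian source over an
# AWGN channel, one channel use per source symbol: scale the source to meet
# the power constraint and apply the MMSE estimator at the receiver.  The
# resulting mean-squared distortion is var / (1 + snr).
var, snr = 1.0, 7.0
d_uncoded = var / (1.0 + snr)

# Separation requires R(D) <= C; uncoded transmission meets this with equality,
# i.e. the analog scheme achieves the separation-optimal distortion.
assert abs(gaussian_rate_distortion(var, d_uncoded) - awgn_capacity(snr)) < 1e-12
```

The equality R(D) = C holds for every SNR in this matched case, which is why no digital scheme can improve on the one-shot analog mapping here; the broadcast setting breaks this coincidence and motivates the hybrid analog-digital constructions mentioned above.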