Wireless network information flow

Suppose we want to send information from a source to a destination over a wireless network. There might be (relay) nodes in the network that are able to listen to the transmission and help with the communication. A fundamental question is how to instantiate such cooperative communication. Furthermore, how can we characterize optimal cooperation strategies? In order to architect such cooperation, we can utilize an inherent property of wireless networks: nodes communicate over a shared medium. The consequence of this (in direct contrast to wired networks) is that transmissions are broadcast and interfere with each other at a receiver. The fundamental challenge is to model and utilize these complex interactions for cooperative information flow. One strategy is to attempt to create point-to-point links by either scheduling the transmissions or ignoring the interference. One can show that such strategies can incur significant losses in communication efficiency (rates, energy, etc.). Since the transmitted signals contain messages and are not noise, one could potentially utilize these interactions to set up cooperation between the wireless nodes to aid the information flow. A fundamental question is how to do so in general relay networks.

We have developed linear deterministic models that capture these interactions and show that, for such models, one can obtain an information-theoretic max-flow min-cut result, in analogy to the classical wireline Ford-Fulkerson result. We have extended these ideas to general deterministic functions and have shown achievable rates for such networks with broadcasting at the transmitters and interference at the receivers. In particular, we show that if the optimizing distribution for the information-theoretic cut-set bound is a product distribution, then we have a complete characterization of the achievable rates for such networks. For linear deterministic finite-field models this is indeed the case, and for such cases we have a generalization of the max-flow min-cut theorem. This result applies to arbitrary networks, which could also have cycles.
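
To make this concrete, the characterization can be sketched as follows (the notation here is ours and purely illustrative): letting Ω range over cuts that separate the source S from a destination D, and letting G_{Ω,Ω^c} denote the finite-field transfer matrix across a cut,

```latex
% Illustrative statement of the cut-set characterization for deterministic networks.
% General deterministic networks whose cut-set bound is optimized by a product distribution:
C \;=\; \max_{\prod_i p(x_i)} \;\min_{\Omega:\, S \in \Omega,\ D \in \Omega^c}
        H\!\left(Y_{\Omega^c} \,\middle|\, X_{\Omega^c}\right)
% Linear deterministic finite-field networks (the max-flow min-cut analogue):
C \;=\; \min_{\Omega:\, S \in \Omega,\ D \in \Omega^c}
        \operatorname{rank}\, G_{\Omega,\Omega^c}
```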

Using the insights from the deterministic analysis, we develop approximate characterizations for noisy Gaussian networks. We show that the achievable rate, for single unicast or multicast (single source, multiple destinations each interested in the complete source message), is within a constant number of bits of the information-theoretic cut-set upper bound on the capacity of these networks. This constant depends on the topology of the network, but not on the values of the channel gains. Therefore, we uniformly characterize the capacity of Gaussian relay networks within a constant number of bits, for all channel parameters. This result applies to arbitrary Gaussian relay networks.
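
Schematically (again with illustrative notation), if \bar{C} denotes the cut-set upper bound of the Gaussian network, the result says that

```latex
% Approximate max-flow min-cut for Gaussian relay networks (illustrative notation).
\bar{C} \;=\; \max_{p(\{x_i\})} \;\min_{\Omega}
        I\!\left(X_{\Omega};\, Y_{\Omega^c} \,\middle|\, X_{\Omega^c}\right),
\qquad
\bar{C} - \kappa \;\le\; C \;\le\; \bar{C},
```

where the gap κ depends only on the number of nodes and the topology of the network, and not on the channel gains or other channel parameters.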

In order to show the approximate max-flow min-cut characterization for Gaussian networks, we developed a new relaying strategy called quantize-map-forward (QMF). None of the previously known strategies achieve approximate optimality for multicasting over general networks. QMF is a simple and robust scheme in which the relay quantizes its received signal and maps it directly onto its transmit signal, thereby succinctly summarizing what it received. The relaying strategy is therefore (almost) oblivious to the network topology and channel conditions. The QMF strategy uses the insights from the deterministic framework and is, philosophically, the network coding technique generalized to noisy wireless networks.
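
As a rough illustration of the relay operation only (not the full coding-theoretic construction; the function name qmf_relay, the scalar quantizer, and the hash-based mapping below are our own simplifying assumptions), a single QMF relay can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

BLOCK_LEN = 256        # symbols per transmission block
NOISE_VAR = 1.0        # receiver noise variance; the quantizer step is set at this scale
POWER = 10.0           # relay transmit power constraint (per symbol)
CODEBOOK_SIZE = 2**10  # relay codewords, shared in advance with the destination

# Random Gaussian codebook used by the "map" step.
codebook = rng.standard_normal((CODEBOOK_SIZE, BLOCK_LEN))

def qmf_relay(y_received):
    """Toy quantize-map-forward relay operation.

    1. Quantize: scalar-quantize the received block at roughly the noise
       resolution (step ~ sqrt(NOISE_VAR)); nothing finer is retained.
    2. Map: send the quantized sequence directly to an index of the relay's
       own codebook (here via a hash) -- no binning rate has to be chosen.
    3. Forward: transmit the selected codeword, scaled to the power constraint.
    """
    step = np.sqrt(NOISE_VAR)
    quantized = np.round(y_received / step).astype(np.int64)

    index = hash(quantized.tobytes()) % CODEBOOK_SIZE

    x = codebook[index]
    return x * np.sqrt(POWER * BLOCK_LEN) / np.linalg.norm(x)

# Example: the relay hears a noisy superposition of upstream transmissions.
y = rng.standard_normal(BLOCK_LEN) * np.sqrt(POWER + NOISE_VAR)
x_relay = qmf_relay(y)
```

The point mirrored in this sketch is that the relay never has to choose a binning rate or tailor its summary to a particular destination; it simply summarizes what it heard and forwards the summary.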

There are subtle but critical differences between the QMF strategy and the natural extension of compress-forward to networks, for the following reasons. The compress-forward scheme quantizes the received signal and then maps a digital bin index onto the transmit sequence. This means that we need to make choices about the binning rates at each relay node. The QMF scheme, however, maps the quantized sequence directly onto the transmit sequence, and therefore does not make such choices about binning rates. In fact, this gives the scheme a 'universality' property, which allows the same relay operation to work for multiple destinations (multicast) and network situations (compound networks); a property that could fail to hold if specific choices of binning rates were made. Moreover, our scheme, unlike the classical compress-forward scheme, does not require the quantized values at the relays to be reconstructed at the destination while it is attempting to decode the transmitted message. These are the essential differences between our scheme and traditional compress-forward, or the natural network generalization of it.

We have also extended this approach to multi-source multicast, in which multiple sources are to be received at multiple destinations, each of which requires all of the source messages.