
Main network


The CARENET project mainly focuses on the design of cache-enabled networks.

Our network setting is illustrated in the following figure. In this network, Access Points (APs) are distributed randomly in a circular area. The APs are connected to an IP router through links of infinite capacity, and the IP router has access to a library of N files. Users are equipped with cache memories in which they can store part of the library, and they are mobile. Each user can receive data from the nearby APs that lie within a circle around it.
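
As a rough illustration of this geometry, the following Python sketch (not project code; the deployment radius, number of APs, and communication radius are arbitrary assumptions) places APs uniformly at random in a disk and lists the APs that a given user can reach:

```python
# Illustrative sketch of the network setting described above.
# All numerical values (area radius, AP count, communication radius)
# are placeholder assumptions, not project parameters.
import numpy as np

rng = np.random.default_rng(0)

R_net = 1000.0   # radius of the circular deployment area (assumed)
n_aps = 50       # number of APs (assumed)
r_comm = 200.0   # communication radius around a user (assumed)

# Uniform points in a disk: the sqrt on the radius avoids clustering at the center.
radii = R_net * np.sqrt(rng.uniform(size=n_aps))
angles = rng.uniform(0.0, 2.0 * np.pi, size=n_aps)
aps = np.column_stack((radii * np.cos(angles), radii * np.sin(angles)))

user = np.array([100.0, -50.0])             # an arbitrary user position
dists = np.linalg.norm(aps - user, axis=1)
serving_aps = np.flatnonzero(dists <= r_comm)
print(f"APs within {r_comm:.0f} units of the user: {serving_aps.tolist()}")
```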

Project Summary

Wireless communication networks are the essential connectivity tissue of the modern digital age. Wireless data traffic is predicted to increase by almost three orders of magnitude in the next five years. It is unlikely that such increase can be tackled by an incremental “more-of-the-same” approach. This proposal stems from the observation that the killer application for wireless networks is on-demand access to Internet content. CARENET advocates a novel content-aware approach to wireless networks design that can provably solve the scalability problem of current systems, thus supporting the paradigmatic shift “from Gigabits per second for a few to Terabytes per month for all”. CARENET’s vision is to serve an arbitrarily large number of users with bounded transmission resources (bandwidth, number of transmit antennas, and power). The fundamental question is: how can such a per-user throughput scalability be achieved in the presence of on-demand requests, for which users do not access simultaneously the same content? CARENET builds on a novel information theoretic formulation of content-aware networks and on several recent results in information theory, network coding, channel coding, and protocol design, stimulated by the PI’s recent work. Key elements of the proposed content-aware architectures are new caching strategies, where content is stored across the wireless network nodes. These strategies are supported by the ever-growing on-board memory of wireless devices and by the new features of the forthcoming 5G-like technology. Our thesis is that scalability is possible through the novel content-aware design, while it is information theoretically impossible otherwise. Our overarching goal envisions the delivery of one Terabyte per month to each user at an affordable cost and good Quality of Experience, rather than the traditional Gigabit per second peak rates targeted by conventional technology development.

Coded Caching


The coded caching scheme proposed by Maddah-Ali and Niesen considers the delivery of items (files) from a content library of N files to K users over the so-called single bottleneck link network. The single bottleneck link is a deterministic error-free model in which the multicast rate to all users is fixed irrespective of the number of users. Each user is equipped with a cache memory able to store M files. For a fixed cache ratio M/N, coded caching yields per-user throughput scalability, since the shared-link delivery load remains bounded as the number of users K grows.
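
For concreteness, the following minimal Python sketch (illustrative parameters only) evaluates the well-known MAN shared-link load R = K(1 - M/N)/(1 + K M/N), which holds when K M/N is an integer; other memory values follow by memory sharing:

```python
# Minimal sketch of the Maddah-Ali--Niesen (MAN) delivery load
# R(M) = K (1 - M/N) / (1 + K M/N), valid when t = K M / N is an integer
# (general M is handled by memory sharing between integer points).
def man_load(K: int, N: int, M: float) -> float:
    """Normalized number of file transmissions on the shared link."""
    return K * (1 - M / N) / (1 + K * M / N)

N, M = 1000, 100          # library size and cache size (example values)
for K in (10, 100, 1000, 10000):
    R = man_load(K, N, M)
    print(f"K={K:>6}: shared-link load R={R:6.2f}, "
          f"uncoded load K(1-M/N)={K*(1-M/N):8.1f}")
# As K grows with M/N fixed, R saturates at N/M - 1 = 9, so the load per
# user vanishes: this is the per-user throughput scalability.
```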

There are many challenges in designing cache-enabled networks in the aforementioned setting. In the following parts, we propose schemes to tackle these problems:

Interference management

This section focuses on a variant of the Maddah-Ali and Niesen scheme, subsequently proposed for the so-called combination network, where the multicast message is further encoded by an MDS code and the MDS-coded blocks are transmitted simultaneously from different nodes. In the proposed system, each user is equipped with an antenna array and can select a certain number of Remote Radio Head (RRH) transmissions to decode, while zero-forcing some of the others. We study the performance of the proposed system when users and RRHs are distributed according to two-dimensional homogeneous Poisson point processes and the propagation is affected by Rayleigh fading and distance-dependent pathloss.
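
The MDS ingredient can be illustrated with a toy example: the multicast message is split into L blocks and encoded into H coded blocks, one per transmitting node, so that any L received blocks suffice for decoding. The sketch below uses a real-valued Vandermonde generator purely for illustration (a deployed system would rather use, e.g., Reed-Solomon codes); L and H are arbitrary assumptions:

```python
# Hedged sketch of the MDS idea behind coded multipoint multicasting:
# encode L message blocks into H coded blocks; decoding ANY L of them
# recovers the message. A real Vandermonde generator is used only for
# illustration (any L of its rows form an invertible Vandermonde matrix).
import numpy as np

L, H = 3, 6                        # message blocks, coded blocks (assumed)
rng = np.random.default_rng(1)
message = rng.standard_normal(L)   # the multicast message as L real symbols

nodes = np.arange(1, H + 1, dtype=float)
G = np.vander(nodes, L, increasing=True)   # H x L generator matrix
coded_blocks = G @ message                 # block h is sent by node h

received = [1, 3, 5]                       # indices of the blocks a user decodes
recovered = np.linalg.solve(G[received], coded_blocks[received])
print(np.allclose(recovered, message))     # True: any L blocks suffice
```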

Related Publications

[1] Mozhgan Bayat, Ratheesh K. Mungara, and Giuseppe Caire. "Coded caching in a cell-free SIMO network." WSA 2018; 22nd International ITG Workshop on Smart Antennas. VDE, 2018.

[2] Mozhgan Bayat, Ratheesh K. Mungara, and Giuseppe Caire. "Achieving Spatial Scalability for Coded Caching via Coded Multipoint Multicasting." IEEE Transactions on Wireless Communications 18.1 (2019): 227-240.

Two-hop Fog Radio Access Network (Fog-RAN) architecture

We first consider the case where each access point is equipped with a cache. We propose a novel cache-aided Fog Radio Access Network (Fog-RAN) architecture including a Macrocell Base Station (MBS, e.g., the IP router in the figure), several Small-cell Base Stations (SBSs, e.g., the access points), and many users without caches (e.g., the mobile phones). For this novel Fog-RAN model, the fundamental tradeoff among (a) the amount of cache memory at the SBSs, (b) the load on the fronthaul link between the MBS and the SBSs, and (c) the aggregate communication load among the SBSs is studied under the standard worst-case demand scenario. Novel converse and achievable bounds are derived, which are shown to be within a constant multiplicative gap of each other.

Related Publications

[3] Kai Wan, Daniela Tuninetti, Mingyue Ji, Giuseppe Caire, "A Novel Cache-aided Fog-RAN Architectures," accepted in IEEE International Symposium on Information Theory (ISIT), Jan. 2019.

[4] Kai Wan, Daniela Tuninetti, Mingyue Ji, Giuseppe Caire, "On the Fundamental Limits of Two-Hop Fog-RAN Cache-aided Networks," submitted to IEEE Transactions on Information Theory, Apr. 2019.

Quality of Service

We apply the coded caching scheme proposed by Maddah-Ali and Niesen to a multipoint multicasting video paradigm. In this work, we propose a two-hop wireless network for video multicasting, where the common coded multicast message is transmitted through different single-antenna Edge Nodes (ENs) to multiple-antenna users. Each user can decide which ENs to decode by using a zero-forcing receiver. Motivated by Scalable Video Coding (SVC), we consider successive refinement source coding in order to provide a "softer" tradeoff between the number of decoded ENs and the source distortion at each user receiver.
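
To illustrate the successive refinement idea (a generic sketch, not the exact scheme of the paper), assume a unit-variance Gaussian source, whose distortion-rate function is D(R) = 2^(-2R) and which is successively refinable, so every additional decoded EN layer simply adds rate:

```python
# Hedged illustration of successive refinement: a unit-variance Gaussian
# source has D(R) = 2**(-2R) and is successively refinable, so decoding
# more layers accumulates rate and lowers distortion. The per-layer rate
# and number of layers below are assumptions.
layer_rate = 0.5          # bits per source symbol carried by each EN layer (assumed)
max_layers = 4            # number of ENs a user could decode (assumed)

for decoded_ens in range(1, max_layers + 1):
    total_rate = decoded_ens * layer_rate
    distortion = 2 ** (-2 * total_rate)   # MSE relative to the source variance
    print(f"decoded ENs: {decoded_ens}  rate: {total_rate:.1f} bit/symbol  "
          f"distortion: {distortion:.3f}")
# Each additional decoded EN lowers the distortion, giving the 'softer'
# tradeoff between the number of decoded ENs and video quality.
```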

Related Publications

[5] Mozhgan Bayat, Cagkan Yapar, and Giuseppe Caire. "Spatially Scalable Lossy Coded Caching." 2018 15th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2018.

Routing in coded caching

This work extends the coded caching scheme to two-hop relay networks in which H relays communicate with K cache-equipped users, each of which is connected to a random subset of the relays. We present a novel approach based on a linear optimization problem that routes the coded multicast messages efficiently: by Linear Programming (LP), each Maddah-Ali and Niesen (MAN) multicast message is routed to its corresponding demanding users.
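
A toy version of such a routing LP can be sketched with SciPy as follows; the topology, the set of multicast messages, and the objective (minimizing the worst-case server-to-relay load) are assumptions made for illustration and do not reproduce the exact formulation of the paper:

```python
# Hedged toy sketch of routing-based delivery (not the paper's exact LP).
# Each MAN multicast message must reach all of its demanding users, a user
# can only be reached through the relays it is connected to, and the message
# is split fractionally over those relays. We minimize the worst-case
# server-to-relay (first hop) load. The topology below is made up.
from itertools import product
import numpy as np
from scipy.optimize import linprog

H = 3                                               # number of relays
connectivity = {0: [0, 1], 1: [1, 2], 2: [0, 2]}    # user -> reachable relays
messages = {0: [0, 1], 1: [1, 2], 2: [0, 2]}        # message -> demanding users

x_idx = {}                                          # (msg, user, relay) -> column
for m, users in messages.items():
    for u in users:
        for h in connectivity[u]:
            x_idx[(m, u, h)] = len(x_idx)
y_idx = {(m, h): len(x_idx) + i                     # relay copy of each message
         for i, (m, h) in enumerate(product(messages, range(H)))}
t_idx = len(x_idx) + len(y_idx)                     # worst-case first-hop load
n_vars = t_idx + 1

c = np.zeros(n_vars)
c[t_idx] = 1.0                                      # minimize t

A_eq, b_eq = [], []
for m, users in messages.items():                   # each user gets the full message
    for u in users:
        row = np.zeros(n_vars)
        for h in connectivity[u]:
            row[x_idx[(m, u, h)]] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

A_ub, b_ub = [], []
for (m, u, h), col in x_idx.items():                # y[m,h] >= x[m,u,h] (multicast)
    row = np.zeros(n_vars)
    row[col] = 1.0
    row[y_idx[(m, h)]] = -1.0
    A_ub.append(row)
    b_ub.append(0.0)
for h in range(H):                                  # sum_m y[m,h] <= t
    row = np.zeros(n_vars)
    for m in messages:
        row[y_idx[(m, h)]] = 1.0
    row[t_idx] = -1.0
    A_ub.append(row)
    b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("worst-case first-hop load per relay:", res.fun)
```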

Related Publications

[6] Mozhgan Bayat, Kai Wan, and Giuseppe Caire. "Routing-Based Delivery in Combination-Type Networks with Random Topology." 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2019.

D2D communication with caches

We also consider the case where the cache-equipped mobile devices can communicate with each other (device-to-device, D2D). For such a system setting, we provide the exact characterization of the load-memory tradeoff, by deriving both the minimum average and the minimum peak sum-loads of the links between devices, for a given individual memory size available at each user. We also propose an extension of the presented scheme that provides robustness against random user inactivity.
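
As a hedged numerical comparison (illustrative parameters, integer K M/N assumed), the sketch below evaluates the classical D2D coded caching sum-load (N/M)(1 - M/N) of Ji, Caire, and Molisch against the MAN shared-link load:

```python
# Hedged comparison with example numbers: D2D coded caching sum-load
# R_D2D = (N/M)(1 - M/N) versus the MAN shared-link load
# R_MAN = K(1 - M/N)/(1 + K M/N). Both assume t = K M / N is a positive
# integer (otherwise memory sharing is used).
def d2d_load(K: int, N: int, M: float) -> float:
    # Independent of K once M/N is fixed; K is kept for parallelism.
    return (N / M) * (1 - M / N)

def man_load(K: int, N: int, M: float) -> float:
    return K * (1 - M / N) / (1 + K * M / N)

N, M = 1000, 100
for K in (20, 100, 500):
    print(f"K={K:>4}: D2D sum-load={d2d_load(K, N, M):5.2f}, "
          f"shared-link load={man_load(K, N, M):5.2f}")
# Both loads stay bounded (around N/M - 1 = 9) as K grows, so per-user
# throughput also scales in the D2D setting.
```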

Related Publications

[7] Cagkan Yapar, Kai Wan, Rafael F. Schaefer, Giuseppe Caire, "On D2D Caching with Uncoded Cache Placement," accepted in IEEE International Symposium on Information Theory (ISIT), Jan. 2019.

[8] Cagkan Yapar, Kai Wan, Rafael F. Schaefer, Giuseppe Caire, "On the Optimality of D2D Coded Caching with Uncoded Cache Placement and One-shot Delivery," submitted to IEEE Transactions on Communications, Mar. 2019.

Correlated Sources

The above works assume that the N files in the library are independent. However, in practice different files may overlap (e.g., videos, image streams, etc.). We therefore consider cache-aided systems with correlated sources. Given an integer r smaller than the library size, correlation is modelled as follows: each r-subset of files contains a common block. By a novel caching scheme based on interference alignment (i.e., each user can cancel all non-intended 'symbols' in all multicast messages), we characterize the optimal load of the considered problem within a factor of two.
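
The correlation model can be made concrete with a small enumeration (the values of N and r below are arbitrary examples): there is one common block per r-subset of files, each file consists of the C(N-1, r-1) blocks whose index set contains it, and any two files share C(N-2, r-2) blocks:

```python
# Sketch of the correlation model described above: with N files and an
# integer r < N, there is one common block per r-subset of files, and
# file i consists of all blocks whose subset contains i.
from itertools import combinations
from math import comb

N, r = 5, 2                                     # example values
blocks = list(combinations(range(N), r))        # one block per r-subset
file_blocks = {i: [S for S in blocks if i in S] for i in range(N)}

print(f"total distinct blocks: {len(blocks)} (= C({N},{r}) = {comb(N, r)})")
print(f"blocks per file:       {len(file_blocks[0])} (= C({N-1},{r-1}) = {comb(N-1, r-1)})")
print("file 0 is made of the blocks indexed by:", file_blocks[0])
# Two distinct files share exactly C(N-2, r-2) blocks, which quantifies
# the pairwise overlap induced by this model.
```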

Related Publications

[9] Kai Wan, Daniela Tuninetti, Mingyue Ji, Giuseppe Caire, "On Coded Caching with Correlated Files," accepted in IEEE International Symposium on Information Theory (ISIT), Jan. 2019.

[10] Kai Wan, Daniela Tuninetti, Mingyue Ji, Giuseppe Caire, "On the Fundamental Limits of Coded Caching with Correlated Files," to be submitted to IEEE Transactions on Information Theory.

Cache replication at the user side to reduce the subpacketization order

In the context of coded caching in a K-user network, we show that by using a simple semi-decentralized content base replication scheme and exploiting spatial reuse, we can achieve very good file delivery throughput with a small file subpacketization order. This is another manifestation of the fact that exploiting spatial reuse is a key approach to tackling the subpacketization-order problem of coded caching, a fact that has already been noticed and exploited in several other system scenarios.
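
A back-of-the-envelope illustration of the subpacketization issue (example numbers, not the paper's exact scheme): the MAN scheme splits each file into C(K, KM/N) subfiles, whereas replicating the content placement across g spatially separated groups and coding only within each group reduces this to C(K/g, (K/g)M/N), with spatial reuse allowing the groups to be served concurrently:

```python
# Hedged numerical illustration of the subpacketization reduction obtained
# by grouping users and replicating the content placement across groups.
# The parameters K, N, M and the group sizes are arbitrary examples.
from math import comb

K, N, M = 64, 256, 32            # users, library size, cache size
t = K * M // N                   # caching parameter (assumed integer)

print(f"full MAN subpacketization: C({K},{t}) = {comb(K, t):.3e}")
for g in (2, 4, 8):
    Kg = K // g                  # users per group
    tg = Kg * M // N
    print(f"g={g}: per-group subpacketization C({Kg},{tg}) = {comb(Kg, tg)}")
```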

Related Publications

[11] Amirreza Asadzedeh and Giuseppe Caire. "Coded Caching with Small Subpacketization via Spatial Reuse and Content Base Replication." Accepted in IEEE International Symposium on Information Theory (ISIT), 2019.

Acknowledgement


The CARENET project has received funding from the European Research Council (ERC) under the European Union's Advanced Grants call (AdG, PE7, ERC-2017-ADG).
