My research interests focus mainly on applications of network coding to wireless networks. The main challenge in the projects I worked on was usually to find practical ways to implement schemes that in theory promise good performance gains. In the past I also worked on the dependability of systems, in particular on how to make systems resilient to configuration errors. Below you can find a short summary of the projects I worked on and a list of related publications.

Network coding on smartphones

Video streaming on smartphones

In this project we study how to jointly use cellular (infrastructure-based) connections and local ad-hoc links between a set of cooperating mobile devices to deliver an improved video streaming experience. In our setup the users of a set of devices are interested in consuming the same content hosted in the cloud. Thanks to local collaboration, the devices can cooperatively use their cellular links to download the content at a higher rate than they could without collaboration. In our work we study how to efficiently use the different interfaces to achieve the best possible performance.
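
As a rough illustration of the scheduling problem that MicroDownload (see the first abstract below) addresses, the following Python sketch assigns video segments to the phone expected to finish downloading them earliest, given each phone's measured cellular rate. The function and parameter names are hypothetical and this is not the actual implementation.

    import heapq

    def schedule_segments(segment_sizes, download_rates):
        """Greedily assign each video segment (in playback order) to the phone
        expected to finish downloading it earliest.

        segment_sizes  : list of segment sizes in bytes
        download_rates : dict phone_id -> measured cellular rate in bytes/s
        Returns a dict phone_id -> list of assigned segment indices.
        """
        # Priority queue of (time at which the phone becomes free, phone_id).
        ready = [(0.0, phone) for phone in download_rates]
        heapq.heapify(ready)
        assignment = {phone: [] for phone in download_rates}

        for index, size in enumerate(segment_sizes):
            busy_until, phone = heapq.heappop(ready)
            assignment[phone].append(index)
            # The phone is busy until it finishes downloading this segment.
            heapq.heappush(ready, (busy_until + size / download_rates[phone], phone))
        return assignment

    # Example: three phones with different cellular rates share six 1 MB segments;
    # the assignment ends up roughly proportional to the rates.
    print(schedule_segments([1_000_000] * 6,
                            {"A": 400_000, "B": 200_000, "C": 100_000}))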

  1. Download: Click here
    Abstract: Video streaming is one of the increasingly popular, as well as demanding, applications on smartphones today. In this paper, we consider a group of smartphone users, within proximity of each other, who are interested in watching the same video from the Internet at the same time. The common practice today is that each user downloads the video independently using her own cellular connection, which often leads to poor quality. We design, implement, and evaluate a novel system, MicroCast, that uses the resources on all smartphones of the group in a cooperative way so as to improve the streaming experience. Each phone uses simultaneously two network interfaces: the cellular to connect to the video server and the WiFi to connect to the rest of the group. Key ingredients of our design include the following. First, we propose a scheduling algorithm, MicroDownload, that decides which parts of the video each phone should download from the server, based on the phones’ download rate. Second, we propose a novel all-to-all local dissemination scheme, MicroNC-P2, for sharing content among group members, which outperforms state-of-the-art peer-to-peer schemes in our setting. MicroNC-P2 is designed to exploit WiFi overhearing and network coding, based on a local packet broadcast framework, MicroBroadcast, which we developed specifically for Android phones. We evaluate MicroCast on a testbed consisting of seven Android phones, and we show that it brings significant performance benefits without battery penalty.
  2. Download: Click here
    Abstract: Video applications are increasingly popular over smartphones. However, in current cellular systems, the downlink data rate fluctuates and the loss rate can be quite high. We are interested in the scenario where a group of smartphone users, within proximity of each other, are interested in viewing the same video at the same time and are also willing to cooperate with each other. We propose a system that maximizes the video quality by appropriately using all available resources, namely the cellular connections to the phones as well as the device-to-device links that can be established via Bluetooth or WiFi. Key ingredients of our design are: (i) the cooperation among users, (ii) network coding, and (iii) exploiting broadcast in the mobile-to-mobile links. Our approach is grounded on a network utility maximization formulation of the problem. We present numerical results that demonstrate the benefit of our approach, and we implement a prototype on Android phones.

All-to-all communication

In this project we studied how network coding can be used to improve the reliability and reduce the delay of flooding protocols. Depending on the network topology, the use of network coding can both increase reliability and reduce the time needed to complete the data dissemination.
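
The basic mechanism is easiest to see in a small sketch. The Python snippet below (an illustration, not the code used in the papers) shows random linear network coding over GF(2): a relay forwards random XOR combinations of the coded packets it has buffered, and a receiver tracks the rank of the received coding vectors to recognize useful packets and to know when it can decode.

    import random

    def random_xor_combination(packets):
        """A relay forwards a random XOR (GF(2)) combination of its buffered
        packets.  Each packet is a pair (coding_vector_bitmask, payload_bytes);
        bit i of the bitmask says whether source packet i is part of the mix."""
        vector, payload = 0, bytearray(len(packets[0][1]))
        for v, data in packets:
            if random.random() < 0.5:
                vector ^= v
                for i, byte in enumerate(data):
                    payload[i] ^= byte
        return vector, bytes(payload)

    class RankTracker:
        """Tracks the GF(2) span of received coding vectors.  A packet is useful
        ("innovative") only if it increases the rank; decoding is possible once
        the rank equals the number of source packets."""
        def __init__(self, num_sources):
            self.num_sources = num_sources
            self.basis = {}  # leading-bit position -> reduced coding vector

        def receive(self, vector):
            while vector:
                lead = vector.bit_length() - 1
                if lead not in self.basis:
                    self.basis[lead] = vector
                    return True   # innovative: rank increased
                vector ^= self.basis[lead]
            return False          # linearly dependent: adds nothing

        def can_decode(self):
            return len(self.basis) == self.num_sources

    # Example: two source packets; a relay that holds both forwards random mixes.
    sources = [(0b01, b"\x11\x11"), (0b10, b"\x22\x22")]
    tracker = RankTracker(num_sources=2)
    for _ in range(4):
        vector, _payload = random_xor_combination(sources)
        tracker.receive(vector)
    print(tracker.can_decode())  # very likely True after a few random mixes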

  1. Download: Click here
    Abstract: Smartphones are an ideal platform for local multiplayer games, thanks to their computational and networking capabilities as well as their popularity and portability. However, existing game engines do not exploit the locality of players to improve game latency. In this paper, we propose MicroPlay, a complete networking framework for local multiplayer mobile games. To the best of our knowledge, this is the first framework that exploits local connections between smartphones, and in particular, the broadcast nature of the wireless medium, to provide smooth, accurate rendering of all players with two desired properties. First, it performs direct-input rendering (i.e., without any inter- or extrapolation of game state) for all players; second, it provides very low game latency. We implement a MicroPlay prototype on Android phones, as well as an example multiplayer car racing game, called Racer, in order to demonstrate MicroPlay’s capabilities. Our experiments show that cars can be rendered smoothly, without any prediction of state, and with only 20-30 ms game latency.
  2. Download: Click here
    Abstract: This paper experimentally studies the reliability and delay of flooding-based multicast protocols for a sniper detection application. In particular, using an emulator, it studies under which conditions protocols based on network coding deliver performance improvements compared to classic flooding. It then presents an implementation of such protocols on mobile phones.

Coding for sensor networks

Coding for security

In some situations the data collected by a sensor network needs to be hidden from a passive eavesdropper. Depending on the type of data, it may not be necessary to provide hard secrecy guarantees; in many situations it is enough to make eavesdropping very costly. We propose a scheme that uses overhearing and coding to provide this kind of security:
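
As a toy illustration of the idea, and not the actual protocol evaluated in the paper, a node and the sink can derive a shared key by mixing packets that both already hold, for example packets the node passively overheard in earlier rounds; an eavesdropper that missed even a few of those packets has to guess their contents to reconstruct the key. The Python sketch below assumes equal-length packets and hypothetical names.

    import hashlib
    from functools import reduce

    def pairwise_key(shared_packets):
        """Toy sketch: XOR-mix equal-length packets known to both the node and
        the sink, then hash the mix into a fixed-length key.  Missing any one
        of the mixed packets makes the key unpredictable to an eavesdropper."""
        mixed = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shared_packets)
        return hashlib.sha256(mixed).digest()

    # Example with three 4-byte packets assumed to be known only to node and sink.
    print(pairwise_key([b"\x01\x02\x03\x04", b"\xaa\xbb\xcc\xdd", b"\x10\x20\x30\x40"]).hex())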

  1. Download: Click here
    Abstract: We design and evaluate a lightweight encryption protocol suitable for sensor networks, that enables weak security in the presence of passive eavesdroppers. At every communication round, our protocol creates a key between each sensor node and the sink, by appropriately mixing and coding information packets that the nodes passively overhear. Evaluation using the TOSSIM simulator indicates that with our protocol we can gain significant security benefits at low overhead cost.

Identity aware sensor networks

Several sensor network applications require not only collecting the sensor measurements, but also knowing the identity of the sensor associated with each measurement. Often these networks contain a large number of sources, while each measurement can be expressed with a few bits; in such networks the bulk of the transmitted data consists of the identities of the sources. In this work we designed joint data-identity codes based on subspace coding that achieve better energy efficiency, and we built a system on top of TinyOS to validate their performance in a realistic setting, both in simulation and by deploying the protocol on a testbed of 32 sensor nodes.
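
A back-of-the-envelope calculation (shown in the Python snippet below) makes the potential saving concrete: it compares the bits needed when identities and one-bit messages are sent in separate fields against the information-theoretic cost of jointly reporting which sensors observed the event. This illustrates the kind of bound discussed in the first abstract below, not the subspace code itself.

    from math import ceil, comb, log2

    def separate_bits(num_sensors, num_reporting, message_bits=1):
        """Traditional protocol: each reporting sensor sends its identity
        (log2(num_sensors) bits, rounded up) plus its message in a separate field."""
        return num_reporting * (ceil(log2(num_sensors)) + message_bits)

    def joint_bits(num_sensors, num_reporting):
        """Information-theoretic cost of jointly reporting *which* subset of
        sensors observed a one-bit event: log2 of the number of possible subsets."""
        return ceil(log2(comb(num_sensors, num_reporting)))

    # Example: 1024 sensors, 32 of which report a one-bit observation.
    print(separate_bits(1024, 32))  # 352 bits with separate identity/message fields
    print(joint_bits(1024, 32))     # about 200 bits with joint encoding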

  1. Download: Click here
    Abstract: In a significant class of sensor-network applications, the identities of the reporting sensors constitute the bulk of the communicated data, whereas the message itself can be as small as a single bit - for instance, in many cases, sensors are used to detect whether and where a certain interesting condition occurred, or to track incremental environmental changes at fixed locations. In such scenarios, the traditional network-protocol paradigm of separately specifying the source identity and the message in distinct fields leads to inefficient communication. This work addresses the question of how communication should happen in such identity-aware sensor networks. We calculate theoretical performance bounds for this type of communication, where ‘performance’ refers to the number of transmitted bits. We propose a communication protocol, where the identity and message of each source are specified jointly using subspace coding. We show through analysis and simulation that our protocol’s performance is close to optimal and compare it to the performance of a traditional protocol, where identity and message are specified separately.
  2. Download: Click here
    Abstract: In a significant class of sensor-network applications, the identities of the reporting sensors constitute the bulk of the communicated data, whereas the message itself can be as small as a single bit - for instance, in many cases, sensors are used to detect whether and where a certain interesting condition occurred, or to track incremental environmental changes at fixed locations. In such scenarios, the traditional network-protocol paradigm of separately specifying the source identity and the message in distinct fields leads to inefficient communication. This work addresses the question of how communication should happen in such identity-aware sensor networks. We reexamine the traditional source-identity/message separation and propose a scheme for jointly encoding the two. We use this to develop a communication method for identity-aware sensor networks and show it to be energy efficient, simple to implement, and gracefully adaptable to scenarios frequently encountered in sensor networks - for instance, node failures, or large numbers of nodes where only few are active during each reporting round.
  3. Download: Click here
    Abstract: In networks that employ network coding, two main approaches have been proposed in the literature to allow the receivers to recover the source information: (i) use of coding vectors, that keep track of the linear combinations the received packets contain, and (ii) subspace coding, that dispenses with the need to know the linear combinations, since information is conveyed by the choice of subspaces alone. Both these approaches impose the strong requirement that all source packets get potentially combined. We here present a third approach that relaxes this assumption, and is thus not a special case of either of the previous two. This relaxation makes it possible to employ compressed coding vectors to efficiently convey the coding coefficients, without altering the operation of intermediate network nodes. We develop optimal designs for such vectors.
  4. Download: Click here
    Abstract: In a significant class of sensor-network applications, the identities of the reporting sensors are an essential part of the reported information. For instance, in environmental monitoring, the goal is to reconstruct physical quantities over space and time; these quantities are sampled by the sensors, and the source identity associated with each measurement is necessary for the spatial and temporal reconstruction. In many practical scenarios, source identities constitute the bulk of the communicated data, whereas the message itself can be as small as a single bit. In these scenarios, the traditional network-protocol paradigm of separately specifying the source identity and the message in distinct fields leads to inefficient communication. In this paper, we re-examine this traditional data separation and propose a scheme for joint identity-message encoding; we use this scheme to design a new efficient collection protocol for identity-aware sensor networks. Compared to conventional data collection, our protocol reduces the amount of traffic in the network at least by a factor of two (up to an order of magnitude, in lossy environments), while its performance scales better with network complexity; we show these results both through theoretical analysis and extensive simulations.

SenseCode: network coding for reliable sensor networks

Two main design parameters of data collection protocols for sensor networks are reliability and energy efficiency. In this work we propose a practical network coding scheme that achieves new points in the energy-reliability tradeoff. In particular, we developed a scheme that improves the reliability of existing data collection protocols in the presence of node failures. We implemented this protocol on top of a state-of-the-art collection protocol and showed substantial performance gains, both using a sensor network simulator (TOSSIM) and by deploying the protocol in a 32-node sensor network.
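
A minimal sketch of the redundancy mechanism follows (simplified Python, not the actual TinyOS module, which operates over a collection tree): a generation of k source packets is expanded into k·R coded packets, each a random XOR combination tagged with its coding vector, so the sink can rebuild the generation from any k linearly independent coded packets it happens to receive.

    import random

    def encode_generation(source_packets, redundancy):
        """Expand a generation of k equal-length source packets into
        k * redundancy coded packets.  Each coded packet is a random XOR (GF(2))
        combination of the sources, tagged with its coding vector (a bitmask),
        so extra coded packets translate into tolerance to packet loss."""
        k = len(source_packets)
        coded = []
        for _ in range(int(k * redundancy)):
            vector = random.randrange(1, 1 << k)      # non-zero coding vector
            payload = bytearray(len(source_packets[0]))
            for i in range(k):
                if vector >> i & 1:
                    for j, byte in enumerate(source_packets[i]):
                        payload[j] ^= byte
            coded.append((vector, bytes(payload)))
        return coded

    # Example: 4 source packets with redundancy factor 2 yield 8 coded packets.
    print(len(encode_generation([b"\x00\x01", b"\x02\x03", b"\x04\x05", b"\x06\x07"], 2)))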

  1. Download: Click here
    Abstract: Designing a communication protocol for sensor networks often involves obtaining the “right” trade-off between energy efficiency and end-to-end packet error rate. In this paper, we show that network coding provides a means to elegantly balance these two goals. We present the design and implementation of SenseCode, a collection protocol for sensor networks and, to the best of our knowledge, the first such implemented protocol to employ network coding. SenseCode provides a way to gracefully introduce a configurable amount of redundant information in the network, thereby decreasing end-to-end packet error rate in the face of packet loss. We compare SenseCode to the best (to our knowledge) existing alternative and show that it reduces end-to-end packet error rate in highly dynamic environments, while consuming a comparable amount of network resources. We have implemented SenseCode as a TinyOS module and evaluate it through extensive TOSSIM simulations.
  2. Download: Click here
    Abstract: Balancing energy efficiency and reliability is a common underlying goal for most information collection protocols in sensor networks. Multipath diversity has emerged as one of the promising techniques to achieve such a balance. In this chapter, we provide a unified framework for the multipath techniques in the literature and discuss their basic benefits and drawbacks. We also discuss emerging techniques from the area of network coding.

Codes for function computation

In many sensor network applications it is necessary to collect only a function of the data sensed by the network nodes. The typical approach, however, is to collect all the sensed data at the sink and then compute the function there. This approach is motivated by the fact that it only requires a general-purpose networking protocol that can be reused across applications. In our work we study a network architecture that keeps this property, namely being general purpose, while transferring less information to the sink by computing the function in-network.
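
The contrast with collect-then-compute is easy to see with a generic in-network aggregation sketch (plain aggregation, not the subspace-codebook construction studied in the papers below): each node forwards only a partial result, so every link carries a single value instead of all the raw readings of its subtree.

    def collect_max(tree, readings, node):
        """Compute the max in-network over a collection tree: a node combines
        its own reading with its children's partial results and forwards only
        the combined value toward the sink."""
        children = tree.get(node, [])
        return max([readings[node]] + [collect_max(tree, readings, child) for child in children])

    # Example: a small collection tree rooted at the sink (node 0).
    tree = {0: [1, 2], 1: [3, 4], 2: [5]}
    readings = {0: 7, 1: 3, 2: 12, 3: 9, 4: 1, 5: 5}
    print(collect_max(tree, readings, 0))  # 12 reaches the sink, one value per link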

  1. Download: Click here
    Abstract: This paper considers function computation in a network where intermediate nodes perform randomized network coding, through appropriate choice of the subspace codebooks at the source nodes. Unlike traditional network coding for computing functions, which requires intermediate nodes to be aware of the function to be computed, our designs are transparent to the intermediate node operations.
  2. Download: Click here
    Abstract: This paper considers function computation in a network where intermediate nodes perform randomized network coding, through appropriate choice of the subspace codebooks at the source nodes. Unlike traditional network coding for computing functions, which requires intermediate nodes to be aware of the function to be computed, our designs are transparent to the intermediate node operations.
  3. Download: Click here
    Abstract: We consider multiple non-colocated sources communicating over a network to a common sink. We assume that the network operation is fixed, and its end result is to convey a fixed linear deterministic transformation of the source data to the sink. This linear transformation is known both at the sources and at the sink. We are interested in the problem of function computation over such networks. We design communication protocols that can perform computation without modifying the network operation, by appropriately selecting the codebook that the sources employ to map their measurements to the data they send over the network.

Linear coding for broadcast communications

In this project we studied how to improve the performance of broadcasting using linear codes. We studied how to reduce the delay and increase the rate of broadcasting by leveraging feedback from the receivers. We proved that finding the optimal strategy is a hard problem, and we proposed heuristic algorithms that give significant benefits compared to existing solutions.
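
To give a flavour of such heuristics, the Python sketch below shows an illustrative online rule (an example written for this page, not the exact algorithms from the papers): using the receivers' feedback, broadcast the XOR combination of at most two messages that is instantly decodable by, and therefore delivers a new message to, the largest number of receivers.

    from itertools import combinations

    def pick_broadcast(messages, received, max_mix=2):
        """Pick the XOR combination (of at most max_mix messages) that delivers
        a new message to the most receivers.  `received` maps each receiver to
        the set of messages it has already decoded, as reported via feedback."""
        best, best_gain = None, -1
        candidates = [frozenset(c) for size in range(1, max_mix + 1)
                      for c in combinations(messages, size)]
        for combo in candidates:
            # A receiver decodes the XOR instantly iff it is missing exactly one
            # of the combined messages; that missing message is its gain.
            gain = sum(1 for got in received.values() if len(combo - got) == 1)
            if gain > best_gain:
                best, best_gain = combo, gain
        return best, best_gain

    # Example: 3 messages, 3 receivers with different erasure patterns.
    received = {"r1": {1, 2}, "r2": {2, 3}, "r3": {1, 3}}
    print(pick_broadcast([1, 2, 3], received))  # an XOR pair serving two receivers at once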

  1. Download: Click here
    Abstract: We consider the problem of minimizing delay when broadcasting over erasure channels with feedback. A sender wishes to communicate the same set of μ messages to several receivers. The sender can broadcast a single message or a combination of messages at each timestep, through separate erasure channels. Receivers provide feedback as to whether the transmission was received. If, at some time step, a receiver cannot identify a new message, delay is incurred. Our notion of delay is motivated by real-time applications that request progressively refined input, such as the successive refinement of an image encoded using multiple description coding. Our setup is novel in that it combines coding techniques with feedback information to the end of minimizing delay. We show that it allows Θ(μ) benefits as compared to previous approaches for offline algorithms, while feedback allows online algorithms to achieve smaller delay compared to online algorithms without feedback. Our main complexity result is that the offline minimization problem is NP-hard both under scheduling and coding algorithms. However we show that coding does offer delay and complexity gains over scheduling. We also discuss online heuristics and evaluate their performance through simulations.
  2. Download: Click here
    Abstract: We consider the problem of minimizing delay when broadcasting over erasure channels with feedback. A sender wishes to communicate the same set of μ messages to several receivers over separate erasure channels. The sender can broadcast a single message or a combination (encoding) of messages at each timestep. Receivers provide feedback as to whether the transmission was received. If at some time step a receiver cannot identify a new message, delay is incurred. Our notion of delay is motivated by real-time applications that request progressively refined input, such as the successive refinement of an image encoded using multiple description coding. Our setup is novel because it combines coding techniques with feedback information to the end of minimizing delay. It allows Θ(μ) benefits as compared to previous approaches for offline algorithms, while feedback allows online algorithms to achieve smaller delay than online algorithms without feedback. Our main complexity results are that the offline minimization problem is NP-hard when the sender only schedules single messages and that the general problem remains NP-hard even when coding is allowed. However we show that coding does offer delay and complexity gains over scheduling. We also discuss online heuristics and evaluate their performance through simulations.
  3. Download: Click here
    Abstract: Consider a source broadcasting M packets to N receivers over independent erasure channels, where perfect feedback is available from the receivers to the source, and the source is allowed to use coding. We investigate offline and online algorithms that optimize delay, both through theoretical analysis as well as simulation results.
  4. Download: Click here
    Abstract: We consider the problem of minimizing delay when broadcasting over erasure channels with feedback. A sender wishes to communicate the same set of m messages to several receivers. The sender can broadcast a single message or a combination (encoding) of messages to all receivers at each timestep, through separate erasure channels. Receivers provide feedback as to whether the transmission was received. If, at some time step, a receiver cannot identify a new message, delay is incurred. Our notion of delay is motivated by real-time applications that request progressively refined input, such as the refinements or different parts of an image. Our setup is novel because it combines coding techniques with feedback information to the end of minimizing delay. Uncoded scheduling or use of multiple description (MDS) codes has been well-studied in the literature. We show that our setup allows O(m) benefits as compared to both previous approaches for offline algorithms, while feedback allows online algorithms to achieve smaller delay compared to online algorithms without feedback. Our main complexity results are that the offline minimization problem is NP-hard when the sender only schedules single messages and that the general problem remains NP-hard even when coding is allowed. However we show that coding does offer complexity gains by exhibiting specific classes of erasure instances that become trivial under coding schemes. We also discuss online heuristics and evaluate their performance through simulations.

Resilience of systems to configuration errors

Human configuration errors are one of the main sources of downtime in large computer systems. In this work we built a tool that uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes. This tool allowed us to find serious flaws in widely used software packages and to compare the resilience of functionally equivalent systems.
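
The flavour of the injected mistakes can be sketched with a small typographic error model in Python. This sketch is hypothetical and much cruder than ConfErr's actual models, which also cover phonological and semantic errors and mutate abstract representations of configuration files rather than raw strings.

    import random

    # Tiny, illustrative map of keyboard neighbours used for substitution slips.
    NEIGHBOURS = {"a": "qws", "e": "wrd", "i": "uok", "o": "ipl", "n": "bm", "t": "ry"}

    def typo_mutations(value):
        """Generate typographic slips for a configuration token: character
        omission, transposition of adjacent characters, and substitution with a
        neighbouring key."""
        mutations = set()
        for i in range(len(value)):
            mutations.add(value[:i] + value[i + 1:])                                # omission
            if i + 1 < len(value):
                mutations.add(value[:i] + value[i + 1] + value[i] + value[i + 2:])  # transposition
            for neighbour in NEIGHBOURS.get(value[i], ""):
                mutations.add(value[:i] + neighbour + value[i + 1:])                # substitution
        mutations.discard(value)
        return sorted(mutations)

    # Example: mutate a (hypothetical) configuration key before injecting it.
    print(random.sample(typo_mutations("listen_port"), 5))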

  1. Download: Click here
    Abstract: We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.