Distributed Systems
Middleware Technologies for Distributed Systems
Year 2018-2019



  1. Projects must be developed in groups composed of a minimum of two and a maximum of three students.
  2. For the "Distributed systems" course the project is optional and, if correctly developed, increases the final score. For the "Middleware technologies for distributed systems" course, three of the projects below have to be chosen.
  3. The projects described below are valid for this academic year only. This means that they have to be presented before the last official exam session of this academic year.
  4. Students are expected to demonstrate their projects using their own notebooks (at least two) connected in a LAN (wired or wireless), to show that everything works in a truly distributed scenario.
  5. To present their work, students are expected to use a few slides describing the software and run-time architecture of their solution.
  6. Students interested in doing their thesis in the area of distributed systems should contact Prof. Cugola for research projects that will substitute the course project.


Distributed Systems

Consensus in the ancient Greek parliament of Paxos

Implement the consensus protocol known as “Paxos” as originally described by L. Lamport in his fundamental paper. Before reading the paper, I suggest you read the following statement by L. Lamport himself, who explains the (funny) history of the paper.

The protocol can be implemented as a Java library, together with a simple application that demonstrates the use of the library, or it can be simulated (measuring its main performance indicators) using OMNeT++.

Additional information on the protocol is easy to find online; as an example, here is an additional paper by L. Lamport himself, which explains the protocol in a simple way (but carefully read the notes before blindly using it to implement the protocol).
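To fix the shape of the protocol before designing the library, here is a minimal in-process sketch of single-decree Paxos (one proposer, three acceptors). The class and field names are illustrative assumptions, not taken from Lamport's papers; a real implementation would exchange these messages over the network and retry with higher proposal numbers.

```java
import java.util.*;

// Illustrative sketch of single-decree Paxos; names and structure are assumptions.
class Acceptor {
    long promised = -1;        // highest proposal number promised so far
    long acceptedNum = -1;     // proposal number of the accepted value, if any
    String acceptedVal = null;

    // Phase 1: promise not to accept proposals numbered <= n's predecessors.
    Optional<String[]> prepare(long n) {
        if (n <= promised) return Optional.empty();      // reject: already promised higher
        promised = n;
        return Optional.of(new String[]{Long.toString(acceptedNum), acceptedVal});
    }

    // Phase 2: accept value v at number n unless a higher promise was made meanwhile.
    boolean accept(long n, String v) {
        if (n < promised) return false;
        promised = n; acceptedNum = n; acceptedVal = v;
        return true;
    }
}

public class PaxosSketch {
    public static void main(String[] args) {
        List<Acceptor> acceptors = List.of(new Acceptor(), new Acceptor(), new Acceptor());
        long n = 1; String myValue = "X";

        // Phase 1: collect promises from a majority of acceptors.
        long bestNum = -1; String bestVal = null; int promises = 0;
        for (Acceptor a : acceptors) {
            Optional<String[]> p = a.prepare(n);
            if (p.isPresent()) {
                promises++;
                long an = Long.parseLong(p.get()[0]);
                if (p.get()[1] != null && an > bestNum) { bestNum = an; bestVal = p.get()[1]; }
            }
        }
        if (promises <= acceptors.size() / 2) return;     // no majority: retry with higher n

        // Phase 2: must propose the highest previously accepted value, else our own.
        String v = (bestVal != null) ? bestVal : myValue;
        int accepts = 0;
        for (Acceptor a : acceptors) if (a.accept(n, v)) accepts++;
        if (accepts > acceptors.size() / 2) System.out.println("chosen: " + v);
    }
}
```

The crucial safety rule is in Phase 2: a proposer that learns of a previously accepted value must adopt it instead of its own, which is what makes competing proposers converge on a single chosen value.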


P2P with Chord

Implement the Chord P2P protocol as discussed in its original paper, also including the protocols to stabilize the network when new peers join or leave.

The protocol can be implemented as a Java library, together with a simple application that shows how to use the library in a practical case, or it can be simulated (measuring its main performance indicators) using OMNeT++.
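The core of Chord is the successor function on the identifier ring and the finger table built from it. The sketch below, a simplified assumption-laden model (a global view of all node identifiers in a `TreeSet`, rather than real distributed lookups), shows both; node identifiers follow the m=6 example ring of the original paper.

```java
import java.util.*;

// Sketch of Chord's successor lookup on an m-bit identifier ring (illustrative model:
// a real implementation routes lookups through peers instead of a shared TreeSet).
public class ChordSketch {
    static final int M = 6;                 // 6-bit identifiers: ring of size 64
    static final int RING = 1 << M;

    // successor(id): first node on the ring whose identifier is >= id, wrapping around.
    static int successor(TreeSet<Integer> nodes, int id) {
        Integer s = nodes.ceiling(id);
        return (s != null) ? s : nodes.first();
    }

    // Entry i of node n's finger table points to successor(n + 2^i mod 2^m).
    static int[] fingerTable(TreeSet<Integer> nodes, int n) {
        int[] fingers = new int[M];
        for (int i = 0; i < M; i++)
            fingers[i] = successor(nodes, (n + (1 << i)) % RING);
        return fingers;
    }

    public static void main(String[] args) {
        TreeSet<Integer> nodes = new TreeSet<>(List.of(1, 8, 14, 21, 32, 38, 42, 48, 51, 56));
        System.out.println("successor(54) = " + successor(nodes, 54));
        System.out.println("fingers of 8  = " + Arrays.toString(fingerTable(nodes, 8)));
    }
}
```

The stabilization protocols required by the project keep exactly these structures (successor pointers and fingers) correct as peers join and leave.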


Middleware Technologies for Distributed Systems

Akka Actors

Implement a BigData (batch and stream) processing engine using Akka actors. The engine accepts programs that define an arbitrary acyclic graph of operators. Operators take <Key, Value> pairs as input, where both the Key and the Value are strings.

Implement the following operators:

The framework takes care of instantiating multiple workers (actors) to perform each operator in parallel on different data partitions (each worker is assigned a subset of the keys), and it handles the communication between operators.
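One common way to assign "a subset of the keys" to each worker is stable hash partitioning; the sketch below (worker count, method names, and sample keys are illustrative assumptions, not part of the assignment) shows the property that matters: the same key always routes to the same worker, so per-key operator state stays local.

```java
import java.util.*;

// Sketch of hash-based key partitioning across workers (all names are illustrative).
public class KeyPartitioning {
    static int workerFor(String key, int numWorkers) {
        // Math.floorMod keeps the index non-negative even for negative hash codes.
        return Math.floorMod(key.hashCode(), numWorkers);
    }

    public static void main(String[] args) {
        int numWorkers = 4;
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String key : List.of("alpha", "beta", "gamma", "delta", "alpha"))
            partitions.computeIfAbsent(workerFor(key, numWorkers), w -> new ArrayList<>()).add(key);
        // The same key always lands on the same worker.
        partitions.forEach((w, keys) -> System.out.println("worker " + w + " -> " + keys));
    }
}
```

In the actual engine, the routing step would live in the framework code that forwards a pair from one operator's workers to the next operator's workers.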

Input data is made available by a single source node and output data is consumed by a single sink node. You can either assume that operators are instantiated for the entire duration of the computation, or that they are dynamically scheduled.

A master node supervises all the worker actors that execute the operators. The master, the source and the sink cannot fail. Furthermore, you can assume disks and read/write operations to disk to be non-faulty, while operators can fail at any time.

Implement fault-tolerance mechanisms to ensure (with low overhead) end-to-end exactly once delivery even in the presence of failures. Note: to ensure exactly once delivery, the computation must be deterministic: for instance, in the case of re-execution, a merge operator should re-process the input elements in the same order.

Implement a REST API to connect with the system, submit new jobs, and monitor the state of the platform.

Optional: assume that the input client and the local disks are not reliable and use Apache Kafka to store input and output (and intermediate state, if needed).


Apache Kafka

Implement a simplified version of the Twitter social network using Kafka to store tweets. Users interact with Twitter using a client that presents a timeline of tweets and allows users to publish new tweets.

Tweets have the following structure (Avro serialization is recommended but not mandatory):
Tweet <Authors, Content, Timestamp, Location, [Tags], [Mentions]>

Clients can be configured to filter the tweets that appear in the timeline by tag, location, or mentions.

Clients can update their timeline either in batch or in streaming (continuous) mode. In batch mode, the timeline is updated upon user request, while in streaming mode it is constantly updated to reflect the tweets produced in the last 5 minutes.

Timelines are ordered from the least to the most recent tweet, according to their timestamp (event time).
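The two requirements above (filter by tag/location/mention, then order by event-time timestamp) can be sketched as follows; the `Tweet` record mirrors the structure given earlier, but the field names, sample data, and filtering by tag only are illustrative assumptions.

```java
import java.util.*;
import java.util.stream.*;

// Sketch of building a timeline: filter tweets by tag, then order by event-time timestamp.
public class TimelineSketch {
    record Tweet(String author, String content, long timestamp, String location,
                 List<String> tags, List<String> mentions) {}

    static List<Tweet> timeline(List<Tweet> tweets, String tag) {
        return tweets.stream()
                .filter(t -> t.tags().contains(tag))
                .sorted(Comparator.comparingLong(Tweet::timestamp))  // least to most recent
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Tweet> tweets = List.of(
            new Tweet("ada", "hi", 30, "MI", List.of("ds"), List.of()),
            new Tweet("bob", "yo", 10, "TO", List.of("ds"), List.of()),
            new Tweet("eve", "no", 20, "RM", List.of("mw"), List.of()));
        timeline(tweets, "ds").forEach(t -> System.out.println(t.timestamp() + " " + t.author()));
    }
}
```

In the real system the filtering would be driven by the Kafka consumer side (batch mode reads and sorts a window on request; streaming mode keeps the last 5 minutes continuously updated).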

The communication with the client must be RESTful, with tweets represented in JSON format.


  1. Subscribe to twitter - POST /users/id
  2. Write a new tweet - POST /tweets JSON Body
  3. Read tweets (polling) - GET /tweets/{filter}/latest
  4. Server-Sent Events (streaming) - POST /tweets/{filter} JSON Body [List of tweets]





OpenMP/MPI

Implement the k-means clustering algorithm in OpenMP/MPI, trying to maximize the performance (reduce the execution time) by carefully exploiting the resources within one computing node with multiple processing cores (OpenMP) and across computing nodes (MPI).
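As a reference for what has to be parallelized, here is a minimal sequential sketch of the k-means iteration (assignment step + centroid update); the data, k, and iteration count are illustrative assumptions. The comments mark where the OpenMP/MPI version would split the work.

```java
import java.util.*;

// Sequential sketch of the k-means loop that the parallel versions accelerate.
public class KMeansSketch {
    // Index of the centroid nearest to point p (squared Euclidean distance).
    static int nearest(double[] p, double[][] centroids) {
        int best = 0; double bestD = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double d = 0;
            for (int j = 0; j < p.length; j++) d += (p[j]-centroids[c][j])*(p[j]-centroids[c][j]);
            if (d < bestD) { bestD = d; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] points = {{0,0},{0,1},{10,10},{10,11}};
        double[][] centroids = {{0,0},{10,10}};
        for (int iter = 0; iter < 10; iter++) {
            // Assignment step: embarrassingly parallel over points
            // (an OpenMP parallel-for, or one MPI rank per data partition).
            int[] assign = new int[points.length];
            for (int i = 0; i < points.length; i++) assign[i] = nearest(points[i], centroids);
            // Update step: local per-cluster sums, then a global reduction across ranks.
            double[][] sum = new double[centroids.length][points[0].length];
            int[] count = new int[centroids.length];
            for (int i = 0; i < points.length; i++) {
                count[assign[i]]++;
                for (int j = 0; j < points[i].length; j++) sum[assign[i]][j] += points[i][j];
            }
            for (int c = 0; c < centroids.length; c++)
                if (count[c] > 0)
                    for (int j = 0; j < sum[c].length; j++) centroids[c][j] = sum[c][j] / count[c];
        }
        System.out.println(Arrays.deepToString(centroids));
    }
}
```

The structure maps naturally to both targets: the per-point loop is the hot spot for OpenMP, while the sum/count arrays are exactly what each MPI rank would combine with a global reduction each iteration.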

Optional: implement the same algorithm in Apache Flink and compare the performance of the two implementations (processing time and scalability) under various workloads.


Apache Flink

Implement the k-means clustering algorithm in Apache Flink.

Optional: implement the same algorithm in OpenMP/MPI and compare the performance of the two implementations (processing time and scalability) under various workloads.


Comparing Flooding protocols in a WSN

The sink of a WSN has to periodically (every 60s) distribute some data to all the other nodes. Implement a flooding protocol to distribute such data, in two versions: a trivial version in which each node simply re-forwards (in broadcast) the packets it receives, and a more efficient version (similar to the CCBR protocol we studied) in which each node waits for a random delay before re-forwarding, and only re-forwards (in broadcast) if no other node re-forwarded the same packet during that delay (opportunistic flooding).
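The decision logic of the opportunistic version, seen from a single node, can be sketched as below; the class, the duplicate-counting scheme, and the single-copy threshold are illustrative assumptions (a real node would also draw the random back-off and arm/cancel a timer).

```java
import java.util.*;

// Sketch of the opportunistic-flooding suppression rule at one node: on first reception
// of a packet a random back-off is scheduled; the re-broadcast happens only if no
// duplicate of the same packet is overheard before the back-off expires.
public class OpportunisticFlooding {
    final Map<Integer, Integer> heardCount = new HashMap<>(); // packet id -> copies overheard

    // Called on every reception; the first copy would also schedule the back-off timer.
    void onReceive(int packetId) {
        heardCount.merge(packetId, 1, Integer::sum);
    }

    // Called when the back-off timer for packetId fires.
    boolean shouldForward(int packetId) {
        // Suppress the re-broadcast if any other copy was overheard during the delay.
        return heardCount.getOrDefault(packetId, 0) <= 1;
    }

    public static void main(String[] args) {
        OpportunisticFlooding node = new OpportunisticFlooding();
        node.onReceive(7);                          // first copy: back-off starts
        System.out.println(node.shouldForward(7));  // nobody else re-forwarded yet
        node.onReceive(7);                          // duplicate overheard before timer fires
        System.out.println(node.shouldForward(7));  // suppressed
    }
}
```

The trivial version corresponds to `shouldForward` always returning true on first reception, which is exactly the difference the two metrics below should expose.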

Compare the two protocols in terms of:

  1. The percentage of nodes that actually receive the flooded data
  2. The number of packets used to flood the network

Perform the comparison under different conditions in terms of: total number of nodes, distance among nodes, and size of the data to be distributed (from 2 bytes up to 20 bytes).