Bio: After receiving a Bachelor's (2014) and a Master's (2017) degree in Computer Science from the Università degli Studi di Perugia, in 2017 I moved to the University of Liverpool for a Ph.D. in machine learning, specifically in multi-agent reinforcement learning. After being awarded the Ph.D. in 2022, I took a 9-month postdoc position at the Centre for Competition Policy at the University of East Anglia, working on a CMA-funded project on the impact of recommender systems on market competition and the associated policy making. Then, in 2023, I started an 8-month postdoc position at INSA Lyon, working on optimality-guaranteed planning in partially observable multi-agent systems. Finally, in 2024, I moved to HES-SO Genève to work on the 3-year EU project Hyper-AI.
Interests: I am interested in the field of machine learning, especially multi-agent reinforcement learning. My current work is on decentralised reinforcement learning for data-related resource management in the edge-cloud continuum and, more broadly, on designing novel and better algorithms for decentralised multi-agent reinforcement learning. My other research interests include neural networks, deep learning, planning, and multi-agent systems.
Topic: Hyper-distributed artificial intelligence platform for network resources automation and management towards more efficient data processing applications
Advisor: Prof. Alexandros Kalousis
Topic: Optimal solution of partially observable multi-agent systems
Advisor: Prof. Jilles Steeve Dibangoye
Topic: Recommender systems and supplier competition in digital markets
Advisors: Prof. Peter Ormosi, Prof. Amelia Fletcher, Prof. Rahul Savani
Topic: Deep learning for multi-agent reinforcement learning and decision making
Supervisors: Dr. Frans Oliehoek, Prof. Rahul Savani
Passed with minor revisions
Thesis: Learning numeracy - binary arithmetic with Neural Turing Machines
Supervisors: Dr. Valentina Poggioni, Dr. Marco Baioletti
Final mark: 110/110 with honors
Thesis: Krylov iterative methods for the geometric mean of two matrices times a vector
Supervisor: Dr. Bruno Iannazzo
Final mark: 110/110 with honors
• arXiv, 23 August 2024 [pdf] [.bib]
The centralized training for decentralized execution paradigm has emerged as the state-of-the-art approach to epsilon-optimally solving decentralized partially observable Markov decision processes. However, scalability remains a significant issue. This paper presents a novel and more scalable alternative, namely sequential-move centralized training for decentralized execution. This paradigm further pushes the applicability of Bellman's principle of optimality, yielding three new properties. First, it allows a central planner to reason upon sufficient sequential-move statistics instead of prior simultaneous-move ones. Next, it proves that epsilon-optimal value functions are piecewise linear and convex in sufficient sequential-move statistics. Finally, it drops the complexity of the backup operators from double exponential to polynomial, at the expense of longer planning horizons. Besides, it makes single-agent methods easy to apply: for example, the SARSA algorithm enhanced with these findings still preserves its convergence guarantees. Experiments on two- as well as many-agent domains from the literature, against epsilon-optimal simultaneous-move solvers, confirm the superiority of the novel approach. This paradigm opens the door to efficient planning and reinforcement learning methods for multi-agent systems.
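To make the piecewise linearity and convexity property concrete, a minimal sketch in standard POMDP-style notation (the symbols here are illustrative, not the paper's): writing sigma for a sufficient sequential-move statistic and Gamma for a finite set of linear functions (alpha-vectors), a PWLC value function takes the familiar upper-envelope form.

```latex
% Illustrative PWLC form; \sigma and \Gamma are assumed notation,
% not taken from the paper.
V(\sigma) \;=\; \max_{\alpha \in \Gamma} \, \langle \alpha, \sigma \rangle
```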
• arXiv, 15 November 2023 [pdf] [.bib]
Multi-agent planning and reinforcement learning can be challenging when agents cannot see the state of the world or communicate with each other due to communication costs, latency, or noise. Partially Observable Stochastic Games (POSGs) provide a mathematical framework for modelling such scenarios. This paper aims to improve the efficiency of planning and reinforcement learning algorithms for POSGs by identifying the underlying structure of optimal state-value functions. The approach involves reformulating the original game from the perspective of a trusted third party who plans on behalf of the agents simultaneously. From this viewpoint, the original POSGs can be viewed as Markov games where states are occupancy states, i.e., posterior probability distributions over the hidden states of the world and the stream of actions and observations that agents have experienced so far. This study mainly proves that the optimal state-value function is a convex function of occupancy states expressed on an appropriate basis in all zero-sum, common-payoff, and Stackelberg POSGs.
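To make the notion of an occupancy state concrete, a hedged sketch in assumed notation: with x_t the hidden world state, theta_t the joint action-observation history, and iota_t the planner's information at step t, the occupancy state is the posterior distribution described in the abstract.

```latex
% Illustrative definition of an occupancy state; the symbols are
% assumed notation, not the paper's.
s_t(x, \theta) \;=\; \Pr\left( x_t = x,\; \theta_t = \theta \mid \iota_t \right)
```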
• SSRN Working Paper No. 4428125, 47 pages, 27 April 2023 [pdf] [.bib]
Subscription-based platforms offer consumers access to a large selection of content at a fixed subscription fee. Recommender systems can help consumers by reducing the size of this choice set by predicting consumers' preferences. However, their predictions are based on limited information about the consumers and sometimes even about the content, which means that the recommendations are often biased. In this paper we introduce a simple theoretical framework for platforms selling to consumers with a quasi-linear utility function via a recommender system. We simulate a set of different recommender systems and use them in this framework to test our hypothesis that recommender system biases lead to more concentrated markets, increased entry barriers, and increased homogeneity in the recommendations, even where the platform is inherently customer-centric and not self-preferencing. Although encouraging more exploration can reduce these market-consolidating effects, it can also reduce recommendation relevance in the short run.
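A minimal sketch of the kind of simulation such a framework supports (all names, parameters, and the bias mechanism below are hypothetical, not the paper's code): consumers with quasi-linear utility pick the best item from a small recommended slate, the platform's predicted values carry a popularity bias that feeds back on itself, and market concentration is summarised with a Herfindahl-Hirschman index.

```python
import numpy as np

rng = np.random.default_rng(0)
n_consumers, n_items, slate = 1_000, 50, 5
true_value = rng.normal(size=(n_consumers, n_items))   # v_ij, hypothetical valuations
price = rng.uniform(0.0, 0.5, size=n_items)            # p_j
popularity_bias = np.zeros(n_items)                    # hypothetical RS bias term

shares = np.zeros(n_items)
for i in range(n_consumers):
    # Noisy, biased predictions stand in for an imperfect recommender system.
    predicted = true_value[i] + rng.normal(scale=0.5, size=n_items) + popularity_bias
    recommended = np.argsort(predicted)[-slate:]        # top-`slate` recommendations
    utility = true_value[i, recommended] - price[recommended]  # quasi-linear utility
    choice = recommended[np.argmax(utility)]
    shares[choice] += 1
    popularity_bias[choice] += 0.01                     # bias feeds back on popularity

shares /= shares.sum()
hhi = np.sum(shares ** 2)                               # concentration measure
print(f"HHI = {hhi:.4f} (1/n_items = {1 / n_items:.4f} would be a fully even market)")
```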
• SSRN Working Paper No. 4319311, 35 pages, 06 January 2023 [pdf] [.bib]
Recommender systems are prevalent across digital platforms. They use machine learning techniques to help consumers make choices by predicting their preferred items. If recommender systems had perfect information about consumer preferences and item attributes, they could recommend the most suitable item for each consumer. However, in practice, recommender systems have incomplete information, and their prediction models can exhibit systemic biases. Our stylised model shows that such biases can dampen competition between the suppliers selling through a digital platform, arising from the fact that biased recommendations are less closely linked to true preferences. Three specific types of bias are examined and are shown to have subtly different effects. Competition remains stronger where suppliers can compete to gain the benefit of the bias, a form of competition for the market. The worst market outcomes can be avoided if consumers can reject unsuitable recommendations, since this helps to restore the competitive constraint on suppliers. However, a model extension shows that these results no longer necessarily hold with endogenous vertical quality. Importantly, in choosing its recommender system, the platform's preferences are not typically aligned with those of consumers.
• Neural Computing and Applications (S.I. on Adaptive and Learning Agents 2021), 24 pages, Springer Nature, 11 November 2022 [pdf] [.bib]
• Extended Abstract in Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems AAMAS'21, 1475-1477, IFAAMAS, 2021 [pdf] [.bib]
• Best Paper Award at ALA'21, 03-04 May 2021 [pdf] [video] [slides]
Policy gradient methods have become one of the most popular classes of algorithms for multi-agent reinforcement learning. A key challenge, however, that is not addressed by many of these methods is multi-agent credit assignment: assessing an agent’s contribution to the overall performance, which is crucial for learning good policies. We propose a novel algorithm called Dr.Reinforce that explicitly tackles this by combining difference rewards with policy gradients to allow for learning decentralized policies when the reward function is known. By differencing the reward function directly, Dr.Reinforce avoids difficulties associated with learning the Q-function as done by counterfactual multi-agent policy gradients (COMA), a state-of-the-art difference rewards method. For applications where the reward function is unknown, we show the effectiveness of a version of Dr.Reinforce that learns an additional reward network that is used to estimate the difference rewards.
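A minimal sketch of the difference-rewards idea underlying Dr.Reinforce, assuming a known reward function r and a default action c_i for the counterfactual (the function names and the tabular REINFORCE-style update are illustrative, not the paper's implementation):

```python
def difference_reward(r, s, a, i, c_i):
    """D_i = r(s, a) - r(s, (a_{-i}, c_i)): agent i's contribution, obtained
    by replacing i's action in the joint action a with a default action c_i."""
    a_cf = list(a)
    a_cf[i] = c_i
    return r(s, tuple(a)) - r(s, tuple(a_cf))

def reinforce_step(theta_i, grad_logpi_i, r, s, a, i, c_i, lr=0.01):
    """One REINFORCE-style step for agent i, with the usual return replaced
    by the difference reward (a sketch, not Dr.Reinforce itself)."""
    return theta_i + lr * difference_reward(r, s, a, i, c_i) * grad_logpi_i
```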
• Autonomous Agents and Multi-Agent Systems 35(25), 53 pages, Springer Nature, 07 June 2021 [pdf] [.bib]
• Extended Abstract in Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS'19, 1862-1864, IFAAMAS, 2019 [pdf] [.bib]
Recent years have seen the application of deep reinforcement learning techniques to cooperative multi-agent systems, with great empirical success. However, given the lack of theoretical insight, it remains unclear what the employed neural networks are learning, or how we should enhance their learning power to address the problems on which they fail. In this work, we empirically investigate the learning power of various network architectures on a series of one-shot games. Despite their simplicity, these games capture many of the crucial problems that arise in the multi-agent setting, such as an exponential number of joint actions or the lack of an explicit coordination mechanism. Our results extend those in Castellini et al. (Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS'19. International Foundation for Autonomous Agents and Multiagent Systems, pp 1862-1864, 2019) and quantify how well various approaches can represent the requisite value functions, and help us identify the reasons that can impede good performance, such as sparsity of the values or overly tight coordination requirements.
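To illustrate why some architectures fall short in one-shot games, a small sketch (the payoff matrix is the climbing game as commonly cited in this line of work; the additive factorisation and least-squares fit are assumptions for illustration): an additive decomposition Q(a1, a2) ~ f1(a1) + f2(a2) cannot represent tightly coordinated payoffs exactly, and its greedy joint action can differ from the true one.

```python
import numpy as np

# Climbing-game-style payoffs: a one-shot game with a hard coordination problem.
Q = np.array([[ 11, -30,   0],
              [-30,   7,   6],
              [  0,   0,   5]], dtype=float)

# Best additive (factored) approximation, fitted by least squares
# over all 9 joint actions.
n = Q.shape[0]
rows, cols = np.divmod(np.arange(n * n), n)
A = np.zeros((n * n, 2 * n))
A[np.arange(n * n), rows] = 1.0          # indicator for agent 1's action
A[np.arange(n * n), n + cols] = 1.0      # indicator for agent 2's action
coef, *_ = np.linalg.lstsq(A, Q.ravel(), rcond=None)
Q_hat = (A @ coef).reshape(n, n)

print("max |Q - Q_hat| =", np.abs(Q - Q_hat).max())  # nonzero: additive form falls short
print("greedy joint action, true Q:  ", np.unravel_index(Q.argmax(), Q.shape))
print("greedy joint action, factored:", np.unravel_index(Q_hat.argmax(), Q_hat.shape))
```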
• arXiv, 04 April 2019 [pdf] [.bib]
One of the main problems encountered so far with recurrent neural networks is that they struggle to retain long-term information dependencies in their recurrent connections. Neural Turing Machines (NTMs) attempt to mitigate this issue by providing the neural network with an external portion of memory, in which information can be stored and manipulated later on. The whole mechanism is differentiable end-to-end, allowing the network to learn how to utilise this long-term memory via SGD, and thus enabling NTMs to infer simple algorithms directly from data sequences. Nonetheless, the model can be hard to train, due to its large number of parameters and interacting components, and little related work exists. In this work we use an NTM to learn and generalise two arithmetical tasks: binary addition and multiplication. These tasks are two fundamental algorithmic examples in computer science, and are considerably more challenging than those previously explored; with them, we aim to shed some light on the capabilities of this neural model.
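A minimal sketch of how binary addition can be posed as a sequence task for a memory-augmented network (the encoding below is an assumption for illustration, not the thesis' exact format): the two operands are presented bit by bit, little-endian so that carries propagate in reading order, and the target is the bits of their sum.

```python
import numpy as np

def binary_addition_example(n_bits, rng):
    """One training pair: inputs are the two operands' bits (little-endian,
    one pair of bits per time step); the target is the bits of their sum."""
    a = int(rng.integers(0, 2 ** n_bits))
    b = int(rng.integers(0, 2 ** n_bits))
    to_bits = lambda x, w: [(x >> k) & 1 for k in range(w)]
    x = np.array([to_bits(a, n_bits), to_bits(b, n_bits)], dtype=np.float32).T
    y = np.array(to_bits(a + b, n_bits + 1), dtype=np.float32)  # one extra carry bit
    return x, y  # x: (n_bits, 2), y: (n_bits + 1,)

rng = np.random.default_rng(0)
x, y = binary_addition_example(8, rng)
print(x.shape, y.shape)  # (8, 2) (9,)
```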
• Proceedings of the International Conference on Web Intelligence WI’17, 195-202, ACM, 2017 [pdf] [.bib]
Gaining followers on the Twitter platform has become a rapid way to increase one's credibility on this social network, which in the last few years has become a launch pad for new trends and a means to influence people's opinions. As a result, many people have begun to buy fake followers on underground markets created expressly to sell them. Identifying fake follower profiles is therefore useful to maintain the balance between genuinely influential people on the network and people who have simply exploited this mechanism. This work presents a model based on artificial neural networks able to detect fake Twitter profiles. In particular, a denoising autoencoder has been implemented as an anomaly detector, trained with a semi-supervised learning approach. The model has been tested on a benchmark already used in the literature, and results are presented.
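A minimal sketch of the semi-supervised idea (the synthetic features, architecture, and threshold rule are assumptions for illustration, not the paper's implementation): train a denoising autoencoder on genuine profiles only, then flag profiles whose reconstruction error is unusually high.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_features = 16                                            # hypothetical profile features
genuine = rng.normal(size=(2_000, n_features))             # stand-in for genuine profiles
fake = rng.normal(loc=2.0, size=(200, n_features))         # stand-in for fake profiles

# Denoising autoencoder: map corrupted genuine profiles back to clean ones.
noisy = genuine + rng.normal(scale=0.3, size=genuine.shape)
dae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
dae.fit(noisy, genuine)

def reconstruction_error(X):
    return np.mean((dae.predict(X) - X) ** 2, axis=1)

# Threshold set from genuine data only (semi-supervised): e.g., 95th percentile.
threshold = np.percentile(reconstruction_error(genuine), 95)
print("fraction of fakes flagged:", np.mean(reconstruction_error(fake) > threshold))
```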
• Numerical Algorithms 74(2), 561-571, Springer US, 26 January 2017 [buy] [pdf] [.bib]
In this work, we present an efficient way to compute the geometric mean of two positive definite matrices times a vector. For this purpose, we investigate the application of methods based on Krylov spaces to compute the square root of a matrix. These methods, using only matrix-vector products, are capable of producing a good approximation of the result at a small computational cost.
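For reference, the quantity being approximated: the geometric mean of two symmetric positive definite matrices A and B admits the standard closed forms

```latex
A \,\#\, B \;=\; A^{1/2}\left(A^{-1/2} B A^{-1/2}\right)^{1/2} A^{1/2}
        \;=\; A \left(A^{-1} B\right)^{1/2},
```

so computing (A # B)v reduces to applying a matrix square root to a vector, which is where matrix-free Krylov methods for f(A)b come into play.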