Open positions

We are always looking for new group members with passion, talent, and grit!

Applications for PhD and Postdoc positions

If you are interested in working with us as a PhD student or postdoc, please contact me by email (capobianco AT diag.uniroma1.it).

Topics:

  • Explainable Artificial Intelligence (XAI)
  • Reinforcement Learning
  • Continual Learning
  • Neurosymbolic AI
  • Applications of these technologies in real-world scenarios

Master's Theses for Sapienza University students

If you are a Master's student at Sapienza University looking for a thesis project, contact me by email (capobianco AT diag.uniroma1.it) or stop by my office. We are mainly interested in thesis projects on robot learning, knowledge acquisition and learning, reinforcement learning, or explainable artificial intelligence, but other topics are welcome! Below is a brief list of available Master's thesis topics. This is only a limited selection: many other topics are available, and we are also open to your own proposals. When you contact us, please indicate a macro-topic (RL, XAI, CL+XAI, RL+XAI, NeSy).

  • [XAI] Everything related to improving XAI methods.
  • [RL] Everything related to improving RL agents.
  • [XAI] Improving Transparent Explainable Logic Layers: Overcoming limitations of TELL (10.3233/FAIA240579)
  • [XAI+GNN] Prototype-based GNNs: Overcoming limitations of TELL (10.1109/TAI.2022.3222618)
  • [XAI+DD] XAI in Drug Discovery (collaboration with Rome Center for Molecular Design, Dept. of Pharmaceutical Chemistry at Sapienza University of Rome): XAI for several tasks in Drug Discovery, such as molecular property prediction and molecular generation (10.1007/s10994-023-06369-y)
  • [RL+XAI] Self-explainable Logic-based Reinforcement Learning: Using Logic-based Layers (TELL) to develop self-explainable RL agents (https://openreview.net/forum?id=JVgRSIafCI, 10.3233/FAIA240579, https://openreview.net/forum?id=ZC0PSk6Mc6)
  • [RL+XAI] Self-explainable Reinforcement Learning: Development of Reinforcement Learning models that are explainable by design (https://ceur-ws.org/Vol-3518/paper1.pdf)
  • [CL+XAI] Real-time analysis of explanation drift: Development of metrics that use explanation drift as an early detector of catastrophic forgetting (10.1016/j.neucom.2024.127960, 10.1016/j.conb.2022.102609)
  • [CL+XAI] XAI-guided replay for spiking neural networks: Exploit explanations to select the samples to store in the replay buffer, making it more efficient (10.1109/MLSP55844.2023.10285911)
  • [CL+XAI] Transfer/Continual learning with Graph Concept Whitening (collaboration with Rome Center for Molecular Design, Dept. of Pharmaceutical Chemistry at Sapienza University of Rome): Test GraphCW in transfer/continual learning scenarios using the HDAC datasets (11 protein groups, with the first being much larger than the others). This application is of great interest in the pharmaceutical field, as the different proteins serve very different purposes (e.g., weight loss vs. cancer). Understanding which properties make a molecule active for one protein (e.g., the one related to cancer) rather than another (the one related to weight loss) could therefore make it possible to propose modifications to the molecules so that they become active for the desired target. (10.1007/s10994-023-06369-y)
  • [RL] RL Autocurricula through visual policy inspection with multi-modal LLMs. Automatic curriculum generation for RL using video-LLMs to directly inspect agent policies and assess learning progress. (arxiv.org/abs/2306.01711)
  • [RL+NeSy] Autocurricula of temporally-extended RL tasks for zero-shot instruction-following. Automatic curriculum generation of temporally-extended RL tasks represented with formal specifications (e.g., reward machines), with the aim of producing agents that can generalize zero-shot to unseen specifications. (2010.03950, 1807.06333, 2102.06858, 2010.03934, 2301.07608)
  • [RL+NeSy] Natural language instructions to policies via LLMs+NeSy RL. Training RL agents capable of following natural language instructions by first turning them into formal specifications that are then solved via NeSy RL. (2010.03950, 1807.06333, 2102.06858)
  • [RL+NeSy] Lightweight and transferable semantic embeddings of temporally-extended RL tasks. Semantic-preserving representations of temporally-extended RL tasks using kernel PCA and semantic similarity. (2010.03950, 1807.06333, paper3.pdf, 2405.14389)
  • [NeSy] Logically informed LLMs. Training/fine-tuning LLMs with background knowledge in formal languages. (paper4.pdf, 2504.13139)