Welcome

Hi, I’m Miquel. I’m doing a PhD focused on Large Language Models (LLMs) and distributed AI: how we train and run these systems across many devices. I like working at the intersection of machine learning and distributed systems, and I’m always open to collaboration. If any of this sounds interesting or you have questions, reach out on LinkedIn or at sirera.m@northeastern.edu.

Education

Universitat Politècnica de Catalunya | BSc in Data Science | 2019 - 2023

Northeastern University | PhD in Computer Engineering | 2024 - Present

Experience

Abi Global Health | AI Engineer Intern | 2022

Northeastern University | Undergraduate Research Assistant | 2023

Download My Resume (PDF)

Current projects

  • Building Resilience in Distributed Large Language Models

We’re extending the JARVIS project—a distributed framework for LLMs that splits model layers across edge devices with limited compute. More in My Research.

JARVIS adds resilience to node failures via peer-to-peer communication and layer redundancy, so inference keeps going when individual nodes drop out. We tested it with Google’s Gemma (2B parameters) on 18 software-defined radios in the NSF Colosseum RF emulator and on a 7-node Raspberry Pi testbed.
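To make the placement-and-failover idea concrete, here’s a toy Python sketch. It’s my illustration, not JARVIS code: the replication factor, device names, and routing policy are all invented, and a real deployment would move tensors, not strings.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    alive: bool = True

def assign_layers(num_layers, devices, replication=2):
    """Place each layer on `replication` distinct devices, round-robin."""
    n = len(devices)
    return {i: [devices[(i + k) % n] for k in range(replication)]
            for i in range(num_layers)}

def inference_path(placement):
    """Pick one live replica per layer; a layer with no live replica is lost."""
    path = []
    for layer, replicas in sorted(placement.items()):
        live = [d for d in replicas if d.alive]
        if not live:
            raise RuntimeError(f"layer {layer} lost: all replicas are down")
        path.append((layer, live[0].name))
    return path

devices = [Device(f"node{j}") for j in range(7)]  # e.g., a 7-node Pi testbed
placement = assign_layers(num_layers=12, devices=devices)
devices[3].alive = False                          # simulate a node failure
print(inference_path(placement))                  # a complete path survives
```

With two replicas per layer, any single node failure still leaves a complete path through every layer of the model.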

Right now we’re exploring model changes that tolerate layer loss more gracefully, and redundancy in middle layers for faster recovery and more robust distributed inference.

  • Extension of Communication-Aware DNN Pruning

We’re extending communication-aware DNN pruning: training networks for distributed deployment so they need less communication while staying accurate. We jointly prune and place neurons in CNNs and MLPs so the models run well on edge devices under heterogeneous network conditions.
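As a rough sketch of the criterion (my illustration, not the actual algorithm: the saliency score, penalty weight, and random placement below are all made up), a communication-aware score can discount neurons whose activations must cross a network link, so pruning removes those first:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, budget, lam = 32, 16, 0.5      # lam: hypothetical link-cost weight
W_in = rng.normal(size=(64, n_hidden))   # input -> hidden
W_out = rng.normal(size=(n_hidden, 10))  # hidden -> output, lives on device B
on_remote = rng.random(n_hidden) < 0.5   # True: unit placed on device A

# Magnitude-based importance, discounted for units whose activation must
# cross the A -> B network link on every forward pass.
importance = np.abs(W_in).sum(axis=0) + np.abs(W_out).sum(axis=1)
score = importance - lam * importance.mean() * on_remote

keep = np.sort(np.argsort(score)[-budget:])   # prune down to the budget
W_in, W_out = W_in[:, keep], W_out[keep]
print(f"kept {budget}/{n_hidden} units; {int(on_remote[keep].sum())} still cross the link")
```

Raising lam trades accuracy for bandwidth: the higher the penalty, the fewer remote units survive pruning.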

  • Knowledge Editing in Large Language Models

We’re looking at more efficient ways to edit knowledge in LLMs—updating or fixing specific facts and behaviors without full retraining, and without hurting overall performance.
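For the flavor of edit we mean, here’s a toy rank-one update in the spirit of locate-then-edit methods such as ROME (a generic illustration with made-up dimensions, not our approach): a weight matrix is treated as a key-value memory and nudged so one specific key maps to a new value, while directions orthogonal to the key are untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # toy weight matrix, viewed as a key-value memory
k = rng.normal(size=8)       # key: representation that recalls the fact
v = rng.normal(size=8)       # value: the corrected association

# Rank-one update chosen so that W_edit @ k == v exactly; any direction
# orthogonal to k is mapped the same way as before.
W_edit = W + np.outer(v - W @ k, k) / (k @ k)

print(np.allclose(W_edit @ k, v))              # True: the fact is rewritten
k_other = rng.normal(size=8)                   # an unrelated representation
print(np.linalg.norm((W_edit - W) @ k_other))  # spillover grows with overlap with k
```

The catch, and a big reason efficient editing is still open, is that spillover: real keys aren’t orthogonal, so each edit can perturb nearby facts.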