About me

👋 Hi, my name is Rodrigo and I'm a researcher at Meta AI. Currently, I work in the Behavioural Computing team, led by Maja Pantic, where I develop new approaches for generative modelling and self-supervised learning on audio-visual speech.

📚 I completed my BSc in Information Systems and Computer Engineering at Instituto Superior Técnico, and my MSc and PhD in Computing at Imperial College London. Details are available in the Education section.

๐Ÿ–ฅ๏ธ I spent a substantial portion of my PhD interning and working at Meta AI, where I collaborated with multiple teams and developed my PhD research. I also joined Sony R\&D in Tokyo briefly after completing my PhD, as a research intern, where I worked on video-to-audio generation. Details are available on the Experience section.

🔬🤖 My research focuses on deep learning applied to audio-visual speech (i.e., faces, lip movements and speech). In particular, I am interested in applying self-supervised learning to learn from unlabelled audio-visual speech. I am also interested in, and have experience with, generative modelling, particularly generating speech with generative adversarial networks (GANs) and diffusion models. Details are available in the Publications section.

๐Ÿ—ฃ๏ธ I have given live talks about my research in multiple conferences including CVPR, Interspeech and ICASSP. I also engage with the machine learning online community directly through platforms such as Twitter and Reddit. Details are available on the Talks and Online Presence sections.

🎸🎾 In my free time, I enjoy playing the guitar and bass (usually with a band). I also play squash weekly with colleagues from Imperial College London. Details are available in the Extra-curricular section.

📃 My CV is available in the CV section.