Hello! I am Assistant Professor of Neurosymbolic AI at the Institute for Logic, Language and Computation (ILLC), University of Amsterdam.
Quick links: Google Scholar; DBLP
How are humans able to abstract from their experiences and understand new situations? How do we form novel concepts from what we already know? I use approaches across multiple disciplines to answer questions like these.
Bio:
Martha Lewis is Assistant Professor at the Institute for Logic, Language and Computation (ILLC), University of Amsterdam. She was previously a Lecturer in the School of Engineering Mathematics and Technology at the University of Bristol, and completed postdocs at the ILLC, funded by a Veni fellowship, and in the Quantum Group in the Department of Computer Science, University of Oxford. She did her PhD at the University of Bristol, in the Bristol Centre for Complexity Sciences, and before that completed the Evolutionary and Adaptive Systems (EASy) MSc at the University of Sussex. She is a member of ELLIS and a Fellow of the Netherlands Institute for Advanced Study. Martha's interests are in compositional, multidisciplinary approaches to understanding language and reasoning.
Press coverage: Science News, IEEE Spectrum, Communications of the ACM, TechXplore
Research Interests
My research is at the intersection of artificial intelligence (AI) and natural language processing (NLP), and is organised across three interrelated strands:
Compositionality: Humans are extremely good at combining concepts to create new ones, and at interpreting novel situations they have not previously encountered. Think of an orange fluffy bicycle. You can immediately picture this, even though you've probably never seen one. My work looks at how human compositional abilities can be modelled and applied in AI. Examples are: integrating grammar with conceptual spaces, and a hierarchical approach to concept composition.
Abstraction: Part of what makes humans able to apply concepts in new situations is their ability to abstract from specifics. My work in this area examines the ability of large language models to process abstractions through metaphor, showing that metaphor interpretation is a challenging task for these models, and builds new compositional models for this kind of task (see e.g. here and here). I also examine the abstract reasoning ability of large language models and find that it breaks down in unfamiliar contexts.
Reasoning: Humans are further able to apply abstraction and composition in reasoning tasks. My work in this area uses theoretically motivated and interdisciplinary ideas to build word representations that can be used for reasoning tasks (e.g. here and here), and to understand the capabilities of deep neural models in reasoning tasks such as analogical reasoning and visual reasoning.