More specifically, my research highlights how low-rank parametrisations used in neural networks impose geometric constraints that make a subset of outputs impossible to predict; such outputs are termed unargmaxable. This raises the following questions:

  • Can we provide formal guarantees about which outputs our models can predict?
  • Can we engineer these constraints to encode inductive biases of interest?
    More generally, I find structure beautiful and enjoy searching for it to better understand problems. Geometric structure can be so intuitive and illuminating when you can see the pattern, and so obscure and baffling before it clicks. I enjoy training my eye by creating interactive visualisations of the geometric representations I am learning about; you can find some more examples here.
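    A minimal sketch of the core idea, using a toy example of my own rather than any model from the papers: in a rank-2 softmax over 3 classes, if one class's weight vector is a convex combination of the others, its logit can never strictly exceed all the rest, so argmax can never select it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy rank-2 softmax over 3 classes (no bias): class 2's weight vector
    # is the midpoint of classes 0 and 1, so for every input x we have
    # logit_2 = (logit_0 + logit_1) / 2, which can never strictly exceed
    # both of the other logits.
    W = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])

    # Sample many feature vectors and count how often class 2 wins argmax.
    hits = sum(int(np.argmax(W @ rng.normal(size=2)) == 2)
               for _ in range(10_000))
    print(hits)  # 0 — class 2 is unargmaxable
    ```

    Geometrically, class 2's weight vector lies inside the convex hull of the other weight vectors, which is exactly the kind of constraint the low-rank parametrisation can impose.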

    Alongside my PhD, I was also a part-time Research Assistant on Information Extraction from news articles and clinical text under the supervision of Beatrice Alex.

    Selected publications

    Taming the Sigmoid Bottleneck: Provably Argmaxable Sparse Multi-Label Classification. Andreas Grivas, Antonio Vergari and Adam Lopez. Accepted at AAAI 2024.
    See also: Poster, Interactive visualisation
    Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice. Andreas Grivas, Nikolay Bogoychev and Adam Lopez. ACL 2022 (Oral Presentation)
    See also: Poster, Interactive visualisation
    Not a cute stroke: Analysis of Rule- and Neural Network-based Information Extraction Systems for Brain Radiology Reports. Andreas Grivas, Beatrice Alex, Claire Grover, Richard Tobin and William Whiteley. LOUHI 2020
    What do Character-level Models Learn About Morphology? The Case of Dependency Parsing. Clara Vania, Andreas Grivas, and Adam Lopez. EMNLP 2018

    Dissertations

    • My vimrc and other dotfiles can be found here.
    • My public key is 24A721BB42D9A790