My research focuses on statistical learning theory and machine learning, including generalization bounds (mainly of the information-theoretic variety), the application of such bounds in various settings, conformal prediction, online learning, and active learning.

I am currently a postdoc at the University College London Centre for Artificial Intelligence, advised by Benjamin Guedj and supported by WASP. Prior to this, I completed my PhD at Chalmers University of Technology as part of the WASP Graduate School AI track, supervised by Giuseppe Durisi, and studied physics at the University of Gothenburg.

Preprints

  • Generalization Bounds: Perspectives from Information Theory and PAC-Bayes
    Fredrik Hellström, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky. Preprint, 2023.

Conference and Journal Papers

  • Generalization and Informativeness of Conformal Prediction
    Matteo Zecchin, Sangwoo Park, Osvaldo Simeone, Fredrik Hellström. ISIT, 2024.
  • Comparing Comparators in Generalization Bounds
    Fredrik Hellström, Benjamin Guedj. AISTATS, 2024.
  • Adaptive Selective Sampling for Online Prediction with Experts
    Rui M. Castro, Fredrik Hellström, Tim van Erven. NeurIPS, 2023.
  • Evaluated CMI Bounds for Meta Learning: Tightness and Expressiveness
    Fredrik Hellström, Giuseppe Durisi. NeurIPS, 2022.
  • New Family of Generalization Bounds Using Samplewise Evaluated CMI
    Fredrik Hellström, Giuseppe Durisi. NeurIPS, 2022.
  • Fast-Rate Loss Bounds via Conditional Information Measures with Applications to Neural Networks
    Fredrik Hellström, Giuseppe Durisi. ISIT, 2021.
  • Generalization Bounds via Information Density and Conditional Information Density
    Fredrik Hellström, Giuseppe Durisi. JSAIT, 2020.
  • Generalization Error Bounds via mth Central Moments of the Information Density
    Fredrik Hellström, Giuseppe Durisi. ISIT, 2020.
  • New Constraints on Inelastic Dark Matter from IceCube
    Riccardo Catena, Fredrik Hellström. JCAP, 2018.

Other

  • Information-Theoretic Generalization Bounds: Tightness and Expressiveness
    Fredrik Hellström. PhD Thesis, 2023.
  • Data-Dependent PAC-Bayesian Bounds in the Random-Subset Setting with Applications to Neural Networks
    Fredrik Hellström, Giuseppe Durisi. Workshop on Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3) at ICML, 2021.
  • Machine Learning, Compression, and Understanding
    Fredrik Hellström. Popular science presentation.