I am a postdoc in the group of Julien Lesgourgues at RWTH Aachen.
I am interested in novel ways of exploiting cosmological data to learn about Early- and Late-Universe physics.
I am one of the creators and main developers of Cobaya.
Although Planck data so far favour a canonical single-field scenario, hints of features of different kinds are present in both the power spectrum and the bispectrum. These hints are not statistically significant by themselves, but if a simple-enough model can produce both simultaneously, their combined significance may amount to a detection of early-universe physics beyond the standard scenario.
In addition to features, a more precise determination of the scale dependence of primordial scalar perturbations, together with a detection of primordial tensor modes and non-Gaussianity, could also reveal the presence of extra degrees of freedom during inflation, in particular spectator fields such as curvatons. We have shown that, by using the stochastic formalism to compute the expected distribution of spectator fluctuations, precise measurements of inflationary observables can be tied to the duration of inflation in particular models.
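To illustrate the stochastic formalism, here is a minimal toy sketch (all numbers assumed for illustration, not taken from our analyses): the long-wavelength part of a light spectator field obeys a Langevin equation in the number of e-folds, and for a quadratic potential its equilibrium variance has a simple analytic form that a short simulation reproduces.

```python
import numpy as np

# Toy stochastic-inflation sketch (illustrative numbers, M_pl = 1):
# a light spectator field with V = m^2 phi^2 / 2 obeys the Langevin equation
#   dphi/dN = -V'(phi) / (3 H^2) + (H / 2 pi) xi(N),
# whose equilibrium distribution has variance 3 H^4 / (8 pi^2 m^2).
H = 0.1            # assumed constant Hubble rate during inflation
m2 = 0.1 * H**2    # assumed spectator mass squared (m << H: light field)
dN, N_tot, n_traj = 0.05, 200.0, 2000
rng = np.random.default_rng(0)

phi = np.zeros(n_traj)                      # all trajectories start at phi = 0
for _ in range(int(N_tot / dN)):            # Euler-Maruyama steps in e-folds N
    drift = -m2 * phi / (3 * H**2)
    noise = (H / (2 * np.pi)) * np.sqrt(dN) * rng.normal(size=n_traj)
    phi += drift * dN + noise

var_sim = phi.var()                         # simulated late-time variance
var_eq = 3 * H**4 / (8 * np.pi**2 * m2)     # analytic equilibrium variance
print(var_sim, var_eq)
```

Running many trajectories for long enough that the field equilibrates, the sample variance agrees with the analytic equilibrium value to within a few percent.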
The detection of a stochastic gravitational-wave background (SGWB) can reveal details of the dynamics of the Early Universe, from features of the inflationary model to phase transitions. To exploit this potential, one must reconstruct the stochastic GW spectrum in a model-independent way, while disentangling it from astrophysical foregrounds. As a member of the LISA collaboration, I am part of that effort.
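A toy sketch of what "model-independent" means here (this is not the LISA pipeline; frequencies, amplitudes, and noise levels are all made up): rather than fitting one specific spectral model, estimate the SGWB amplitude independently in log-spaced frequency bins.

```python
import numpy as np

# Toy model-independent reconstruction sketch (not the LISA pipeline; all
# numbers are illustrative): estimate Omega_GW(f) in log-spaced frequency
# bins instead of assuming a specific spectral model.
rng = np.random.default_rng(1)
f = np.logspace(-4, -1, 60)                        # mock frequency band [Hz]
omega_true = 1e-11 * (f / 1e-3) ** (2 / 3)         # toy rising power law
data = omega_true * (1 + 0.05 * rng.normal(size=f.size))  # 5% mock noise

edges = np.logspace(-4, -1, 7)                     # 6 log-spaced bins
recon = np.array([data[(f >= lo) & (f <= hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])
# 'recon' is a binned estimate of the spectrum; a real analysis would fit
# the bin amplitudes, together with foreground templates, to the likelihood.
print(recon)
```

The binned estimate recovers the shape of the injected spectrum without ever assuming its functional form, which is the essence of the approach.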
Together with Antony Lewis, I am developing a next-generation cosmological Monte Carlo sampler: Cobaya. Cobaya is written in Python in a modular way and made as flexible as possible: users can define their own priors and likelihoods and redefine parameters without touching Cobaya's source, and can plug in their own modified versions of CAMB and CLASS without further wrapping.
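As a small illustration of that flexibility, a minimal Cobaya input file might look like the following (the likelihood name and all numbers here are made up for illustration): the likelihood is a user-defined function declared inline, and the prior is attached directly to the parameter, with no changes to Cobaya's source.

```yaml
likelihood:
  my_like:                          # user-defined likelihood, declared inline
    external: "lambda x: -0.5 * x ** 2"
params:
  x:
    prior: {min: -5, max: 5}        # user-defined flat prior
    ref: 0
    proposal: 0.5
sampler:
  mcmc:
```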
I am developing a cosmological sampler suited to very slow likelihoods, in order to extract constraints more efficiently from current and future surveys with increasingly large data output, for which traditional Monte Carlo methods are too costly. The method uses machine learning to choose the optimal location of the next points at which to evaluate the likelihood, so that the maximum amount of information about the posterior is gained from the fewest possible evaluations.