Introduction

Why apprentice?

We have taken the valuable lessons learned from years of experience with Professor and written a parameterisation and tuning application from scratch. The result is a code that is more robust, much cleaner and, most importantly, significantly faster. The code is written in Python3 and builds on widely available packages such as numpy, scipy and numba. A large part of the performance gain comes from re-engineering the objective calculation, which now makes heavy use of vectorisation. The generally cleaner code also allows for fast computation of exact gradients as well as Hessians, and the recurrence relations for the polynomial structures have been improved significantly. Further speed improvements stem from reading data from sources other than directories full of YODA files, namely HDF5 and JSON.
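
To illustrate what this kind of vectorisation looks like, here is a minimal sketch of a chi2-style objective evaluated over all bins with a single matrix-vector product. The names (`monomials`, `objective`, `coeffs`) are hypothetical stand-ins for the actual apprentice internals, not its API:

```python
import numpy as np

# Illustrative sketch only: stacking the polynomial coefficients of all bins
# into one matrix lets numpy evaluate every bin prediction in a single
# matrix-vector product instead of a Python loop over bins.

def monomials(p):
    """Monomial vector (1, p_1, ..., p_D, p_1*p_1, p_1*p_2, ...) up to order 2."""
    p = np.asarray(p, dtype=float)
    quad = np.outer(p, p)[np.triu_indices(len(p))]
    return np.concatenate(([1.0], p, quad))

def objective(p, coeffs, data, errs):
    """chi2 = sum_b ((f_b(p) - d_b) / sigma_b)^2, evaluated for all bins at once."""
    preds = coeffs @ monomials(p)      # shape (nbins,): one matrix-vector product
    resid = (preds - data) / errs
    return resid @ resid

# Example: 3 bins, 2 parameters -> 1 + 2 + 3 = 6 monomials per bin
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(3, 6))
print(objective([0.5, -1.0], coeffs, data=np.zeros(3), errs=np.ones(3)))
```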

We further introduce MPI parallelism where it is meaningful, such as the construction of the approximations or the parallel minimisation of an objective for different start points or different sets of weights. Our benchmark example is a 10-dimensional tuning problem with 3rd-order polynomials, for which we generally observe a single-core speed-up of a factor of 100 (see the plot below).
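
A hedged sketch of the parallel-minimisation pattern this refers to, using mpi4py (the actual apprentice implementation may differ): each rank minimises the objective from its share of the start points, and rank 0 collects the overall best result. The `objective` here is a hypothetical placeholder:

```python
from mpi4py import MPI
import numpy as np
from scipy.optimize import minimize

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def objective(p):
    # Hypothetical placeholder for the tuning objective
    return float(np.sum((p - 1.0) ** 2))

starts = np.random.default_rng(42).uniform(-5.0, 5.0, size=(16, 10))
my_starts = starts[rank::size]                 # round-robin split across ranks

results = [minimize(objective, s) for s in my_starts]
local_best = min(results, key=lambda r: r.fun) if results else None

gathered = comm.gather(local_best, root=0)
if rank == 0:
    best = min((r for r in gathered if r is not None), key=lambda r: r.fun)
    print("best point:", best.x, "objective:", best.fun)
```

Run with, for example, `mpirun -n 4 python minimise.py`; since the minimisations are independent, the pattern scales trivially with the number of ranks.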

This increase in performance allows us to tackle more complicated problems, in particular those in higher dimensions.

An additional new feature is the capability to train multivariate rational approximations. This is particularly interesting for functions that exhibit 1/x-like behaviour, which is poorly captured by pure polynomials. Applications can be found, for instance, when parameterising cross-sections of BSM models.
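
The following sketch illustrates why a rational ansatz captures such behaviour where a polynomial of the same parameter count does not. It uses a generic linearised least-squares fit in plain numpy, not apprentice's actual API: to fit f(x) ~ (a0 + a1*x) / (1 + b1*x), multiply through by the denominator to get f = a0 + a1*x - b1*(x*f), which is linear in (a0, a1, b1):

```python
import numpy as np

x = np.linspace(0.0, 4.0, 50)
f = 1.0 / (1.0 + x)                  # pole at x = -1: 1/x-like behaviour

# Linearised system: columns for a0, a1 and b1 (note the -x*f term)
A = np.column_stack([np.ones_like(x), x, -x * f])
(a0, a1, b1), *_ = np.linalg.lstsq(A, f, rcond=None)
rational = (a0 + a1 * x) / (1.0 + b1 * x)

poly = np.polyval(np.polyfit(x, f, 2), x)   # 3-parameter polynomial baseline

print("rational   max error:", np.max(np.abs(rational - f)))
print("polynomial max error:", np.max(np.abs(poly - f)))
```

The rational fit recovers the target to machine precision, while the polynomial with the same number of free parameters leaves a visible residual.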

Figure: Frequency of the objective computation, comparing Professor2 with apprentice as a function of the problem size (nbins).
