# A novel Bayesian approach to spectral function reconstruction

###### Abstract

We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the MEM. We present a realistic test of our method in the context of the non-perturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. An improved potential estimation from previously investigated quenched lattice QCD correlators is provided.

The numerical solution of inverse problems is an active area of research with important applications in science and engineering. In the context of QCD physics, the estimation of spectral functions from Euclidean correlators is of particular interest. A reliable determination of ground and excited state properties of mesons and baryons at zero temperature Sasaki:2005ap from non-perturbative Monte-Carlo simulations (lattice QCD), e.g., represents an important bridge between field theory and experiment. At finite temperature, lattice spectral functions allow us to scrutinize the physics of the early universe by elucidating phenomena, such as heavy quarkonium melting Asakawa:2003re and the transport properties Aarts:2007wj of the quark-gluon plasma produced in relativistic heavy-ion collisions.

The most common approach to spectral function reconstruction deployed today, the maximum entropy method (MEM) Jarrell:1996 , is based on Bayesian inference. Nevertheless, even after 20 years of application Asakawa:2000tr ; Asakawa:2003re ; Nickel:2006mm ; Rothkopf:2011db , the reliability of the MEM is still under discussion Gunnarsson:2010 ; Rothkopf:2011 ; Burnier:2013fca . Here we introduce a novel Bayesian approach that addresses key issues affecting the MEM: slow convergence of the underlying optimization task, high computational cost for extended search spaces, scale dependence in the prior functional, and the Gaussian approximation required in the hyperparameter estimation.

The Bayesian strategy Bishop:2007 relies on an application of the multiplication law for the joint probability distribution of the spectral function $\rho$ of the system under investigation, the measured data $D$, and any other prior information $I$

$$P[\rho|D,I]=\frac{P[D|\rho,I]\,P[\rho|I]}{P[D|I]} \qquad (1)$$

We specify in the likelihood probability $P[D|\rho,I]$ how the data $D$ is obtained, while the prior probability $P[\rho|I]$ encodes how prior information on $\rho$ itself enters the posterior $P[\rho|D,I]$. The maximum of $P[\rho|D,I]$ will ultimately provide us with a point estimate of $\rho$, which we refer to as the Bayesian solution to the inverse problem.

In the following, we aim at inverting the convolution

$$D(\tau)=\int_0^\infty d\omega\; K(\tau,\omega)\,\rho(\omega) \qquad (2)$$

which connects the spectral function $\rho(\omega)$ through a known kernel function $K(\tau,\omega)$ to the correlation function $D(\tau)$. In practice the correlator is estimated at $N_\tau$ points $\tau_i$ from a sample of Gaussian distributed measurements. After discretization of the frequencies along $N_\omega$ points $\omega_l$ spaced by $\Delta\omega_l$ we can compute the corresponding data $D_i^\rho$ for each test spectrum $\rho_l=\rho(\omega_l)$

$$D_i^\rho=\sum_{l=1}^{N_\omega}\Delta\omega_l\,K(\tau_i,\omega_l)\,\rho_l \qquad (3)$$

According to the Gaussian assumption, we use the quadratic distance

$$L=\frac{1}{2}\sum_{i,j=1}^{N_\tau}\big(D_i-D_i^\rho\big)\,C^{-1}_{ij}\,\big(D_j-D_j^\rho\big) \qquad (4)$$

to assign a probability to the data given a test spectral function. Here $C_{ij}$ denotes the covariance matrix of the $N_\tau$ datapoints. In addition we know that if $L\gg N_\tau/2$, $\rho$ does not reproduce the datapoints within their errorbars, while if $L\ll N_\tau/2$ the spectrum will contain many unnatural structures arising from overfitting the noise in the data. Hence the most neutral reconstruction will satisfy $L\simeq N_\tau/2$, which we impose as a constraint arising from prior knowledge. Our likelihood probability hence reads

$$P[D|\rho,I]\propto e^{-L}\,\delta\big(L-N_\tau/2\big) \qquad (5)$$

where the $\delta$-function is realized as the limit of an increasingly narrow distribution, taken numerically. Note that maximizing this expression alone is still ill-defined, since the parameters $\rho_l$ are not yet uniquely fixed.
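To make the discretization explicit, the forward problem of Eq.(3) and the quadratic distance of Eq.(4) can be sketched in a few lines. The exponential kernel, the grids, the Lorentzian mock spectrum and the noise level below are illustrative assumptions, not the setup used later in the paper:

```python
import numpy as np

# Illustrative sketch of Eqs. (3)-(4); kernel, grids and the mock
# spectrum are assumptions made for this example only.
n_tau, n_omega = 16, 400
tau = np.linspace(0.05, 0.8, n_tau)          # Euclidean times tau_i
omega = np.linspace(0.01, 20.0, n_omega)     # frequencies omega_l
d_omega = omega[1] - omega[0]                # uniform spacing
K = np.exp(-np.outer(tau, omega))            # kernel K(tau_i, omega_l)

def correlator(rho):
    """Eq. (3): D_i^rho = sum_l Delta-omega K(tau_i, omega_l) rho_l."""
    return d_omega * K @ rho

def quadratic_distance(D, rho, C_inv):
    """Eq. (4): L = 1/2 (D - D^rho)^T C^{-1} (D - D^rho)."""
    r = D - correlator(rho)
    return 0.5 * r @ C_inv @ r

# mock spectrum: a single Lorentzian peak
rho_true = 1.0 / ((omega - 5.0) ** 2 + 0.25)
D_ideal = correlator(rho_true)

# Gaussian noise with constant relative error, diagonal covariance
rng = np.random.default_rng(0)
err = 1e-3 * np.abs(D_ideal)
D_noisy = D_ideal + rng.normal(0.0, err)
C_inv = np.diag(1.0 / err**2)

# for the exact spectrum, L scatters around N_tau/2 -- the "most
# neutral" value singled out in Eq. (5)
L_true = quadratic_distance(D_noisy, rho_true, C_inv)
print(L_true)
```

With the exact spectrum inserted, `L_true` fluctuates around $N_\tau/2$, which illustrates why this value is imposed as a constraint in Eq.(5).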

Hence we continue by specifying the prior probability $P[\rho|I]$, which acts as a regulator and will allow us to select a unique Bayesian set of $\rho_l$'s. The MEM utilizes the Shannon-Jaynes entropy $S_{\rm SJ}$ in its prior probability Jarrell:1996 , which is constructed from four axioms. Similarly we introduce our expression for $P[\rho|I]$, after replacing two of these axioms.

Prior information is incorporated explicitly through a function $m(\omega)$, which, by definition, is the correct spectral function in the absence of data Jarrell:1996 . It is usually obtained through previous reconstructions with less accurate data or from independent estimates. We begin the construction of the functional $S$ with:

## Axiom I: Subset independence

Let us consider two different subsets $\Omega_1$ and $\Omega_2$ along the frequency axis. If prior information imposes constraints on the spectrum within each of these subsets, then the result of the reconstruction should not depend on treating these domains separately or in a combined fashion. This relation is satisfied if $S$ is written as an integral over frequencies

$$S=\int d\omega\; s\big(\rho(\omega),m(\omega),\omega\big) \qquad (6)$$

While this axiom coincides with the one used in the MEM, we continue by introducing a new one:

## Axiom II: Scale invariance

In general $\rho$ does not have to be a probability distribution. Indeed, depending on the observable the spectrum is associated with, its dimension can differ from unity. We hence require that the choice of units for $\rho$ and $m$ shall not change the result of the reconstruction, i.e. we must construct our prior probability using ratios of $\rho/m$ only

$$P[\rho|I(m),\alpha]\propto\exp\!\left[\alpha\int d\omega\; s\!\left(\frac{\rho(\omega)}{m(\omega)}\right)\right] \qquad (7)$$

Now that the integrand does not carry a dimension, we introduce the dimensionful hyperparameter $\alpha$ to also make the argument of the exponential dimensionless.

## Axiom III: Smoothness of the reconstructed spectra

The only certain information about the spectral function is that it is a positive definite and smooth function. Hence we wish the prior functional to impose these traits on the reconstructed spectrum even if no further prior information is known, i.e. in the case of a constant prior function $m$, a smooth spectrum shall be chosen.

The strategy towards this end relies on penalizing spectra which deviate between two adjacent frequencies $\omega_1$ and $\omega_2$. If changing the ratio $r=\rho/m$ at the two frequencies does not change the values of the data beyond their errorbars, then $P[\rho|I]$ should favor $\rho(\omega_1)=\rho(\omega_2)$. The penalty between the case where the same value $r$ exists at both frequencies and the case where they differ by a small amount $\delta$, hence has to be independent of $r$ and symmetric in whether $\rho(\omega_1)>\rho(\omega_2)$ or $\rho(\omega_1)<\rho(\omega_2)$:

$$s(r)-\frac{1}{2}\Big[s\big(r(1+\delta)\big)+s\big(r(1-\delta)\big)\Big]={\rm const}\times\delta^2 \qquad (8)$$

This is precisely the discretized expression for the differential equation $r^2\,s''(r)={\rm const}$, whose solution yields

$$s(r)=C_0+C_1\,r+C_2\log r \qquad (9)$$

The remaining axiom is identical to the MEM case, since it establishes the Bayesian meaning of the prior function $m$.

## Axiom IV: Maximum at the prior

In the absence of data, $P[\rho|I]$ must become maximal at $\rho=m$. Conventionally its value at this point is chosen to vanish

$$s(r)\big|_{r=1}=0,\qquad \frac{ds}{dr}\bigg|_{r=1}=0 \qquad (10)$$

The first two conditions fix the constants $C_0$ and $C_1$ up to an overall constant $C_2$, which we absorb into the hyperparameter $\alpha$. The requirement of a maximum forces this constant to be positive. Our final result hence reads

$$S=\int d\omega\left(1-\frac{\rho(\omega)}{m(\omega)}+\log\!\left[\frac{\rho(\omega)}{m(\omega)}\right]\right) \qquad (11)$$
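Spelling out the intermediate algebra, starting from the general solution $s(r)=C_0+C_1 r+C_2\log r$ of the smoothness condition:

```latex
% Axiom IV: vanishing value and extremum at the prior, r = rho/m = 1:
s(1) = C_0 + C_1 = 0 , \qquad s'(1) = C_1 + C_2 = 0
\;\Longrightarrow\; C_1 = -C_2 ,\quad C_0 = C_2 .
% Hence the one-parameter family
s(r) = C_2 \left( 1 - r + \log r \right) ,
% which satisfies  r^2 s''(r) = -C_2 = const  (Axiom III), and a
% maximum (rather than a minimum) at r = 1 requires
s''(1) = -C_2 < 0 \;\Longrightarrow\; C_2 > 0 ,
% with C_2 > 0 absorbed into the hyperparameter alpha.
```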

This new prior distribution is strictly concave and exhibits the same quadratic behavior around the extremum $\rho=m$ as the Shannon-Jaynes entropy $S_{\rm SJ}$. Hence the uniqueness of its maximum can be established analogously to the MEM Asakawa:2000tr . In the case where $\rho\gg m$ or $\rho\ll m$, the contribution of such spectra to $S$ and to its variation is not suppressed, thus we avoid the asymptotic flatness inherent in $S_{\rm SJ}$. Note also that a closed expression for the normalization of $e^{\alpha S}$ is available.
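The difference in the asymptotic behavior of the two integrands can be checked directly. The following toy comparison (all numbers illustrative, single frequency bin, prior function $m=1$) contrasts $s(r)=1-r+\log r$ with the Shannon-Jaynes integrand $s_{\rm SJ}=\rho-m-\rho\log(\rho/m)$:

```python
import numpy as np

# Toy comparison of the integrand of Eq. (11),
# s_br(rho) = 1 - rho/m + log(rho/m), with the Shannon-Jaynes
# integrand s_sj(rho) = rho - m - rho*log(rho/m), at m = 1.

def s_br(rho, m=1.0):
    return 1.0 - rho / m + np.log(rho / m)

def s_sj(rho, m=1.0):
    return rho - m - rho * np.log(rho / m)

# Both vanish and are maximal at rho = m and share the quadratic
# behavior -(rho - m)^2 / (2 m^2) close to that point:
eps = 1e-4
print(s_br(1.0 + eps), s_sj(1.0 + eps))   # both approx -eps**2 / 2

# Towards rho -> 0, however, s_sj saturates at the finite value -m
# (an asymptotically flat direction: the penalty stops growing),
# while s_br keeps penalizing through the diverging logarithm:
print(s_sj(1e-10), s_br(1e-10))           # approx -1 vs. approx -22
```

The bounded penalty of $s_{\rm SJ}$ for $\rho\to 0$ is exactly the flat direction referred to in the text; the logarithm in Eq.(11) removes it.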

By construction, our prior distribution contains the positive hyperparameter $\alpha$, which we need to treat in a Bayesian fashion. In the MEM, several spectra for different values of $\alpha$ are reconstructed and ultimately averaged over Jarrell:1996 , weighted by the values of the evidence $P[\alpha|D,m]$. The calculation of the evidence relies however on a Gaussian approximation, whose validity is not guaranteed.

Here we take a different route MacKay:1999 , and integrate out $\alpha$ from the joint probability distribution $P[\rho,\alpha|D,I(m)]$. I.e. we take into account the influence of all possible prior distributions in the resulting posterior probability $P[\rho|D,I]$, on which we base the reconstruction. Starting from the multiplication law for $P[\rho,D,\alpha|I]$,

$$P[\rho,D,\alpha|I]=P[D|\rho,I,\alpha]\,P[\rho|I,\alpha]\,P[\alpha|I]=P[\rho|D,I,\alpha]\,P[D|I,\alpha]\,P[\alpha|I] \qquad (12)$$

we integrate both r.h.s. with respect to $\alpha$. Assuming no knowledge on the hyperparameter, $P[\alpha|I]=1$, we set out to find an expression for the $\alpha$-independent posterior

$$P[\rho|D,I]\propto P[D|\rho,I]\int_0^\infty d\alpha\;P[\alpha|I]\,\frac{e^{\alpha S}}{Z_S(\alpha)},\qquad Z_S(\alpha)=\int\!\mathcal{D}\rho\; e^{\alpha S} \qquad (13)$$

In the above expression $P[D|\rho,I]$ is given by Eq.(5) and $P[D|I]$ is an irrelevant constant. For large values of $S$, we approximate the integral over $\alpha$ through a next-to-leading order resummation of logarithms, while for small $S$ a numerical evaluation is possible.

After integration, no dependence on $\alpha$ remains. The presence of the prior function $m$ is not problematic, for constant values of $m$ do not influence the reconstruction result if $e^{\alpha S}$ is properly normalized. Hence $P[\rho|D,I]$ does not contain any meaningful external parameters and we can proceed to find its maximum numerically. To this end we deploy the quasi-Newton LBFGS algorithm, which allows us to approach the maximum by varying each of the $N_\omega$ parameters $\rho_l$ individually. An inversion of the Hessian matrix at intermediate steps is not required, which leads to a significant reduction in computational cost compared to the usual Levenberg-Marquardt approach. Note that, in contrast to the MEM with $S_{\rm SJ}$, now without asymptotically flat directions, we successfully locate the global extremum of the posterior and do not need to stop the algorithm at an artificial cutoff in step size.
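The optimization step can be sketched with SciPy's L-BFGS-B routine standing in for the LBFGS implementation described here. For brevity the hyperparameter $\alpha$ is kept fixed instead of being integrated out as in Eq.(13), and kernel, grids, prior function and mock data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the MAP search: minimize Q = L - alpha*S over all rho_l.
# Fixed alpha, exponential kernel and Lorentzian mock data are
# simplifying assumptions for this example, not the paper's scheme.
n_tau, n_omega = 16, 200
tau = np.linspace(0.05, 0.8, n_tau)
omega = np.linspace(0.01, 20.0, n_omega)
d_omega = omega[1] - omega[0]
K = np.exp(-np.outer(tau, omega))

rho_true = 1.0 / ((omega - 5.0) ** 2 + 0.25)
D = d_omega * K @ rho_true
sigma = 1e-4 * np.abs(D)          # diagonal errors

m = np.full(n_omega, 0.1)         # constant prior function m(omega)
alpha = 1.0                       # fixed here for simplicity

def neg_log_posterior(rho):
    L = 0.5 * np.sum(((D - d_omega * K @ rho) / sigma) ** 2)   # Eq. (4)
    S = d_omega * np.sum(1.0 - rho / m + np.log(rho / m))      # Eq. (11)
    return L - alpha * S

def gradient(rho):
    r = (D - d_omega * K @ rho) / sigma**2
    dL = -d_omega * (K.T @ r)
    dS = d_omega * (1.0 / rho - 1.0 / m)
    return dL - alpha * dS

# positivity of the spectrum is enforced through box constraints;
# no Hessian inversion is needed, as stressed in the text
opt = minimize(neg_log_posterior, x0=m.copy(), jac=gradient,
               method="L-BFGS-B", bounds=[(1e-12, None)] * n_omega,
               options={"maxiter": 5000})
rho_map = opt.x
```

Starting from $\rho=m$, the optimizer drives the quadratic distance down towards the data while the concave prior term keeps the reconstruction smooth and positive.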

One area of application is the static potential between two heavy quarks, which is related to the spectral structure of the rectangular Wilson loop and possibly the Wilson line correlator in Coulomb gauge Rothkopf:2011db . It has been shown that the extraction of such spectra poses a severe challenge to the MEM Burnier:2013fca .

Here we benchmark our approach through the reconstruction of known spectra from cutoff-regularized Euclidean correlators, calculated in hard thermal loop (HTL) resummed perturbation theory Burnier:2013fca . Subsequently we attempt to extract from them the known HTL inter-quark potential Laine:2007qy by fitting the position and width of the lowest lying peak Rothkopf:2011db . The ideal HTL data points are perturbed by Gaussian noise with a variance chosen such that the relative errors are constant. In preparation for lattice QCD, we assume no prior knowledge on the spectrum and supply a constant prior function $m$.

In Fig.1 we present reconstructions from mock data at two separation distances (left and right panel) for qualitative comparison. The reference MEM Rothkopf:2011 based on ideal datapoints Burnier:2013fca fails to reproduce even the Lorentzian shape of the lowest peak in the HTL spectrum. In the left panel the spectrum contains a peak and a large background, both of which our method is able to capture. In the right panel a single peak is dominant, for which already fewer datapoints suffice. To showcase possible improvements for future lattice QCD studies, we also present the results for a larger number of more accurate datapoints (BR128-5).

As a quantitative check, we reproduce the known HTL inter-quark potential from the lowest spectral peak. Our method hence needs to yield the correct position and width of a skewed Lorentzian. In the top panel of Fig.2 we show the real part (solid line) obtained from the Wilson loop (circle) and Wilson line correlators (triangle). Error bars are estimated from three reconstructions with different mock noise of equal strength. The correct real part is reproduced accurately for both observables; in particular, the strong artificial rise observed in the MEM in Ref. Burnier:2013fca is absent.

With our method it is also possible to reproduce the width from the Wilson line correlators. To this end we utilize a number of datapoints and a precision that are still realistic in quenched lattice QCD. The resulting reconstruction, with only a small deviation from the analytic result, is shown in the lower panel of Fig.2 (pentagon). (For the Wilson loop we only show the real part, since the background from cusp divergences Burnier:2013fca will necessitate even better data.)

With these limitations in mind, we apply our method to the Wilson loop and Wilson line correlators in quenched lattice QCD Rothkopf:2011db . The improved estimate in Fig.3 for both observables shows that, as expected at the small distances treated here Burnier:2009bk , their values lie close to the color singlet free energies in Coulomb gauge. While we do not expect the width (bottom) to be captured reliably yet at the present data quality, it is interesting to note that its values appear to be of the same order of magnitude as in the HTL calculation at this temperature.

We have introduced a novel Bayesian approach to spectral function reconstruction. It cures the conceptual and practical issues affecting the MEM by introducing an improved dimensionless prior distribution, Eq.(11), devoid of asymptotically flat directions. In the case of a constant prior function $m$ and proper normalization of the prior, no external parameter needs to be adjusted, since we integrate out the hyperparameter $\alpha$ explicitly, as shown in Eq.(13). Combined with the LBFGS optimizer, which varies each of the individual parameters $\rho_l$, we achieve a significant improvement in the reconstruction of spectra, as demonstrated in Fig.1 and Fig.2. Hence we look forward to further applications in lattice QCD (Fig.3) and beyond.

The authors thank T. Hatsuda, S. Sasaki, O. Kaczmarek, S.Y. Kim, P. Petreczky and H.T. Ding for fruitful discussions on the MEM, C.A. Rothkopf for insight on Bayesian inference and the DFG-Heisenberg group of Y. Schröder at Bielefeld Univ. for computing resources. This work was partly supported by the Swiss National Science Foundation (SNF) under grant 200021-140234.

## References

- (1) T. Yamazaki et al., Phys. Rev. D 65, 014501 (2002), K. Sasaki, S. Sasaki and T. Hatsuda, Phys. Lett. B 623, 208 (2005); G. S. Bali, S. Collins and C. Ehmann, Phys. Rev. D 84, 094506 (2011); Z. Fodor, C. Hoelbling, Rev. Mod. Phys. 84, 449 (2012).
- (2) T. Umeda, K. Nomura and H. Matsufuru, Eur. Phys. J. C 39S1, 9 (2005); M. Asakawa, T. Hatsuda, Phys. Rev. Lett. 92, 012001 (2004); S. Datta, F. Karsch, P. Petreczky and I. Wetzorke, Phys. Rev. D 69, 094507 (2004); A. Jakovac, P. Petreczky, K. Petrov and A. Velytsky, Phys. Rev. D 75, 014506 (2007); G. Aarts, C. Allton, M. B. Oktay, M. Peardon and J. -I. Skullerud, Phys. Rev. D 76, 094513 (2007); H. Ohno et al., Phys. Rev. D 84, 094504 (2011); G. Aarts et al., JHEP 1111, 103 (2011); H. T. Ding, A. Francis, O. Kaczmarek, F. Karsch, H. Satz and W. Soeldner, Phys. Rev. D 86, 014509 (2012).
- (3) G. Aarts et al., Phys. Rev. Lett. 99, 022002 (2007); H.-T. Ding et al., Phys. Rev. D 83, 034504 (2011).
- (4) J. Skilling, S.F. Gull, Lecture Notes-Monograph Series 20, 341 (1991); M. Jarrell and J.E. Gubernatis, Physics Reports 269, 133 (1996).
- (5) M. Asakawa, T. Hatsuda and Y. Nakahara, Prog. Part. Nucl. Phys. 46, 459 (2001).
- (6) D. Nickel, Annals Phys. 322, 1949 (2007).
- (7) A. Rothkopf, T. Hatsuda and S. Sasaki, PoS LAT 2009, 162 (2009); Phys. Rev. Lett. 108, 162001 (2012); Y. Burnier, A. Rothkopf, Phys. Rev. D 86, 051503 (2012); A. Rothkopf, Mod. Phys. Lett. A 28, 1330005 (2013).
- (8) O. Gunnarsson, M.W. Haverkort and G. Sangiovanni, Phys. Rev. B 82, 165125 (2010); A. Rothkopf, PoS LATTICE 2012, 100 (2012).
- (9) A. Rothkopf, J. Comput. Phys. 238, 106 (2013).
- (10) Y. Burnier, A. Rothkopf, Phys. Rev. D 87, 114019 (2013).
- (11) C.M. Bishop, Pattern Recognition and Machine Learning, Springer, 2nd ed. (2007).
- (12) D.J.C. MacKay, Neural Computation 11 1035 (1998).
- (13) M. Laine, O. Philipsen and M. Tassler, JHEP 0709, 066 (2007); A. Beraudo, J. -P. Blaizot and C. Ratti, Nucl. Phys. A 806, 312 (2008); N. Brambilla, J. Ghiglieri, A. Vairo and P. Petreczky, Phys. Rev. D 78, 014017 (2008).
- (14) Y. Burnier, M. Laine and M. Vepsalainen, JHEP 1001, 054 (2010) and references therein.