
The empirical Christoffel function with applications in data analysis

Published in: Advances in Computational Mathematics

Abstract

We illustrate the potential applications in machine learning of the Christoffel function or, more precisely, its empirical counterpart associated with a counting measure uniformly supported on a finite set of points. First, we provide a thresholding scheme which allows approximating the support of a measure from a finite subset of its moments with strong asymptotic guarantees. Second, we provide a consistency result which relates the empirical Christoffel function to its population counterpart in the limit of large samples. Finally, we illustrate the relevance of our results on simulated and real-world datasets for several applications in statistics and machine learning: (a) density and support estimation from finite samples, (b) outlier and novelty detection, and (c) affine matching.


Notes

  1. Recall that the Borel σ-field \(\mathcal {B}([0,1])\) is generated by the intervals (a,b] of [0, 1] (see [2] §1.4.6, p. 27).

References

  1. Aaron, C., Bodart, O.: Local convex hull support and boundary estimation. J. Multivar. Anal. 147, 82–101 (2016)

  2. Ash, R.B.: Real Analysis and Probability. Academic Press, Boston (1972)

  3. Baíllo, A., Cuevas, A., Justel, A.: Set estimation and nonparametric detection. Can. J. Stat. 28(4), 765–782 (2000)

  4. Basu, S., Pollack, R., Roy, M.F.: Computing the first Betti number and the connected components of semi-algebraic sets. In: Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, pp. 304–312 (2005)

  5. Berman, R.J.: Bergman kernels for weighted polynomials and weighted equilibrium measures of \(\mathbb {C}^{n}\). Indiana Univ. Math. J. 58(4), 1921–1946 (2009)

  6. Bos, L.: Asymptotics for the Christoffel function for Jacobi-like weights on a ball in \(\mathbb {R}^{m}\). N. Z. J. Math. 23(99), 109–116 (1994)

  7. Bos, L., Della Vecchia, B., Mastroianni, G.: On the asymptotics of Christoffel functions for centrally symmetric weight functions on the ball in \(\mathbb {R}^{d}\). Rend. Circ. Mat. Palermo 2(52), 277–290 (1998)

  8. Chevalier, J.: Estimation du support et du contour du support d’une loi de probabilité. Ann. Inst. Henri Poincaré, Sect. B 12(4), 339–364 (1976)

  9. Cholaquidis, A., Cuevas, A., Fraiman, R.: On Poincaré cone property. Ann. Stat. 42(1), 255–284 (2014)

  10. Coste, M.: An Introduction to Semialgebraic Geometry. Istituti Editoriali e Poligrafici Internazionali (2000)

  11. Cuevas, A., Fraiman, R.: A plug-in approach to support estimation. Ann. Stat. 25, 2300–2312 (1997)

  12. Cuevas, A., González-Manteiga, W., Rodríguez-Casal, A.: Plug-in estimation of general level sets. Aust. N. Z. J. Stat. 48(1), 7–19 (2006)

  13. Davis, J., Goadrich, M.: The relationship between Precision-Recall and ROC curves. In: Proceedings of the 23rd International Conference on Machine Learning, pp. 233–240. ACM (2006)

  14. De Marchi, S., Sommariva, A., Vianello, M.: Multivariate Christoffel functions and hyperinterpolation. Dolomites Res. Notes Approx. 7, 26–3 (2014)

  15. Devroye, L., Wise, G.L.: Detection of abnormal behavior via nonparametric estimation of the support. SIAM J. Appl. Math. 38(3), 480–488 (1980)

  16. Dunkl, C.F., Xu, Y.: Orthogonal Polynomials of Several Variables. Cambridge University Press, Cambridge (2001)

  17. Geffroy, J.: Sur un problème d’estimation géométrique. Publ. Inst. Stat. Univ. Paris 13, 191–210 (1964)

  18. Gustafsson, B., Putinar, M., Saff, E., Stylianopoulos, N.: Bergman polynomials on an archipelago: estimates, zeros and shape reconstruction. Adv. Math. 222(4), 1405–1460 (2009)

  19. Härdle, W., Park, B.U., Tsybakov, A.B.: Estimation of non-sharp support boundaries. J. Multivar. Anal. 55(2), 205–218 (1995)

  20. Helton, J.W., Lasserre, J.B., Putinar, M.: Measures with zeros in the inverse of their moment matrix. Ann. Probab. 36(4), 1453–1471 (2008)

  21. Kroó, A., Lubinsky, D.S.: Christoffel functions and universality in the bulk for multivariate orthogonal polynomials. Can. J. Math. 65(3), 600–620 (2012)

  22. Kroó, A., Lubinsky, D.S.: Christoffel functions and universality on the boundary of the ball. Acta Math. Hungar. 140, 117–133 (2013)

  23. Lasserre, J.B., Pauwels, E.: Sorting out typicality with the inverse moment matrix SOS polynomial. In: Proceedings of the 30th Conference on Advances in Neural Information Processing Systems (2016)

  24. Lichman, M.: UCI Machine Learning Repository, http://archive.ics.uci.edu/ml. University of California, Irvine, School of Information and Computer Sciences (2013)

  25. Malyshkin, V.G.: Multiple Instance Learning: Christoffel function approach to distribution regression problem. arXiv:1511.07085 (2015)

  26. Mammen, E., Tsybakov, A.B.: Asymptotical minimax recovery of sets with smooth boundaries. Ann. Stat. 23(2), 502–524 (1995)

  27. Máté, A., Nevai, P.: Bernstein’s inequality in \(L^{p}\) for 0 < p < 1 and (C, 1) bounds for orthogonal polynomials. Ann. Math. 111(1), 145–154 (1980)

  28. Máté, A., Nevai, P., Totik, V.: Szegő’s extremum problem on the unit circle. Ann. Math. 134(2), 433–453 (1991)

  29. Molchanov, I.S.: A limit theorem for solutions of inequalities. Scand. J. Stat. 25(1), 235–242 (1998)

  30. Nevai, P.: Géza Freud, orthogonal polynomials and Christoffel functions. A case study. J. Approx. Theory 48(1), 3–167 (1986)

  31. Parzen, E.: On estimation of a probability density function and mode. Ann. Math. Stat. 33(3), 1065–1076 (1962)

  32. Patschkowski, T., Rohde, A.: Adaptation to lowest density regions with application to support recovery. Ann. Stat. 44(1), 255–287 (2016)

  33. Polonik, W.: Measuring mass concentrations and estimating density contour clusters, an excess mass approach. Ann. Stat. 23(3), 855–881 (1995)

  34. Rényi, A., Sulanke, R.: Über die konvexe Hülle von n zufällig gewählten Punkten. Probab. Theory Relat. Fields 2(1), 75–84 (1963)

  35. Rigollet, P., Vert, R.: Optimal rates for plug-in estimators of density level sets. Bernoulli 15(4), 1154–1178 (2009)

  36. Robbins, H.: A remark on Stirling’s formula. Am. Math. Mon. 62(1), 26–29 (1955)

  37. Rosenblatt, M.: Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 27(3), 832–837 (1956)

  38. Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.: Estimating the support of a high-dimensional distribution. Neural Comput. 13(7), 1443–1471 (2001)

  39. Singh, A., Scott, C., Nowak, R.: Adaptive Hausdorff estimation of density level sets. Ann. Stat. 37(5B), 2760–2782 (2009)

  40. Szegő, G.: Orthogonal Polynomials, Colloquium Publications, vol. 23, 4th edn. AMS (1974)

  41. Totik, V.: Asymptotics for Christoffel functions for general measures on the real line. J. Anal. Math. 81(1), 283–303 (2000)

  42. Tsybakov, A.B.: On nonparametric estimation of density level sets. Ann. Stat. 25(3), 948–969 (1997)

  43. Williams, G., Baxter, R., He, H., Hawkins, S., Gu, L.: A comparative study of RNN for outlier detection in data mining. In: IEEE International Conference on Data Mining, p. 709. IEEE Computer Society (2002)

  44. Xu, Y.: Christoffel functions and Fourier series for multivariate orthogonal polynomials. J. Approx. Theory 82(2), 205–239 (1995)

  45. Xu, Y.: Asymptotics for orthogonal polynomials and Christoffel functions on a ball. Methods Appl. Anal. 3, 257–272 (1996)

  46. Xu, Y.: Asymptotics of the Christoffel functions on a simplex in \(\mathbb {R}^{d}\). J. Approx. Theory 99(1), 122–133 (1999)

  47. Zeileis, A., Hornik, K., Smola, A., Karatzoglou, A.: kernlab – an S4 package for kernel methods in R. J. Stat. Softw. 11(9), 1–20 (2004)


Acknowledgements

The research of the first author was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement 666981 TAMING).

Author information


Corresponding author

Correspondence to Edouard Pauwels.

Additional information

Communicated by: Tomas Sauer


Appendix

1.1 Precision-recall curves from Section 4.4

This section displays the curves from which the AUPR scores reported in Section 4.4 were computed. The Christoffel function and kernel density estimation are presented in Fig. 4, and the one-class SVM is presented in Fig. 5. A detailed discussion of the experiment is given in Section 4.4.

Fig. 4

Precision-recall curves for the network intrusion detection task. Left: Christoffel function with varying degree d. Right: kernel density estimation with a Gaussian kernel and varying scale parameter σ

Fig. 5

Precision-recall curves for the network intrusion detection task. The method used is the one-class SVM with a Gaussian kernel. We vary the scale parameter σ and the SVM parameter ν. We used the SVM solver of the package [47]
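For reference, a precision-recall curve of this kind is traced by sweeping a decision threshold over the anomaly scores (here, the values of the empirical Christoffel function, the kernel density estimate, or the SVM decision function); the AUPR is the area under the resulting curve. A minimal sketch, with a hypothetical helper `precision_recall_curve` and no interpolation or tie handling:

```python
def precision_recall_curve(scores, labels):
    # Sweep a decision threshold over anomaly scores (higher = more anomalous)
    # and record one (recall, precision) point per example.
    # Assumes at least one positive label.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    curve = []
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        curve.append((tp / total_pos, tp / (tp + fp)))
    return curve
```

The curves in Figs. 4 and 5 correspond to this sweep applied to each method's scores for each parameter setting.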

1.2 Proof of Theorem 3.3

Proof

In the optimization problem (3.2), the objective function \(P\mapsto \int P^{2}d\mu \) is strongly convex in the vector of coefficients of P because:

$$\int P^{2} d\mu = P^{T} \mathbf{M}_{d}(\mu) P\quad\text{and}\quad \mathbf{M}_{d}(\mu)\succ0, $$

and therefore (3.2) reads \(\min \{P^{T}\mathbf{M}_{d}(\mu)P : P^{T}\mathbf{v}_{d}(\boldsymbol{\xi}) = 1\}\), a convex optimization problem with a strongly convex objective function. Slater’s condition holds (there is only one linear equality constraint), so the Karush-Kuhn-Tucker (KKT) optimality conditions are both necessary and sufficient. At an optimal solution \(P^{*}_{d}\), they read:

$$P^{*}_{d}(\boldsymbol{\xi})= 1;\quad 2\mathbf{M}_{d}(\mu) P^{*}_{d} = \theta \mathbf{v}_{d}(\boldsymbol{\xi}), $$

for some scalar 𝜃. Multiplying by \((P^{*}_{d})^{T}\) yields:

$$2\kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})^{-1} = 2(P^{*}_{d})^{T}\mathbf{M}_{d}(\mu)P^{*}_{d} = \theta. $$

Hence, necessarily:

$$P^{*}_{d}(X) = \mathbf{v}_{d}(X)^{T} P^{*}_{d} = \frac{\theta}{2} \mathbf{v}_{d}(X)^{T}\mathbf{M}_{d}(\mu)^{-1} \mathbf{v}_{d}(\boldsymbol{\xi}) = \frac{\kappa_{\mu,d}(X,\boldsymbol{\xi})}{\kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})}, $$

which is (3.3). Next, let \(\mathbf {e}_{\alpha }\in \mathbb {R}^{s(d)}\) be the vector with null coordinates except the entry α which is 1. From the definition of the moment matrix Md(μ),

$$\mathbf{e}_{\alpha}^{T}\mathbf{M}_{d}(\mu) P^{*}_{d} = \int \mathbf{z}^{\alpha} P^{*}_{d}(\mathbf{z}) d\mu(\mathbf{z}) = \kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})^{-1}\mathbf{e}_{\alpha}^{T}\mathbf{v}_{d}(\boldsymbol{\xi}) = \kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})^{-1} \boldsymbol{\xi}^{\alpha}, $$

which is (3.5). In particular with α := 0, we recover (3.4):

$$\int P^{*}_{d}(\mathbf{x}) d\mu(\mathbf{x}) = \kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})^{-1} = \int P^{*}_{d}(\mathbf{x})^{2} d\mu(\mathbf{x}). $$
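The identities (3.3) and (3.4) are easy to verify numerically when μ is an empirical (counting) measure. The sketch below is illustrative (a two-dimensional uniform sample, a monomial basis, and an arbitrary degree and evaluation point are assumptions): it builds the moment matrix, forms the coefficients of \(P^{*}_{d}\), and checks that \(P^{*}_{d}(\boldsymbol{\xi}) = 1\) and that the first and second moments of \(P^{*}_{d}\) both equal \(\kappa_{\mu,d}(\boldsymbol{\xi},\boldsymbol{\xi})^{-1}\):

```python
import numpy as np

def monomials(X, d):
    # v_d(x): all 2-D monomials x1^i * x2^j with i + j <= d
    return np.stack([X[:, 0]**i * X[:, 1]**j
                     for i in range(d + 1) for j in range(d + 1 - i)], axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))  # sample carrying the counting measure mu
d = 3
V = monomials(X, d)
M = V.T @ V / len(X)                       # moment matrix M_d(mu)
Minv = np.linalg.inv(M)

xi = np.array([[0.2, -0.1]])               # evaluation point xi (arbitrary)
v_xi = monomials(xi, d)[0]
kappa = v_xi @ Minv @ v_xi                 # kappa_{mu,d}(xi, xi)
coeffs = Minv @ v_xi / kappa               # coefficient vector of P*_d
P_star = V @ coeffs                        # P*_d evaluated at the sample points
```

The same computation, with the averages over the sample replaced by integrals, gives the population quantities.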

1.3 Proof of Theorems 3.9 and 3.12

1.3.1 Lower bound on the Christoffel function inside S

We will heavily rely on results from [6] (note that similar results could be obtained on the box, see for example [44]). In particular, we have the following result.

Lemma 6.1

We have, for any d ≥ 2,

$$\begin{array}{@{}rcl@{}} \frac{\kappa_{\lambda_{\mathbf{B}},d}(0,0)}{s(d)} \leq\frac{1}{\omega_{p}} \frac{(d+p + 1)(d+p + 2)}{(d + 1)(d + 2)}\left( 1 + \frac{d+p + 3}{d + 3} \right) \end{array} $$

Proof

Combining Lemma 2 in [6] and the last equation of the proof of Lemma 3 in [6], we have:

$$\begin{array}{@{}rcl@{}} \kappa_{\lambda_{\mathbf{B}},d}(0,0) \leq \frac{1}{\omega_{p}}\left( {p+d + 3 \choose p} + {p+d + 2 \choose p} \right). \end{array} $$

The result follows by using the expression given for s(d) and simplifying factorial terms. □

From this result, we deduce the following bound.

Lemma 6.2

Let δ > 0 and let x ∈ S be such that dist(x, ∂S) ≥ δ. Then,

$$\begin{array}{@{}rcl@{}} s(d){\Lambda}_{\mu_{S},d}(\mathbf{x}) \geq \frac{\delta^{p}\omega_{p}}{\lambda(S)}\frac{(d + 1)(d + 2)(d + 3)}{(d+p + 1)(d+p + 2)(2d + p + 6)}. \end{array} $$

Proof

Decompose the measure μS into the sum,

$$\begin{array}{@{}rcl@{}} \mu_{S} &= \frac{\lambda(S\setminus \mathbf{B}_{\delta}(\mathbf{x}))}{\lambda(S)} \mu_{S \setminus \mathbf{B}_{\delta}(\mathbf{x})} + \frac{\lambda(\mathbf{B}_{\delta}(\mathbf{x}))}{\lambda(S)} \mu_{\mathbf{B}_{\delta}(\mathbf{x})}. \end{array} $$

Hence, by monotonicity of the Christoffel function with respect to addition and closure under multiplication by a positive term (this follows directly from Theorem 3.1), we have:

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mu_{S},d}(\mathbf{x}) \geq \frac{\lambda(\mathbf{B}_{\delta}(\mathbf{x}))}{\lambda(S)} {\Lambda}_{\mu_{\mathbf{B}_{\delta}(\mathbf{x})},d}(\mathbf{x}). \end{array} $$
(6.1)

Next, by affine invariance of the Christoffel function (Theorem 3.5):

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mu_{\mathbf{B}_{\delta}(\mathbf{x})},d}(\mathbf{x}) = {\Lambda}_{\mu_{\mathbf{B}},d}(0) = \frac{1}{\lambda(\mathbf{B})} {\Lambda} _{\lambda_{\mathbf{B}},d}(0)=\frac{1}{\lambda(\mathbf{B})} \frac{1}{\kappa_{\lambda_{\mathbf{B}},d}(0,0)} , \end{array} $$
(6.2)

where B is the unit Euclidean ball in \(\mathbb {R}^{p}\). The result follows by combining (6.1), (6.2), Lemma 6.1 and the fact that \(\frac {\lambda (\mathbf {B}_{\delta }(\mathbf {x}))}{\lambda (\mathbf {B})} = \delta ^{p}\). □

1.3.2 Upper bound on the Christoffel function outside S

We next exhibit an upper bound on the Christoffel function outside of S. We first provide a useful quantitative refinement of the “Needle polynomial” introduced in [21].

Lemma 6.3

For any \(d \in \mathbb {N}\), d > 0, and any δ ∈ (0, 1), there exists a p-variate polynomial q of degree 2d such that:

$$q(\mathbf{0}) = 1 ;\quad -1 \leq q \leq 1 \text{ on } \mathbf{B} ;\quad \vert q\vert \leq 2^{1-\delta d} \text{ on } \mathbf{B}\setminus \mathbf{B}_{\delta}(\mathbf{0}). $$

Proof

Let r be the univariate polynomial of degree 2d, defined by:

$$\begin{array}{@{}rcl@{}} r\colon t \to \frac{T_{d}(1+\delta^{2} - t^{2})}{T_{d}(1+\delta^{2})}, \end{array} $$

where Td is the Chebyshev polynomial of the first kind. We have

$$\begin{array}{@{}rcl@{}} r(0) = 1. \end{array} $$
(6.3)

Furthermore, for t ∈ [− 1, 1], we have 0 ≤ 1 + δ² − t² ≤ 1 + δ². Td has absolute value at most 1 on [− 1, 1] and is increasing on [1, ∞) with Td(1) = 1, so for t ∈ [− 1, 1],

$$\begin{array}{@{}rcl@{}} -1 \leq r(t) \leq 1. \end{array} $$
(6.4)

For |t| ∈ [δ, 1], we have δ² ≤ 1 + δ² − t² ≤ 1, so

$$\begin{array}{@{}rcl@{}} |r(t)| \leq \frac{1}{T_{d}(1+\delta^{2})}. \end{array} $$
(6.5)

Let us bound the last quantity. Recall that for t ≥ 1, we have the following explicit expression:

$$\begin{array}{@{}rcl@{}} T_{d}(t) = \frac{1}{2}\left( \left( t + \sqrt{t^{2}-1} \right)^{d} + \left( t + \sqrt{t^{2}-1} \right)^{-d}\right). \end{array} $$

We have \(1 + \delta ^{2} +\sqrt {(1+\delta ^{2})^{2} - 1} \geq 1 + \sqrt {2} \delta \), which leads to

$$\begin{array}{@{}rcl@{}} T_{d}(1+\delta^{2}) &\geq& \frac{1}{2}\left( 1 + \sqrt{2} \delta \right)^{d}\\ &=& \frac{1}{2} \exp\left( \log\left( 1+\sqrt{2}\delta \right) d \right)\\ &\geq&\frac{1}{2} \exp\left( \log(1+\sqrt{2}) \delta d \right)\\ &\geq&2^{\delta d - 1}, \end{array} $$
(6.6)

where we have used concavity of the logarithm and the fact that \(1+\sqrt {2} \geq 2\). Combining (6.3), (6.4), (6.5), and (6.6), the polynomial q: y ↦ r(∥y∥₂) satisfies the claimed properties. □
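The construction can be sanity-checked numerically. The sketch below evaluates the univariate profile r and verifies the three claimed properties on a grid; the degree and δ are illustrative choices, and the helper names are not from the text:

```python
import math

def cheb_ge1(d, t):
    # T_d(t) for t >= 1, via T_d(t) = ((t + sqrt(t^2-1))^d + (t + sqrt(t^2-1))^-d) / 2
    u = t + math.sqrt(t * t - 1.0)
    return 0.5 * (u**d + u**(-d))

def needle_profile(d, delta, t):
    # r(t) = T_d(1 + delta^2 - t^2) / T_d(1 + delta^2); q(y) = r(||y||_2) is then
    # a p-variate polynomial of degree 2d with the properties of Lemma 6.3
    a = 1.0 + delta**2 - t * t
    num = cheb_ge1(d, a) if a >= 1.0 else math.cos(d * math.acos(a))
    return num / cheb_ge1(d, 1.0 + delta**2)

d, delta = 50, 0.3
center = needle_profile(d, delta, 0.0)      # equals 1 at the center
peak = max(abs(needle_profile(d, delta, t / 100)) for t in range(101))
tail = max(abs(needle_profile(d, delta, delta + (1 - delta) * t / 100))
           for t in range(101))
print(center, peak, tail, 2.0**(1 - delta * d))
```

On this grid, the value on |t| ∈ [δ, 1] is several orders of magnitude below the bound 2^{1−δd}, reflecting the slack in (6.6).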

We recall the following well-known bound for the factorial taken from [36].

Lemma 6.4 ([36])

For any \(n \in \mathbb {N}\) , we have:

$$\begin{array}{@{}rcl@{}} \exp\left( \frac{1}{12n + 1} \right) \leq \frac{n!}{\sqrt{2\pi n} n^{n} \exp(-n)} \leq \exp\left( \frac{1}{12n} \right). \end{array} $$

We deduce the following Lemma.

Lemma 6.5

For any \(d \in \mathbb {N}\), d > 0, we have:

$$\begin{array}{@{}rcl@{}} {p+d \choose d} &\leq& d^{p} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{d} \right) \end{array} $$

Proof

This follows from a direct computation using Lemma 6.4.

$$\begin{array}{@{}rcl@{}} {p+d \choose d} &=& \frac{(p+d)!}{p!d!}\\ &\leq& \frac{\exp\left( \frac{1}{24} \right)}{\sqrt{2\pi}} \sqrt{\frac{p+d}{pd}} \frac{(p+d)^{p+d}}{p^{p}d^{d}} \\ &\leq& \frac{\exp\left( \frac{1}{24} \right)}{\sqrt{2\pi}} \sqrt{2} \frac{d^{p}}{p^{p}} \left( 1 + \frac{p}{d} \right)^{p+d} \\ &\leq& \frac{d^{p}}{p^{p}} \exp\left( \frac{p^{2}}{d} + p \right) \end{array} $$

which proves the result. □
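A quick numerical check of the bound (not part of the proof), comparing both sides exactly over a small range of p and d:

```python
import math

def lemma_6_5_bound(p, d):
    # right-hand side of Lemma 6.5: d^p (e/p)^p exp(p^2/d)
    return d**p * (math.e / p)**p * math.exp(p**2 / d)

# worst ratio of C(p+d, d) to the bound over a small grid of p and d
worst = max(math.comb(p + d, d) / lemma_6_5_bound(p, d)
            for p in range(1, 7) for d in range(1, 61))
print(worst)
```

The ratio stays strictly below 1, consistent with the slack \(\exp (1/24)/\sqrt {\pi } < 1\) absorbed in the last inequality of the proof.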

Combining the last two lemmas, we get the following bound on the Christoffel function.

Lemma 6.6

Let x ∉ S and δ > 0 be such that dist(x, S) ≥ δ. Then, for any \(d \in \mathbb {N}\), d > 0, we have:

$$\begin{array}{@{}rcl@{}} s(d){\Lambda}_{\mu_{S}, d}(\mathbf{x}) \leq 2^{3 - \frac{\delta d}{\delta + \text{diam}(S)}} d^{p} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{d} \right). \end{array} $$

Proof

Without loss of generality, assume dist(x, S) = δ (the claimed bound only decreases as δ grows). We may translate the origin of \(\mathbb {R}^{p}\) to x and scale the coordinates by δ + diam(S); this results in x = 0, the distance from x to S becomes \(\delta ^{\prime } = \frac {\delta }{\delta + \text {diam}(S)} \leq 1\), and S is contained in the unit Euclidean ball B. By invariance of the Christoffel function with respect to change of origin and change of basis in \(\mathbb {R}^{p}\) (Theorem 3.5), this affine transformation does not change its value. Now, the polynomial described in Lemma 6.3 provides an upper bound on the Christoffel function. Indeed, for any \(d^{\prime } \in \mathbb {N}\), we have:

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mu_{S},2d^{\prime} + 1}(0) \leq {\Lambda}_{\mu_{S},2d^{\prime}}(0) \leq 2^{2 - 2\delta^{\prime} d^{\prime}} \leq 2^{3 - \delta^{\prime}(2d^{\prime} + 1)}, \end{array} $$
(6.7)

where we have used δ≤ 1 to obtain the last inequality. Combining Lemma 6.5 and (6.7), we obtain for any \(d^{\prime } \in \mathbb {N}\):

$$\begin{array}{@{}rcl@{}} s(2d^{\prime}) {\Lambda}_{\mu_{S},2d^{\prime}}(0) &\leq& 2^{3 - 2\delta^{\prime} d^{\prime}} (2d^{\prime})^{p} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{2d^{\prime}} \right),\\ s(2d^{\prime} + 1) {\Lambda}_{\mu_{S},2d^{\prime} + 1}(0) \!&\!\leq\!&\! 2^{3 - \delta^{\prime}(2d^{\prime} + 1)}(2d^{\prime} + 1)^{p} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{2d^{\prime}+ 1} \right). \end{array} $$
(6.8)

Since in (6.8) \(d^{\prime }\in \mathbb {N}\) was arbitrary, we obtain in particular:

$$\begin{array}{@{}rcl@{}} s(d) {\Lambda}_{\mu_{S},d}(0) &\leq 2^{3 - \delta^{\prime} d} d^{p} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{d} \right). \end{array} $$
(6.9)

The result follows from (6.9) by setting \(\delta ^{\prime }= \frac {\delta }{\delta + \text {diam}(S)}\). □

1.3.3 Proof of Theorem 3.9

Proof

Let us first prove that \(\lim _{k \to \infty }d_{H}(S,S_{k}) = 0\). We take care of the two expressions in the definition of dH separately. Fix an arbitrary \(k \in \mathbb {N}\). From Assumption 3.7 and Lemma 6.6, for any \(\mathbf {x} \in \mathbb {R}^{p}\) such that dist(x,S) > δk:

$$\begin{array}{@{}rcl@{}} s(d_{k}) {\Lambda}_{\mu_{S},d_{k}}(\mathbf{x}) &\leq& 2^{3 - \frac{\text{dist}(\mathbf{x},S) d_{k}}{\text{dist}(\mathbf{x},S) + \text{diam}(S)}} {d_{k}^{p}} \left( \frac{e}{p} \right)^{p} \exp\left( \frac{p^{2}}{d_{k}} \right)\\ &<& \alpha_{k}. \end{array} $$

From this, we deduce that \(\mathbb {R}^{p}\setminus S_{k} \supseteq \left \{ \mathbf {x} \in \mathbb {R}^{p}:\text {dist}(\mathbf {x},S) > \delta _{k} \right \}\) and thus \(S_{k} \subseteq \left \{ \mathbf {x} \in \mathbb {R}^{p}:\text {dist}(\mathbf {x},S) \leq \delta _{k} \right \}\). Since k was arbitrary, for any \(k \in \mathbb {N}\):

$$\begin{array}{@{}rcl@{}} \sup_{\mathbf{x} \in S_{k}}\text{dist}(\mathbf{x}, S) \leq \delta_{k}. \end{array} $$
(6.10)

Inequality (6.10) allows taking care of one term in the expression of dH. Let us now consider the second term. We would like to show that:

$$\begin{array}{@{}rcl@{}} \sup_{\mathbf{x} \in S}\text{dist}(\mathbf{x}, S_{k}) \to 0\quad\text{ as } k\to \infty. \end{array} $$
(6.11)

Note that the supremum in (6.11) is attained. We prove (6.11) by contradiction; in the rest of the proof, M denotes a positive constant whose value may change from one expression to the next. Suppose that (6.11) is false. Then for each \(k \in \mathbb {N}\) (up to a subsequence), we can find xk ∈ S which satisfies:

$$\begin{array}{@{}rcl@{}} \text{dist}(\mathbf{x}_{k}, S_{k}) \geq M \end{array} $$
(6.12)

Since xk ∈ S and S is compact, the sequence \((\mathbf {x}_{k})_{k\in \mathbb {N}}\) has an accumulation point \(\bar {\mathbf {x}} \in S\), i.e., (up to a subsequence) \(\mathbf {x}_{k} \to \bar {\mathbf {x}}\) as k → ∞. Since dist(⋅,Sk) is a Lipschitz function, combining with (6.12), for every \(k \in \mathbb {N}\) (up to a subsequence),

$$\begin{array}{@{}rcl@{}} \text{dist}(\bar{\mathbf{x}}, S_{k}) \geq M. \end{array} $$
(6.13)

We next show that (6.13) contradicts the assumption S = cl(int(S)). From now on, we discard terms not in the subsequence and assume that (6.13) holds for all \(k \in \mathbb {N}\). Combining Lemma 6.2 and Assumption 3.7, for every \(k \in \mathbb {N}\):

$$\begin{array}{@{}rcl@{}} S_{k} \supseteq \left\{ \mathbf{x} \in S: \text{dist}(\mathbf{x}, \partial S) \geq \delta_{k} \right\}. \end{array} $$
(6.14)

Since S = cl(int(S)) and \(\bar {\mathbf {x}} \in S\), consider a sequence \(\{\mathbf {y}_{l}\}_{l \in \mathbb {N}} \subset \text {int}(S)\) such that \(\mathbf {y}_{l} \to \bar {\mathbf {x}}\) as l → ∞. Since yl ∈ int(S), we have dist(yl, ∂S) > 0 for all l. Up to a rearrangement of the terms, we may assume that dist(yl, ∂S) is decreasing and dist(y0, ∂S) ≥ δ0. For all l, denote by kl the smallest integer such that \(\text {dist}(\mathbf {y}_{l}, \partial S) \geq \delta _{k_{l}}\). We must have kl → ∞, and we can discard terms so that (kl)l is a valid subsequence. We have thus constructed a subsequence kl such that for every \(l \in \mathbb {N}\), \(\mathbf {y}_{l} \in S_{k_{l}}\) and \(\mathbf {y}_{l} \to \bar {\mathbf {x}}\). This contradicts (6.13), and hence (6.11) must be true. Combining (6.10) and (6.11), we have that \(\lim _{k \to \infty } d_{H}(S,S_{k}) = 0\).

Let us now prove that \(\lim _{k \to \infty }d_{H}(\partial S,\partial S_{k}) = 0\). We begin with the term \(\sup _{\mathbf {x} \in \partial S_{k}}\text {dist}(\mathbf {x}, \partial S)\). Fix an arbitrary \(k \in \mathbb {N}\) and \(\bar {\mathbf {x}} \in \partial S_{k}\). We will distinguish the cases \(\bar {\mathbf {x}} \in S\) and \(\bar {\mathbf {x}} \not \in S\). Assume first that \(\bar {\mathbf {x}} \not \in S\). We deduce from (6.10) that:

$$\begin{array}{@{}rcl@{}} \text{dist}(\bar{\mathbf{x}}, \partial S) = \text{dist}(\bar{\mathbf{x}}, S) \leq \delta_{k}. \end{array} $$
(6.15)

Assume now that \(\bar {\mathbf {x}} \in S\). If \(\bar {\mathbf {x}} \in \partial S\), we have \(\text {dist}(\bar {\mathbf {x}}, \partial S) = 0\). Assume then that \(\bar {\mathbf {x}} \in \text {int}(S)\). From (6.14), we have that S ∖ Sk ⊆ {x ∈ S : dist(x, ∂S) < δk} and hence cl(S ∖ Sk) ⊆ {x ∈ S : dist(x, ∂S) ≤ δk}. Since \(\bar {\mathbf {x}} \in \partial S_{k} \cap \text {int}(S)\), we have \(\bar {\mathbf {x}} \in \text {cl}(S \setminus S_{k})\) and hence \(\text {dist}(\bar {\mathbf {x}}, \partial S) \leq \delta _{k}\). Combining the two cases \(\bar {\mathbf {x}} \in S\) and \(\bar {\mathbf {x}} \not \in S\), we have in any case that \(\text {dist}(\bar {\mathbf {x}}, \partial S) \leq \delta _{k}\) and hence:

$$\begin{array}{@{}rcl@{}} \sup_{\mathbf{x} \in \partial S_{k}}\text{dist}(\mathbf{x}, \partial S) \leq \delta_{k}. \end{array} $$
(6.16)

Let us now prove that:

$$\begin{array}{@{}rcl@{}} \sup_{\mathbf{x} \in \partial S}\text{dist}(\mathbf{x}, \partial S_{k}) \to 0\quad\text{as } k\to \infty. \end{array} $$
(6.17)

First, since ∂S is compact, the supremum is attained for each \(k \in \mathbb {N}\). Assume that (6.17) does not hold; this means there exists a constant M > 0 such that we can find xk ∈ ∂S, \(k \in \mathbb {N}\), with dist(xk, ∂Sk) ≥ M. If xk ∉ Sk infinitely often, then we would have, up to a subsequence, xk ∈ S and dist(xk, Sk) ≥ M. This is exactly (6.12), and we already proved that it cannot hold true. Hence, xk ∉ Sk only finitely many times, and we may assume, by discarding finitely many terms, that xk ∈ Sk for all \(k \in \mathbb {N}\). Let \(\bar {\mathbf {x}} \in \partial S\) be an accumulation point of \((\mathbf {x}_{k})_{k \in \mathbb {N}}\). Since \(\bar {\mathbf {x}} \in \partial S\), there exists \(\bar {\mathbf {y}} \not \in S\) such that \(0 < \text {dist}(\bar {\mathbf {y}},S) \leq \lim \inf _{k \to \infty } \text {dist}(\mathbf {x}_{k}, \partial S_{k}) / 2\). Since xk ∈ Sk with dist(xk, ∂Sk) ≥ M for all k sufficiently large, we have \(\bar {\mathbf {y}} \in S_{k}\) for all k sufficiently large, but the fact that \(0 < \text {dist}(\bar {\mathbf {y}},S)\) contradicts (6.10). Hence, (6.17) must hold true and the proof is complete. □
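To make the thresholding scheme behind Theorem 3.9 concrete, the following sketch approximates μS by a uniform sample on the unit disk S, computes the scaled Christoffel function s(d)Λd, and thresholds it. The degree, sample size, and threshold α are illustrative choices, not the sequences dk, αk of Assumption 3.7:

```python
import numpy as np

def monomials(X, d):
    # v_d(x): all 2-D monomials x1^i * x2^j with i + j <= d
    return np.stack([X[:, 0]**i * X[:, 1]**j
                     for i in range(d + 1) for j in range(d + 1 - i)], axis=1)

rng = np.random.default_rng(1)
# approximate mu_S by a uniform sample on the unit disk S
G = rng.normal(size=(4000, 2))
X = G / np.linalg.norm(G, axis=1, keepdims=True) * np.sqrt(rng.uniform(size=(4000, 1)))

d = 6
V = monomials(X, d)
s_d = V.shape[1]                 # s(d), the dimension of polynomials of degree <= d
Minv = np.linalg.inv(V.T @ V / len(X))

def scaled_christoffel(Z):
    # s(d) * Lambda_d(z) = s(d) / kappa_d(z, z) at each row z of Z
    W = monomials(Z, d)
    return s_d / np.einsum('ij,jk,ik->i', W, Minv, W)

alpha = 0.5                      # illustrative threshold
queries = np.array([[0.0, 0.0], [0.2, 0.2], [1.5, 0.0], [2.0, 2.0]])
in_S_hat = scaled_christoffel(queries) >= alpha
```

Inside the disk the scaled Christoffel function stays of order one (Lemma 6.2), while outside it decays geometrically in d (Lemma 6.6), so the sublevel-set estimate separates the two query groups.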

Remark 6.7 (Refinements)

The proof of Theorem 3.9 is based on the following fact:

$$\begin{array}{@{}rcl@{}} \left\{ \mathbf{x} \in \mathbb{R}^{p} : \text{dist}(\mathbf{x}, \bar{S}) \geq \delta_{k} \right\} \subseteq S_{k} \subseteq \left\{ \mathbf{x} \in \mathbb{R}^{p}: \text{dist}(\mathbf{x},S) \leq \delta_{k} \right\}. \end{array} $$

Depending on the regularity of the boundary ∂S of S, it should be possible to obtain sharper bounds on the distance as a function of δk. This would involve the dependency on δ of the function:

$$\begin{array}{@{}rcl@{}} \delta \to d_{H}\left( \left\{ \mathbf{x} \in \mathbb{R}^{p} : \text{dist}(\mathbf{x}, \bar{S}) \geq \delta \right\}, \partial S\right). \end{array} $$

For example, if the boundary ∂S has bounded curvature, this function is equal to δ for sufficiently small δ. As another example, if \(S \subset \mathbb {R}^{2}\) is the interior region of a non-self-intersecting continuous polygonal loop, then the function is of the order of \(\frac {\delta }{\sin \left (\frac {\theta }{2} \right )}\), where 𝜃 is the smallest angle between two consecutive segments of the loop.

1.3.4 Proof of Theorem 3.12

Proof

Lemma 6.2 holds with μ in place of μS and \(w_{-}\) in place of \(\frac {1}{\lambda (S)}\). Indeed, we have:

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mu,d}(\mathbf{x}) \geq w_{-} \lambda(\mathbf{B}_{\delta}(\mathbf{x})) {\Lambda}_{\mu_{\mathbf{B}_{\delta}(\mathbf{x})},d}(\mathbf{x}), \end{array} $$

and the rest of the proof remains the same with different constants. Similarly, Lemma 6.6 holds with μ in place of μS; indeed, the proof only uses the fact that μS is a probability measure supported on S which is also true for μ. The proof then is identical to that of Theorem 3.9 by reflecting the corresponding change in the constants. □

1.4 Proof of Theorem 3.13

1.4.1 A preliminary Lemma

Lemma 6.8

Let μ be a probability measure supported on a compact set S. Then, for every \(d \in \mathbb {N}\), d > 0, and every \(\mathbf {x} \in \mathbb {R}^{p}\):

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mu,d}(\mathbf{x}) \leq \left( \frac{\text{diam}(\text{conv}(S))}{\text{dist}(\mathbf{x},\text{conv}(S)) +\text{diam}(\text{conv}(S))} \right)^{2}. \end{array} $$

Proof

Set y = proj(x,conv(S)), that is ∥yx∥ = dist(x,conv(S)) and:

$$ \mathbf{y} = \displaystyle{\arg\min}_{\mathbf{z} \in \text{conv}(S)} \{ \left\langle \mathbf{z}, \mathbf{y} - \mathbf{x}\right\rangle\}. $$
(6.18)

Consider the affine function:

$$\begin{array}{@{}rcl@{}} \mathbf{z}\mapsto f_{\mathbf{x}}(\mathbf{z}) := \frac{\left\langle \mathbf{x} - \mathbf{z}, \frac{\mathbf{x} - \mathbf{y}}{\|\mathbf{x} - \mathbf{y}\|}\right\rangle}{\|\mathbf{x} - \mathbf{y}\| + \text{diam}(\text{conv}(S))}. \end{array} $$
(6.19)

For any zS, we have:

$$\begin{array}{@{}rcl@{}} f_{\mathbf{x}}(\mathbf{z}) \leq \frac{\|\mathbf{x} - \mathbf{z}\|}{\|\mathbf{x}-\mathbf{y}\| + \text{diam}(\text{conv}(S))} \leq \frac{\|\mathbf{x} - \mathbf{y}\| + \|\mathbf{y} - \mathbf{z}\|}{\|\mathbf{x}-\mathbf{y}\| + \text{diam}(\text{conv}(S))} \leq 1, \end{array} $$
(6.20)

where we have used the Cauchy–Schwarz and triangle inequalities. Furthermore, we have for any z ∈ S:

$$\begin{array}{@{}rcl@{}} f_{\mathbf{x}}(\mathbf{z}) \geq \min_{\mathbf{z}\in \text{conv}(S)} f_{\mathbf{x}}(\mathbf{z}) = \frac{\|\mathbf{x} - \mathbf{y}\|}{\|\mathbf{x} - \mathbf{y}\| + \text{diam}(\text{conv}(S))}, \end{array} $$
(6.21)

where we have used equation (6.18). Consider the affine function qx: z → 1 − fx(z). We have:

$$\begin{array}{@{}rcl@{}} &q_{\mathbf{x}}(\mathbf{x})= 1\\ &0\leq q_{\mathbf{x}}(\mathbf{z}) \leq \frac{\text{diam}(\text{conv}(S))}{\|\mathbf{x} - \mathbf{y}\| + \text{diam}(\text{conv}(S))} , \text{ for any } \mathbf{z} \in S, \end{array} $$
(6.22)

where the inequalities are obtained by combining (6.20) and (6.21). The result follows from (6.18), (6.22), and Theorem 3.1. □
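Lemma 6.8 can be checked numerically in one dimension. The sketch below builds the empirical Christoffel function of a uniform sample on [0, 1] (so diam(conv(S)) ≈ 1) and compares its value at x = 2 with the bound; the degree and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=400)        # uniform sample: conv(S) ~ [0, 1]
d = 3
V = np.vander(X, d + 1, increasing=True)   # v_d(x) = (1, x, ..., x^d)
Minv = np.linalg.inv(V.T @ V / len(X))

def christoffel(x):
    # Lambda_{mu,d}(x) = 1 / kappa_{mu,d}(x, x)
    v = np.array([x**k for k in range(d + 1)])
    return 1.0 / (v @ Minv @ v)

# Lemma 6.8 at x = 2: dist(x, conv(S)) ~ 1 and diam(conv(S)) ~ 1,
# so the bound is (1 / (1 + 1))^2 = 1/4
print(christoffel(2.0), 0.25)
```

The actual value at x = 2 is far below 1/4, since the bound is obtained with an affine (degree-one) polynomial while higher degrees decay much faster outside the support.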

1.4.2 Proof of Theorem 3.13

Proof

First, let us consider measurability issues. Fix n and d such that Md(μ) is invertible. Let X be a matrix in \(\mathbb {R}^{p\times n}\); we use the shorthand notation:

$$\begin{array}{@{}rcl@{}} {\Lambda}_{\mathbf{X},d}(\mathbf{z}) = \min_{P \in \mathbb{R}_{d}[\mathbf{x}], P(\mathbf{z})= 1} \frac{1}{n}\sum\limits_{i = 1}^{n} P(\mathbf{X}_{i})^{2}, \end{array} $$
(6.23)

where for each i, Xi is the ith column of the matrix X. This corresponds to the empirical Christoffel function with input data given by the columns of X. Consider the function \(F\colon \mathbb {R}^{p\times n}\to [0,1]\) defined as follows:

$$\begin{array}{@{}rcl@{}} F\colon \mathbf{X} \to \sup_{\mathbf{z} \in \mathbb{R}^{p}} \left| {\Lambda}_{\mu,d}(\mathbf{z}) - {\Lambda}_{\mathbf{X},d}(\mathbf{z})\right|. \end{array} $$
(6.24)

It turns out that F is a semi-algebraic function (its graph is a semi-algebraic set). Roughly speaking, a set is semi-algebraic if it can be defined by finitely many polynomial inequalities. We refer the reader to [10] for an introduction to semi-algebraic geometry; we mostly rely on content from Chapter 2. First, the function

$$\begin{array}{@{}rcl@{}} (\mathbf{X},\mathbf{z},P) \to \frac{1}{n}\sum\limits_{i = 1}^{n} P(\mathbf{X}_{i})^{2} \end{array} $$

is semi-algebraic (identifying the space of polynomials with the Euclidean space of their coefficients), and the set {(P,z) : P(z) = 1} is also semi-algebraic. Constrained partial minimization can be expressed by a first-order formula and, by the Tarski-Seidenberg theorem (see, e.g., [10, Theorem 2.6]), this operation preserves semi-algebraicity. Hence, the function (X,z) → ΛX,d(z) is semi-algebraic. Furthermore, Theorem 3.1 ensures that Λμ,d(z) = 1/κ(z,z) for every z, where κ(z,z) is a polynomial in z, and hence z → Λμ,d(z) is semi-algebraic. Finally, the absolute value is semi-algebraic and, using a similar partial optimization argument for the supremum in (6.24), we conclude that F is a semi-algebraic function.

As a semi-algebraic function, F is Borel measurable. Indeed, by the good sets principle ([2] §1.5.1, p. 35), it suffices to prove that, for an arbitrary interval (a,b] ⊂ [0, 1], we have \(F^{-1}((a,b])\in \mathcal {B}(\mathbb {R}^{p\times n})\) (recall that the Borel σ-field \(\mathcal {B}([0,1])\) is generated by the intervals (a,b] of [0, 1]; see [2] §1.4.6, p. 27). Any such set is the pre-image of a semi-algebraic set by a semi-algebraic map. As proved in [10, Corollary 2.9], any such pre-image must be semi-algebraic and hence measurable. Thus, with the notation of Theorem 3.13, \(\|{\Lambda }_{\mu _{n},d} - {\Lambda }_{\mu ,d}\|_{\infty }\) is indeed a random variable for each fixed n, d such that Md(μ) is invertible.
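As a numerical aside (our own sketch, not part of the paper's argument), the quantity (6.23) can be computed in two equivalent ways: by solving the constrained minimization over polynomial coefficients c, whose Lagrange solution is c* = M⁻¹v/(vᵀM⁻¹v), or via the closed form Λ(z) = 1/(vd(z)ᵀ Md(μn)⁻¹ vd(z)) from Theorem 3.1. All identifiers below (v_d, V, M, etc.) are our own, and the data are simulated:

```python
import numpy as np

def v_d(z, d):
    """Monomial basis (1, z2, z1, z2^2, z1 z2, z1^2, ...) up to degree d, for p = 2."""
    z1, z2 = z
    return np.array([z1 ** a * z2 ** (k - a)
                     for k in range(d + 1)
                     for a in range(k + 1)])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2, 200))   # n = 200 sample points in R^2
n, d = X.shape[1], 2

V = np.stack([v_d(X[:, i], d) for i in range(n)])   # n x s(d) design matrix
M = V.T @ V / n                                     # empirical moment matrix M_d(mu_n)

z = np.array([0.3, -0.2])
v = v_d(z, d)

# Closed form of Theorem 3.1: Lambda(z) = 1 / (v^T M^{-1} v)
lam_closed = 1.0 / (v @ np.linalg.solve(M, v))

# Variational form (6.23): min_c c^T M c subject to c^T v = 1,
# whose Lagrange solution is c* = M^{-1} v / (v^T M^{-1} v)
c = np.linalg.solve(M, v)
c = c / (v @ c)
lam_var = c @ M @ c

print(lam_closed, lam_var)   # the two values agree up to floating-point error
```

Since the constant polynomial P ≡ 1 is always feasible, both values lie in (0, 1], consistent with F mapping into [0, 1] in (6.24).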

We now turn to the proof of the main result of the theorem. For simplicity, we adopt the following notation for the rest of the proof: for any continuous function \(f: \mathbb {R}^{p} \to \mathbb {R}\) and any subset \(V \subseteq \mathbb {R}^{p}\),

$$\begin{array}{@{}rcl@{}} \|f\|_{V} &:= \sup_{\mathbf{x} \in V}\vert f(\mathbf{x})\vert,\qquad [ \text{ so that } \Vert f\Vert_{\mathbb{R}^{p}}=\Vert f\Vert_{\infty}], \end{array} $$
(6.25)

which could be infinite. We prove that for any 𝜖 > 0:

$$\begin{array}{@{}rcl@{}} P\left( {\lim\sup}_{n}\left\{ \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{{\mathbb{R}^{p}}} \geq \epsilon \right\} \right) = 0, \end{array} $$
(6.26)

where the probability is taken with respect to the random choice of the sequence of independent samples from μ, and the limit superior is the set-theoretic limit superior of the underlying events.

Fix 𝜖 > 0 and denote by S the compact support of μ. Note that S also contains the support of μn with probability 1. From Lemma 6.8, we have an upper bound on both \({\Lambda }_{\mu _{n},d}\) and Λμ,d of order O (dist(x,conv(S))− 2), which holds with probability 1. Hence, it is possible to find a compact set V𝜖 containing S (with complement \(V_{\epsilon }^{c}=\mathbb {R}^{p}\setminus V_{\epsilon }\)) such that, almost surely:

$$\begin{array}{@{}rcl@{}} \max\left\{ \|{\Lambda}_{\mu_{n},d}\|_{V_{\epsilon}^{c}}, \|{\Lambda}_{\mu,d}\|_{V_{\epsilon}^{c}} \right\} \leq \frac{\epsilon}{2}. \end{array} $$
(6.27)

Next, we have the following equivalence:

$$\begin{array}{@{}rcl@{}} \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{\mathbb{R}^{p}} \geq \epsilon \quad \Leftrightarrow\quad\left\lbrace \begin{array}{l} \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}} \geq \epsilon \text{ or}\\ \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}^{c}}\geq \epsilon \end{array} \right. \end{array} $$
(6.28)

On the other hand, since both functions are nonnegative, from equation (6.27), almost surely:

$$\begin{array}{@{}rcl@{}} \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}^{c}} \leq \max\left\{ \|{\Lambda} _{\mu_{n},d}\|_{V_{\epsilon}^{c}}, \|{\Lambda}_{\mu,d}\|_{V_{\epsilon}^{c}} \right\}\leq \frac{\epsilon}{2}. \end{array} $$
(6.29)

Hence, the second event on the right-hand side of (6.28) occurs with probability 0. As a consequence, outside a set of measure 0, we have:

$$\|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{\mathbb{R}^{p}} \geq \epsilon \Leftrightarrow \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}} \geq \epsilon, $$

which in turn implies:

$$\begin{array}{@{}rcl@{}} P\left( {\lim\sup}_{n} \left\{\|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{\mathbb{R}^{p}} \geq \epsilon\right\} \right) = P\left( {\lim\sup}_{n} \left\{\|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}} \geq \epsilon\right\}\right).\\ \end{array} $$
(6.30)

By assumption, the moment matrix Md(μ) is invertible and, by the strong law of large numbers, Md(μn) is almost surely invertible for n sufficiently large. Assuming that Md(μn) is invertible, we have:

$$\begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\!\!\! \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}} \!& = &\! \sup_{\mathbf{x} \in V_{\epsilon}} \left\{\left|\frac{1}{\mathbf{v}_{d}(\mathbf{x})^{T}\mathbf{M}_{d}(\mu)^{-1}\mathbf{v}_{d}(\mathbf{x})} - \frac{1}{\mathbf{v}_{d}(\mathbf{x})^{T}\mathbf{M}_{d}(\mu_{n})^{-1}\mathbf{v}_{d}(\mathbf{x})}\right|\right\}\\ \!& = &\! \sup_{\mathbf{x} \in V_{\epsilon}} \left\{\left|\frac{\mathbf{v}_{d}(\mathbf{x})^{T}(\mathbf{M}_{d}(\mu_{n})^{-1} - \mathbf{M}_{d}(\mu)^{-1})\mathbf{v}_{d}(\mathbf{x})}{\mathbf{v}_{d}(\mathbf{x})^{T}\mathbf{M}_{d}(\mu)^{-1}\mathbf{v}_{d}(\mathbf{x})\mathbf{v}_{d}(\mathbf{x})^{T}\mathbf{M}_{d}(\mu_{n})^{-1}\mathbf{v}_{d}(\mathbf{x}) }\right|\right\}. \end{array} $$
(6.31)

Using the strong law of large numbers again, together with the continuity of eigenvalues and the fact that Md(μn) is almost surely invertible for n large enough, the continuous mapping theorem ensures that, almost surely, for n sufficiently large, the smallest eigenvalue of Md(μn)− 1 is close to that of Md(μ)− 1 and hence bounded away from 0. Since the first coordinate of vd(x) is 1, the denominator in (6.31) is bounded away from 0 almost surely for n sufficiently large. In addition, since V𝜖 is compact, vd(x) is bounded on V𝜖, so there exists a constant K such that, almost surely, for n sufficiently large:

$$\begin{array}{@{}rcl@{}} \|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{V_{\epsilon}} \leq K\|\mathbf{M}_{d}(\mu)^{-1} - \mathbf{M}_{d}(\mu_{n})^{-1}\|, \end{array} $$
(6.32)

where the matrix norm in the right-hand side is the operator norm induced by the Euclidean norm. Combining (6.30) and (6.32), we obtain:

$$\begin{array}{@{}rcl@{}} &&P\left( {\lim\sup}_{n} \left\{\|{\Lambda}_{\mu_{n},d} - {\Lambda}_{\mu,d}\|_{\mathbb{R}^{p}} \geq \epsilon\right\}\right) \\ &\leq &P\left( {\lim\sup}_{n} \left\{K\Vert\mathbf{M}_{d}(\mu)^{-1} - \mathbf{M}_{d}(\mu_{n})^{-1}\Vert\geq \epsilon\right\}\right). \end{array} $$
(6.33)

The strong law of large numbers and the continuity of matrix inversion at Md(μ) ensure that the right-hand side of (6.33) is 0. This concludes the proof. □
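To complement the proof, the following small simulation (our own construction; the measure, degree, sample sizes, and grid are our choices) illustrates the convergence (6.26) for p = 1, d = 2, and μ uniform on [−1, 1], for which the moment matrix Md(μ) is known exactly:

```python
import numpy as np

d = 2

def v(x):
    """Monomial basis v_d(x) = (1, x, x^2) for p = 1."""
    return np.vander([x], d + 1, increasing=True)[0]

# Exact moment matrix of the uniform measure on [-1, 1]:
# M_{ij} = int x^{i+j} dmu = 1/(i+j+1) if i+j is even, else 0.
M = np.array([[1.0 / (i + j + 1) if (i + j) % 2 == 0 else 0.0
               for j in range(d + 1)] for i in range(d + 1)])

def lam(A, x):
    """Christoffel function 1 / (v_d(x)^T A^{-1} v_d(x)) (Theorem 3.1)."""
    return 1.0 / (v(x) @ np.linalg.solve(A, v(x)))

def sup_gap(n, rng, grid):
    """Sup over the grid of |Lambda_{mu_n,d} - Lambda_{mu,d}|."""
    X = rng.uniform(-1.0, 1.0, n)
    V = np.vander(X, d + 1, increasing=True)   # n x (d+1) design matrix
    Mn = V.T @ V / n                           # empirical moment matrix M_d(mu_n)
    return max(abs(lam(M, x) - lam(Mn, x)) for x in grid)

rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 201)
gap_small = sup_gap(100, rng, grid)
gap_large = sup_gap(1_000_000, rng, grid)
print(gap_small, gap_large)   # the gap shrinks as n grows
```

The finite grid stands in for the supremum over \(\mathbb {R}^{p}\); this is the role of (6.27) in the proof, which guarantees that both functions are uniformly small far from conv(S).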

Cite this article

Lasserre, J.B., Pauwels, E. The empirical Christoffel function with applications in data analysis. Adv Comput Math 45, 1439–1468 (2019). https://doi.org/10.1007/s10444-019-09673-1
