Jarvis323
I came across an interesting paper and am curious to hear some discussion about the topic among experts.
Blessing of dimensionality: mathematical foundations of the statistical physics of data
Abstract
The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher’s discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction.
This article is part of the theme issue ‘Hilbert’s sixth problem’.
https://royalsocietypublishing.org/doi/full/10.1098/rsta.2017.0237
https://web.archive.org/web/2019030...e051/8861658c0ae9ba85d2f767499e4ec11b251e.pdf
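To get a concrete feel for the separability claim in the abstract, here is a minimal numerical sketch (my own, not from the paper): it samples points uniformly from the unit ball in R^d and checks how often a point is linearly separable from all the others by the simple inner-product functional z -> <x, z>, which for this centred, isotropic distribution points roughly in the same direction as Fisher's discriminant. The function names and the margin parameter eps are just illustrative choices.

```python
import numpy as np

def sample_ball(n_points, dim, rng):
    """Draw n_points uniformly from the unit ball in R^dim."""
    g = rng.standard_normal((n_points, dim))
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # uniform directions on the sphere
    r = rng.random(n_points) ** (1.0 / dim)         # radial part for uniformity in the ball
    return g * r[:, None]

def separable_fraction(points, eps=0.1):
    """Fraction of points x for which the functional z -> <x, z> separates x from
    every other point, i.e. <x, z> < (1 - eps) * <x, x> for all z != x."""
    gram = points @ points.T
    thresholds = (1.0 - eps) * np.diag(gram).copy()
    np.fill_diagonal(gram, -np.inf)                 # ignore each point's own inner product
    return np.mean(gram.max(axis=1) < thresholds)

rng = np.random.default_rng(0)
for dim in (2, 10, 50, 100, 200):
    pts = sample_ball(2000, dim, rng)
    print(f"dim={dim:4d}  separable fraction ~ {separable_fraction(pts):.3f}")
```

In low dimension almost no point is separable from a couple of thousand neighbours this way, but as the dimension grows the fraction climbs towards 1, which is the "blessing of dimensionality" effect the paper describes.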
Hilbert's 6th Problem
6. Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.
https://en.wikipedia.org/wiki/Hilbert's_sixth_problem
I'm also curious how the results in the following paper relate to those of the first.
Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing
Abstract
We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry, these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity trade-off curve in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume that the underlying matrices have independent and identically distributed Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. We ran millions of linear programs using random matrices spanning several matrix ensembles and problem sizes; visually, the empirical phase transitions do not depend on the ensemble, and they agree extremely well with the asymptotic theory assuming Gaussianity. Careful statistical analysis reveals discrepancies that can be explained as transient terms, decaying with problem size. The experimental results are thus consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.
https://royalsocietypublishing.org/doi/10.1098/rsta.2009.0152
https://arxiv.org/abs/0906.2530
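The phase transition this second paper studies can also be reproduced at toy scale with nothing more than a linear-programming solver. The sketch below is my own, not the authors' experiment (basis_pursuit and recovery_rate are made-up names): it solves min ||x||_1 subject to Ax = y via scipy.optimize.linprog and estimates the probability of exactly recovering a k-sparse vector from m = 50 random measurements in dimension n = 100, for a Gaussian matrix ensemble and for a Rademacher (+/-1) ensemble.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as an LP with x = x_plus - x_minus."""
    m, n = A.shape
    c = np.ones(2 * n)                        # objective: sum(x_plus + x_minus) = ||x||_1
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:] if res.success else None

def recovery_rate(n=100, m=50, k=10, trials=20, ensemble="gaussian", rng=None):
    """Empirical probability that basis pursuit exactly recovers a k-sparse vector."""
    if rng is None:
        rng = np.random.default_rng(0)
    successes = 0
    for _ in range(trials):
        if ensemble == "gaussian":
            A = rng.standard_normal((m, n))
        else:                                  # Rademacher: i.i.d. +/-1 entries
            A = rng.choice([-1.0, 1.0], size=(m, n))
        x0 = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x0[support] = rng.standard_normal(k)
        x_hat = basis_pursuit(A, A @ x0)
        if x_hat is not None and np.linalg.norm(x_hat - x0) < 1e-6 * np.linalg.norm(x0):
            successes += 1
    return successes / trials

rng = np.random.default_rng(1)
for k in (5, 10, 15, 20, 25):
    print(f"k={k:2d}  Gaussian: {recovery_rate(k=k, ensemble='gaussian', rng=rng):.2f}"
          f"   Rademacher: {recovery_rate(k=k, ensemble='rademacher', rng=rng):.2f}")
```

With delta = m/n = 0.5 the recovery probability should collapse somewhere around k/m ~ 0.4 (roughly the Donoho-Tanner threshold for Gaussian matrices), and, per the universality claim of the paper, the Gaussian and Rademacher curves should look essentially the same apart from finite-size effects.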