18.01.2022 at 14:15, Meetingroom
Hung-Hsu Chou (Aachen)
More is Less: Inducing Sparsity via Overparameterization
In deep learning it is common to overparameterize neural networks, that is, to use more parameters than training samples. Quite surprisingly, training such networks via (stochastic) gradient descent leads to models that generalize very well, whereas classical statistics would suggest overfitting. To gain understanding of this implicit bias phenomenon, we study the special case of sparse recovery (compressive sensing), which is of interest in its own right. More precisely, in order to reconstruct a vector from underdetermined linear measurements, we introduce a corresponding overparameterized square loss functional in which the vector to be reconstructed is deeply factorized into several vectors. We show that, under a very mild assumption on the measurement matrix, vanilla gradient flow for the overparameterized loss functional converges to a solution of minimal ℓ1-norm. The latter is well known to promote sparse solutions. As a by-product, our results significantly improve the sample complexity for compressive sensing obtained in previous works. The theory accurately predicts the recovery rate in numerical experiments. For the proofs, we introduce the concept of solution entropy, which bypasses the obstacles caused by non-convexity and should be of independent interest.
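The mechanism described in the abstract can be illustrated with a small numerical sketch (not the speaker's code): a signed depth-2 Hadamard factorization x = u⊙u − v⊙v with small initialization α, trained by plain gradient descent on the square loss. All problem sizes and step parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 30, 100, 3                       # measurements, ambient dimension, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ x_true                             # underdetermined linear measurements

# Signed depth-2 Hadamard parameterization x = u*u - v*v with small init alpha.
# Gradient descent on 0.5*||A(u*u - v*v) - y||^2; the small initialization
# biases the iterates toward solutions of small l1-norm, hence sparsity.
alpha, lr, steps = 1e-3, 0.01, 30000
u = np.full(n, alpha)
v = np.full(n, alpha)
for _ in range(steps):
    r = A @ (u * u - v * v) - y            # residual
    g = A.T @ r                            # gradient w.r.t. the product x
    u, v = u - lr * 2 * g * u, v + lr * 2 * g * v

x_hat = u * u - v * v
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With these settings the iterate typically identifies the true support; the smaller α is chosen, the closer the limit is to the minimal ℓ1-norm solution.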
07.12.2021 at 14:15, Room 66/101
Mathias Hockmann (Universität Osnabrück)
Stochastic Optical Reconstruction Microscopy (STORM)
16.11.2021 at 14:15, Room 66/101
Markus Petz (Universität Göttingen)
Reconstruction of Exponential Sums from DFT Data - From ESPRIT to ESPIRA
We introduce a new algorithm to reconstruct exponential sums from sampled function values. To do this, we exploit the rational structure of the DFT values of an exponential sum. Our algorithm uses the recently proposed AAA algorithm, which achieves numerical stability by employing an iterative procedure combined with a barycentric representation of a rational interpolation function. This approach is naturally linked to certain Löwner matrices. We then propose a second algorithm that utilizes a matrix pencil constructed from these Löwner matrices, similar to the known ESPRIT method.
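As a rough illustration of the matrix-pencil idea, here is a sketch of the classical ESPRIT method the abstract compares against (not the proposed ESPIRA algorithm, which operates on DFT data via Löwner matrices), in the noiseless case; the frequencies and coefficients are made up for the example.

```python
import numpy as np

# Exponential sum f(k) = sum_j c_j * z_j**k, sampled at k = 0, ..., N-1.
# The frequencies and coefficients are illustrative values, not from the talk.
freqs = np.array([0.12, 0.37, 0.71])
z_true = np.exp(2j * np.pi * freqs)        # nodes on the unit circle
c_true = np.array([1.0, 2.0, 0.5])
M, N = len(z_true), 40
k = np.arange(N)
f = (z_true[None, :] ** k[:, None]) @ c_true

# Hankel matrix of the samples: H[i, j] = f(i + j)
L = N // 2
H = np.array([f[i:i + L] for i in range(N - L + 1)])

# Signal subspace: top-M left singular vectors of H
U = np.linalg.svd(H, full_matrices=False)[0][:, :M]

# Shift invariance: the row-shifted subspace satisfies U[:-1] @ Psi = U[1:];
# the eigenvalues of the pencil matrix Psi are the nodes z_j
Psi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
z_est = np.linalg.eigvals(Psi)
freqs_est = np.sort(np.angle(z_est) / (2 * np.pi) % 1)
print(freqs_est)
```

In the noiseless setting the nodes are recovered essentially to machine precision; the coefficients c_j then follow from a Vandermonde least-squares solve.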
09.11.2021 at 14:15, Room 66/101
Thomas Lachmann (Universität Linz)
The VC-dimension of axis-parallel boxes on the torus
We discuss that the VC-dimension of the family of axis-parallel boxes and of axis-parallel cubes on the d-dimensional torus is in both cases asymptotically quasi-linear, i.e. of growth d log d for large d. This is especially surprising as the VC-dimension usually grows linearly with d in similar settings. This talk will present the ideas that went into proving these results, along with a bit of storytelling about how the corresponding paper came to be.
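To make the notion concrete, here is a hypothetical brute-force check (not from the talk) of whether a small point set is shattered by axis-parallel boxes on the torus [0,1)^d, where boxes are products of circular arcs that may wrap around; it is exponential in the number of points and meant only for tiny instances.

```python
from itertools import combinations, product

def in_arc(x, a, b):
    """x lies on the closed arc from a counterclockwise to b on the circle [0,1)."""
    return a <= x <= b if a <= b else (x >= a or x <= b)

def in_box(p, box):
    """A box on the torus is a product of one arc per coordinate."""
    return all(in_arc(x, a, b) for x, (a, b) in zip(p, box))

def shattered(points):
    """Brute-force check: do axis-parallel boxes on the torus [0,1)^d pick out
    every subset of `points`?  Candidate arc endpoints can be restricted to the
    points' own coordinates, since a realizing box can be shrunk until each
    side touches a contained point."""
    d = len(points[0])
    arcs = []
    for j in range(d):
        cs = sorted({p[j] for p in points})
        arcs.append([(a, b) for a in cs for b in cs])
    n = len(points)
    # The empty subset is always realizable by a tiny box inside a gap, so skip it.
    for r in range(1, n + 1):
        for S in map(set, combinations(range(n), r)):
            if not any(all((i in S) == in_box(p, box) for i, p in enumerate(points))
                       for box in product(*arcs)):
                return False
    return True

# Arcs on the circle (d = 1) shatter three spread-out points, but no arc can
# pick out two alternating points out of four, so four points are not shattered.
print(shattered([(0.0,), (1/3,), (2/3,)]))
print(shattered([(0.0,), (0.25,), (0.5,), (0.75,)]))
```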
09.11.2021 at 16:15, Room 93/E01
Thomas Lachmann (Universität Linz)
On the area of empty axis-parallel rectangles amidst 2-dimensional lattice points
We discuss the dispersion of lattices in the plane. We introduce a framework we call the continued fraction connection. With this we are able to understand many different properties of the dispersion of lattices and its connection to the dispersion in the unit square as well as in its periodic counterpart, i.e. the torus. This talk will present the ideas that went into proving these results, along with a bit of storytelling about how the corresponding paper came to be.
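For intuition, the dispersion of a point set in the unit square — the area of the largest empty axis-parallel rectangle — can be computed exactly for small point sets by brute force, since a maximal empty rectangle can be grown until each side touches a point or the boundary. This sketch is illustrative and not from the paper.

```python
def dispersion(points):
    """Area of the largest axis-parallel rectangle in [0,1]^2 whose open
    interior contains none of `points`.  A maximal empty rectangle can be
    grown until each side touches a point or the boundary, so it suffices
    to try sides supported by point coordinates and by 0 and 1."""
    xs = sorted({0.0, 1.0, *(p[0] for p in points)})
    ys = sorted({0.0, 1.0, *(p[1] for p in points)})
    best = 0.0
    for i, x0 in enumerate(xs):
        for x1 in xs[i + 1:]:
            for j, y0 in enumerate(ys):
                for y1 in ys[j + 1:]:
                    if not any(x0 < px < x1 and y0 < py < y1 for px, py in points):
                        best = max(best, (x1 - x0) * (y1 - y0))
    return best

print(dispersion([(0.5, 0.5)]))   # 0.5: e.g. the rectangle [0, 1] x [0, 0.5]
```

The candidate enumeration costs O(n^4) rectangles for n points, which is fine for the small examples it is meant for.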
19.10.2021 at 14:15, Meetingroom
Daniel Rudolf (Universität Passau)
On the spherical dispersion
In the seminar we provide upper and lower bounds on the minimal spherical dispersion. In particular, we see that the inverse of the minimal spherical dispersion behaves linearly in the dimension. We also talk about upper and lower bounds on the expected dispersion for points chosen independently and uniformly at random from the Euclidean unit sphere. The content of the talk is partially based on
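As a small illustration (assuming the cap-based notion of spherical dispersion, and not taken from the talk), uniform points on the Euclidean unit sphere can be sampled by normalizing Gaussian vectors, and a crude Monte Carlo lower bound on the largest empty-cap area on S^2 follows from the closed-form cap area (1 − t)/2.

```python
import numpy as np

def uniform_sphere(n, d, rng):
    """Uniform points on the unit sphere S^{d-1}: normalized Gaussian vectors."""
    g = rng.normal(size=(n, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def cap_dispersion_lower_bound(points, trials, rng):
    """Monte Carlo lower bound on the cap dispersion of `points` on S^2:
    for a direction u, the largest empty cap centered at u is
    {x : <x, u> > t} with t = max_i <p_i, u>, and its normalized
    surface area on S^2 is (1 - t) / 2."""
    assert points.shape[1] == 3, "the cap-area formula is specific to S^2"
    u = uniform_sphere(trials, 3, rng)
    t = (points @ u.T).max(axis=0)    # per direction: threshold of the largest empty cap
    return ((1.0 - t) / 2.0).max()

rng = np.random.default_rng(2)
pts = uniform_sphere(200, 3, rng)
lb = cap_dispersion_lower_bound(pts, 5000, rng)
print(lb)
```

Restricting the candidate cap centers to random directions only yields a lower bound; the exact dispersion would require searching over all caps.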