In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). He also pointed out: "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the problem in the affirmative and proves that, for a bounded activation function σ on R that is continuous almost everywhere (a.e.), the collection of SLFNs is dense in C(K) if and only if σ is non-constant a.e.
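To make the approximating family concrete, here is a minimal numpy sketch (not the paper's construction) that fits an SLFN of the form sum_i c_i σ(w_i x + b_i) to a continuous target on K = [0, 1], using a bounded, non-constant step activation that is continuous a.e. but not continuous; the random weights, grid, and target are illustrative choices.

```python
import numpy as np

# Bounded, non-constant activation, continuous a.e. (discontinuous only at 0):
# one of the activations newly admitted by the paper's density theorem.
def sigma(t):
    return np.where(t >= 0.0, 1.0, 0.0)

rng = np.random.default_rng(0)
n_hidden = 200
w = rng.normal(size=n_hidden)            # random inner weights
b = rng.uniform(-1.0, 1.0, n_hidden)     # random biases

x = np.linspace(0.0, 1.0, 400)
target = np.sin(2 * np.pi * x)           # continuous function to approximate on K

H = sigma(np.outer(x, w) + b)            # hidden-layer feature matrix
c, *_ = np.linalg.lstsq(H, target, rcond=None)  # outer weights by least squares

print("sup-norm error on the grid:", np.max(np.abs(H @ c - target)))
```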
This paper addresses a learning algorithm on the unit sphere. The main purpose is to present an error analysis for regression generated by regularized least squares algorithms with a spherical harmonics kernel. The excess error can be estimated by the sum of the sample error and the regularization error. Our study shows that by introducing a suitable spherical harmonics kernel, the regularization parameter can decrease arbitrarily fast with the sample size.
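A minimal sketch of kernel regularized least squares on the sphere: by the representer theorem, the minimizer is f = sum_i alpha_i K(x_i, ·) with alpha = (K + lam*m*I)^(-1) y. The paper's specific spherical harmonics kernel is not given here, so a simple positive-definite zonal stand-in K(x, y) = exp(x·y − 1) (the Gaussian kernel restricted to the sphere) is used.

```python
import numpy as np

def kernel(X, Y):
    # zonal stand-in kernel on the sphere: exp(x . y - 1)
    return np.exp(X @ Y.T - 1.0)

rng = np.random.default_rng(1)
m = 100
X = rng.normal(size=(m, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # samples on S^2
y = X[:, 0] * X[:, 1] + 0.05 * rng.normal(size=m)   # noisy regression targets

lam = 1e-3                                          # regularization parameter
K = kernel(X, X)
alpha = np.linalg.solve(K + lam * m * np.eye(m), y)  # representer coefficients

f = lambda Z: kernel(Z, X) @ alpha                  # regularized estimator
print("training residual:", np.max(np.abs(f(X) - y)))
```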
Since the spherical Gaussian radial function is strictly positive definite, the authors use linear combinations of translations of the Gaussian kernel to interpolate scattered data on spheres in this article. Seeing that target functions are usually outside the native spaces, and that one has to solve a large-scale system of linear equations to obtain the coefficients of the interpolants, the authors first probe into some problems about interpolation with Gaussian radial functions. Then they construct quasi-interpolation operators by the Gaussian radial function and obtain degrees of approximation. Moreover, they show the error relations between quasi-interpolation and interpolation when they have the same basis functions. Finally, the authors discuss the construction and approximation of the quasi-interpolant with a locally supported function.
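A minimal sketch of the interpolation step the abstract refers to: the strictly positive definite Gaussian kernel makes the (potentially large) interpolation matrix nonsingular, and the coefficients come from solving one linear system. The node set, bandwidth lam, and test function below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
X = rng.normal(size=(n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # scattered nodes on S^2

lam = 4.0
def gauss(A, B):
    # Gaussian radial kernel exp(-lam |x - y|^2), written via x . y on the sphere
    return np.exp(-lam * (2.0 - 2.0 * A @ B.T))

f = lambda Z: np.cos(3.0 * Z[:, 2])               # target function
A = gauss(X, X)                                   # interpolation matrix
c = np.linalg.solve(A, f(X))                      # solvable: A is positive definite

Z = rng.normal(size=(500, 3))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)
print("max error at test points:", np.max(np.abs(gauss(Z, X) @ c - f(Z))))
```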
In this paper, the multivariate Bernstein polynomials defined on a simplex are viewed as sampling operators, and a generalization that allows the sampling to take place at scattered sites is studied. The study covers both stochastic and deterministic aspects. On the stochastic side, a Chebyshev type estimate for the sampling operators is established. On the deterministic side, combining the theory of uniform distribution with the discrepancy method, the rate of approximation of continuous functions and the L^p convergence of these operators are studied, respectively.
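For reference, a sketch of the classical operator being generalized, in the simplest simplex [0, 1]: B_n f(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k), which samples f at the equally spaced sites k/n. The paper's scattered-site generalization replaces these sites and is not reproduced here.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    # classical univariate Bernstein operator, sampling f at the sites k/n
    k = np.arange(n + 1)
    basis = (np.array([comb(n, j) for j in k])
             * x[:, None] ** k * (1 - x[:, None]) ** (n - k))
    return basis @ f(k / n)

f = lambda t: np.abs(t - 0.5)            # continuous target on [0, 1]
x = np.linspace(0.0, 1.0, 201)
for n in (10, 40, 160):
    print(n, np.max(np.abs(bernstein(f, n, x) - f(x))))
```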
This paper concerns the approximation by a class of positive exponential type multiplier operators on the unit sphere S^n of the (n+1)-dimensional Euclidean space for n ≥ 2. We prove that such operators form a strongly continuous contraction semigroup of class (C_0) and show the equivalence between the approximation errors of these operators and the K-functionals. We also give the saturation order and the saturation class of these operators. As examples, the rth Boolean of the generalized spherical Abel-Poisson operator, ⊕^r V_t^γ, and the rth Boolean of the generalized spherical Weierstrass operator, ⊕^r W_t^k, for integer r ≥ 1 and reals γ, k ∈ (0, 1], satisfy ||⊕^r V_t^γ f − f||_X ≍ ω^{rγ}(f, t^{1/γ})_X and ||⊕^r W_t^k f − f||_X ≍ ω^{2rk}(f, t^{1/(2k)})_X for all f ∈ X and 0 ≤ t ≤ 2π, where X is the Banach space of all continuous functions or of all L^p integrable functions, 1 ≤ p ≤ +∞, on S^n with norm ||·||_X, and ω^s(f, t)_X is the modulus of smoothness of degree s > 0 for f ∈ X. Moreover, ⊕^r V_t^γ and ⊕^r W_t^k have the same saturation class if γ = 2k.
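The abstract does not define the rth Boolean construction; under the standard Boolean-sum convention, which such operator families are commonly assumed to follow, it reads:

```latex
\[
  \oplus^{r} T \;=\; I - (I - T)^{r}
             \;=\; \sum_{j=1}^{r} (-1)^{j-1} \binom{r}{j}\, T^{j},
  \qquad
  \bigl\| \oplus^{r} V_t^{\gamma} f - f \bigr\|_X
    \;\asymp\; \omega^{r\gamma}\!\bigl(f,\, t^{1/\gamma}\bigr)_X .
\]
```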
We establish a general oracle inequality for regularized risk minimizers with strongly mixing observations, and apply this inequality to support vector machine (SVM) type algorithms. The obtained main results extend previously known results for independent and identically distributed samples to the case of exponentially strongly mixing observations.
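A sketch of the two objects involved, with generic symbols m, L, H, λ, b, c, θ rather than the paper's notation: the regularized risk minimizer over a hypothesis space H with loss L, and the exponentially strongly mixing condition, i.e. exponential decay of the α-mixing coefficients α(k).

```latex
\[
  f_{z,\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}}
    \Bigl\{ \tfrac{1}{m} \sum_{i=1}^{m} L\bigl(y_i, f(x_i)\bigr)
            + \lambda \|f\|_{\mathcal{H}}^{2} \Bigr\},
  \qquad
  \alpha(k) \;\le\; c\, e^{-b k^{\theta}} \quad (b, c, \theta > 0).
\]
```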