In 1991, Hornik proved that the collection of single hidden layer feedforward neural networks (SLFNs) with a continuous, bounded, and non-constant activation function σ is dense in C(K), where K is a compact set in R^s (see Neural Networks, 4(2), 251-257 (1991)). He also pointed out: "Whether or not the continuity assumption can entirely be dropped is still an open quite challenging problem". This paper answers the question in the affirmative and proves that, for a bounded and almost everywhere (a.e.) continuous activation function σ on R, the collection of SLFNs is dense in C(K) if and only if σ is not constant a.e.
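A minimal numerical sketch of the density phenomenon, assuming the Heaviside step (bounded, a.e. continuous, discontinuous at 0, and not constant a.e.) as the activation; the piecewise-constant construction below is only an illustration, not the paper's proof:

    import numpy as np

    def heaviside(z):
        # Bounded, a.e. continuous, not constant a.e.; discontinuous at 0.
        return (z >= 0.0).astype(float)

    def slfn_step_approx(f, n_units, x):
        # Single hidden layer net  f(0) + sum_j c_j * H(x - t_j)  on [0, 1];
        # jump heights c_j make the net interpolate f at the knots t_j.
        knots = np.linspace(0.0, 1.0, n_units + 1)
        vals = f(knots)
        coeffs = np.diff(vals)
        out = np.full_like(x, vals[0])
        for c, t in zip(coeffs, knots[1:]):
            out += c * heaviside(x - t)
        return out

    f = lambda x: np.sin(2.0 * np.pi * x)
    x = np.linspace(0.0, 1.0, 10001)
    for n in (10, 100, 1000):
        err = np.max(np.abs(f(x) - slfn_step_approx(f, n, x)))
        print(f"{n:5d} hidden units: sup-norm error ~ {err:.4f}")

The sup-norm error shrinks as hidden units are added, matching the density claim for this particular discontinuous activation.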
This paper investigates approximation properties and learning rates of the Lipschitz kernel on the sphere. A convergence rate for approximation by shifts of the Lipschitz kernel on the sphere that is faster than O(n^{-1/2}) is obtained, where n is the number of parameters needed in the approximation. By means of this approximation result, a learning rate of the regularized least squares algorithm with the Lipschitz kernel on the sphere is deduced as well.
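A hedged sketch of regularized least squares with a Lipschitz kernel on S^2. The abstract does not specify the kernel, so K(x, y) = 1 - |x - y|/2 below is an illustrative Lipschitz choice, and the target function and regularization parameter lam are likewise assumptions:

    import numpy as np

    def lipschitz_kernel(X, Y):
        # Illustrative Lipschitz kernel on S^2 (an assumption; the paper's
        # kernel is not given in the abstract): K(x, y) = 1 - |x - y| / 2.
        d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        return 1.0 - d / 2.0

    def sample_sphere(m, rng):
        # Uniform points on S^2 via normalized Gaussian vectors.
        v = rng.standard_normal((m, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    m, lam = 200, 1e-3
    X = sample_sphere(m, rng)
    target = lambda P: np.cos(3.0 * P[:, 2])       # assumed smooth test function
    y = target(X) + 0.05 * rng.standard_normal(m)  # noisy samples

    # Regularized least squares: solve (K + lam * m * I) alpha = y;
    # the ridge term also keeps the linear system well conditioned.
    K = lipschitz_kernel(X, X)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)

    Xtest = sample_sphere(2000, rng)
    pred = lipschitz_kernel(Xtest, X) @ alpha
    print("RMS test error:", np.sqrt(np.mean((pred - target(Xtest)) ** 2)))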
This paper concerns the approximation by a class of positive exponential-type multiplier operators on the unit sphere S^n of the (n + 1)-dimensional Euclidean space for n ≥ 2. We prove that such operators form a strongly continuous contraction semigroup of class (C_0) and show the equivalence between the approximation errors of these operators and the corresponding K-functionals. We also give the saturation order and the saturation class of these operators. As examples, the r-th Boolean of the generalized spherical Abel-Poisson operator ⊕^r V_t^γ and the r-th Boolean of the generalized spherical Weierstrass operator ⊕^r W_t^k, for integer r ≥ 1 and reals γ, k ∈ (0, 1], satisfy the error equivalences ||⊕^r V_t^γ f − f||_X ≈ ω^{rγ}(f, t^{1/γ})_X and ||⊕^r W_t^k f − f||_X ≈ ω^{2rk}(f, t^{1/(2k)})_X for all f ∈ X and 0 ≤ t ≤ 2π, where X is the Banach space of all continuous functions or of all L^p-integrable functions, 1 ≤ p ≤ +∞, on S^n with norm ||·||_X, and ω^s(f, t)_X is the modulus of smoothness of degree s > 0 for f ∈ X. Moreover, ⊕^r V_t^γ and ⊕^r W_t^k have the same saturation class if γ = 2k.
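For readers decoding the notation, the r-th Boolean of an operator T is standardly defined as follows (a convention assumed here, since the abstract does not restate it); a LaTeX rendering:

    % r-th Boolean (Boolean-sum iterate) of an operator T
    \oplus^{r} T := I - (I - T)^{r}
                  = \sum_{j=1}^{r} (-1)^{j-1} \binom{r}{j}\, T^{j},
    \qquad \oplus^{1} T = T .

With T = V_t^γ or T = W_t^k this yields the operators whose approximation errors are equivalent to the moduli of smoothness stated above.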