Why is it necessary to parse the variance with partial moments? The additional information generated from partial moments permits a level of analysis simply not possible with traditional summary statistics.
Below are some basic equivalences demonstrating partial moments' role as the elements of variance.
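The examples that follow operate on two numeric vectors, x and y, from the NNS package environment. The original data-generating call is not shown here; a minimal setup along the following lines (the seed and sample size are illustrative assumptions) matches the structure of the examples:

library(NNS)

# Assumed setup: two standard normal samples
set.seed(123)
x = rnorm(100)
y = rnorm(100)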
# Mean
mean(x)
## [1] 0.09040591
UPM(1, 0, x) - LPM(1, 0, x)
## [1] 0.09040591

# Variance
var(x)
## [1] 0.8332328
# Sum of the second-degree partial moments about the mean (divides by n):
UPM(2, mean(x), x) + LPM(2, mean(x), x)
## [1] 0.8249005
# Adjusted by n / (n - 1) to match var(x):
(UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))
## [1] 0.8332328
# Variance is also the co-variance of itself:
(Co.LPM(1, x, x, mean(x), mean(x)) + Co.UPM(1, x, x, mean(x), mean(x)) - D.LPM(1, 1, x, x, mean(x), mean(x)) - D.UPM(1, 1, x, x, mean(x), mean(x))) * (length(x) / (length(x) - 1))
## [1] 0.8332328
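To see what that identity is summing, each quadrant term can be evaluated on its own before the n / (n - 1) adjustment. A brief sketch; the divergent terms are zero here because an observation cannot be simultaneously above and below the same target:

# Quadrant components of the variance of x about its mean:
Co.LPM(1, x, x, mean(x), mean(x))    # both deviations below the mean ("downside" component)
Co.UPM(1, x, x, mean(x), mean(x))    # both deviations above the mean ("upside" component)
D.LPM(1, 1, x, x, mean(x), mean(x))  # divergent terms: zero when a variable is paired with itself
D.UPM(1, 1, x, x, mean(x), mean(x))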
# Standard deviation
sd(x)
## [1] 0.9128159
((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
## [1] 0.9128159
The first four moments are returned with the function NNS.moments. For sample statistics, set population = FALSE.
NNS.moments(x)
## $mean
## [1] 0.09040591
##
## $variance
## [1] 0.8332328
##
## $skewness
## [1] 0.06049948
##
## $kurtosis
## [1] -0.161053

NNS.moments(x, population = FALSE)
## $mean
## [1] 0.09040591
##
## $variance
## [1] 0.8249005
##
## $skewness
## [1] 0.06235774
##
## $kurtosis
## [1] -0.1069186
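As a sketch of how the higher moments decompose into partial moments (using plain divide-by-n central-moment definitions; the exact sample adjustments applied by NNS.moments may differ slightly):

mu = mean(x)
variance = UPM(2, mu, x) + LPM(2, mu, x)
skewness = (UPM(3, mu, x) - LPM(3, mu, x)) / variance ^ 1.5
kurtosis = (UPM(4, mu, x) + LPM(4, mu, x)) / variance ^ 2 - 3  # excess kurtosis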
NNS.mode offers support for discrete-valued distributions as well as recognizing multiple modes.
NNS.mode(x)
## [1] -0.2365625
# Multiple modes are returned for a discrete, multimodal input:
## [1] 2 3 4

# Covariance
cov(x, y)
## [1] -0.04372107
(Co.LPM(1, x, y, mean(x), mean(y)) + Co.UPM(1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))
## [1] -0.04372107
The covariance matrix (Σ) is equal to the sum of the co-partial moments matrices less the divergent partial moments matrices:

$$\Sigma = CLPM + CUPM - DLPM - DUPM$$
cov.mtx = PM.matrix(LPM_degree = 1, UPM_degree = 1, target = 'mean', variable = cbind(x, y), pop_adj = TRUE)
cov.mtx
## $cupm
## x y
## x 0.4299250 0.1033601
## y 0.1033601 0.5411626
##
## $dupm
## x y
## x 0.0000000 0.1469182
## y 0.1560924 0.0000000
##
## $dlpm
## x y
## x 0.0000000 0.1560924
## y 0.1469182 0.0000000
##
## $clpm
## x y
## x 0.4033078 0.1559295
## y 0.1559295 0.3939005
##
## $cov.matrix
## x y
## x 0.83323283 -0.04372107
## y -0.04372107 0.93506310

# Reassembled from the component matrices:
cov.mtx$cupm + cov.mtx$clpm - cov.mtx$dlpm - cov.mtx$dupm
## x y
## x 0.83323283 -0.04372107
## y -0.04372107 0.93506310

# Compared with the base covariance matrix:
cov(cbind(x, y))
## x y
## x 0.83323283 -0.04372107
## y -0.04372107 0.93506310

# Pearson correlation
cor(x, y)
## [1] -0.04953215
cov.xy = (Co.LPM(1, x, y, mean(x), mean(y)) + Co.UPM(1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))
sd.x = ((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
sd.y = ((UPM(2, mean(y), y) + LPM(2, mean(y), y)) * (length(y) / (length(y) - 1))) ^ .5
cov.xy / (sd.x * sd.y)
## [1] -0.04953215
# Empirical CDF evaluated at 0 and 1:
P = ecdf(x)
P(0) ; P(1)

# Degree-0 LPMs at the same targets replicate the empirical CDF:
LPM(0, 0, x) ; LPM(0, 1, x)
# Vectorized targets:
LPM(0, c(0, 1), x)
plot(ecdf(x))
points(sort(x), LPM(0, sort(x), x), col = "red")
legend("left", legend = c("ecdf", "LPM.CDF"), fill = c("black", "red"), border = NA, bty = "n")
# Joint CDF:
Co.LPM(0, x, y, 0, 0)
# Vectorized targets:
Co.LPM(0, x, y, c(0, 1), c(0, 1))
# Copula
# Transform x and y so that they are uniform
u_x = LPM.ratio(0, x, x)
u_y = LPM.ratio(0, y, y)
# Value of copula at c(.5, .5)
Co.LPM(0, u_x, u_y, .5, .5)
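As a quick sanity check (assuming degree-0 co-partial moments count observations at or below their targets, as the ecdf comparison above suggests), the copula value should match the joint empirical probability of both uniform ranks falling at or below 0.5:

# Joint empirical probability P(u_x <= 0.5, u_y <= 0.5)
mean(u_x <= 0.5 & u_y <= 0.5)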
# Continuous CDF:
NNS.CDF(x, 1)
# CDF with target:
NNS.CDF(x, 1, target = mean(x))
Partial moments are asymptotic area approximations of f(x) akin to the familiar Trapezoidal and Simpson’s rules. More observations, more accuracy…
$$[UPM(1,0,f(x))-LPM(1,0,f(x))]\asymp\frac{[F(b)-F(a)]}{[b-a]}$$

$$[UPM(1,0,f(x))-LPM(1,0,f(x))]*[b-a]\asymp[F(b)-F(a)]$$
# f(x) = x^2 over [a, b] = [0, 1], discretized on a fine grid:
z = seq(0, 1, 0.001) ^ 2
(UPM(1, 0, z) - LPM(1, 0, z)) * (1 - 0)
## [1] 0.3335
$$0.3333 * [1 - 0] = \int_{0}^{1} x^2 dx$$

For the total area, not just the definite integral, simply sum the partial moments and multiply by [b − a]:

$$[UPM(1,0,f(x))+LPM(1,0,f(x))]*[b-a]\asymp\left|\int_{a}^{b}f(x)dx\right|$$
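A brief sketch of the distinction, using the added example f(x) = x on [-1, 1], where the signed area is zero while the total area is 1:

z = seq(-1, 1, 0.001)
# Signed (definite) integral: approximately 0
(UPM(1, 0, z) - LPM(1, 0, z)) * (1 - (-1))
# Total area: approximately 1
(UPM(1, 0, z) + LPM(1, 0, z)) * (1 - (-1))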
For example, when ascertaining the probability of an increase in A given an increase in B, the Co.UPM(degree_x, degree_y, x, y, target_x, target_y) target parameters are set to target_x = 0 and target_y = 0, and the UPM(degree, target, variable) target parameter is also set to target = 0.
$$P(A|B)=\frac{Co.UPM(0,0,A,B,0,0)}{UPM(0,0,B)}$$
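A minimal sketch of that ratio for the samples used above, treating x as A and y as B, and written with the single-degree Co.UPM call form used in the code earlier in this section:

# P(x increases | y increases): joint upper partial moment over the marginal
Co.UPM(0, x, y, 0, 0) / UPM(0, 0, y)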
If the user is so motivated, detailed arguments and proofs are provided within the following: