Title: | Multivariate Dependence with Copulas |
---|---|
Description: | Classes (S4) of commonly used elliptical, Archimedean, extreme-value and other copula families, as well as their rotations, mixtures and asymmetrizations. Nested Archimedean copulas, related tools and special functions. Methods for density, distribution, random number generation, bivariate dependence measures, Rosenblatt transform, Kendall distribution function, perspective and contour plots. Fitting of copula models with potentially partly fixed parameters, including standard errors. Serial independence tests, copula specification tests (independence, exchangeability, radial symmetry, extreme-value dependence, goodness-of-fit) and model selection based on cross-validation. Empirical copula, smoothed versions, and non-parametric estimators of the Pickands dependence function. |
Authors: | Marius Hofert [aut] , Ivan Kojadinovic [aut] , Martin Maechler [aut, cre] , Jun Yan [aut] , Johanna G. Nešlehová [ctb] (evTestK(), <https://orcid.org/0000-0001-9634-4796>), Rebecca Morger [ctb] (fitCopula.ml(): code for free mixCopula weight parameters) |
Maintainer: | Martin Maechler <[email protected]> |
License: | GPL (>= 3) | file LICENCE |
Version: | 1.1-5 |
Built: | 2024-11-21 13:34:35 UTC |
Source: | https://github.com/r-forge/copula |
The copula package provides (S4) classes of commonly used elliptical, (nested) Archimedean, extreme value and other copula families; methods for density, distribution, random number generation, and plots.
Fitting copula models and goodness-of-fit tests. Independence and serial (univariate and multivariate) independence tests, and other copula related tests.
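As a minimal illustration of these core methods, here is a sketch of a typical session (all functions shown are exported; see their individual help pages for details):

```r
library(copula)
set.seed(42)
cop <- claytonCopula(2, dim = 2)            # a Clayton copula with theta = 2
u <- rCopula(1000, cop)                     # random number generation
dCopula(c(0.5, 0.5), cop)                   # density at (0.5, 0.5)
pCopula(c(0.5, 0.5), cop)                   # distribution function at (0.5, 0.5)
fitCopula(claytonCopula(dim = 2), pobs(u))  # maximum pseudo-likelihood fit
```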
The DESCRIPTION file:
Package: | copula |
Version: | 1.1-5 |
VersionNote: | Last CRAN: 1.1-4 on 2024-08-12, 1.1-3 on 2023-12-07 |
Date: | 2024-08-19 |
Title: | Multivariate Dependence with Copulas |
Authors@R: | c(person("Marius", "Hofert", role = "aut", email = "[email protected]", comment = c(ORCID = "0000-0001-8009-4665")) , person("Ivan", "Kojadinovic", role = "aut", email = "[email protected]", comment = c(ORCID = "0000-0002-2903-1543")) , person("Martin","Maechler", role=c("aut","cre"), email="[email protected]", comment = c(ORCID = "0000-0002-8685-9910")) , person("Jun", "Yan", role = "aut", email = "[email protected]", comment = c(ORCID = "0000-0003-4401-7296")) , person(c("Johanna", "G."), "Nešlehová", role = "ctb", comment = c("evTestK()", ORCID = "0000-0001-9634-4796")) , person("Rebecca", "Morger", role = "ctb", comment = "fitCopula.ml(): code for free mixCopula weight parameters") ) |
Depends: | R (>= 3.5.0) |
Imports: | stats, graphics, methods, stats4, Matrix (>= 1.5-0), lattice, colorspace, gsl, ADGofTest, stabledist (>= 0.6-4), mvtnorm, pcaPP, pspline, numDeriv |
Suggests: | MASS, KernSmooth, sfsmisc, scatterplot3d, Rmpfr, bbmle, knitr, rmarkdown, abind, crop, gridExtra, lcopula, mev, mvnormtest, parallel, partitions, polynom, qrng, randtoolbox, rugarch, Runuran, tseries, VGAM, VineCopula, zoo |
SuggestsNote: | packages {abind, ..., zoo} (last lines above) are only used in vignettes, demos and a few tests. |
VignetteBuilder: | knitr |
Enhances: | nor1mix, copulaData |
SystemRequirements: | pdfcrop (part of TexLive) is required to rebuild the vignettes. |
Description: | Classes (S4) of commonly used elliptical, Archimedean, extreme-value and other copula families, as well as their rotations, mixtures and asymmetrizations. Nested Archimedean copulas, related tools and special functions. Methods for density, distribution, random number generation, bivariate dependence measures, Rosenblatt transform, Kendall distribution function, perspective and contour plots. Fitting of copula models with potentially partly fixed parameters, including standard errors. Serial independence tests, copula specification tests (independence, exchangeability, radial symmetry, extreme-value dependence, goodness-of-fit) and model selection based on cross-validation. Empirical copula, smoothed versions, and non-parametric estimators of the Pickands dependence function. |
License: | GPL (>= 3) | file LICENCE |
Collate: | AllClass.R Classes.R AllGeneric.R Auxiliaries.R aux-acopula.R asymCopula.R mixCopula.R rotCopula.R Copula.R special-func.R amhCopula.R claytonCopula.R frankCopula.R cop_objects.R nacopula.R dC-dc.R amhExpr.R An.R archmCopula.R cCopula.R claytonExpr.R ellipCopula.R empCopula.R empPsi.R acR.R estimation.R evCopula.R evTests.R exchTests.R fgmCopula.R fitCopula.R fitLambda.R fitMvdc.R fixedPar.R frankExpr.R galambosCopula.R galambosExpr-math.R galambosExpr.R ggraph-tools.R pairsRosenblatt.R prob.R gofTrafos.R gofEVTests.R gofCopula.R graphics.R gumbelCopula.R gumbelExpr.R huslerReissCopula.R huslerReissExpr.R indepCopula.R fhCopula.R lowfhCopula.R upfhCopula.R indepTests.R joeCopula.R K.R logseries.R mvdc.R margCopula.R matrix_tools.R normalCopula.R obs.R opower.R plackettCopula.R plackettExpr.R moCopula.R rstable1.R safeUroot.R schlatherCopula.R stable.R timing.R tCopula.R tawnCopula.R tawnExpr.R tevCopula.R varianceReduction.R wrapper.R xvCopula.R zzz.R |
Encoding: | UTF-8 |
URL: | https://copula.r-forge.r-project.org/, https://r-forge.r-project.org/projects/copula/, https://CRAN.r-project.org/package=copula |
BugReports: | https://r-forge.r-project.org/tracker/?func=add&group_id=2140&atid=5417 |
Config/pak/sysreqs: | libgsl0-dev |
Repository: | https://r-forge.r-universe.dev |
RemoteUrl: | https://github.com/r-forge/copula |
RemoteRef: | HEAD |
RemoteSha: | a2f99fcb6ea30b4680e495ac7098e8f556a8f774 |
Author: | Marius Hofert [aut] (<https://orcid.org/0000-0001-8009-4665>), Ivan Kojadinovic [aut] (<https://orcid.org/0000-0002-2903-1543>), Martin Maechler [aut, cre] (<https://orcid.org/0000-0002-8685-9910>), Jun Yan [aut] (<https://orcid.org/0000-0003-4401-7296>), Johanna G. Nešlehová [ctb] (evTestK(), <https://orcid.org/0000-0001-9634-4796>), Rebecca Morger [ctb] (fitCopula.ml(): code for free mixCopula weight parameters) |
Maintainer: | Martin Maechler <[email protected]> |
Index of help topics:
.pairsCond: Pairs Plot of a cu.u Object (Internal Use)
A..Z: Sinc, Zolotarev's, and Other Mathematical Utility Functions
An: Nonparametric Rank-based Estimators of the Pickands Dependence Function
Bernoulli: Compute Bernoulli Numbers
Copula: Density, Evaluation, and Random Number Generation for Copula Functions
Eulerian: Eulerian and Stirling Numbers of First and Second Kind
K: Kendall Distribution Function for Archimedean Copulas
Mvdc: Multivariate Distributions Constructed from Copulas
RSpobs: Pseudo-Observations of Radial and Uniform Part of Elliptical and Archimedean Copulas
SMI.12: SMI Data - 141 Days in Winter 2011/2012
Sibuya: Sibuya Distribution - Sampling and Probabilities
absdPsiMC: Absolute Value of Generator Derivatives via Monte Carlo
acopula-class: Class "acopula" of Archimedean Copula Families
acopula-families: Specific Archimedean Copula Families ("acopula" Objects)
allComp: All Components of a (Inner or Outer) Nested Archimedean Copula
archmCopula: Construction of Archimedean Copula Class Object
archmCopula-class: Class "archmCopula"
beta.: Sample and Population Version of Blomqvist's Beta for Archimedean Copulas
cCopula: Conditional Distributions and Their Inverses from Copulas
cloud2-methods: Cloud Plot Methods ('cloud2') in Package 'copula'
coeffG: Coefficients of Polynomial used for Gumbel Copula
contour-methods: Methods for Contour Plots in Package 'copula'
contourplot2-methods: Contour Plot Methods 'contourplot2' in Package 'copula'
copula-class: Mother Classes "Copula", etc of all Copulas in the Package
copula-package: Multivariate Dependence Modeling with Copulas
corKendall: (Fast) Computation of Pairwise Kendall's Taus
dDiag: Density of the Diagonal of (Nested) Archimedean Copulas
describeCop: Copula (Short) Description as String
dnacopula: Density Evaluation for (Nested) Archimedean Copulas
ebeta: Various Estimators for (Nested) Archimedean Copulas
ellipCopula: Construction of Elliptical Copula Class Objects
ellipCopula-class: Class "ellipCopula" of Elliptical Copulas
emde: Minimum Distance Estimators for (Nested) Archimedean Copulas
emle: Maximum Likelihood Estimators for (Nested) Archimedean Copulas
empCopula: The Empirical Copula
empCopula-class: Class "empCopula" of Empirical Copulas
enacopula: Estimation Procedures for (Nested) Archimedean Copulas
evCopula: Construction of Extreme-Value Copula Objects
evCopula-class: Classes Representing Extreme-Value Copulas
evTestA: Bivariate Test of Extreme-Value Dependence Based on Pickands' Dependence Function
evTestC: Large-sample Test of Multivariate Extreme-Value Dependence
evTestK: Bivariate Test of Extreme-Value Dependence Based on Kendall's Distribution
exchEVTest: Test of Exchangeability for Certain Bivariate Copulas
exchTest: Test of Exchangeability for a Bivariate Copula
fgmCopula: Construction of a fgmCopula Class Object
fgmCopula-class: Class "fgmCopula" - Multivariate Multiparameter Farlie-Gumbel-Morgenstern Copulas
fhCopula: Construction of Fréchet-Hoeffding Bound Copula Objects
fhCopula-class: Class "fhCopula" of Fréchet-Hoeffding Bound Copulas
fitCopula: Fitting Copulas to Data - Copula Parameter Estimation
fitCopula-class: Classes of Fitted Multivariate Models: Copula, Mvdc
fitLambda: Non-parametric Estimators of the Matrix of Tail-Dependence Coefficients
fitMvdc: Estimation of Multivariate Models Defined via Copulas
fixParam: Fix a Subset of a Copula Parameter Vector
gasoil: Daily Crude Oil and Natural Gas Prices from 2003 to 2006
getAcop: Get "acopula" Family Object by Name
getIniParam: Get Initial Parameter Estimate for Copula
getTheta: Get the Parameter(s) of a Copula
gnacopula: Goodness-of-fit Testing for (Nested) Archimedean Copulas
gofBTstat: Various Goodness-of-fit Test Statistics
gofCopula: Goodness-of-fit Tests for Copulas
gofEVCopula: Goodness-of-fit Tests for Bivariate Extreme-Value Copulas
gofTstat: Goodness-of-fit Test Statistics
htrafo: GOF Testing Transformation of Hering and Hofert
iPsi: Generator Functions for Archimedean and Extreme-Value Copulas
indepCopula: Construction of Independence Copula Objects
indepCopula-class: Class "indepCopula"
indepTest: Test Independence of Continuous Random Variables via Empirical Copula
initOpt: Initial Interval or Value for Parameter Estimation of Archimedean Copulas
interval: Construct Simple "interval" Object
interval-class: Class "interval" of Simple Intervals
khoudrajiCopula: Construction of copulas using Khoudraji's device
khoudrajiCopula-class: Class '"khoudrajiCopula"' and its Subclasses
log1mexp: Compute f(a) = log(1 +/- exp(-a)) Numerically Optimally
loss: LOSS and ALAE Insurance Data
margCopula: Marginal copula of a Copula With Specified Margins
mixCopula: Create Mixture of Copulas
mixCopula-class: Class '"mixCopula"' of Copula Mixtures
moCopula: The Marshall-Olkin Copula
moCopula-class: Class "moCopula" of Marshall-Olkin Copulas
multIndepTest: Independence Test Among Continuous Random Vectors Based on the Empirical Copula Process
multSerialIndepTest: Serial Independence Test for Multivariate Time Series via Empirical Copula
mvdc-class: Class "mvdc": Multivariate Distributions from Copulas
nacFrail.time: Timing for Sampling Frailties of Nested Archimedean Copulas
nacPairthetas: Pairwise Thetas of Nested Archimedean Copulas
nacopula-class: Class "nacopula" of Nested Archimedean Copulas
nesdepth: Nesting Depth of a Nested Archimedean Copula ("nacopula")
onacopula: Constructing (Outer) Nested Archimedean Copulas
opower: Outer Power Transformation of Archimedean Copulas
p2P: Tools to Work with Matrices
pacR: Distribution of the Radial Part of an Archimedean Copula
pairs2: Scatter-Plot Matrix ('pairs') for Copula Distributions with Nice Defaults
pairsRosenblatt: Plots for Graphical GOF Test via Pairwise Rosenblatt Transforms
pairwiseCcop: Computations for Graphical GOF Test via Pairwise Rosenblatt Transforms
persp-methods: Methods for Function 'persp' in Package 'copula'
plackettCopula: Construction of a Plackett Copula
plackettCopula-class: Class "plackettCopula" of Plackett Copulas
plot-methods: Methods for 'plot' in Package 'copula'
pnacopula: Evaluation of (Nested) Archimedean Copulas
pobs: Pseudo-Observations
polylog: Polylogarithm Li_s(z) and Debye Functions
polynEval: Evaluate Polynomials
printNacopula: Print Compact Overview of a Nested Archimedean Copula ("nacopula")
prob: Computing Probabilities of Hypercubes
qqplot2: Q-Q Plot with Rugs and Pointwise Asymptotic Confidence Intervals
rAntitheticVariates: Variance-Reduction Methods
rF01Frank: Sample Univariate Distributions Involved in Nested Frank and Joe Copulas
rFFrank: Sampling Distribution F for Frank and Joe
radSymTest: Test of Radial Symmetry for a Bivariate Copula
rdj: Daily Returns of Three Stocks in the Dow Jones
retstable: Sampling Exponentially Tilted Stable Distributions
rlog: Sampling Logarithmic Distributions
rnacModel: Random nacopula Model
rnacopula: Sampling Nested Archimedean Copulas
rnchild: Sampling Child 'nacopula's
rotCopula: Construction and Class of Rotated aka Reflected Copulas
rstable1: Random numbers from (Skew) Stable Distributions
safeUroot: One-dimensional Root (Zero) Finding - Extra "Safety" for Convenience
serialIndepTest: Serial Independence Test for Continuous Time Series Via Empirical Copula
setTheta: Specify the Parameter(s) of a Copula
show-methods: Methods for 'show()' in Package 'copula'
splom2-methods: Methods for Scatter Plot Matrix 'splom2' in Package 'copula'
tau: Dependence Measures for Bivariate Copulas
tauAMH: Ali-Mikhail-Haq ("AMH")'s and Joe's Kendall's Tau
uranium: Uranium Exploration Dataset of Cook & Johnson (1986)
wireframe2-methods: Perspective Plots via 'wireframe2'
xvCopula: Model (copula) selection based on 'k'-fold cross-validation
Further information is available in the following vignettes:
AC_Liouville |
Archimedean Liouville Copulas (source) |
AR_Clayton |
MLE and Quantile Evaluation for a Clayton AR(1) Model with Student Marginals (source) |
GIG |
Generalized Inverse Gaussian Archimedean Copulas (source) |
HAXC |
Hierarchical Archimax Copulas (source) |
NALC |
Nested Archimedean Lévy Copulas (source) |
copula_GARCH |
The Copula GARCH Model (source) |
dNAC |
Densities of Two-Level Nested Archimedean Copulas (source) |
empiricial_copulas |
Exploring Empirical Copulas (source) |
logL_visualization |
Log-Likelihood Visualization for Archimedean Copulas (source) |
qrng |
Quasi-Random Numbers for Copula Models (source) |
wild_animals |
Wild Animals: Examples of Nonstandard Copulas (source) |
Frank-Rmpfr |
Numerically stable Frank Copulas via Multiprecision (Rmpfr) (source) |
nacopula-pkg |
Nested Archimedean Copulas Meet R (source) |
rhoAMH-dilog |
Beautiful Spearman's Rho for AMH Copula (source) |
The copula package provides:

- Classes (S4) of commonly used copulas, including elliptical (normal and t; ellipCopula), Archimedean (Clayton, Gumbel, Frank, Joe, and Ali-Mikhail-Haq; archmCopula and acopula), extreme-value (Gumbel, Husler-Reiss, Galambos, Tawn, and t-EV; evCopula), and other families (Plackett and Farlie-Gumbel-Morgenstern).
- Methods for density, distribution, and random number generation (dCopula, pCopula, and rCopula); bivariate dependence measures (rho, tau, etc.); perspective and contour plots.
- Functions (and methods) for fitting copula models, including variance estimates (fitCopula).
- Independence tests among random variables and vectors.
- Serial independence tests for univariate and multivariate continuous time series.
- Goodness-of-fit tests for copulas based on multipliers and the parametric bootstrap, with several transformation options.
- Bivariate and multivariate tests of extreme-value dependence.
- Bivariate tests of exchangeability.

The package now incorporates the former package nacopula for working with nested Archimedean copulas. Specifically, it provides procedures for computing function values and cube volumes (prob), characteristics such as Kendall's tau and the tail-dependence coefficients (via family objects, e.g., copGumbel), efficient sampling algorithms (rnacopula), and various estimators and goodness-of-fit tests.
The package also contains related univariate distributions and special functions such as the Sibuya distribution (Sibuya), the polylogarithm (polylog), and the Stirling and Eulerian numbers (Eulerian).
For a list of exported functions, use help(package = "copula").
Yan, J. (2007) Enjoy the Joy of Copulas: With a Package copula. Journal of Statistical Software 21(4), 1–21. https://www.jstatsoft.org/v21/i04/.
Kojadinovic, I. and Yan, J. (2010). Modeling Multivariate Distributions with Continuous Margins Using the copula R Package. Journal of Statistical Software 34(9), 1–20. https://www.jstatsoft.org/v34/i09/.
Hofert, M. and Mächler, M. (2011). Nested Archimedean Copulas Meet R: The nacopula Package. Journal of Statistical Software 39(9), 1–20. https://www.jstatsoft.org/v39/i09/.
Nelsen, R. B. (2006) An Introduction to Copulas. Springer, New York.
The following CRAN packages currently use (‘depend on’) copula: CoClust, copulaedas, Depela, HAC, ipptoolbox, vines.
## Some of the more important functions (and their examples) are
example(fitCopula)       ## fitting Copulas
example(fitMvdc)         ## fitting multivariate distributions via Copulas
example(nacopula)        ## nested Archimedean Copulas
## Independence Tests: These also draw a 'Dependogram':
example(indepTest)       ## Testing for Independence
example(serialIndepTest) ## Testing for Serial Independence
.pairsCond()
is an internal function for plotting the pairwise
Rosenblatt transforms, i.e., the pairwise conditional distributions,
as returned by pairwiseCcop()
, via the principal function
pairsRosenblatt()
.
The intention is that pairsRosenblatt()
be called,
rather than this auxiliary function.
.pairsCond(gcu.u, panel = points, colList, col = par("col"), bg = par("bg"),
           labels, ..., text.panel = textPanel, label.pos = 0.5,
           cex.labels = NULL, font.labels = 1, gap = 0, axes = TRUE,
           panel.border = TRUE, key = TRUE,
           keyOpt = list(space = 2.5, width = 1.5, axis = TRUE,
                         rug.at = numeric(), title = NULL, line = 5),
           main = NULL, main.centered = FALSE,
           line.main = if(is.list(main)) 5/4*par("cex.main")*rev(seq_along(main)) else 2,
           sub = NULL, sub.centered = FALSE, line.sub = 4)
gcu.u |
(n,d,d)-array of pairwise Rosenblatt-transformed u's as returned by
|
panel |
panel function, as for |
colList |
list of colors and information as returned by
|
col |
instead of |
bg |
instead of |
labels |
pairs() argument; may be missing (in which case a suitable default is chosen), or "none" (or something else). |
... |
further arguments, as for |
text.panel , label.pos , cex.labels , font.labels , gap
|
see
|
axes |
logical indicating whether axes are drawn. |
panel.border |
logical indicating whether a border is drawn
around the pairs (to mimic the behavior of |
key |
logical indicating whether a color key is drawn. |
keyOpt |
a
|
main |
title |
main.centered |
logical indicating if the title should be
centered or not; the default |
line.main |
title placement (vertical distance from pairs plot in lines). |
sub |
sub-title |
sub.centered |
logical indicating if the sub-title
should be centered or not; see |
line.sub |
sub-title placement, see |
Based on pairs.default() and filled.contour() from R 2.14.1; used in Hofert and Maechler (2013).
Marius Hofert and Martin Maechler
pairsRosenblatt(), the principal function, calling .pairsCond().
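The intended call chain can be sketched as follows (an illustration of the interplay described above; the exact arguments of pairwiseCcop() are assumptions here, see its help page):

```r
## u: (n, d)-matrix of pseudo-observations; cop: a fitted d-dimensional copula
cc <- pairwiseCcop(u, cop)   # (n, d, d)-array of pairwise Rosenblatt transforms
pairsRosenblatt(cc)          # principal plotting function; uses .pairsCond()
```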
Computes the absolute values of the degree-th generator derivative via Monte Carlo simulation.
absdPsiMC(t, family, theta, degree = 1, n.MC,
          method = c("log", "direct", "pois.direct", "pois"),
          log = FALSE, is.log.t = FALSE)
t |
numeric vector of evaluation points. |
family |
Archimedean family (name or object). |
theta |
parameter value. |
degree |
order |
n.MC |
Monte Carlo sample size. |
method |
different methods:
|
log |
if TRUE the logarithm of absdPsi is returned. |
is.log.t |
if TRUE the argument t contains the logarithm of the actual evaluation points (compare the example below). |
The absolute value of the d-th derivative of the Laplace-Stieltjes transform ψ = LS[F] can be approximated via

  (-1)^d ψ^(d)(t) = ∫_0^∞ v^d exp(-tv) dF(v) ≈ (1/N) Σ_{k=1}^N V_k^d exp(-V_k t),

where V_1, ..., V_N are independent and identically distributed according to F. This approximation is used with d = degree and N = n.MC. Note that this is comparably fast even if t contains many evaluation points, since the random variates V_1, ..., V_N only have to be generated once, not depending on t.
numeric
vector of the same length as t
containing
the absolute values of the generator derivatives.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
t <- c(0:100, Inf)
set.seed(1)
(ps <- absdPsiMC(t, family = "Gumbel", theta = 2, degree = 10,
                 n.MC = 10000, log = TRUE))
## Note: The absolute value of the derivative at 0 should be Inf for
## Gumbel, however, it is always finite for the Monte Carlo approximation
set.seed(1)
ps2 <- absdPsiMC(log(t), family = "Gumbel", theta = 2, degree = 10,
                 n.MC = 10000, log = TRUE, is.log.t = TRUE)
stopifnot(all.equal(ps[-1], ps2[-1], tolerance = 1e-14))
## Now is there an advantage of using "is.log.t" ?
sapply(eval(formals(absdPsiMC)$method), function(MM)
       absdPsiMC(780, family = "Gumbel", method = MM, theta = 2,
                 degree = 10, n.MC = 10000, log = TRUE, is.log.t = TRUE))
## not really better, yet...
This class "acopula"
of Archimedean Copula Families
is mainly used for providing objects of known Archimedean families
with all related functions.
Objects can be created by calls of the form new("acopula", ...)
.
For several well-known Archimedean
copula families, the package copula already provides such
family objects.
name: A string (class "character") describing the copula family, for example, "AMH" (or simply "A"), "Clayton" ("C"), "Frank" ("F"), "Gumbel" ("G"), or "Joe" ("J").
theta: Parameter value, a numeric, where NA means “unspecified”.
psi, iPsi: The (Archimedean) generator ψ (with ψ(t) = exp(-t) being the generator of the independence copula) and its inverse (both of class function). iPsi has an optional argument log which, if TRUE, returns the logarithm of the inverse of the
generator.
absdPsi
:A function
which computes the absolute
value of the derivative of the generator for the given
parameter
theta
and of the given degree
(defaults to 1).
Note that there is no informational loss by computing the absolute value
since the derivatives alternate in sign (the generator derivative
is simply (-1)^degree
absdPsi
). The
number n.MC
denotes the sample size for a Monte Carlo evaluation
approach. If n.MC
is zero (the default), the
generator derivatives are evaluated with their exact formulas. The
optional parameter log
(defaults to FALSE
) indicates whether
or not the logarithmic value is returned.
absdiPsi
:a function
computing the
absolute value of the derivative of the generator inverse
(iPsi()
) for the given parameter theta
. The optional
parameter log
(defaults to FALSE
) indicates whether
the logarithm of the absolute value of the first derivative of
iPsi()
is returned.
dDiag
:a function
computing the density of the
diagonal of the Archimedean copula at u
with parameter
theta
. The parameter log
is as described before.
dacopula
:a function
computing the density
of the Archimedean copula at u
with parameter theta
. The
meanings of the parameters n.MC
and log
are as
described before.
score: a function computing the derivative of the density with respect to the parameter theta.
uscore: a function computing the derivative of the density with respect to each of the arguments.
paraInterval
:Either NULL
or an object of class
"interval"
, which is typically obtained from a call
such as interval("[a,b)")
.
paraConstr: A function of theta returning TRUE if and only if theta is a valid parameter value. Note that paraConstr is built automatically from the parameter interval whenever the paraInterval slot is a valid "interval" object.
nestConstr
:A function
, which returns
TRUE
if and only if the two provided parameters theta0
and
theta1
satisfy the sufficient nesting condition for this family.
V0: A function which samples n random variates from the distribution F with Laplace-Stieltjes transform ψ for the given parameter theta.
dV0: A function which computes either the probability mass function or the probability density function (depending on the Archimedean family) of the distribution function F whose Laplace-Stieltjes transform equals the generator ψ, at the argument x (possibly a vector) for the given parameter theta. An optional argument log indicates whether the logarithm of the mass or density is computed (defaults to FALSE).
V01: A function which takes a vector of realizations of V0 and two parameters theta0 and theta1 which satisfy the sufficient nesting condition, and which returns a vector of the same length as V0 with random variates from the distribution function F01 with corresponding Laplace-Stieltjes transform (see dV01) and parameters theta0, theta1.
dV01: Similar to dV0, with the difference being that this function computes the probability mass or density function corresponding to the distribution function F01 from V01.
Arguments are the evaluation point(s) x
, the value(s) V0
,
and the parameters theta0
and theta1
. As for
dV0
, the optional argument log
can be specified
(defaults to FALSE
). Note that if x
is a vector,
V0
must either have length one (in which case V0
is
the same for every component of x
) or V0
must be of
the same length as x
(in which case the components of
V0
correspond to the ones of x
).
tau, iTau: Compute Kendall's tau of the bivariate Archimedean copula with generator ψ as a function of theta, and theta as a function of Kendall's tau, respectively.
lambdaL, lambdaU, lambdaLInv, lambdaUInv: Compute the lower (upper) tail-dependence coefficient of the bivariate Archimedean copula with generator ψ as a function of theta, and theta as a function of the lower (upper) tail-dependence coefficient, respectively.
For more details about Archimedean families, corresponding distributions and properties, see the references.
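The function-valued slots can be accessed directly via @; a minimal sketch with the provided Clayton family object copClayton (slot signatures as described above):

```r
library(copula)
copClayton@psi(1, theta = 2)     # generator psi at t = 1
copClayton@iPsi(0.5, theta = 2)  # generator inverse at u = 0.5
copClayton@paraConstr(2)         # TRUE: theta = 2 is admissible
copClayton@tau(2)                # Kendall's tau, theta/(theta + 2) = 0.5
set.seed(17)
copClayton@V0(5, theta = 2)      # five frailty variates with LS transform psi
```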
signature(.Object = "acopula")
: is used to
automatically construct the function slot paraConstr
, when
the paraInterval
is provided (typically via
interval()
).
signature("acopula")
: compact overview of the copula.
See those of the families, for example, copGumbel
.
Specific provided copula family objects, for example,
copAMH
, copClayton
, copFrank
,
copGumbel
, copJoe
.
To access these, you may also use getAcop
.
A nested Archimedean copula without
child copulas (see class "nacopula"
)
is a proper Archimedean copula, and hence,
onacopula()
can be used to construct a specific
parametrized Archimedean copula; see the example below.
Alternatively, setTheta
also returns such a
(parametrized) Archimedean copula.
## acopula class information
showClass("acopula")
## Information and structure of Clayton copulas
copClayton
str(copClayton)
## What are admissible parameters for Clayton copulas?
copClayton@paraInterval
## A Clayton "acopula" with Kendall's tau = 0.8 :
(cCl.2 <- setTheta(copClayton, iTau(copClayton, 0.8)))
## Can two Clayton copulas with parameters theta0 and theta1 be nested?
## Case 1: theta0 = 3, theta1 = 2
copClayton@nestConstr(theta0 = 3, theta1 = 2)
## -> FALSE as the sufficient nesting criterion is not fulfilled
## Case 2: theta0 = 2, theta1 = 3
copClayton@nestConstr(theta0 = 2, theta1 = 3) # TRUE
## For more examples, see help("acopula-families")
pacR() computes the distribution function F_R of the radial part of a d-dimensional Archimedean copula with generator ψ, given by

  F_R(x) = 1 - Σ_{k=0}^{d-1} ( (-1)^k x^k ψ^(k)(x) ) / k!,   x >= 0,

and F_R(x) = 0 for x < 0. The formula (in a slightly more general form) is given in McNeil and Nešlehová (2009).
qacR() computes the quantile function of F_R.
pacR(x, family, theta, d, lower.tail = TRUE, log.p = FALSE, ...)
qacR(p, family, theta, d, log.p = FALSE, interval,
     tol = .Machine$double.eps^0.25, maxiter = 1000, ...)
x |
numeric vector of nonnegative evaluation points for |
p |
numeric vector of evaluation points of the quantile function. |
family |
Archimedean family. |
theta |
parameter |
d |
dimension |
lower.tail |
logical; if TRUE (the default), probabilities are P(R <= x), otherwise P(R > x). |
log.p |
logical; if TRUE, probabilities are given as log(p). |
interval |
root-search interval. |
tol |
see |
maxiter |
see |
... |
additional arguments passed to the procedure for computing derivatives. |
The distribution function of the radial part evaluated at
x
, or its inverse, the quantile at p
.
McNeil, A. J. and Nešlehová, J. (2009). Multivariate Archimedean copulas, d-monotone functions and ℓ_1-norm symmetric distributions. The Annals of Statistics 37(5b), 3059–3097.
## setup
family <- "Gumbel"
tau <- 0.5
m <- 256
dmax <- 20
x <- seq(0, 20, length.out = m)
## compute and plot pacR() for various d's
y <- vapply(1:dmax, function(d)
            pacR(x, family = family, theta = iTau(archmCopula(family), tau), d = d),
            rep(NA_real_, m))
plot(x, y[,1], type = "l", ylim = c(0,1),
     xlab = quote(italic(x)), ylab = quote(F[R](x)),
     main = substitute(italic(F[R](x)) ~~ "for" ~ d == 1:.D, list(.D = dmax)))
for(k in 2:dmax) lines(x, y[,k])
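Since qacR() numerically inverts pacR(), a round trip should recover the evaluation point; a small sketch (the root-search interval below is an assumed choice, not a documented default):

```r
th <- iTau(archmCopula("Gumbel"), 0.5)
p  <- pacR(3, family = "Gumbel", theta = th, d = 5)
qacR(p, family = "Gumbel", theta = th, d = 5, interval = c(0.01, 100))
## expected to be close to x = 3
```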
Given the nested Archimedean copula x
, return an integer vector
of the indices of all components of the corresponding
outer_nacopula
which are components of x
,
either direct components or components of possible child copulas. This
is typically only used by programmers investigating the exact nesting
structure.
For an outer_nacopula
object
x
, allComp(x)
must be the same as
1:dim(x)
, whereas its “inner” component copulas
will each contain a subset of those indices only.
allComp(x)
x |
an R object inheriting from class |
An integer vector of indices of all components, as described above.
C3 <- onacopula("AMH", C(0.7135, 1, C(0.943, 2:3)))
allComp(C3)                 # components are 1:3
allComp(C3@childCops[[1]])  # for the child, only (2, 3)
Bivariate and multivariate versions of the nonparametric rank-based
estimators of the Pickands dependence function A, studied in
Genest and Segers (2009) and Gudendorf and Segers (2011).
An.biv(x, w, estimator = c("CFG", "Pickands"), corrected = TRUE,
       ties.method = eval(formals(rank)$ties.method))
An(x, w, ties.method = eval(formals(rank)$ties.method))
x | a data matrix that will be transformed to pseudo-observations. |
w | points at which to evaluate the estimator: for An.biv(), a vector of points in [0, 1]; for An(), a matrix whose rows are points in the unit simplex. |
estimator | specifies which nonparametric rank-based estimator of the unknown Pickands dependence function to use in the bivariate case; either "CFG" (the default) or "Pickands". |
corrected | TRUE means that the bivariate estimators will be corrected to ensure that their value at 0 and 1 is 1. |
ties.method | string specifying how ranks should be computed if there are ties; passed to rank(). |
More details can be found in the references.
An.biv()
returns a vector containing the values of the
estimated Pickands dependence function at the points in w
(and
is the same as former Anfun()
).
The function An
computes simultaneously the three corrected
multivariate estimators studied in Gudendorf and Segers (2011) at the
points in w
and returns a list whose components are
P | values of the Pickands estimator at the points in w. |
CFG | values of the CFG estimator at the points in w. |
HT | values of the Hall-Tajvidi estimator at the points in w. |
C. Genest and J. Segers (2009). Rank-based inference for bivariate extreme-value copulas. Annals of Statistics 37, 2990–3022.
G. Gudendorf and J. Segers (2011). Nonparametric estimation of multivariate extreme-value copulas. Journal of Statistical Planning and Inference 142, 3073–3085.
evCopula
, A
, and evTestA
.
Further, evTestC
, evTestK
,
exchEVTest
, and gofEVCopula
.
## True Pickands dependence functions
curve(A(gumbelCopula(4   ), x), 0, 1)
curve(A(gumbelCopula(2   ), x), add=TRUE, col=2)
curve(A(gumbelCopula(1.33), x), add=TRUE, col=3)

## CFG estimator
curve(An.biv(rCopula(1000, gumbelCopula(4   )), x), lty=2, add=TRUE)
curve(An.biv(rCopula(1000, gumbelCopula(2   )), x), lty=2, add=TRUE, col=2)
curve(An.biv(rCopula(1000, gumbelCopula(1.33)), x), lty=2, add=TRUE, col=3)

## Pickands estimator
curve(An.biv(rCopula(1000, gumbelCopula(4   )), x, estimator="Pickands"),
      lty=3, add=TRUE)
curve(An.biv(rCopula(1000, gumbelCopula(2   )), x, estimator="Pickands"),
      lty=3, add=TRUE, col=2)
curve(An.biv(rCopula(1000, gumbelCopula(1.33)), x, estimator="Pickands"),
      lty=3, add=TRUE, col=3)
legend("bottomleft", paste0("Gumbel(", format(c(4, 2, 1.33)), ")"),
       lwd=1, col=1:3, bty="n")
legend("bottomright", c("true", "CFG est.", "Pickands est."), lty=1:3, bty="n")

## Relationship between An.biv and An
u <- c(runif(100), 0, 1) # include 0 and 1
x <- rCopula(1000, gumbelCopula(4))
r <- An(x, cbind(1-u, u))
all.equal(r$CFG, An.biv(x, u))
all.equal(r$P,   An.biv(x, u, estimator="Pickands"))

## A trivariate example
x <- rCopula(1000, gumbelCopula(4, dim = 3))
u <- matrix(runif(300), 100, 3)
w <- u / apply(u, 1, sum)
r <- An(x, w)
## Endpoint corrections are applied
An(x, cbind(1, 0, 0))
An(x, cbind(0, 1, 0))
An(x, cbind(0, 0, 1))
Constructs an Archimedean copula class object with its corresponding parameter and dimension.
archmCopula(family, param = NA_real_, dim = 2, ...)
claytonCopula(param = NA_real_, dim = 2,
              use.indepC = c("message", "TRUE", "FALSE"))
frankCopula(param = NA_real_, dim = 2,
            use.indepC = c("message", "TRUE", "FALSE"))
gumbelCopula(param = NA_real_, dim = 2,
             use.indepC = c("message", "TRUE", "FALSE"))
amhCopula(param = NA_real_, dim = 2,
          use.indepC = c("message", "TRUE", "FALSE"))
joeCopula(param = NA_real_, dim = 2,
          use.indepC = c("message", "TRUE", "FALSE"))
family | a character string specifying the family of an Archimedean copula. Currently supported families are "clayton", "frank", "amh", "gumbel", and "joe". |
param | number (numeric) specifying the copula parameter. |
dim | the dimension of the copula. |
... | further arguments, passed to the individual creator functions (claytonCopula(), etc). |
use.indepC | a string specifying if the independence copula indepCopula() should be returned when the parameter corresponds to independence; by default, a message is printed when this happens. |
archmCopula()
is a wrapper for claytonCopula()
,
frankCopula()
, gumbelCopula()
, amhCopula()
and
joeCopula
.
For the mathematical definitions of the respective Archimedean families,
see copClayton
.
For d = 2, i.e., dim = 2, the AMH, Clayton and Frank copulas
allow one to model negative Kendall's tau (tau) behavior via
negative theta: theta in [-1, 0) for AMH and Clayton,
and theta in (-Inf, 0) for Frank.
The respective boundary cases are
tau = (5 - 8 log 2)/3 (approximately -0.1817) for AMH at theta = -1,
tau = -1 for Clayton at theta = -1,
and tau = -1 for Frank (as the limit theta -> -Inf).
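These boundary values can be checked numerically; a minimal sketch (assuming tau() accepts the boundary parameter theta = -1, and allowing for numerical tolerance):

```r
## Sketch: verify the boundary cases stated above
stopifnot(
  all.equal(tau(claytonCopula(-1)), -1),            # Clayton at theta = -1
  all.equal(tau(amhCopula(-1)), (5 - 8*log(2))/3))  # AMH at theta = -1
```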
For the Ali-Mikhail-Haq copula family ("amhCopula"
), only the
bivariate case is available; however copAMH
has no such
limitation.
Note that in all cases except for Frank and AMH, the densities
(dCopula() methods) are evaluated via the dacopula slot of the
corresponding acopula-classed Archimedean copulas, implemented in a
numerically stable way without any restriction on the dimension d.
Similarly, the (cumulative) distribution function
(“the copula” itself) is evaluated via the corresponding
acopula
-classed Archimedean copula's functions in
the psi
and iPsi
slots.
An Archimedean copula object of class "claytonCopula"
,
"frankCopula"
, "gumbelCopula"
, "amhCopula"
, or "joeCopula"
.
R.B. Nelsen (2006), An Introduction to Copulas, Springer, New York.
acopula
-classed Archimedean copulas, such as
copClayton
, copGumbel
, etc, notably for
mathematical definitions including the meaning of param
.
fitCopula
for fitting such copulas to data.
clayton.cop <- claytonCopula(2, dim = 3)
## scatterplot3d(rCopula(1000, clayton.cop))

## negative param (= theta) is allowed for dim = 2 :
tau(claytonCopula(-0.5)) ## = -1/3
tauClayton <- Vectorize(function(theta) tau(claytonCopula(theta, dim=2)))
plot(tauClayton, -1, 10, xlab=quote(theta), ylim = c(-1,1), n=1025)
abline(h=-1:1, v=0, col="#11111150", lty=2); axis(1, at=-1)

tauFrank <- Vectorize(function(theta) tau(frankCopula(theta, dim=2)))
plot(tauFrank, -40, 50, xlab=quote(theta), ylim = c(-1,1), n=1025)
abline(h=-1:1, v=0, col="#11111150", lty=2)

## tauAMH() is a function in our package
iTau(amhCopula(), -1)                  # -1 with a range warning
iTau(amhCopula(), (5 - 8*log(2)) / 3)  # -1 with a range warning

ic <- frankCopula(0) # independence copula (with a "message")
stopifnot(identical(ic, frankCopula(0, use.indepC = "TRUE"))) # indep.copula withOUT message
(fC <- frankCopula(0, use.indepC = "FALSE"))
## a Frank copula which corresponds to the indep.copula (but is not)

frankCopula(dim = 3) # with NA parameters
frank.cop <- frankCopula(3) # dim=2
persp(frank.cop, dCopula)

gumbel.cop <- archmCopula("gumbel", 5)
stopifnot(identical(gumbel.cop, gumbelCopula(5)))
contour(gumbel.cop, dCopula)

amh.cop <- amhCopula(0.5)
u. <- as.matrix(expand.grid(u=(0:10)/10, v=(0:10)/10, KEEP.OUT.ATTRS=FALSE))
du <- dCopula(u., amh.cop)
stopifnot(is.finite(du) | apply(u. == 0, 1, any) | apply(u. == 1, 1, any))

## A 7-dim Frank copula
frank.cop <- frankCopula(3, dim = 7)
x <- rCopula(5, frank.cop)
## dCopula now *does* work:
dCopula(x, frank.cop)

## A 7-dim Gumbel copula
gumbelC.7 <- gumbelCopula(2, dim = 7)
dCopula(x, gumbelC.7)

## A 12-dim Joe copula
joe.cop <- joeCopula(iTau(joeCopula(), 0.5), dim = 12)
x <- rCopula(5, joe.cop)
dCopula(x, joe.cop)
Archimedean copula class.
Created by calls of the form new("archmCopula", ...)
or rather
typically by archmCopula()
. Implemented
families are Clayton, Gumbel, Frank, Joe, and Ali-Mikhail-Haq.
exprdist: Object of class "expression": expressions of the cdf and pdf of the copula. These expressions are used in the pCopula and dCopula methods.
dimension, parameters, etc: all inherited from the super class copula.
Methods (e.g., for dCopula(), pCopula(), and rCopula()) are provided for each of the signatures "claytonCopula", "frankCopula", "gumbelCopula", "amhCopula", and "joeCopula".
Class "archmCopula" extends class "copula" directly. Classes "claytonCopula", "frankCopula", "gumbelCopula", "amhCopula" and "joeCopula" extend class "archmCopula" directly. "gumbelCopula" is also of class "evCopula".
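The class relations described here can be verified directly with is(); a small sketch:

```r
## Sketch: the inheritance described above
gc <- gumbelCopula(2)
stopifnot(is(gc, "archmCopula"), is(gc, "copula"),
          is(gc, "evCopula"))  # Gumbel is also an extreme-value copula
```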
archmCopula
, for constructing such copula objects;
copula-class
.
These functions compute Kendall's tau, Spearman's rho, and the tail
dependence index for bivariate copulas. iTau
and iRho
,
sometimes called “calibration” functions, are the inverses: they
determine (“calibrate”) the copula parameter (which must be
one-dimensional!) given the value of Kendall's tau or Spearman's rho.
tau   (copula, ...)
rho   (copula, ...)
lambda(copula, ...)
iTau  (copula, tau, ...)
iRho  (copula, rho, ...)
copula | an R object of class "Copula". |
tau | a numerical value of Kendall's tau in [-1, 1]. |
rho | a numerical value of Spearman's rho in [-1, 1]. |
... | currently nothing. |
The calibration functions iTau()
and iRho()
in fact
return a moment estimate of the parameter for one-parameter copulas.
When there are no closed-form expressions for Kendall's tau or Spearman's rho, the calibration functions use numerical approximation techniques (see the last reference). For closed-form expressions, see Frees and Valdez (1998). For the t copula, the calibration function based on Spearman's rho uses the corresponding expression for the normal copula as an approximation.
E.W. Frees and E.A. Valdez (1998) Understanding relationships using copulas. North American Actuarial Journal 2, 1–25.
Ivan Kojadinovic and Jun Yan (2010) Comparison of three semiparametric methods for estimating dependence parameters in copula models. Insurance: Mathematics and Economics 47, 52–63. doi:10.1016/j.insmatheco.2010.03.008
The acopula class objects have slots tau, lambdaL, and lambdaU, providing functions for tau() and the two tail indices lambda(), and slot iTau for iTau(); see the examples and copGumbel, etc.
gumbel.cop <- gumbelCopula(3)
tau(gumbel.cop)
rho(gumbel.cop)
lambda(gumbel.cop)
iTau(joeCopula(), 0.5)
stopifnot(all.equal(tau(gumbel.cop), copGumbel@tau(3)),
          all.equal(lambda(gumbel.cop),
                    c(copGumbel@lambdaL(3), copGumbel@lambdaU(3)),
                    check.attributes=FALSE),
          all.equal(iTau(gumbel.cop, 0.681), copGumbel@iTau(0.681)))

## let us compute the sample versions
x <- rCopula(200, gumbel.cop)
cor(x, method = "kendall")
cor(x, method = "spearman")

## compare with the true parameter value 3
iTau(gumbel.cop, cor(x, method="kendall" )[1,2])
iRho(gumbel.cop, cor(x, method="spearman")[1,2])
Compute the n-th Bernoulli number, or generate all Bernoulli numbers
up to the n-th, using diverse methods, that is, algorithms.

NOTE: the current default methods will be changed – to get better accuracy!
Bernoulli    (n, method = c("sumBin", "sumRamanujan", "asymptotic"),
              verbose = FALSE)
Bernoulli.all(n, method = c("A-T", "sumBin", "sumRamanujan", "asymptotic"),
              precBits = NULL, verbose = getOption("verbose"))
n | positive integer, indicating the index of the largest (and last) of the Bernoulli numbers needed. |
method | character string, specifying which method (algorithm) should be applied. The default for Bernoulli.all() is "A-T" (Akiyama-Tanigawa), whereas Bernoulli() defaults to "sumBin". |
precBits | currently only for method = "A-T": precision (in bits) to be used; NULL means standard double precision. |
verbose | logical indicating if some information about the computations should be printed. |
Bernoulli()
:a number
Bernoulli.all()
:a numeric vector of length n, containing B(n)
Kaneko, Masanobu (2000) The Akiyama-Tanigawa algorithm for Bernoulli numbers; Journal of Integer Sequences 3, article 00.2.9
## The example for the paper
MASS::fractions(Bernoulli.all(8, verbose=TRUE))

B10 <- Bernoulli.all(10)
MASS::fractions(B10)

system.time(B50  <- Bernoulli.all(50)) # {does not cache} -- still "no time"
system.time(B100 <- Bernoulli.all(100))# still less than a millisecond

## Using Bernoulli() is not much slower, but hopefully *more* accurate!
## Check first - TODO
system.time(B.1c <- Bernoulli(100))# caches ..
system.time(B1c. <- Bernoulli(100))# ==> now much faster
stopifnot(identical(B.1c, B1c.))

if(FALSE) ## reset the cache:
  assign("Bern.tab", list(), envir = copula:::.nacopEnv)

## More experiments in the source of the copula package ../tests/Stirling-etc.R
Compute the population (beta.()) and sample (betan()) versions of
Blomqvist's beta for an Archimedean copula.
See the reference below for definitions and formulas.
beta.(cop, theta, d, scaling=FALSE)
betan(u, scaling=FALSE)
cop | an Archimedean copula (of dimension d). |
theta | copula parameter. |
d | dimension. |
scaling | logical; if TRUE, the factors 2^(d-1)/(2^(d-1)-1) and 2^(1-d) in Blomqvist's beta are omitted. |
u | for betan(): the data, an (n, d)-matrix of values in [0, 1]. |
beta.
:a number, being the population version of Blomqvist's beta for the corresponding Archimedean copula;
betan
:a number, being the sample version of Blomqvist's beta for the given data.
Schmid and Schmidt (2007), Nonparametric inference on multivariate versions of Blomqvist's beta and related measures of tail dependence, Metrika 66, 323–354.
beta.(copGumbel, 2.5, d = 5)

d.set <- c(2:6, 8, 10, 15, 20, 30)
cols <- adjustcolor(colorRampPalette(c("red", "orange", "blue"),
                                     space = "Lab")(length(d.set)), 0.8)
## AMH:
for(i in seq_along(d.set))
  curve(Vectorize(beta., "theta")(copAMH, x, d = d.set[i]), 0, .999999,
        main = "Blomqvist's beta(.) for AMH",
        xlab = quote(theta), ylab = quote(beta(theta, AMH)),
        add = (i > 1), lwd=2, col=cols[i])
mtext("NB: d=2 and d=3 are the same")
legend("topleft", paste("d =", d.set), bty="n", lwd=2, col=cols)

## Gumbel:
for(i in seq_along(d.set))
  curve(Vectorize(beta., "theta")(copGumbel, x, d = d.set[i]), 1, 10,
        main = "Blomqvist's beta(.) for Gumbel",
        xlab = quote(theta), ylab = quote(beta(theta, Gumbel)),
        add = (i > 1), lwd=2, col=cols[i])
legend("bottomright", paste("d =", d.set), bty="n", lwd=2, col=cols)

## Clayton:
for(i in seq_along(d.set))
  curve(Vectorize(beta., "theta")(copClayton, x, d = d.set[i]), 1e-5, 10,
        main = "Blomqvist's beta(.) for Clayton",
        xlab = quote(theta), ylab = quote(beta(theta, Clayton)),
        add = (i > 1), lwd=2, col=cols[i])
legend("bottomright", paste("d =", d.set), bty="n", lwd=2, col=cols)

## Joe:
for(i in seq_along(d.set))
  curve(Vectorize(beta., "theta")(copJoe, x, d = d.set[i]), 1, 10,
        main = "Blomqvist's beta(.) for Joe",
        xlab = quote(theta), ylab = quote(beta(theta, Joe)),
        add = (i > 1), lwd=2, col=cols[i])
legend("bottomright", paste("d =", d.set), bty="n", lwd=2, col=cols)

## Frank:
for(i in seq_along(d.set))
  curve(Vectorize(beta., "theta")(copFrank, x, d = d.set[i]), 1e-5, 50,
        main = "Blomqvist's beta(.) for Frank",
        xlab = quote(theta), ylab = quote(beta(theta, Frank)),
        add = (i > 1), lwd=2, col=cols[i])
legend("bottomright", paste("d =", d.set), bty="n", lwd=2, col=cols)

## Shows the numeric problems:
curve(Vectorize(beta., "theta")(copFrank, x, d = 29), 35, 42, col="violet")
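Since betan() is the empirical counterpart of beta.(), the two should roughly agree on a large sample from the corresponding copula. A hedged sketch (the agreement is only approximate, and the tolerance chosen here is an assumption):

```r
## Sketch: sample vs population version of Blomqvist's beta (Gumbel, d = 5)
set.seed(17)
U <- rCopula(10000, gumbelCopula(2.5, dim = 5))
b.n <- betan(U)                      # sample version
b.  <- beta.(copGumbel, 2.5, d = 5)  # population version
stopifnot(abs(b.n - b.) < 0.05)      # rough agreement only
```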
Compute the conditional distribution function
C(u_d | u_1, …, u_{d-1}) of u_d given u_1, …, u_{d-1}.
cCopula(u, copula, indices = 1:dim(copula), inverse = FALSE,
        log = FALSE, drop = FALSE, ...)

## Deprecated (use cCopula() instead):
rtrafo(u, copula, indices = 1:dim(copula), inverse = FALSE, log = FALSE)
cacopula(u, cop, n.MC = 0, log = FALSE)
u | data, an (n, d)-matrix of values in [0, 1]. |
copula, cop | copula, i.e., an object of class "Copula". |
indices | vector of indices j for which the conditional copula function values (or their inverses) are computed. |
inverse | logical indicating whether the inverse of the conditional copula function is computed (used for sampling via the conditional distribution method). |
n.MC | integer Monte Carlo sample size; for Archimedean copulas only, used if positive. |
log | a logical indicating whether the logarithm of the values is returned. |
drop | a logical indicating whether a vector (rather than a 1-row matrix) is returned if n == 1. |
... | additional arguments (currently only used if inverse = TRUE, where they are passed to the underlying root-finding procedure). |
By default and if fed with a sample of the corresponding copula,
cCopula()
computes the Rosenblatt
transform; see Rosenblatt (1952). The involved high-order derivatives
for Archimedean copulas were derived in Hofert et al. (2012).
Sampling, that is, random number generation,
can be achieved by using inverse=TRUE
. In this case,
the inverse Rosenblatt transformation is used, which, for sampling
purposes, is also known as conditional distribution method.
Note that, for Archimedean copulas other than Clayton, this can be slow
as it involves numerical root finding in each (but the first) component.
An (n, k)-matrix (unless n == 1 and drop is true, in which case a
k-vector is returned), where k is the length of indices. This matrix
contains the conditional copula function values
C(u_j | u_1, …, u_{j-1}) or, if inverse = TRUE, their inverses,
for all j in indices.
For some (but not all) families, this function also makes sense on the boundaries (if the corresponding limits can be computed).
Genest, C., Rémillard, B., and Beaudoin, D. (2009). Goodness-of-fit tests for copulas: A review and a power study. Insurance: Mathematics and Economics 44, 199–213.
Rosenblatt, M. (1952). Remarks on a Multivariate Transformation, The Annals of Mathematical Statistics 23, 3, 470–472.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
## 1) Sampling from a conditional distribution of a Clayton copula given u_1
## Define the copula
tau <- 0.5
theta <- iTau(claytonCopula(), tau = tau)
d <- 2
cc <- claytonCopula(theta, dim = d)
n <- 1000
set.seed(271)

## A small u_1
u1 <- 0.05
U <- cCopula(cbind(u1, runif(n)), copula = cc, inverse = TRUE)
plot(U[,2], ylab = quote(U[2]))

## A large u_1
u1 <- 0.95
U <- cCopula(cbind(u1, runif(n)), copula = cc, inverse = TRUE)
plot(U[,2], ylab = quote(U[2]))

## 2) Sample via conditional distribution method and then apply the
##    Rosenblatt transform
## Note: We choose the numerically more involved (and thus slower)
##       Gumbel case here
## Define the copula
tau <- 0.5
theta <- iTau(gumbelCopula(), tau = tau)
d <- 5
gc <- gumbelCopula(theta, dim = d)
n <- 200
set.seed(271)
U. <- matrix(runif(n*d), ncol = d) # U(0,1)^d
## Transform to Gumbel sample via conditional distribution method
U <- cCopula(U., copula = gc, inverse = TRUE) # slow for ACs except Clayton
splom2(U) # scatter-plot matrix copula sample
## Rosenblatt transform back to U(0,1)^d (as a check)
U. <- cCopula(U, copula = gc)
splom2(U.) # U(0,1)^d again

## 3) cCopula() for elliptical copulas
tau <- 0.5
theta <- iTau(claytonCopula(), tau = tau)
d <- 5
cc <- claytonCopula(theta, dim = d)
set.seed(271)
n <- 1000
U <- rCopula(n, copula = cc)
X <- qnorm(U) # X now follows a meta-Clayton model with N(0,1) marginals
U <- pobs(X)  # build pseudo-observations
fN <- fitCopula(normalCopula(dim = d), data = U) # fit a Gauss copula
U.RN <- cCopula(U, copula = fN@copula)
splom2(U.RN, cex = 0.2) # visible but not so clearly
f.t <- fitCopula(tCopula(dim = d), U)
U.Rt <- cCopula(U, copula = f.t@copula) # transform with a fitted t copula
splom2(U.Rt, cex = 0.2) # still visible but not so clear
## Inverse (and check consistency)
U.N <- cCopula(U.RN, copula = fN @copula, inverse = TRUE)
U.t <- cCopula(U.Rt, copula = f.t@copula, inverse = TRUE)
tol <- 1e-14
stopifnot(
  all.equal(U, U.N),
  all.equal(U, U.t),
  all.equal(log(U.RN),
            cCopula(U, copula = fN @copula, log = TRUE), tolerance = tol),
  all.equal(log(U.Rt),
            cCopula(U, copula = f.t@copula, log = TRUE), tolerance = tol))

## 4) cCopula() for a more sophisticated mixture copula (bivariate case only!)
tau <- 0.5
cc <- claytonCopula(iTau(claytonCopula(), tau = tau)) # first mixture component
tc <- tCopula(iTau(tCopula(), tau = tau), df = 3)     # t_3 copula
tc90 <- rotCopula(tc, flip = c(TRUE, FALSE)) # t copula rotated by 90 degrees
wts <- c(1/2, 1/2) # mixture weights
mc <- mixCopula(list(cc, tc90), w = wts) # mixture copula with one component rotated
set.seed(271)
U <- rCopula(n, copula = mc)
U. <- cCopula(U, copula = mc) # Rosenblatt transform back to U(0,1)^2 (as a check)
plot(U., xlab = quote(U*"'"[1]), ylab = quote(U*"'"[2])) # check for uniformity
Function and methods cloud2() to draw (lattice)
cloud plots of three-dimensional distributions from
package copula.
## S4 method for signature 'matrix'
cloud2(x, xlim = range(x[,1], finite = TRUE),
       ylim = range(x[,2], finite = TRUE),
       zlim = range(x[,3], finite = TRUE),
       xlab = NULL, ylab = NULL, zlab = NULL,
       scales = list(arrows = FALSE, col = "black"),
       par.settings = standard.theme(color = FALSE), ...)

## S4 method for signature 'data.frame'
cloud2(x, xlim = range(x[,1], finite = TRUE),
       ylim = range(x[,2], finite = TRUE),
       zlim = range(x[,3], finite = TRUE),
       xlab = NULL, ylab = NULL, zlab = NULL,
       scales = list(arrows = FALSE, col = "black"),
       par.settings = standard.theme(color = FALSE), ...)

## S4 method for signature 'Copula'
cloud2(x, n, xlim = 0:1, ylim = 0:1, zlim = 0:1,
       xlab = quote(U[1]), ylab = quote(U[2]), zlab = quote(U[3]), ...)

## S4 method for signature 'mvdc'
cloud2(x, n, xlim = NULL, ylim = NULL, zlim = NULL,
       xlab = quote(X[1]), ylab = quote(X[2]), zlab = quote(X[3]), ...)
x: a "matrix", "data.frame", "Copula" or "mvdc" object (see the method signatures above).

xlim, ylim, zlim: the x-, y- and z-axis limits.

xlab, ylab, zlab: the x-, y- and z-axis labels.

scales: a list determining how the axes are drawn; see cloud().

par.settings: see cloud().

n: when x is a "Copula" or "mvdc" object, the sample size of the random sample drawn from x and then plotted.

...: additional arguments passed to the underlying cloud().
An object of class “trellis” as returned by cloud().
Cloud plots for objects of class "matrix", "data.frame", "Copula" or "mvdc".
The lattice-based splom2-methods
for data, and
wireframe2-methods
and
contourplot2-methods
for functions.
## For 'matrix' objects
cop <- gumbelCopula(2, dim = 3)
n <- 1000
set.seed(271)
U <- rCopula(n, copula = cop)
cloud2(U, xlab = quote(U[1]), ylab = quote(U[2]), zlab = quote(U[3]))

## For 'Copula' objects
set.seed(271)
cloud2(cop, n = n) # same as above

## For 'mvdc' objects
mvNN <- mvdc(cop, c("norm", "norm", "exp"),
             list(list(mean = 0, sd = 1), list(mean = 1), list(rate = 2)))
cloud2(mvNN, n = n)
Compute the coefficients involved in the
generator (psi) derivatives and the copula density of Gumbel copulas.
For non-small dimensions d, these are numerically challenging to compute accurately.
coeffG(d, alpha,
       method = c("sort", "horner", "direct", "dsumSibuya",
                  paste("dsSib", eval(formals(dsumSibuya)$method), sep = ".")),
       log = FALSE, verbose = FALSE)
d: number of coefficients (the copula dimension), d >= 1.

alpha: parameter alpha = 1/theta in (0, 1], for a Gumbel copula with parameter theta.

method: a character string, one of the method choices listed in the usage above.

log: logical determining if the logarithm of the coefficients is to be returned.

verbose: logical indicating if some information should be shown, currently only for some methods.
a numeric vector of length d of coefficient values.
There are still known numerical problems (with non-"Rmpfr" methods; the "Rmpfr" ones are slow), e.g., for d = 100 and alpha = 0.8. As a consequence, the methods and their defaults may change in the future, and the exact implementation of coeffG() is still considered somewhat experimental.
a.k <- coeffG(16, 0.55)
plot(a.k, xlab = quote(k), ylab = quote(a[k]),
     main = "coeffG(16, 0.55)", log = "y", type = "o", col = 2)
a.kH <- coeffG(16, 0.55, method = "horner")
stopifnot(all.equal(a.k, a.kH, tol = 1e-11)) # 1.10e-13 (64-bit Lnx, nb-mm4)
Methods for function contour
to draw contour lines aka a
level plot for objects from package copula.
## S4 method for signature 'Copula'
contour(x, FUN, n.grid = 26, delta = 0,
        xlab = quote(u[1]), ylab = quote(u[2]),
        box01 = TRUE, ...)

## S4 method for signature 'mvdc'
contour(x, FUN, xlim, ylim, n.grid = 26,
        xlab = quote(x[1]), ylab = quote(x[2]),
        box01 = FALSE, ...)
x: a "Copula" or "mvdc" object.

FUN: the function to be plotted, typically dCopula or pCopula (or dMvdc, pMvdc in the "mvdc" case).

n.grid: the number of grid points used in each dimension. This can be a vector of length two, giving the number of grid points used in x- and y-direction, respectively; the function FUN is evaluated on the corresponding grid.

delta: a small number in [0, 1/2) shrinking the evaluation region away from the boundary of the unit square.

xlab, ylab: the x-axis and y-axis labels.

xlim, ylim: the x- and y-axis limits.

box01: a logical specifying if a faint rectangle should be drawn on the boundary of the unit square.

...: further arguments for (the default method of) contour().
Contour lines are drawn for "Copula"
or
"mvdc"
objects, see x
in the
Arguments section.
The persp-methods
for “perspective” aka
“3D” plots.
contour(frankCopula(-0.8), dCopula)
contour(frankCopula(-0.8), dCopula, delta = 1e-6)
contour(frankCopula(-1.2), pCopula)
contour(claytonCopula(2), pCopula)

## the Gumbel copula density is "extreme"
## --> use fine grid (and enough levels):
r <- contour(gumbelCopula(3), dCopula, n = 200, nlevels = 100)
range(r$z) # [0, 125.912]
## Now superimpose contours of three resolutions:
contour(r, levels = seq(1, max(r$z), by = 2), lwd = 1.5)
contour(r, levels = (1:13)/2, add = TRUE, col = adjustcolor(1, 3/4), lty = 2)
contour(r, levels = (1:13)/4, add = TRUE, col = adjustcolor(2, 1/2),
        lty = 3, lwd = 3/4)

x <- mvdc(gumbelCopula(3), c("norm", "norm"),
          list(list(mean = 0, sd = 1), list(mean = 1)))
contour(x, dMvdc, xlim = c(-2, 2), ylim = c(-1, 3))
contour(x, pMvdc, xlim = c(-2, 2), ylim = c(-1, 3))
Methods for contourplot2(), a version of contourplot() from lattice, to draw contour plots of two-dimensional distributions from package copula.
## NB: The 'matrix' and 'data.frame' methods are identical - documenting the former
## S4 method for signature 'matrix'
contourplot2(x, aspect = 1,
             xlim = range(x[,1], finite = TRUE),
             ylim = range(x[,2], finite = TRUE),
             xlab = NULL, ylab = NULL,
             cuts = 16, labels = !region, pretty = !labels,
             scales = list(alternating = c(1,1), tck = c(1,0)),
             region = TRUE, ...,
             col.regions = gray(seq(0.4, 1, length.out = max(100, 4*cuts))))

## S4 method for signature 'Copula'
contourplot2(x, FUN, n.grid = 26, delta = 0,
             xlim = 0:1, ylim = 0:1,
             xlab = quote(u[1]), ylab = quote(u[2]), ...)

## S4 method for signature 'mvdc'
contourplot2(x, FUN, n.grid = 26, xlim, ylim,
             xlab = quote(x[1]), ylab = quote(x[2]), ...)
x: a "matrix", "data.frame", "Copula" or "mvdc" object.

aspect: the aspect ratio.

xlim, ylim: the x- and y-axis limits.

xlab, ylab: the x- and y-axis labels. If at least one is NULL, sensible defaults are used.

cuts: the number of levels; see contourplot().

labels, pretty: logicals indicating whether the contour lines are labeled and whether pretty levels are chosen; see contourplot().

scales: a list determining how the axes are drawn; see contourplot().

region: a logical indicating whether the regions between the contour lines should be filled; see contourplot().

col.regions: the colors of the regions if region is TRUE; see contourplot().

FUN: the function to be plotted, typically dCopula or pCopula (or dMvdc, pMvdc in the "mvdc" case).

n.grid: the number of grid points used in each dimension. This can be a vector of length two, giving the number of grid points used in x- and y-direction, respectively; the function FUN is evaluated on the corresponding grid.

delta: a small number in [0, 1/2) shrinking the evaluation region away from the boundary of the unit square.

...: additional arguments passed to the underlying contourplot().
An object of class “trellis” as returned by contourplot().
Contour plots for objects of class "matrix", "data.frame", "Copula" or "mvdc".
The contour-methods for drawing contour lines via base graphics. The lattice-based wireframe2-methods to get perspective plots for functions, and cloud2-methods and splom2-methods for data.
## For 'matrix' objects
## The Frechet--Hoeffding bounds W and M
n.grid <- 26
u <- seq(0, 1, length.out = n.grid)
grid <- expand.grid("u[1]" = u, "u[2]" = u)
W <- function(u) pmax(0, rowSums(u) - 1) # lower bound W
M <- function(u) apply(u, 1, min)        # upper bound M
x.W <- cbind(grid, "W(u[1],u[2])" = W(grid)) # evaluate W on 'grid'
x.M <- cbind(grid, "M(u[1],u[2])" = M(grid)) # evaluate M on 'grid'
contourplot2(x.W) # contour plot of W
contourplot2(x.M) # contour plot of M

## For 'Copula' objects
cop <- frankCopula(-4)
contourplot2(cop, pCopula) # the copula
contourplot2(cop, pCopula, xlab = "x", ylab = "y") # adjusting the labels
contourplot2(cop, dCopula) # the density

## For 'mvdc' objects
mvNN <- mvdc(gumbelCopula(3), c("norm", "norm"),
             list(list(mean = 0, sd = 1), list(mean = 1)))
xl <- c(-2, 2)
yl <- c(-1, 3)
contourplot2(mvNN, FUN = dMvdc, xlim = xl, ylim = yl, contour = FALSE)
contourplot2(mvNN, FUN = dMvdc, xlim = xl, ylim = yl)
contourplot2(mvNN, FUN = dMvdc, xlim = xl, ylim = yl,
             region = FALSE, labels = FALSE)
contourplot2(mvNN, FUN = dMvdc, xlim = xl, ylim = yl, region = FALSE)
contourplot2(mvNN, FUN = dMvdc, xlim = xl, ylim = yl,
             col.regions = colorRampPalette(c("royalblue3", "maroon3"),
                                            space = "Lab"))
Specific Archimedean families ("acopula"
objects)
implemented in the package copula.
These families are “classical” as from p. 116 of Nelsen (2007). More specifically, see Table 1 of Hofert (2011).
copAMH copClayton copFrank copGumbel copJoe
All these are objects of the formal class "acopula".
copAMH: Archimedean family of Ali-Mikhail-Haq with parametric generator psi(t) = (1 - theta) / (exp(t) - theta), theta in [0, 1). The range of admissible Kendall's tau is [0, 1/3). Note that the lower and upper tail-dependence coefficients are both zero, that is, this copula family does not allow for tail dependence.
copClayton: Archimedean family of Clayton with parametric generator psi(t) = (1 + t)^(-1/theta), theta > 0. The range of admissible Kendall's tau, as well as that of the lower tail-dependence coefficient, is (0,1). For dimension d = 2, theta >= -1 is admissible, where negative theta allows negative Kendall's taus. Note that this copula does not allow for upper tail dependence.
copFrank: Archimedean family of Frank with parametric generator psi(t) = -log(1 - (1 - exp(-theta)) * exp(-t)) / theta, theta > 0. The range of admissible Kendall's tau is (0,1). Note that this copula family does not allow for tail dependence.
copGumbel: Archimedean family of Gumbel with parametric generator psi(t) = exp(-t^(1/theta)), theta >= 1. The range of admissible Kendall's tau, as well as that of the upper tail-dependence coefficient, is [0,1). Note that this copula does not allow for lower tail dependence.
copJoe: Archimedean family of Joe with parametric generator psi(t) = 1 - (1 - exp(-t))^(1/theta), theta >= 1. The range of admissible Kendall's tau, as well as that of the upper tail-dependence coefficient, is [0,1). Note that this copula does not allow for lower tail dependence.
Note that, staying within one of these Archimedean families, copulas can be nested whenever the two (generic) generator parameters theta0 (parent) and theta1 (child) satisfy theta0 <= theta1 (the sufficient nesting condition).
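For example (a sketch; the parameter values are arbitrary), a three-dimensional nested Clayton copula with root parameter theta0 = 0.5 and child parameter theta1 = 2 satisfies theta0 <= theta1 and is therefore a proper copula:

```r
## Nested Clayton: component 1 at the root, components 2:3 in a child
## copula with the larger parameter (0.5 <= 2, sufficient nesting condition)
NC <- onacopulaL("Clayton", list(0.5, 1, list(list(2, 2:3))))
U <- rnacopula(500, NC)
pairs(U) # components 2 and 3 are more strongly dependent
```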
A "acopula"
object.
Nelsen, R. B. (2007). An Introduction to Copulas (2nd ed.). Springer.
Hofert, M. (2010). Sampling Nested Archimedean Copulas with Applications to CDO Pricing. Suedwestdeutscher Verlag fuer Hochschulschriften AG & Co. KG.
Hofert, M. (2011). Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis 55, 57–70.
Hofert, M. and Mächler, M. (2011). Nested Archimedean Copulas Meet R: The nacopula Package. Journal of Statistical Software 39(9), 1–20. https://www.jstatsoft.org/v39/i09/.
The class definition, "acopula". onacopula and setTheta for such Archimedean copulas with specific parameters. getAcop accesses these families “programmatically”.
## Print a copAMH object and its structure
copAMH
str(copAMH)

## Show admissible parameters for a Clayton copula
copClayton@paraInterval

## Generate random variates from a Log(p) distribution via V0 of Frank
p <- 1/2
copFrank@V0(100, -log(1-p))

## Plot the upper tail-dependence coefficient as a function in the
## parameter for Gumbel's family
curve(copGumbel@lambdaU(x), xlim = c(1, 10), ylim = c(0,1), col = 4)

## Plot Kendall's tau as a function in the parameter for Joe's family
curve(copJoe@tau(x), xlim = c(1, 10), ylim = c(0,1), col = 4)

## ------- Plot psi() and tau() - and properties of all families ----

## The copula families currently provided:
(famNms <- ls("package:copula", patt="^cop[A-Z]"))

op <- par(mfrow= c(length(famNms), 2),
          mar = .6+ c(2,1.4,1,1), mgp = c(1.1, 0.4, 0))
for(nm in famNms) {
    Cf <- get(nm)
    thet <- Cf@iTau(0.3)
    curve(Cf@psi(x, theta = thet), 0, 5, xlab = quote(x), ylab="", ylim=0:1,
          col = 2,
          main = substitute(list(NAM ~~~ psi(x, theta == TH), tau == 0.3),
                            list(NAM=Cf@name, TH=thet)))
    I <- Cf@paraInterval
    Iu <- pmin(10, I[2])
    curve(Cf@tau(x), I[1], Iu, col = 3,
          xlab = bquote(theta %in% .(format(I))), ylab = "",
          main = substitute(NAM ~~ tau(theta), list(NAM=Cf@name)))
}
par(op)

## Construct a bivariate Clayton copula with parameter theta
theta <- 2
C2 <- onacopula("Clayton", C(theta, 1:2))
C2@copula # is an "acopula" with specific parameter theta
curve(C2@copula@psi(x, C2@copula@theta),
      main = quote("Generator" ~~ psi ~~ " of Clayton A.copula"),
      xlab = quote(theta1), ylab = quote(psi(theta1)),
      xlim = c(0,5), ylim = c(0,1), col = 4)

## What is the corresponding Kendall's tau?
C2@copula@tau(theta) # 0.5

## What are the corresponding tail-dependence coefficients?
C2@copula@lambdaL(theta)
C2@copula@lambdaU(theta)

## Generate n pairs of random variates from this copula
U <- rnacopula(n = 1000, C2)
## and plot the generated pairs of random variates
plot(U, asp=1, main = "n = 1000 from Clayton(theta = 2)")
Density (dCopula), distribution function (pCopula), and random generation (rCopula) for a copula object.
dCopula(u, copula, log=FALSE, ...)
pCopula(u, copula, ...)
rCopula(n, copula, ...)
copula: an R object of class "Copula" (or one of its subclasses).

u: a vector of the copula dimension d or a matrix with d columns, giving the points where the density or distribution function is to be evaluated.

log: logical indicating if the log-density should be returned.

n: (for rCopula:) number of observations to be generated.

...: further optional arguments for some methods.
The density (dCopula) and distribution function (pCopula) methods for Archimedean copulas now use the corresponding function slots of the Archimedean copula objects, such as copClayton, copGumbel, etc.
If a component u[j] is outside [0,1] we declare the density to be zero, and this is true even when another component is NA or NaN; see also the “outside” example.
The distribution function of a t copula uses pmvt from package mvtnorm; similarly, the density (dCopula) calls dmvt from CRAN package mvtnorm. The normalCopula methods use dmvnorm and pmvnorm from the same package.
The random number generator for an Archimedean copula uses the conditional approach for the bivariate case and the Marshall-Olkin (1988) approach for dimension greater than 2.
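For illustration, the Marshall-Olkin algorithm for a Clayton copula amounts to dividing i.i.d. standard exponentials by a Gamma(1/theta) frailty and applying the generator; the helper rClaytonMO() below is hypothetical, a sketch rather than the package's internal code:

```r
## Marshall--Olkin (1988) sampling for a d-dimensional Clayton copula,
## generator psi(t) = (1 + t)^(-1/theta), theta > 0 (sketch):
rClaytonMO <- function(n, d, theta) {
  V <- rgamma(n, shape = 1/theta)   # frailty V ~ Gamma(1/theta, 1)
  E <- matrix(rexp(n*d), nrow = n)  # i.i.d. standard exponentials
  (1 + E/V)^(-1/theta)              # U[i,j] = psi(E[i,j]/V[i])
}
set.seed(1)
U <- rClaytonMO(1000, d = 3, theta = 2)
stopifnot(dim(U) == c(1000, 3), 0 < U, U < 1)
```

Comparing pairwise Kendall's taus of U with tau(theta) = theta/(theta + 2) = 0.5 gives a quick sanity check of the sampler.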
dCopula()
gives the density, pCopula()
gives the
distribution function, and rCopula()
generates random variates.
Frees, E. W. and Valdez, E. A. (1998). Understanding relationships using copulas. North American Actuarial Journal 2, 1–25.
Genest, C. and Favre, A.-C. (2007). Everything you always wanted to know about copula modeling but were afraid to ask. Journal of Hydrologic Engineering 12, 347–368.
Joe, H. (1997). Multivariate Models and Dependence Concepts. Chapman and Hall, London.
Marshall, A. W. and Olkin, I. (1988) Families of multivariate distributions. Journal of the American Statistical Association 83, 834–841.
Nelsen, R. B. (2006) An introduction to Copulas. Springer, New York.
the copula and acopula classes, the acopula families, acopula-families. Constructor functions such as ellipCopula, archmCopula, fgmCopula.
norm.cop <- normalCopula(0.5)
norm.cop
## one d-vector =^= 1-row matrix, works too :
dCopula(c(0.5, 0.5), norm.cop)
pCopula(c(0.5, 0.5), norm.cop)

u <- rCopula(100, norm.cop)
plot(u)
dCopula(u, norm.cop)
pCopula(u, norm.cop)
persp  (norm.cop, dCopula)
contour(norm.cop, pCopula)

## a 3-dimensional normal copula
u <- rCopula(1000, normalCopula(0.5, dim = 3))
if(require(scatterplot3d))
    scatterplot3d(u)

## a 3-dimensional clayton copula
cl3 <- claytonCopula(2, dim = 3)
v <- rCopula(1000, cl3)
pairs(v)
if(require(scatterplot3d))
    scatterplot3d(v)

## Compare with the "nacopula" version :
fu1 <- dCopula(v, cl3)
fu2 <- copClayton@dacopula(v, theta = 2)
Fu1 <- pCopula(v, cl3)
Fu2 <- pCopula(v, onacopula("Clayton", C(2.0, 1:3)))
## The density and cumulative values are the same:
stopifnot(all.equal(fu1, fu2, tolerance= 1e-14),
          all.equal(Fu1, Fu2, tolerance= 1e-15))

## NA and "outside" u[]
u <- v[1:12,]
## replace some by values outside (0,1) and some by NA/NaN
u[1, 2:3] <- c(1.5, NaN); u[2, 1] <- 2; u[3, 1:2] <- c(NA, -1)
u[cbind(4:9, 1:3)] <- c(NA, NaN)
f <- dCopula(u, cl3)
cbind(u, f) # note: f(.) == 0 at [1] and [3] in spite of NaN/NA
stopifnot(f[1:3] == 0, is.na(f[4:9]), 0 < f[10:12])
A copula is a multivariate distribution with uniform margins.
The virtual class "Copula"
is the mother (or “super
class”) of all copula classes in the package copula which
encompasses classes of the former packages nacopula and copula.
The virtual class "parCopula"
extends "Copula"
and is
the super class of all copulas that can be fully parametrized and
hence fitted to data. For these, at least the dim()
method
must be well defined.
The virtual class "dimCopula"
extends "Copula"
and has
an explicit slot dimension
, with corresponding trivial
dim()
method.
The virtual class "copula"
extends bot "dimCopula"
and
"parCopula"
and is the mother of all copula classes from former
package copula. It has set of slots for (the dimension and)
parameter vector, see below.
The virtual class "Xcopula"
contains a slot copula
of class
"parCopula"
.
The virtual class "xcopula"
extends "parCopula"
and
"Xcopula"
; an (“actual”) class example are
the rotated copulas, rotCopula
.
Objects are typically created by tCopula(), evCopula(), etc.
Class "dimCopula"
and its subclasses, notably
"copula"
, have a slot
dimension
:an "integer"
(of length 1), the
copula dimension .
Class "copula"
(and its subclasses) have additional slots
parameters
:numeric
vector of
parameter values, can be NA (i.e., NA_real_
).
param.names
:"character"
vector of parameter
names (and hence of the same length as parameters
).
param.lowbnd
:lower bounds for the parameters, of
class "numeric"
.
param.upbnd
:upper bounds for the parameters, of class
"numeric"
.
fullname
:deprecated; object of class "character"
, family names
of the copula.
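For a concrete copula object, these slots can be inspected directly (a sketch; the parameter value is arbitrary):

```r
## Slot access on a 3-dimensional exchangeable normal copula
nc <- normalCopula(0.5, dim = 3)
nc@dimension    # the copula dimension
nc@parameters   # the parameter vector (here 0.5)
nc@param.names
c(nc@param.lowbnd, nc@param.upbnd)
```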
This implementation is still at the experimental stage and is subject to change during the development.
The "copula"
class is extended by the
evCopula
, archmCopula
,
and ellipCopula
classes. Instances of
such copulas can be created via functions evCopula
,
archmCopula
and ellipCopula
.
"plackettCopula"
and fgmCopula
are
special types of copulas which do not belong to either one of the
three classes above.
Help for the (sub)classes archmCopula, ellipCopula, evCopula, and fgmCopula. The Archimedean and nested Archimedean classes (from former package nacopula), with a more extensive list of slots (partly instead of methods), acopula, and nacopula.
hc <- evCopula("husler", 1.25)
dim(hc)
smoothScatter(u <- rCopula(2^11, hc))
lambda(hc)
tau(hc)
rho(hc)
str(hc)
For a data matrix x, compute the Kendall's tau “correlation” matrix, i.e., all pairwise Kendall's taus between the columns of x. By default and when x has no missing values (NAs), the fast algorithm of cor.fk() is used.
corKendall(x, checkNA = TRUE, use = if(checkNA && anyNA(x)) "pairwise" else "everything")
x: data, an n x p matrix (or, less efficiently, a data.frame), or a numeric vector which is treated as an n x 1 matrix.

checkNA: logical indicating if x should be checked for NAs, in which case use = "pairwise" is taken by default.

use: a string determining the treatment of NAs; see cor().
The p x p matrix K of pairwise Kendall's taus, with K[i,j] := tau(x[,i], x[,j]).
cor.fk() from pcaPP (used by default when there are no missing values (NAs) in x). etau() or fitCopula(*, method = "itau") make use of corKendall().
## If there are no NA's, corKendall() is faster than cor(*, "kendall")
## and gives the same :
system.time(C1 <- cor(swiss, method="kendall"))
system.time(C2 <- corKendall(swiss))
stopifnot(all.equal(C1, C2, tol = 1e-5))

## In the case of missing values (NA), corKendall() reverts to
## cor(*, "kendall", use = "pairwise") {no longer very fast} :
swM <- swiss # shorter names and three missings:
colnames(swM) <- abbreviate(colnames(swiss), min=6)
swM[1,2] <- swM[7,3] <- swM[25,5] <- NA
(C3 <- corKendall(swM)) # now automatically uses the same as
stopifnot(identical(C3, cor(swM, method="kendall", use="pairwise")))
## and is quite close to the non-missing "truth":
stopifnot(all.equal(unname(C3), unname(C2), tol = 0.06)) # rel.diff.= 0.055

try(corKendall(swM, checkNA=FALSE)) # --> Error
## the error is really from pcaPP::cor.fk(swM)
Evaluate the density of the diagonal of a d-dimensional (nested) Archimedean copula. Note that the diagonal of a copula is a cumulative distribution function. Currently, only Archimedean copulas are implemented.
dDiag(u, cop, log=FALSE)
u: a numeric vector of evaluation points.

cop: a (nested) Archimedean copula object of class "outer_nacopula"; currently, only proper Archimedean copulas are supported.

log: logical indicating if the log of the density of the diagonal should be returned.
A numeric vector containing the values of the density of the diagonal of the Archimedean copula at u.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
th. <- c(0.1, 0.2, 0.5, 0.8, 1.4, 2., 5.)
curve(dDiag(x, cop=onacopulaL("Clayton", list(th.[1], 1:3))), 0, 1, n=1000,
      ylab="dDiag(x, *)", main="Diagonal densities of Clayton")
abline(h=0, lty=3)
for(j in 2:length(th.))
    curve(dDiag(x, cop=onacopulaL("Clayton", list(th.[j], 1:3))),
          add=TRUE, col=j, n=1000)
legend("topleft",
       do.call(expression,
               lapply(th., function(th)
                   substitute(theta == TH, list(TH=th)))),
       lty = 1, col=seq_along(th.), bty="n")
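Since the diagonal delta(u) = C(u,...,u) is a distribution function with density dDiag(), a central finite difference of pCopula() along the diagonal should match dDiag() numerically; a small sketch (step size and tolerance chosen loosely):

```r
## Check: dDiag() is the derivative of u |-> C(u,...,u) (3-dim Clayton)
cop <- onacopulaL("Clayton", list(2, 1:3))
u <- 0.4; h <- 1e-5
num <- (pCopula(rep(u+h, 3), cop) - pCopula(rep(u-h, 3), cop)) / (2*h)
all.equal(num, dDiag(u, cop), tolerance = 1e-4)
```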
Describe a copula object, i.e., its basic properties, as a string. This is a utility used when printing or plotting copulas, e.g., after a fitting.
describeCop(x, kind = c("short", "very short", "long"), prefix = "", ...)
x: a copula object.

kind: a character string, one of "short", "very short" or "long", specifying how detailed the description should be.

prefix: a string to be prefixed to the returned string, which can be useful for indentation in describing extended copulas such as Khoudraji copulas.

...: further arguments; unused currently.
a character
string.
signature(x = "archmCopula", kind = "ANY")
..
signature(x = "copula", kind = "character")
..
signature(x = "copula", kind = "missing")
..
signature(x = "ellipCopula", kind = "character")
..
signature(x = "fgmCopula", kind = "ANY")
..
signature(x = "xcopula", kind = "ANY")
..
Copula class definition copula.
## FIXME
For a (nested) Archimedean copula (object of class nacopula) x, dCopula(u, x) (or also currently still dnacopula(x, u)) evaluates the density of x at the given vector or matrix u.
## S4 method for signature 'matrix,nacopula'
dCopula(u, copula, log=FALSE, ...)
## *Deprecated*:
dnacopula(x, u, log=FALSE, ...)
copula, x: an object of class "nacopula".

u: argument of the copula x. Note that u can be a matrix, in which case the density is computed for each row of the matrix and the vector of values is returned.

log: logical indicating if the log of the density should be returned.

...: optional arguments passed to the copula's dacopula function (slot), such as n.MC.
If it exists, the density of an Archimedean copula with generator psi at u in (0,1)^d is given by

  c(u) = psi^(d)( psi^{-1}(u_1) + ... + psi^{-1}(u_d) ) * prod_{j=1}^d (psi^{-1})'(u_j),

where psi^(d) denotes the d-th derivative of psi.
A numeric vector containing the values of the density of the Archimedean copula at u.
dCopula(u, copula) is a generic function with methods for all our copula classes, see dCopula.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
For more details about the derivatives of an Archimedean generator, see, for example, absdPsi in class acopula.
## Construct a twenty-dimensional Gumbel copula with parameter chosen
## such that Kendall's tau of the bivariate margins is 0.25.
theta <- copJoe@iTau(.25)
C20 <- onacopula("J", C(theta, 1:20))

## Evaluate the copula density at the point u = (0.5,...,0.5)
u <- rep(0.5, 20)
dCopula(u, C20)

## the same with Monte Carlo based on 10000 simulated "frailties"
dCopula(u, C20, n.MC = 10000)

## Evaluate the exact log-density at several points
u <- matrix(runif(100), ncol=20)
dCopula(u, C20, log = TRUE)

## Back-compatibility check
stopifnot(identical(dCopula(u, C20),
                    suppressWarnings(dnacopula(C20, u))),
          identical(dCopula(u, C20, log = TRUE),
                    suppressWarnings(dnacopula(C20, u, log = TRUE))))
Creating elliptical copula objects with corresponding dimension and parameters, including the dispersion structure P (pronounced “Rho”).
ellipCopula (family, param, dim = 2, dispstr = "ex", df = 4, ...)

normalCopula(param, dim = 2, dispstr = "ex")
tCopula(param, dim = 2, dispstr = "ex",
        df = 4, df.fixed = FALSE, df.min = 0.01)

dispstrToep(perm = NULL, check = TRUE)

## S4 method for signature 'matrix,normalCopula'
pCopula(u, copula, algorithm=NULL, keepAttr=FALSE, ...)
## S4 method for signature 'matrix,tCopula'
pCopula(u, copula, algorithm=NULL, keepAttr=FALSE, ...)
family |
a character string specifying the copula family: "normal" or "t". |
param |
a numeric vector specifying the parameter value(s). |
dim |
the dimension of the copula. |
dispstr |
a string specifying the “dispersion structure”,
i.e., type of the symmetric positive definite matrix characterizing the
elliptical copula. Currently available structures are The dispersion structure for Toeplitz can (and often should) now be
specified by |
df |
integer value specifying the number of degrees of freedom of the multivariate t distribution used to construct the t copulas. |
df.fixed |
logical specifying if the degrees of freedom df are to be considered as fixed (rather than as a parameter to be estimated). |
df.min |
non-negative number; the strict lower bound for df when it is estimated.
|
copula |
an R object of class |
u |
a vector of the copula dimension |
algorithm |
|
keepAttr |
|
... |
for the |
perm |
an |
check |
a |
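As a plain base-R illustration (our own sketch, not package code), the four dispersion structures correspond to the following correlation matrices for d = 4; the parameter layout (one value for "ex" and "ar1", d-1 values for "toep", d(d-1)/2 values for "un", assumed here to fill the lower triangle column-wise) matches the param argument:

```r
## Base-R sketch of the four dispersion structures for d = 4;
## p2P()/getSigma() produce such matrices from the 'param' slot.
d <- 4; rho <- 0.5
P.ex  <- matrix(rho, d, d); diag(P.ex) <- 1   # "ex":  exchangeable
P.ar1 <- rho^abs(outer(1:d, 1:d, "-"))        # "ar1": rho^|i-j|
P.toep <- toeplitz(c(1, c(0.5, 0.3, 0.1)))    # "toep": d-1 parameters
par.un <- c(.5, .4, .3, .2, .1, .05)          # "un": d(d-1)/2 parameters,
P.un <- diag(d)                               #   filling the lower triangle
P.un[lower.tri(P.un)] <- par.un               #   column-wise (assumption)
P.un <- P.un + t(P.un) - diag(d)
stopifnot(isSymmetric(P.ex), isSymmetric(P.ar1),
          isSymmetric(P.toep), isSymmetric(P.un))
```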
For
ellipCopula()
, normalCopula()
, or tCopula()
:an elliptical copula object of class "normalCopula"
or "tCopula"
.
the character
string "toep"
,
optionally with attribute (see attributes, attr) "perm" containing a permutation p of 1:d, such that the column-permuted copula, or in the data case the column-permuted matrix U[,p], has dispersion matrix toeplitz(c(1, par)), with par the respective parameter vector of bivariate “correlations”.
Note that the result of dispstrToep()
is currently stored in the
dispstr
slot of the copula object.
the numeric vector of copula probabilities of
length nrow(u)
.
ellipCopula()
is a wrapper for normalCopula()
and
tCopula()
.
The pCopula()
methods for the normal- and t-copulas
accept optional arguments to be passed to the underlying
(numerical integration) algorithms from package mvtnorm's
pmvnorm
and pmvt
,
respectively, notably algorithm
, see
GenzBretz
, or abseps
which defaults to 0.001
.
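For intuition, here is a base-R sketch (the helper name pNormCop2 is ours; the actual pCopula() methods delegate to mvtnorm as described above) writing the bivariate normal copula cdf as a one-dimensional integral of the conditional copula:

```r
## Sketch (base R only): C(u1, u2) for the bivariate normal copula via
##   C(u1, u2) = integral_0^{u1} Phi((qnorm(u2) - rho*qnorm(t)) / sqrt(1 - rho^2)) dt,
## i.e., integrating the conditional distribution C_{2|1}(u2 | t).
pNormCop2 <- function(u1, u2, rho)
  integrate(function(t) pnorm((qnorm(u2) - rho*qnorm(t)) / sqrt(1 - rho^2)),
            lower = 0, upper = u1)$value
pNormCop2(0.3, 0.7, rho = 0.5) # compare with pCopula(c(0.3, 0.7), normalCopula(0.5))
pNormCop2(0.3, 0.7, rho = 0)   # independence: 0.3 * 0.7 = 0.21
```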
p2P()
, and getSigma()
for construction and
extraction of the dispersion matrix or
matrix of
(generalized)
correlations.
normalCopula(c(0.5, 0.6, 0.7), dim = 3, dispstr = "un")
t.cop <- tCopula(c(0.5, 0.3), dim = 3, dispstr = "toep",
                 df = 2, df.fixed = TRUE)
getSigma(t.cop) # P matrix (with diagonal = 1)
stopifnot(all.equal(toeplitz(c(1, .5, .3)), getSigma(t.cop)))

## dispersion "AR1" :
nC.7 <- normalCopula(0.8, dim = 7, dispstr = "ar1")
getSigma(nC.7)
stopifnot(all.equal(toeplitz(.8^(0:6)), getSigma(nC.7)))

## from the wrapper
norm.cop <- ellipCopula("normal", param = c(0.5, 0.6, 0.7),
                        dim = 3, dispstr = "un")
if(require("scatterplot3d") && dev.interactive(orNone=TRUE)) {
  ## 3d scatter plot of 1000 random observations
  scatterplot3d(rCopula(1000, norm.cop))
  scatterplot3d(rCopula(1000, t.cop))
}

set.seed(12)
uN <- rCopula(512, norm.cop)
set.seed(2); pN1 <- pCopula(uN, norm.cop)
set.seed(3); pN2 <- pCopula(uN, norm.cop)
stopifnot(identical(pN1, pN2)) # no longer random for dim = 3

(Xtras <- copula:::doExtras())
if(Xtras) { ## a bit more accurately:
  set.seed(4); pN1. <- pCopula(uN, norm.cop, abseps = 1e-9)
  set.seed(5); pN2. <- pCopula(uN, norm.cop, abseps = 1e-9)
  stopifnot(all.equal(pN1., pN2., 1e-5))# see 3.397e-6
  ## but increasing the required precision (e.g., abseps=1e-15) does *NOT* help
}

## For smaller copula dimension 'd', alternatives are available and
## non-random, see ?GenzBretz from package 'mvtnorm' :
has_mvtn <- "package:mvtnorm" %in% search() #% << (workaround ESS Rd render bug)
if(!has_mvtn)
  require("mvtnorm")# -> GenzBretz(), Miwa(), and TVPACK() are available
## Note that Miwa() would become very slow for dimensions 5, 6, ..
set.seed(4); pN1.M <- pCopula(uN, norm.cop, algorithm = Miwa(steps = 512))
set.seed(5); pN2.M <- pCopula(uN, norm.cop, algorithm = Miwa(steps = 512))
stopifnot(all.equal(pN1.M, pN2.M, tol= 1e-15))# *no* randomness
set.seed(4); pN1.T <- pCopula(uN, norm.cop, algorithm = TVPACK(abseps = 1e-10))
set.seed(5); pN2.T <- pCopula(uN, norm.cop, algorithm = TVPACK(abseps = 1e-14))
stopifnot(all.equal(pN1.T, pN2.T, tol= 1e-15))# *no* randomness (but no effect of 'abseps')
if(!has_mvtn) detach("package:mvtnorm")# (revert)

## Versions with unspecified parameters:
tCopula()
allEQ <- function(u,v) all.equal(u, v, tolerance=0)
stopifnot(allEQ(ellipCopula("norm"), normalCopula()),
          allEQ(ellipCopula("t"), tCopula()))
tCopula(dim=3)
tCopula(dim=4, df.fixed=TRUE)
tCopula(dim=5, disp = "toep", df.fixed=TRUE)
normalCopula(dim=4, disp = "un")

## Toeplitz after *permutation* dispersions (new in copula 1.1-0) ---------
tpar <- c(7,5,3)/8 # *gives* pos.def.:
(ev <- eigen(toeplitz(c(1, tpar)), symmetric=TRUE, only.values=TRUE)$values)
stopifnot(ev > 0)
N4.   <- ellipCopula("normal", dim=4, param=tpar, dispstr = "toep") #"regular"
## reversed order is "the same" for toeplitz structure:
N4.pr <- ellipCopula("normal", dim=4, param=tpar, dispstr = dispstrToep(4:1))
N4.p1 <- ellipCopula("normal", dim=4, param=tpar, dispstr = dispstrToep(c(4,1:3)))
N4.p2 <- ellipCopula("normal", dim=4, param=tpar, dispstr = dispstrToep(c(4:3,1:2)))
N4.p3 <- ellipCopula("normal", dim=4, param=tpar, dispstr = dispstrToep(c(2,4,1,3)))
(pm <- attr(N4.p3@dispstr, "perm")) # (2 4 1 3)
ip <- c(3,1,4,2) # the *inverse* permutation of (2 4 1 3) = Matrix::invPerm(pm)
(Sp3 <- getSigma(N4.p3)) # <-- "permuted toeplitz"
Sp3[ip, ip] # re-ordered rows & columns => *is* toeplitz :
stopifnot(exprs = {
  ## permutations pm and ip are inverses:
  pm[ip] == 1:4
  ip[pm] == 1:4
  is.matrix(T4 <- toeplitz(c(1, tpar)))
  identical(getSigma(N4.),   T4)
  identical(getSigma(N4.pr), T4) # 4:1 and 1:4 is "the same" for Rho
  identical(Sp3[ip, ip], T4)
  identical(Sp3, T4[pm,pm])
})

## Data generation -- NB: The U matrices are equal only "in distribution":
set.seed(7); U.p3 <- rCopula(1000, N4.p3)
set.seed(7); U.   <- rCopula(1000, N4.)
stopifnot(exprs = {
  all.equal(loglikCopula(tpar, u=U.p3,      copula= N4.p3),
            loglikCopula(tpar, u=U.p3[,ip], copula= N4.) -> LL3)
  all.equal(loglikCopula(tpar, u=U.,      copula= N4.),
            loglikCopula(tpar, u=U.[,pm], copula= N4.p3) -> LL.)
})
c(LL., LL3)# similar but different
if(Xtras) {
  fm.  <- fitCopula(N4.,   U.)
  fm.3 <- fitCopula(N4.p3, U.p3)
  summary(fm.3)
  stopifnot(all.equal(coef(fm.), coef(fm.3), tol = 0.01))# similar but different
}
Copulas generated from elliptical multivariate distributions, notably
Normal- and t-copulas (of specific class "normalCopula"
or
"tCopula"
, respectively).
Objects are typically created by ellipCopula()
,
normalCopula()
, or tCopula()
.
dispstr
:"character"
string indicating
how the dispersion matrix is parameterized; one of "ex"
,
"ar1"
, "toep"
, or "un"
, see the dispstr
argument of ellipCopula()
.
dimension
:Object of class "numeric"
, dimension
of the copula.
parameters
:a numeric
, (vector of) the parameter
value(s).
param.names
:character
vector with names
for the parameters
slot, of the same length.
param.lowbnd
:numeric
vector of lower
bounds for the parameters
slot, of the same length.
param.upbnd
:upper bounds for parameters
,
analogous to param.lowbnd
.
fullname
:deprecated; object of class "character"
, family names
of the copula.
Class "ellipCopula"
extends class copula
directly. Classes "normalCopula"
and "tCopula"
extend
"ellipCopula"
directly.
Many methods are available, notably dCopula
,
pCopula
, and rCopula
.
Use, e.g., methods(class = "tCopula")
to find others.
ellipCopula
which also documents tCopula()
and
normalCopula()
;
copula-class
.
Compute minimum distance estimators for (nested) Archimedean copulas.
emde(u, cop,
     method = c("mde.chisq.CvM", "mde.chisq.KS",
                "mde.gamma.CvM", "mde.gamma.KS"),
     interval = initOpt(cop@copula@name),
     include.K = FALSE, repara = TRUE, ...)
u |
|
cop |
|
method |
a character string specifying the estimation method, one of the options listed.
The four methods are described in Hofert et al. (2013); see also the ‘Details’ section. |
interval |
bivariate vector denoting the interval where optimization takes place. The default is computed as described in Hofert et al. (2013). |
include.K |
logical indicating whether the last component, the
(possibly numerically challenging) Kendall distribution function
|
repara |
logical indicating whether the distance function to be optimized is reparametrized (the default); see the code for more details. |
... |
additional arguments passed to |
First, htrafo
is applied to map the (n, d)-matrix of given realizations to an
(n, d)-matrix or (n, d-1)-matrix, depending on whether the last
component is included (include.K=TRUE) or not. Second, using
either the sum of squares of the standard normal quantile function
(method="mde.chisq.CvM"
and
method="mde.chisq.KS"
) or the sum of negative logarithms
(method="mde.gamma.CvM"
and method="mde.gamma.KS"
), a map to a
chi-square or an Erlang distribution is applied, respectively. Finally, a
Cramér-von Mises (method="mde.chisq.CvM"
and
method="mde.gamma.CvM"
) or Kolmogorov-Smirnov
(method="mde.chisq.KS"
and method="mde.gamma.KS"
) distance is
applied. This is repeated in an optimization until the copula parameter is
found such that this distance is minimized.
Note that the same transformations as described above are applied for goodness-of-fit testing; see the ‘See Also’ section.
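The two mappings and the Cramér-von Mises distance can be sketched in base R as follows (here h stands in for the output of htrafo(), whose columns are approximately U(0,1) under the null; all names are ours):

```r
## Sketch of the distribution mappings used by the "chisq" and "gamma"
## variants, applied to an n x d matrix h of approximately U(0,1) columns.
set.seed(1)
n <- 200; d <- 5
h <- matrix(runif(n * d), ncol = d)  # placeholder for htrafo(u, cop)
x.chisq <- rowSums(qnorm(h)^2)       # ~ chi^2_d under the null ("chisq")
x.gamma <- rowSums(-log(h))          # ~ Erlang(d), i.e. Gamma(d, 1) ("gamma")
## Cramer-von Mises distance of x.chisq from its target distribution:
p <- sort(pchisq(x.chisq, df = d))
CvM <- 1/(12*n) + sum((p - (2*seq_len(n) - 1)/(2*n))^2)
CvM  # small values indicate a good fit; emde() minimizes this over theta
```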
list
as returned by optimize
, including the
minimum distance estimator.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
Hering, C. and Hofert, M. (2014), Goodness-of-fit tests for Archimedean copulas in high dimensions, Innovations in Quantitative Risk Management.
enacopula
(wrapper for different estimators),
gofCopula
(wrapper for different goodness-of-fit tests),
htrafo
(transformation to a multivariate uniform
distribution), and
K
(Kendall distribution function).
tau <- 0.25
(theta <- copGumbel@iTau(tau)) # 4/3
d <- 20
(cop <- onacopulaL("Gumbel", list(theta, 1:d)))
set.seed(1)
n <- 200
U <- rnacopula(n, cop)
(meths <- eval(formals(emde)$method)) # "mde.chisq.CvM", ...
fun <- function(meth, u, cop, theta) {
  run.time <- system.time(val <- emde(u, cop=cop, method=meth)$minimum)
  list(value=val, error=val-theta, utime.ms=1000*run.time[[1]])
}
(res <- sapply(meths, fun, u=U, cop=cop, theta=theta))
Compute (simulated) maximum likelihood estimators for (nested) Archimedean copulas.
emle(u, cop, n.MC=0, optimizer="optimize", method,
     interval=initOpt(cop@copula@name),
     start=list(theta=initOpt(cop@copula@name, interval=FALSE, u=u)),
     ...)

.emle(u, cop, n.MC=0, interval=initOpt(cop@copula@name), ...)
u |
|
cop |
|
n.MC |
|
optimizer |
a string or |
method |
only when |
interval |
bivariate vector denoting the interval where optimization takes place. The default is computed as described in Hofert et al. (2012). |
start |
|
... |
additional parameters passed to |
Exact formulas for the generator derivatives were derived in Hofert
et al. (2012). Based on these formulas one can compute the
(log-)densities of the Archimedean copulas. Note that for some
densities, the formulas are numerically highly non-trivial to compute
and considerable efforts were put in to make the computations
numerically feasible even in large dimensions (see the source code of
the Gumbel copula, for example). Both MLE and SMLE showed good
performance in the simulation study conducted by Hofert et
al. (2013) including the challenging 100-dimensional case.
Alternative estimators (see also enacopula
) often used
because of their numerical feasibility, might break down in much
smaller dimensions.
Note: SMLE for Clayton currently faces serious numerical issues and remains a subject of further research. This is of theoretical interest only, since the exact derivatives are known and numerically non-critical to evaluate.
an R object of class "mle2"
(and
thus useful for obtaining confidence intervals) with the
(simulated) maximum likelihood estimator.
list
as returned by
optimize()
including the maximum likelihood
estimator (does not provide confidence intervals but is typically faster).
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
mle2
from package bbmle and
mle
from stats4 on which mle2
is
modeled. enacopula
(wrapper for different estimators).
demo(opC-demo)
and demo(GIG-demo)
for
examples of two-parameter families.
tau <- 0.25
(theta <- copGumbel@iTau(tau)) # 4/3
d <- 20
(cop <- onacopulaL("Gumbel", list(theta, 1:d)))
set.seed(1)
n <- 200
U <- rnacopula(n, cop)

## Estimation
system.time(efm <- emle(U, cop))
summary(efm) # using bbmle's 'mle2' method

## Profile likelihood plot [using S4 methods from bbmle/stats4] :
pfm <- profile(efm)
ci <- confint(pfm, level=0.95)
ci
stopifnot(ci[1] <= theta, theta <= ci[2])
plot(pfm) # |z| against theta, |z| = sqrt(deviance)
plot(pfm, absVal=FALSE, # z against theta
     show.points=TRUE)  # showing how it's interpolated
## and show the true theta:
abline(v=theta, col="lightgray", lwd=2, lty=2)
axis(1, pos = 0, at=theta, label=quote(theta[0]))

## Plot of the log-likelihood, MLE and conf.int.:
logL <- function(x) -efm@minuslogl(x)
  # == -sum(copGumbel@dacopula(U, theta=x, log=TRUE))
logL. <- Vectorize(logL)
I <- c(cop@copula@iTau(0.1), cop@copula@iTau(0.4))
curve(logL., from=I[1], to=I[2], xlab=quote(theta), ylab="log-likelihood",
      main="log-likelihood for Gumbel")
abline(v = c(theta, efm@coef), col="magenta", lwd=2, lty=2)
axis(1, at=c(theta, efm@coef), padj = c(-0.5, -0.8), hadj = -0.2,
     col.axis="magenta", label= expression(theta[0], hat(theta)[n]))
abline(v=ci, col="gray30", lwd=2, lty=3)
text(ci[2], extendrange(par("usr")[3:4], f= -.04)[1],
     "95% conf. int.", col="gray30", adj = -0.1)
Computes the empirical copula (according to a provided method) and auxiliary tools.
empCopula(X, smoothing = c("none", "beta", "checkerboard", "schaake.shuffle"),
          offset = 0,
          ties.method = c("max", "average", "first", "last", "random", "min"))

C.n(u, X, smoothing = c("none", "beta", "checkerboard"), offset = 0,
    ties.method = c("max", "average", "first", "last", "random", "min"))

dCn(u, U, j.ind = 1:d, b = 1/sqrt(nrow(U)), ...)

F.n(x, X, offset = 0, smoothing = c("none", "beta", "checkerboard"))

Cn(x, w) ## <-- deprecated! use C.n(w, x) instead!

toEmpMargins(U, x, ...)
X |
an |
u , w
|
an |
U |
an |
x |
an |
smoothing |
|
ties.method |
|
j.ind |
|
b |
|
offset |
used in scaling the result, which is of the form sum(.)/(n + offset); defaults to zero.
|
... |
additional arguments passed to |
Given pseudo-observations from a distribution with continuous margins
and copula C, the empirical copula is the (default) empirical
distribution function of these pseudo-observations. It is thus a
natural nonparametric estimator of C. The function C.n()
computes the empirical copula or two alternative smoothed versions of
it: the empirical beta copula or the empirical
checkerboard copula; see Eqs. (2.1) and (4.1) in Segers, Sibuya and
Tsukahara (2017), and the references therein. empCopula()
is the constructor of an object of class empCopula
.
The function dCn()
approximates first-order partial derivatives
of the unknown copula using the empirical copula.
The function F.n()
computes the empirical distribution function
of a multivariate sample. Note that C.n(u, X, smoothing="none",
*)
simply calls F.n(u, pobs(X), *)
after checking u
.
There are several asymptotically equivalent definitions of the
empirical copula. C.n(, smoothing = "none")
is simply defined
as the empirical distribution function computed from the
pseudo-observations, that is,

C_n(u) = (1/n) * sum_{i=1}^n 1(U_{i,1} <= u_1, ..., U_{i,d} <= u_d),

where U_i = (U_{i,1}, ..., U_{i,d}), i in {1, ..., n}, denote the
pseudo-observations and n the sample size. Internally,
C.n(,smoothing = "none")
is just a wrapper for F.n()
and is expected to be fed with the
pseudo-observations.
The approximation for the j-th partial derivative of the unknown copula
C is implemented as, for example, in Rémillard and Scaillet (2009),
and given by

dCn_j(u) = [ C_n(u_1, ..., u_{j-1}, min(u_j + b, 1), u_{j+1}, ..., u_d)
           - C_n(u_1, ..., u_{j-1}, max(u_j - b, 0), u_{j+1}, ..., u_d) ] / (2b),

where b denotes the bandwidth and C_n the empirical copula.
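The finite-difference approximation behind dCn() can be sketched in base R as follows (the helper names C.hat and dC.hat are ours; the package implementation has more refinements):

```r
## Minimal sketch: central finite difference of the empirical copula,
## with the perturbed j-th coordinate clamped to [0, 1].
C.hat <- function(u, U)               # empirical copula of U at one point u
  mean(apply(t(U) <= u, 2, all))      # fraction of rows with all U[i,] <= u
dC.hat <- function(u, U, j, b = 1/sqrt(nrow(U))) {
  up <- dn <- u
  up[j] <- min(u[j] + b, 1)
  dn[j] <- max(u[j] - b, 0)
  (C.hat(up, U) - C.hat(dn, U)) / (2 * b)
}
set.seed(1)
U <- matrix(runif(400), ncol = 2)     # independence => dC/du1 at u is u2
dC.hat(c(0.5, 0.5), U, j = 1)         # approximates u2 = 0.5 here
```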
empCopula()
is the constructor for objects of class
empCopula
.
C.n()
returns the empirical copula of the pseudo-observations
X
evaluated at u
(or a smoothed version of it).
dCn()
returns a vector (length(j.ind)
is 1) or a matrix
(with number of columns equal to length(j.ind)
), containing
the approximated first-order partial derivatives of the unknown
copula at u
with respect to the arguments in j.ind
.
F.n()
returns the empirical distribution function of X
evaluated at x
if smoothing = "none"
, the empirical beta
copula evaluated at x
if smoothing = "beta"
and the empirical
checkerboard copula evaluated at x
if smoothing =
"checkerboard"
.
toEmpMargins()
transforms the copula sample U
to the
empirical margins based on the sample x
.
The first version of our empirical copula implementation, Cn()
,
had its two arguments reversed compared to C.n()
, and is
deprecated now. Swap its arguments when converting old code to C.n().
The use of the two smoothed versions assumes implicitly no ties in the component samples of the data.
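For concreteness, the empirical beta copula of Segers, Sibuya and Tsukahara (2017, Eq. (2.1)) can be coded directly in base R (the function name empBeta is ours; empCopula(X, smoothing = "beta") is the package route):

```r
## Sketch: with R[i,j] the rank of X[i,j] within column j (no ties assumed),
##   C^beta_n(u) = (1/n) sum_i prod_j pbeta(u_j, R[i,j], n + 1 - R[i,j]).
empBeta <- function(u, X) {
  n <- nrow(X)
  R <- apply(X, 2, rank)                  # componentwise ranks
  B <- matrix(pbeta(rep(u, each = n), R, n + 1 - R), nrow = n)
  mean(apply(B, 1, prod))                 # average of row-wise products
}
set.seed(42)
X <- matrix(rnorm(2 * 100), ncol = 2)
empBeta(c(0.5, 0.5), X)  # smooth estimate of C(0.5, 0.5)
```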
Rüschendorf, L. (1976). Asymptotic distributions of multivariate rank order statistics, Annals of Statistics 4, 912–923.
Deheuvels, P. (1979). La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance, Acad. Roy. Belg. Bull. Cl. Sci., 5th Ser. 65, 274–292.
Deheuvels, P. (1981). A non parametric test for independence, Publ. Inst. Statist. Univ. Paris 26, 29–50.
Clark, M., Gangopadhyay, S., Hay, L., Rajagopalan, B. and Wilby, R. (2004). The Schaake Shuffle: A Method for Reconstructing Space-Time Variability in Forecasted Precipitation and Temperature Fields. Journal of Hydrometeorology, pages 243-262.
Rémillard, B. and Scaillet, O. (2009). Testing for equality between two copulas. Journal of Multivariate Analysis, 100(3), pages 377-386.
Segers, J., Sibuya, M. and Tsukahara, H. (2017). The Empirical Beta Copula. Journal of Multivariate Analysis, 155, pages 35–51, https://arxiv.org/abs/1607.04430.
Kiriliouk, A., Segers, J. and Tsukahara, H. (2020). Resampling Procedures with Empirical Beta Copulas. https://arxiv.org/abs/1905.12466.
pobs()
for computing pseudo-observations.
## Generate data X (from a meta-Gumbel model with N(0,1) margins)
n <- 100
d <- 3
family <- "Gumbel"
theta <- 2
cop <- onacopulaL(family, list(theta=theta, 1:d))
set.seed(1)
X <- qnorm(rCopula(n, cop)) # meta-Gumbel data with N(0,1) margins

## Evaluate empirical copula
u <- matrix(runif(n*d), n, d) # random points where to evaluate the empirical copula
ec <- C.n(u, X = X)

## Compare the empirical copula with the true copula
pc <- pCopula(u, copula = cop)
mean(abs(pc - ec)) # ~= 0.012 -- increase n to decrease this error
## The same for the two smoothed versions
beta <- C.n(u, X, smoothing = "beta")
mean(abs(pc - beta))
check <- C.n(u, X, smoothing = "checkerboard")
mean(abs(pc - check))

## Compare the empirical copula with F.n(pobs())
U <- pobs(X) # pseudo-observations
stopifnot(identical(ec, F.n(u, X = pobs(U)))) # even identical

## Compare the empirical copula based on U at U with the Kendall distribution
## Note: Theoretically, C(U) ~ K, so K(C_n(U, U = U)) should approximately be U(0,1)
plot(ecdf(pK(C.n(U, X), cop = cop@copula, d = d)), asp = 1, xaxs="i", yaxs="i")
segments(0,0, 1,1, col=adjustcolor("blue",1/3), lwd=5, lty = 2)
abline(v=0:1, col="gray70", lty = 2)

## Compare the empirical copula and the true copula on the diagonal
C.n.diag <- function(u) C.n(do.call(cbind, rep(list(u), d)), X = X) # diagonal of C_n
C.diag   <- function(u) pCopula(do.call(cbind, rep(list(u), d)), cop) # diagonal of C
curve(C.n.diag, from = 0, to = 1, # empirical copula diagonal
      main = paste("True vs empirical diagonal of a", family, "copula"),
      xlab = "u", ylab = quote("True C(u,..,u) and empirical"~C[n](u,..,u)))
curve(C.diag, lty = 2, add = TRUE) # add true copula diagonal
legend("bottomright", lty = 2:1, bty = "n", inset = 0.02,
       legend = expression(C, C[n]))

## Approximate partial derivatives w.r.t. the 2nd and 3rd component
j.ind <- 2:3 # indices w.r.t. which the partial derivatives are computed
## Partial derivatives based on the empirical copula and the true copula
der23  <- dCn(u, U = pobs(U), j.ind = j.ind)
der23. <- copula:::dCdu(archmCopula(family, param=theta, dim=d), u=u)[,j.ind]
## Approximation error
summary(as.vector(abs(der23 - der23.)))
## For an example of using F.n(), see help(mvdc)

## Generate a bivariate empirical copula object (various smoothing methods)
n <- 10 # sample size
d <- 2 # dimension
set.seed(271)
X <- rCopula(n, copula = claytonCopula(3, dim = d))
ecop.orig  <- empCopula(X) # smoothing = "none"
ecop.beta  <- empCopula(X, smoothing = "beta")
ecop.check <- empCopula(X, smoothing = "checkerboard")

## Sample from these (smoothed) empirical copulas
m <- 50
U.orig  <- rCopula(m, copula = ecop.orig)
U.beta  <- rCopula(m, copula = ecop.beta)
U.check <- rCopula(m, copula = ecop.check)

## Plot
wireframe2(ecop.orig,  FUN = pCopula, draw.4.pCoplines = FALSE)
wireframe2(ecop.beta,  FUN = pCopula)
wireframe2(ecop.check, FUN = pCopula)
## Density (only exists when smoothing = "beta")
wireframe2(ecop.beta, FUN = dCopula)

## Transform a copula sample to empirical margins
set.seed(271)
X <- qexp(rCopula(1000, copula = claytonCopula(2))) # multivariate distribution
U <- rCopula(917, copula = gumbelCopula(2)) # new copula sample
X. <- toEmpMargins(U, x = X) # transform U to the empirical margins of X
plot(X.) # Gumbel sample with empirical margins of X
Empirical Copula class.
Created by calls of the form new("empCopula", ...)
or rather
typically by empCopula()
based on a matrix
X
of pseudo-observations. Smoothing options are available, see
there.
X: matrix of pseudo-observations on which the empirical copula is based.

smoothing: character string determining the smoothing method.

offset: numeric giving the shift in the normalizing factor used for computing the empirical copula.

ties.method: a character string specifying rank()'s ties method used for computing the empirical copula.
The class constructor is empCopula(); see there also for examples.
A set of ten different estimators, currently for one-parameter Archimedean copulas, of possibly quite high dimensions.
enacopula(u, cop,
          method = c("mle", "smle", "dmle",
                     "mde.chisq.CvM", "mde.chisq.KS",
                     "mde.gamma.CvM", "mde.gamma.KS",
                     "tau.tau.mean", "tau.theta.mean", "beta"),
          n.MC = if (method == "smle") 10000 else 0,
          interval = initOpt(cop@copula@name),
          xargs = list(), ...)
u | matrix of (pseudo-)observations, with values in [0,1], from the copula to be estimated. |
cop | the outer_nacopula whose parameter is to be estimated. |
method | a character string specifying the estimation method. |
n.MC | only for method = "smle": integer, the Monte Carlo sample size for simulated maximum likelihood estimation. |
interval | bivariate vector denoting the interval where optimization takes place. The default is computed as described in Hofert et al. (2012). Used for all methods except method="tau.tau.mean" and method="tau.theta.mean". |
xargs | list of additional arguments for the chosen estimation method. |
... | additional arguments passed to the chosen estimation method. |
enacopula serves as a wrapper for the different implemented estimators and provides a uniform framework to utilize them. For more information, see the individual estimators listed in the ‘See Also’ section.
Note that Hofert, Mächler, and McNeil (2013) compared these estimators. Their findings include rather poor performance and numerically challenging problems for some of them. In particular, the estimators obtained by method="mde.gamma.CvM", method="mde.gamma.KS", method="tau.theta.mean", and method="beta" should be used with care (or not at all). Overall, MLE performed best (by far).
the estimated parameter, that is, currently a single number, as only one-parameter Archimedean copulas are considered.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
emle, which returns an object of class "mle" providing useful methods not available for other estimators. demo(opC-demo) and vignette("GIG", package="copula") for examples of two-parameter families.
edmle
for the diagonal maximum likelihood estimator.
emde
for the minimum distance estimators.
etau
for the estimators based on Kendall's tau.
ebeta
for the estimator based on Blomqvist's beta.
tau <- 0.25
(theta <- copGumbel@iTau(tau)) # 4/3
d <- 12
(cop <- onacopulaL("Gumbel", list(theta, 1:d)))
set.seed(1)
n <- 100
U <- rnacopula(n, cop)
meths <- eval(formals(enacopula)$method)
fun <- function(meth, u, cop, theta) {
  run.time <- system.time(val <- enacopula(u, cop=cop, method=meth))
  list(value = val, error = val - theta, utime.ms = 1000*run.time[[1]])
}
t(res <- sapply(meths, fun, u=U, cop=cop, theta=theta))
Various estimators for (nested) Archimedean copulas, namely:

ebeta: a method-of-moments-like estimator based on (a multivariate version of) Blomqvist's beta.

edmle: the maximum likelihood estimator based on the diagonal of a (nested) Archimedean copula.

etau: method-of-moments-like estimators based on (bivariate) Kendall's tau.
ebeta(u, cop, interval = initOpt(cop@copula@name), ...)
edmle(u, cop, interval = initOpt(cop@copula@name), warn=TRUE, ...)
etau (u, cop, method = c("tau.mean", "theta.mean"), warn=TRUE, ...)
u | matrix of (pseudo-)observations, with values in [0,1], from the copula to be estimated. |
cop | the outer_nacopula whose parameter is to be estimated. |
interval | bivariate vector denoting the interval where optimization takes place. The default is computed as described in Hofert et al. (2013). |
method | a character string specifying the method (only for etau). |
warn | logical indicating if warnings are printed. |
... | additional arguments: passed to corKendall() for etau, and on to the underlying root-finding (ebeta) or optimization (edmle) routine otherwise. |
For ebeta, the parameter is estimated with a method-of-moments-like procedure such that the population version of the multivariate Blomqvist's beta matches its sample version.

Note that the copula diagonal is a distribution function, and the maximum of all components of a random vector following the copula is distributed according to this distribution function. For edmle, the parameter is estimated via maximum likelihood based on this diagonal.
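The diagonal-as-distribution fact above can be checked by simulation. The following sketch (assuming any d-dimensional copula object; a trivariate Clayton is used here purely for concreteness) compares the empirical distribution of the componentwise maximum with the copula diagonal:

```r
library(copula)
set.seed(1)
cop <- claytonCopula(2, dim = 3)
U <- rCopula(1e4, cop)        # sample from the copula
M <- apply(U, 1, max)         # componentwise maximum of each row
u0 <- 0.7
## P(max(U) <= u0) = P(U_1 <= u0, ..., U_d <= u0) = C(u0, ..., u0)
c(empirical = mean(M <= u0),
  diagonal  = pCopula(rep(u0, 3), cop)) # the two should be close
```

edmle exploits exactly this fact: the diagonal evaluated at the sample of componentwise maxima yields a one-dimensional likelihood in the copula parameter.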
For etau, corKendall(u, ...) is used. If there are no NAs in u, by default (if no additional arguments are provided), corKendall() calls the fast cor.fk() from package pcaPP instead of cor(*, method="kendall"). Conversely, when u has NAs, by default corKendall(u, ...) uses cor(u, method="kendall", use = "pairwise"), so that etau(u, *) still works.
Furthermore, method="tau.mean" means that the average of the sample versions of Kendall's tau is computed first and the parameter is then determined such that the population version of Kendall's tau matches this average (if possible); method="theta.mean" stands for first computing the parameter estimator corresponding to each pairwise Kendall's tau and then returning the mean of these estimators.
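The difference between the two moment-matching routes can be sketched directly with iTau(); this illustrates the idea only and is not the internal implementation of etau():

```r
library(copula)
set.seed(42)
U <- rCopula(200, gumbelCopula(2, dim = 3)) # true theta = 2, tau = 1/2
tau.mat <- cor(U, method = "kendall")       # pairwise sample taus
taus <- tau.mat[lower.tri(tau.mat)]
## "tau.mean": average the taus first, then invert tau(theta) once
th.tau.mean <- iTau(gumbelCopula(), mean(taus))
## "theta.mean": invert each pairwise tau, then average the estimators
th.theta.mean <- mean(vapply(taus, function(t) iTau(gumbelCopula(), t), 0.))
c(tau.mean = th.tau.mean, theta.mean = th.theta.mean) # both near theta = 2
```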
For more details, see Hofert et al. (2013).
Note that these estimators should be used with care; see the
performance results in Hofert et al. (2013). In particular,
etau
should be used with the (default) method "tau.mean"
since "theta.mean"
is both slower and more prone to errors.
ebeta: the return value of safeUroot (that is, typically almost the same as the value of uniroot), giving the Blomqvist beta estimator.

edmle: a list as returned by optimize, including the diagonal maximum likelihood estimator.

etau: the method-of-moments-like estimator based on Kendall's tau for the chosen method.
Hofert, M., Mächler, M., and McNeil, A. J. (2013). Archimedean Copulas in High Dimensions: Estimators and Numerical Challenges Motivated by Financial Applications. Journal de la Société Française de Statistique 154(1), 25–63.
corKendall(). The more sophisticated estimators emle (maximum likelihood) and emde (minimum distance). enacopula (wrapper for the different estimators).
tau <- 0.25
(theta <- copGumbel@iTau(tau)) # 4/3 = 1.333..
d <- 20
(cop <- onacopulaL("Gumbel", list(theta, 1:d)))
set.seed(1)
n <- 200
U <- rnacopula(n, cop)

system.time(theta.hat.beta <- ebeta(U, cop=cop))
theta.hat.beta$root
system.time(theta.hat.dmle <- edmle(U, cop=cop))
theta.hat.dmle$minimum
system.time(theta.hat.etau <- etau(U, cop=cop, method="tau.mean"))
theta.hat.etau
system.time(theta.hat.etau. <- etau(U, cop=cop, method="theta.mean"))
theta.hat.etau.

## etau() in the case of missing values (NA's)
## ------                ---------------------
##' @title add Missing Values completely at random
##' @param x matrix or vector
##' @param prob desired probability ("fraction") of missing values (\code{\link{NA}}s).
##' @return x[] with some (100*prob percent) entries replaced by \code{\link{NA}}s.
addNAs <- function(x, prob) {
  np <- length(x)
  x[sample.int(np, prob*np)] <- NA
  x
}
## UM[] := U[] with 5% missing values
set.seed(7); UM <- addNAs(U, p = 0.05)
mean(is.na(UM)) # 0.05
## etau(UM, cop=cop) would signal an error when x has NA's
## before copula version 0.999-17 (June 2017):
try(eM <- etau(UM, cop=cop, use = "everything")) # --> Error ... NA/NaN/Inf in foreign function call
## The new default:
eM0 <- etau(UM, cop=cop, use = "pairwise")
eM1 <- etau(UM, cop=cop, use = "complete")
## use = "complete" is really equivalent to dropping all obs. with missing values:
stopifnot(all.equal(eM1, etau(na.omit(UM), cop=cop), tol = 1e-15))
## but use = "pairwise" ---> cor(*, use = "pairwise") is much better:
rbind(etau.U = theta.hat.etau, etau.UM.pairwise = eM0, etau.UM.complete = eM1)
Constructs an extreme-value copula class object with its corresponding parameter.
evCopula(family, param, dim = 2, ...)
galambosCopula(param)
huslerReissCopula(param)
tawnCopula(param)
tevCopula(param, df = 4, df.fixed = FALSE)
family | a character string specifying the family of an extreme-value copula. Currently implemented are "galambos", "gumbel", "huslerReiss", "tawn", and "tev". |
param | a numeric vector specifying the parameter values. |
dim | the dimension of the copula. Currently, only dim = 2 is supported. |
df | a number specifying the degrees of freedom of the t extreme-value copula, tevCopula(). |
df.fixed | logical; if true, the degrees of freedom will never be considered as a parameter to be estimated; FALSE by default. |
... | currently nothing. |
An object of class "gumbelCopula", "galambosCopula", "huslerReissCopula", "tawnCopula", or "tevCopula".
The Gumbel copula is both an Archimedean and an extreme-value copula,
with principal documentation on gumbelCopula
(or
archmCopula
).
ellipCopula, archmCopula, gofEVCopula, An.
## Gumbel is both
stopifnot(identical(   evCopula("gumbel"), gumbelCopula()),
          identical(archmCopula("gumbel"), gumbelCopula()))

## For a given degree of dependence these copulas are strikingly similar:
tau <- 1/3
gumbel.cop      <- gumbelCopula     (iTau(gumbelCopula(),      tau))
galambos.cop    <- galambosCopula   (iTau(galambosCopula(),    tau))
huslerReiss.cop <- huslerReissCopula(iTau(huslerReissCopula(), tau))
tawn.cop        <- tawnCopula       (iTau(tawnCopula(),        tau))
tev.cop         <- tevCopula        (iTau(tevCopula(),         tau))

curve(A(gumbel.cop, x), 0, 1, ylab = "A(<cop>( iTau(<cop>(), tau)), x)",
      main = paste("A(x) for five Extreme Value cop. w/ tau =", format(tau)))
curve(A(galambos.cop, x),    lty=2, add=TRUE)
curve(A(huslerReiss.cop, x), lty=3, add=TRUE)
curve(A(tawn.cop, x),        lty=4, add=TRUE)
curve(A(tev.cop, x),  lty=5, col=2, add=TRUE) # very close to Gumbel

## And look at the differences
curve(A(gumbel.cop, x) - A(tawn.cop, x), ylim = c(-1,1)*0.005,
      ylab = '', main = "A(<Gumbel>, x) - A(<EV-Cop.>, x)")
abline(h=0, lty=2)
curve(A(gumbel.cop, x) - A(galambos.cop, x),    add=TRUE, col=2)
curve(A(gumbel.cop, x) - A(huslerReiss.cop, x), add=TRUE, col=3)
curve(A(gumbel.cop, x) - A(tev.cop, x),         add=TRUE, col=4, lwd=2)

## the t-EV-copula always has positive tau:
curve(vapply(x, function(x) tau(tevCopula(x)), 0.), -1, 1, n=257,
      ylim=0:1, xlab=quote(rho), ylab=quote(tau),
      main = quote(tau( tevCopula(rho) )), col = 2, lwd = 2)
rect(-1,0,1,1, lty = 2, border = adjustcolor("black", 0.5))
Class evCopula is the virtual (mother) class of all extreme-value copulas. There are currently five subclasses, "galambosCopula", "huslerReissCopula", "tawnCopula", "tevCopula", and "gumbelCopula", the last of which is also an Archimedean copula; see the page for class "archmCopula".
evCopula is a virtual class: no objects may be created from it. Objects of class "galambosCopula" etc. can be created by calls of the form new("galambosCopula", ...), but typically rather by galambosCopula(), etc.; see there. All slots are inherited from the mother class "copula"; see there.
Methods for density, distribution, and random number generation (dCopula, pCopula, rCopula) are provided for signature(copula = "galambosCopula") and for signature(copula = "huslerReissCopula").
Class "evCopula" extends class "copula" directly. Classes "galambosCopula", "huslerReissCopula", "tawnCopula", and "tevCopula" extend class "evCopula" directly. Objects of class "gumbelCopula" are also of class "archmCopula".
evCopula, evTestC, evTestK, gofEVCopula, copula-class.
Test of bivariate extreme-value dependence based on the process comparing the empirical copula with a natural nonparametric estimator of the unknown copula derived under extreme-value dependence. The test statistics are defined in the third reference. Approximate p-values for the test statistics are obtained by means of a multiplier technique.
evTestA(x, N = 1000, derivatives = c("An", "Cn"),
        ties.method = eval(formals(rank)$ties.method),
        trace.lev = 0, report.err = FALSE)
x | a data matrix that will be transformed to pseudo-observations. |
N | number of multiplier iterations to be used to simulate realizations of the test statistic under the null hypothesis. |
derivatives | string specifying how the derivatives of the unknown copula are estimated, either "An" (the default) or "Cn". |
ties.method | string specifying how ranks should be computed if there are ties in any of the coordinate samples of x; passed to rank(). |
trace.lev | integer indicating the level of diagnostic tracing to be printed to the console (from the low-level algorithm). |
report.err | logical indicating if numerical errors from the low-level algorithm should be reported. |
More details are available in the third reference. See also Genest and Segers (2009) and Rémillard and Scaillet (2009).
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
This test was derived under the assumption of continuous margins, which implies that ties occur with probability zero. The presence of ties in the data might substantially affect the approximate p-value.
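One way to gauge the effect of ties, sketched here as a possible check rather than a prescribed workflow, is to break them at random before testing and compare the p-values over a few such randomizations:

```r
library(copula)
set.seed(3)
x <- rCopula(100, gumbelCopula(2))
x[, 1] <- round(x[, 1], 1) # rounding the first coordinate introduces ties
## break ties at random; repeating this over several randomizations
## gives a feel for the sensitivity of the approximate p-value
u <- pobs(x, ties.method = "random")
evTestA(u, N = 200, derivatives = "Cn")$p.value
```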
Genest, C. and Segers, J. (2009). Rank-based inference for bivariate extreme-value copulas. Annals of Statistics, 37, pages 2990-3022.
Rémillard, B. and Scaillet, O. (2009). Testing for equality between two copulas. Journal of Multivariate Analysis, 100(3), pages 377-386.
Kojadinovic, I. and Yan, J. (2010). Nonparametric rank-based tests of bivariate extreme-value dependence. Journal of Multivariate Analysis 101, 2234–2249.
evTestK, evTestC, evCopula, gofEVCopula, An.
## Do these data come from an extreme-value copula?
set.seed(63)
uG <- rCopula(100, gumbelCopula (3))
uC <- rCopula(100, claytonCopula(3))
## these two take 21 sec on nb-mm4 (Intel Core i7-5600U @ 2.60GHz):
evTestA(uG)
evTestA(uC) # significant even though Clayton is *NOT* an extreme-value copula
## These are fast:
evTestA(uG, derivatives = "Cn")
evTestA(uC, derivatives = "Cn") # small p-value even though Clayton is *NOT* an EV copula
Test of multivariate extreme-value dependence based on the empirical copula and max-stability. The test statistics are defined in the second reference. Approximate p-values for the test statistics are obtained by means of a multiplier technique.
evTestC(x, N = 1000)
x |
a data matrix that will be transformed to pseudo-observations. |
N |
number of multiplier iterations to be used to simulate realizations of the test statistic under the null hypothesis. |
More details are available in the second reference. See also Rémillard and Scaillet (2009).
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
This test was derived under the assumption of continuous margins, which implies that ties occur with probability zero. The presence of ties in the data might substantially affect the approximate p-value.
Rémillard, B. and Scaillet, O. (2009). Testing for equality between two copulas. Journal of Multivariate Analysis, 100(3), pages 377-386.
Kojadinovic, I., Segers, J., and Yan, J. (2011). Large-sample tests of extreme-value dependence for multivariate copulas. The Canadian Journal of Statistics 39, 4, pages 703-720.
evTestK, evTestA, evCopula, gofEVCopula, An.
## Do these data come from an extreme-value copula?
evTestC(rCopula(200, gumbelCopula(3)))
evTestC(rCopula(200, claytonCopula(3)))
## Three-dimensional examples
evTestC(rCopula(200, gumbelCopula(3, dim=3)))
evTestC(rCopula(200, claytonCopula(3, dim=3)))
Test of extreme-value dependence based on the bivariate probability integral transformation. The test statistic is defined in Ben Ghorbal, G. Nešlehová, and Genest (2009).
evTestK(x, method = c("fsample","asymptotic","jackknife"), ties = NA, N = 100)
x | a data matrix. |
method | specifies the variance estimation method; can be either "fsample" (finite-sample, the default), "asymptotic", or "jackknife". |
ties | logical; if NA, the presence of ties in x is checked for automatically. |
N | number of samples to be used to estimate a bias term if ties = TRUE. |
The code for this test was generously provided by Johanna G. Nešlehová. More details are available in Appendix B of Ben Ghorbal, G. Nešlehová and Genest (2009).
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding p-value. |
Ghorbal, M. B., Genest, C., and G. Nešlehová, J. (2009) On the test of Ghoudi, Khoudraji, and Rivest for extreme-value dependence. The Canadian Journal of Statistics 37, 1–9.
Kojadinovic, I. (2017). Some copula inference procedures adapted to the presence of ties. Computational Statistics and Data Analysis 112, 24–41, https://arxiv.org/abs/1609.05519.
evTestC, evTestA, evCopula, gofEVCopula, An.
set.seed(321)
## Do the data come from an extreme-value copula?
evTestK(Ug <- rCopula(200, gumbelCopula(3))) # not significant => yes, EV
dim(Uc <- rCopula(200, claytonCopula(3)))
## Clayton: tests are highly significant => no, not EV
(K1 <- evTestK(Uc))
(K2 <- evTestK(Uc, method = "asymptotic"))
system.time(print(K3 <- evTestK(Uc, method = "jackknife"))) ## slower: 1.06 sec (2015 Intel Core i7)
Test for assessing the exchangeability of the underlying bivariate copula when it is either extreme-value or left-tail decreasing. The test uses the nonparametric estimators of the Pickands dependence function studied by Genest and Segers (2009).
The test statistic is defined in the second reference. An approximate p-value for the test statistic is obtained by means of a multiplier technique if there are no ties in the component series of the bivariate data, or by means of an appropriate bootstrap otherwise.
exchEVTest(x, N = 1000, estimator = c("CFG", "Pickands"), ties = NA,
           ties.method = eval(formals(rank)$ties.method), m = 100,
           derivatives = c("Cn", "An"))
x | a data matrix that will be transformed to pseudo-observations. |
N | number of multiplier or bootstrap iterations to be used to simulate realizations of the test statistic under the null hypothesis. |
estimator | string specifying which nonparametric estimator of the Pickands dependence function to use; either "CFG" (the default) or "Pickands". |
ties | logical; if NA, the presence of ties in x is checked for automatically. |
ties.method | string specifying how ranks should be computed if there are ties in any of the coordinate samples of x. |
derivatives | a string specifying how the derivatives of the unknown copula are estimated; can be either "Cn" (the default) or "An". |
m | integer specifying the size of the integration grid for the statistic. |
More details are available in the references.
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
pvalue |
corresponding approximate p-value. |
Genest, C. and Segers, J. (2009) Rank-based inference for bivariate extreme-value copulas. Annals of Statistics 37, 2990–3022.
Kojadinovic, I. and Yan, J. (2012) A nonparametric test of exchangeability for extreme-value and left-tail decreasing bivariate copulas. The Scandinavian Journal of Statistics 39:3, 480–496.
Kojadinovic, I. (2017). Some copula inference procedures adapted to the presence of ties. Computational Statistics and Data Analysis 112, 24–41, https://arxiv.org/abs/1609.05519.
exchTest, radSymTest, gofCopula.
## Data from exchangeable, left-tail decreasing copulas
exchEVTest(rCopula(200, gumbelCopula(3)))
exchEVTest(rCopula(200, claytonCopula(3)))
## An asymmetric Khoudraji-Gumbel-Hougaard copula
kc <- khoudrajiCopula(copula1 = indepCopula(),
                      copula2 = gumbelCopula(4),
                      shapes = c(0.4, 0.95))
exchEVTest(rCopula(200, kc))
Test for assessing the exchangeability of the underlying bivariate copula based on the empirical copula. The test statistics are defined in the first two references. Approximate p-values for the test statistics are obtained by means of a multiplier technique if there are no ties in the component series of the bivariate data, or by means of an appropriate bootstrap otherwise.
exchTest(x, N = 1000, ties = NA,
         ties.method = eval(formals(rank)$ties.method), m = 0)
x | a data matrix that will be transformed to pseudo-observations. |
N | number of multiplier or bootstrap iterations to be used to simulate realizations of the test statistic under the null hypothesis. |
ties | logical; if NA, the presence of ties in x is checked for automatically. |
ties.method | string specifying how ranks should be computed if there are ties in any of the coordinate samples of x. |
m | if positive, the size of the integration grid for the statistic; the default m = 0 carries out the integration at the (pseudo-)observations themselves. |
More details are available in the references.
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
Genest, C., G. Nešlehová, J. and Quessy, J.-F. (2012). Tests of symmetry for bivariate copulas. Annals of the Institute of Statistical Mathematics 64, 811–834.
Kojadinovic, I. and Yan, J. (2012). A nonparametric test of exchangeability for extreme-value and left-tail decreasing bivariate copulas. The Scandinavian Journal of Statistics 39:3, 480–496.
Kojadinovic, I. (2017). Some copula inference procedures adapted to the presence of ties. Computational Statistics and Data Analysis 112, 24–41, https://arxiv.org/abs/1609.05519.
radSymTest, exchEVTest, gofCopula.
## Data from exchangeable copulas
exchTest(rCopula(200, gumbelCopula(3)))
exchTest(rCopula(200, claytonCopula(3)))
## An asymmetric Khoudraji-Clayton copula
kc <- khoudrajiCopula(copula1 = indepCopula(),
                      copula2 = claytonCopula(6),
                      shapes = c(0.4, 0.95))
exchTest(rCopula(200, kc))
Constructs a multivariate multiparameter Farlie-Gumbel-Morgenstern copula class object with its corresponding parameters and dimension.
fgmCopula(param, dim = 2)
param |
a numeric vector specifying the parameter values. |
dim |
the dimension of the copula. |
A Farlie-Gumbel-Morgenstern copula object of class
"fgmCopula"
.
Note that a d-dimensional fgmCopula must have 2^d - d - 1 parameters.
The verification of the validity of the parameter values is of high
complexity and may not work for high dimensional copulas.
The random number generation needs to be properly tested, especially for dimensions higher than 2.
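The parameter count corresponds to one parameter per subset of {1,...,d} of size at least two (the subsets.char slot names the parameters accordingly), which can be checked in base R:

```r
## number of FGM parameters = number of subsets of {1,...,d} of size >= 2
d <- 5
npar <- sum(choose(d, 2:d)) # subsets of sizes 2, 3, ..., d
npar                        # 26, matching the table in the examples
stopifnot(npar == 2^d - d - 1)
```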
Nelsen, R. B. (2006), An introduction to Copulas, Springer, New York.
Copula, copula-class, fitCopula.
## length(param) = #{parameters} for a d-dimensional FGM copula:
d <- 2:10; rbind(d, npar = 2^d - d - 1)
## d     2 3  4  5  6   7   8   9   10
## npar  1 4 11 26 57 120 247 502 1013

## a bivariate example
fgm.cop <- fgmCopula(1)
x <- rCopula(1000, fgm.cop)
cor(x, method = "kendall")
tau(fgm.cop)
cor(x, method = "spearman")
rho(fgm.cop)
persp  (fgm.cop, dCopula)
contour(fgm.cop, dCopula)

## a trivariate example with wrong parameter values
try( fgm2.cop <- fgmCopula(c(1,1,1,1), dim = 3) ) # Error: "Bad vector of parameters"

## a trivariate example with satisfactory parameter values
fgm2.cop <- fgmCopula(c(.2,-.2,-.4,.6), dim = 3)
fgm2.cop
The class of multivariate multiparameter Farlie-Gumbel-Morgenstern copulas. Objects are typically created by fgmCopula(..), or at a lower level by (careful) calls to new("fgmCopula", ..).
exprdist: Object of class "expression", expressions for the cdf and pdf of the copula. These expressions are used in the functions pCopula() and dCopula().

subsets.char: Object of class "character", containing the subsets of integers used for naming the parameters.

dimension: Object of class "numeric", the dimension of the copula.

parameters: Object of class "numeric", parameter values.

param.names: Object of class "character", parameter names.

param.lowbnd: Object of class "numeric", parameter lower bound.

param.upbnd: Object of class "numeric", parameter upper bound.

fullname: Object of class "character", family names of the copula (deprecated).
Methods for density, distribution, and random number generation (dCopula, pCopula, rCopula) are provided for signature(copula = "fgmCopula").
Class "fgmCopula"
extends class "copula"
directly.
The verification of the validity of the parameter values is of high complexity and may not work for high dimensional copulas.
The random number generation needs to be properly tested, especially for dimensions higher than 2.
Nelsen, R. B. (2006), An introduction to Copulas, Springer, New York.
copula-class; to create such objects, use fgmCopula(); see there, also for examples.
Constructs the Fréchet-Hoeffding lower and upper bound copulas, also known as W and M.
fhCopula(family = c("upper", "lower"), dim = 2L)
lowfhCopula(dim = 2L)
upfhCopula(dim = 2L)
family |
a character string specifying the Fréchet-Hoeffding bound copula. |
dim |
the dimension of the copula; note that the lower Fréchet-Hoeffding bound is only available in the bivariate case. |
A copula object of class "lowfhCopula" or "upfhCopula".
fhCopula() is a wrapper for lowfhCopula() and upfhCopula(). The dCopula() method will simply return an error for these copulas (as their density does not exist). Also, since the Fréchet-Hoeffding bound copulas are not parametric, certain methods available for parametric copulas are not available.
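The structure of samples from the two bounds can be sketched by hand: under M all components are equal (comonotonicity), while under W (bivariate only) the second component is one minus the first (countermonotonicity). This is the standard construction and mirrors what the plots in the examples below display:

```r
## M (upper bound): comonotone; W (lower bound, d = 2 only): countermonotone
set.seed(7)
u <- runif(5)
M.smp <- cbind(u, u, u)  # a sample from M in d = 3: all columns identical
W.smp <- cbind(u, 1 - u) # a sample from W in d = 2: rows sum to one
stopifnot(all(M.smp[, 1] == M.smp[, 2]),
          all(abs(rowSums(W.smp) - 1) < 1e-12))
```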
## Lower bound W : -------------------------
try(W <- lowfhCopula(dim = 3)) # lower bound is *not* a copula for dim > 2
W <- lowfhCopula()
wireframe2(W, FUN = pCopula)
plot(W, n=100) # perfect anti-correlation ( rho = tau = -1 )

## Upper bound M : -------------------------
wireframe2(upfhCopula(dim = 2), pCopula)
M <- upfhCopula(dim = 3)
set.seed(271)
splom2(M, n = 100) # "random" data: all perfectly correlated
Fréchet-Hoeffding bound copula class.
Created by calls of the form new("fhCopula", ...), or rather typically by fhCopula(), lowfhCopula(), or upfhCopula(). Actual (sub)classes are the lower and upper Fréchet-Hoeffding bound copulas, created by lowfhCopula() (class "lowfhCopula") and upfhCopula() (class "upfhCopula").
dimension: inherited from super class "dimCopula".
exprdist: an expression of length two, named "cdf" (the R expression of the CDF) and "pdf" (which is empty, as the PDF does not exist everywhere).
ellipCopula, archmCopula, evCopula.
The class constructors are fhCopula(), lowfhCopula(), and upfhCopula(). See there for examples.
Parameter estimation of copulas, i.e., fitting of a copula model to multivariate (possibly “pseudo”) observations.
loglikCopula(param = getTheta(copula), u, copula,
             error = c("-Inf", "warn-Inf", "let-it-be"))
loglikCopulaMany(pList, u, copula)

## Generic [and "rotCopula" method] :
fitCopula(copula, data, ...)
## S4 method for signature 'parCopula'
fitCopula(copula, data,
          method = c("mpl", "ml", "itau", "irho", "itau.mpl"),
          posDef = is(copula, "ellipCopula"),
          start = NULL, lower = NULL, upper = NULL,
          optim.method = optimMeth(copula, method, dim = d),
          optim.control = list(maxit = 1000),
          estimate.variance = NA, hideWarnings = FALSE, ...)
optimMeth(copula, method, dim)
param |
vector of free (see |
pList |
a |
u |
|
data |
as |
copula |
a |
error |
(for
|
method |
a
|
posDef |
a |
start |
a |
lower , upper
|
Lower or upper parameter bounds for the
optimization methods |
optim.control |
a |
optim.method |
a character string specifying the optimization
method. The default has been changed (for copula 0.999-16, in
Aug. 2016) from |
dim |
integer, the data and copula dimension, |
estimate.variance |
a |
hideWarnings |
a |
... |
additional arguments passed to |
The only difference between "mpl" and "ml" is in the variance-covariance estimate, not in the parameter (θ) estimates.
If method "mpl" in fitCopula() is used and if start is not assigned a value, estimates obtained from method "itau" are used as initial values in the optimization. Standard errors are computed as explained in Genest, Ghoudi and Rivest (1995); see also Kojadinovic and Yan (2010, Section 3). Their estimation requires the computation of certain partial derivatives of the (log) density. These have been implemented for six copula families thus far: the Clayton, Gumbel-Hougaard, Frank, Plackett, normal and t copula families. For other families, numerical differentiation based on grad() from package numDeriv is used (and a warning message is displayed).
In the multiparameter elliptical case and when the estimation is based
on Kendall's tau or Spearman's rho, the estimated correlation matrix
may not always be positive-definite. In that case,
nearPD(*, corr=TRUE)
(from Matrix) is
applied to get a proper correlation matrix.
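The moment inversion behind method "itau" for elliptical copulas is rho = sin(pi*tau/2), applied entrywise; the following base-R sketch (illustrative only, not the package's internal code) shows the pairwise construction and where the positive-definiteness check comes in:

```r
## Build a pairwise "itau" correlation matrix in base R and inspect it.
set.seed(42)
n <- 500
x <- matrix(rnorm(3 * n), n, 3)        # toy 3-column data
tau <- cor(x, method = "kendall")      # pairwise Kendall's tau
rho <- sin(pi/2 * tau)                 # moment inversion (elliptical case)
diag(rho) <- 1
## Pairwise inversion gives no positive-definiteness guarantee;
## check the spectrum:
min(eigen(rho, symmetric = TRUE, only.values = TRUE)$values)
## If this were negative, fitCopula() would repair rho via
## Matrix::nearPD(rho, corr = TRUE) before proceeding.
```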
For normal and t copulas, fitCopula(, method = "mpl") and fitCopula(, method = "ml") maximize the log-likelihood based on mvtnorm's dmvnorm() and dmvt(), respectively. The latter two functions set the respective densities to zero if the correlation matrices of the corresponding distributions are not positive definite. As such, the estimated correlation matrices will be positive definite.
If methods "itau" or "irho" are used in fitCopula(), an estimate of the asymptotic variance (if available for the copula under consideration) will be correctly computed only if the argument data consists of pseudo-observations (see pobs()).
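For intuition, the (default) pseudo-observations are just componentwise scaled ranks. The helper name pseudo_obs() below is a made-up stand-in illustrating what pobs() computes with its defaults:

```r
## Componentwise scaled ranks: each margin is mapped into (0, 1)
## without using the (unknown) true marginal distributions.
pseudo_obs <- function(x)  # illustrative stand-in for pobs(x)
  apply(x, 2L, function(col) rank(col, na.last = "keep") / (length(col) + 1))

set.seed(1)
x <- cbind(rexp(10), rlnorm(10))  # raw data on arbitrary scales
u <- pseudo_obs(x)
range(u)  # strictly inside (0, 1), as required by "itau"/"irho"
```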
Consider the t copula with df.fixed=FALSE (see ellipCopula()). In this case, the methods "itau" and "irho" cannot be used in fitCopula() as they cannot estimate the degrees of freedom parameter df. For the methods "mpl" and "itau.mpl", the asymptotic variance cannot be (fully) estimated (yet). For the methods "ml" and "mpl", when start is not specified, the starting value for df is set to copula@df, typically 4.
To implement the Inference Functions for Margins (IFM) method (see, e.g., Joe 2005), set method="ml" and note that data need to be parametric pseudo-observations obtained from fitted parametric marginal distribution functions. The returned large-sample variance will then underestimate the true variance (as the procedure cannot take into account the (unknown) estimation error for the margins).
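The IFM workflow can be sketched in base R as follows (an assumed minimal illustration with exponential margins; the copula-fitting step itself is indicated only as a comment):

```r
## IFM sketch: (1) fit each margin parametrically, (2) transform the data
## with the fitted CDFs, (3) fit the copula by method = "ml" on the result.
set.seed(3)
x <- cbind(rexp(200, rate = 2), rexp(200, rate = 5))  # toy data
rate.hat <- 1 / colMeans(x)          # step 1: exponential MLE per margin
u <- cbind(pexp(x[, 1], rate.hat[1]),# step 2: parametric pseudo-observations
           pexp(x[, 2], rate.hat[2]))
## step 3 (needs a copula object), e.g.:
## fitCopula(gumbelCopula(), u, method = "ml")
```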
The fitting procedures based on optim() generate warnings because invalid parameter values are tried during the optimization process. When the number of parameters is one and the parameter space is bounded, using optim.method="Brent" is likely to give fewer warnings. Furthermore, from experience, optim.method="Nelder-Mead" is sometimes a more robust alternative to optim.method="BFGS" or "L-BFGS-B".
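To illustrate the "Brent" advice for a one-parameter family, here is a hedged base-R sketch that maximizes a bivariate Clayton log-likelihood directly; this is not fitCopula()'s code, but the density formula and the conditional sampling step are standard Clayton results:

```r
## One-parameter optimization with optim(method = "Brent"):
set.seed(4)
n <- 1000; theta <- 2
## simulate Clayton(theta) via the conditional distribution method:
u1 <- runif(n); w <- runif(n)
u2 <- (u1^(-theta) * (w^(-theta/(1 + theta)) - 1) + 1)^(-1/theta)
## negative Clayton log-likelihood,
## c(u,v) = (theta+1) (uv)^{-(theta+1)} (u^-theta + v^-theta - 1)^{-(2+1/theta)}
negll <- function(th, u, v)
  -sum(log(th + 1) - (th + 1) * (log(u) + log(v)) -
       (2 + 1/th) * log(u^(-th) + v^(-th) - 1))
fit <- optim(1, negll, u = u1, v = u2, method = "Brent",
             lower = 1e-3, upper = 50)  # Brent needs finite bounds
fit$par  # should be close to theta = 2
```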
There are methods for vcov(), coef(), logLik(), and nobs().
loglikCopula() returns the copula log-likelihood evaluated at the parameter (vector) param given the data u.
loglikCopulaMany() returns a numeric vector of such log-likelihoods; for speed, it assumes consistent parameter values, corresponding to loglikCopula()'s error = "let-it-be".
The return value of fitCopula() is an object of class "fitCopula" (inheriting from the hidden class "fittedMV"), containing (among others!) the slots
estimate: the parameter estimates.
var.est: the large-sample (i.e., asymptotic) variance estimate of the parameter estimator, unless estimate.variance=FALSE, in which case it is matrix(numeric(), 0,0) (to be distinguishable from cases when the covariance estimates failed partially).
copula: the fitted copula object.
The summary() method for "fitCopula" objects returns an S3 “class” "summary.fitCopula", which is simply a list with components method, loglik and convergence, all three from the corresponding slots of the "fitCopula" objects, and coefficients (a matrix of estimated coefficients, standard errors, t values and p-values).
Genest, C. (1987). Frank's family of bivariate distributions. Biometrika 74, 549–555.
Genest, C. and Rivest, L.-P. (1993). Statistical inference procedures for bivariate Archimedean copulas. Journal of the American Statistical Association 88, 1034–1043.
Rousseeuw, P. and Molenberghs, G. (1993). Transformation of nonpositive semidefinite correlation matrices. Communications in Statistics: Theory and Methods 22, 965–984.
Genest, C., Ghoudi, K., and Rivest, L.-P. (1995). A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82, 543–552.
Joe, H. (2005). Asymptotic efficiency of the two-stage estimation method for copula-based models. Journal of Multivariate Analysis 94, 401–419.
Mashal, R. and Zeevi, A. (2002). Beyond Correlation: Extreme Co-Movements between Financial Assets. doi:10.2139/ssrn.317122
Demarta, S. and McNeil, A. J. (2005). The t copula and related copulas. International Statistical Review 73, 111–129.
Genest, C. and Favre, A.-C. (2007). Everything you always wanted to know about copula modeling but were afraid to ask. Journal of Hydrologic Engineering 12, 347–368.
Kojadinovic, I. and Yan, J. (2010). Comparison of three semiparametric methods for estimating dependence parameters in copula models. Insurance: Mathematics and Economics 47, 52–63.
Copula, fitMvdc for fitting multivariate distributions including the margins, gofCopula for goodness-of-fit tests. For maximum likelihood of (nested) Archimedean copulas, see emle, etc.
(Xtras <- copula:::doExtras()) # determine whether examples will be extra (long)
n <- if(Xtras) 200 else 64 # sample size

## A Gumbel copula
set.seed(7) # for reproducibility
gumbel.cop <- gumbelCopula(3, dim=2)
x <- rCopula(n, gumbel.cop) # "true" observations (simulated)
u <- pobs(x)                # pseudo-observations
## Inverting Kendall's tau
fit.tau <- fitCopula(gumbelCopula(), u, method="itau")
fit.tau
confint(fit.tau) # work fine !
confint(fit.tau, level = 0.98)
summary(fit.tau) # a bit more, notably "Std. Error"s
coef(fit.tau)            # named vector
coef(fit.tau, SE = TRUE) # matrix
## Inverting Spearman's rho
fit.rho <- fitCopula(gumbelCopula(), u, method="irho")
summary(fit.rho)
## Maximum pseudo-likelihood
fit.mpl <- fitCopula(gumbelCopula(), u, method="mpl")
fit.mpl
## Maximum likelihood -- use 'x', not 'u' ! --
fit.ml <- fitCopula(gumbelCopula(), x, method="ml")
summary(fit.ml) # now prints a bit more than simple 'fit.ml'
## ... and what's the log likelihood (in two different ways):
(ll. <- logLik(fit.ml))
stopifnot(all.equal(as.numeric(ll.),
                    loglikCopula(coef(fit.ml), u=x, copula=gumbel.cop)))

## A Gauss/normal copula
## With multiple/*un*constrained parameters
set.seed(6) # for reproducibility
normal.cop <- normalCopula(c(0.6, 0.36, 0.6), dim=3, dispstr="un")
x <- rCopula(n, normal.cop) # "true" observations (simulated)
u <- pobs(x)                # pseudo-observations
## Inverting Kendall's tau
fit.tau <- fitCopula(normalCopula(dim=3, dispstr="un"), u, method="itau")
fit.tau
## Inverting Spearman's rho
fit.rho <- fitCopula(normalCopula(dim=3, dispstr="un"), u, method="irho")
fit.rho
## Maximum pseudo-likelihood
fit.mpl <- fitCopula(normalCopula(dim=3, dispstr="un"), u, method="mpl")
summary(fit.mpl)
coef(fit.mpl)            # named vector
coef(fit.mpl, SE = TRUE) # the matrix, with SE
## Maximum likelihood (use 'x', not 'u' !)
fit.ml <- fitCopula(normalCopula(dim=3, dispstr="un"), x, method="ml",
                    traceOpt=TRUE)
summary(fit.ml)
confint(fit.ml)
confint(fit.ml, level = 0.999) # clearly non-0

## Fix some of the parameters
param <- c(.6, .3, NA_real_)
attr(param, "fixed") <- c(TRUE, FALSE, FALSE)
ncp <- normalCopula(param = param, dim = 3, dispstr = "un")
fixedParam(ncp) <- c(TRUE, TRUE, FALSE)
## 'traceOpt = 5': showing every 5-th log likelihood evaluation:
summary(Fxf.mpl <- fitCopula(ncp, u, method = "mpl", traceOpt = 5))
Fxf.mpl@copula # reminding of the fixed param. values

## With dispstr = "toep" :
normal.cop.toep <- normalCopula(c(0, 0), dim=3, dispstr="toep")
## Inverting Kendall's tau
fit.tau <- fitCopula(normalCopula(dim=3, dispstr="toep"), u, method="itau")
fit.tau
## Inverting Spearman's rho
fit.rho <- fitCopula(normalCopula(dim=3, dispstr="toep"), u, method="irho")
summary(fit.rho)
## Maximum pseudo-likelihood
fit.mpl <- fitCopula(normalCopula(dim=3, dispstr="toep"), u, method="mpl")
fit.mpl
## Maximum likelihood (use 'x', not 'u' !)
fit.ml <- fitCopula(normalCopula(dim=3, dispstr="toep"), x, method="ml")
summary(fit.ml)

## With dispstr = "ar1"
normal.cop.ar1 <- normalCopula(c(0), dim=3, dispstr="ar1")
## Inverting Kendall's tau
summary(fit.tau <- fitCopula(normalCopula(dim=3, dispstr="ar1"), u, method="itau"))
## Inverting Spearman's rho
summary(fit.rho <- fitCopula(normalCopula(dim=3, dispstr="ar1"), u, method="irho"))
## Maximum pseudo-likelihood
summary(fit.mpl <- fitCopula(normalCopula(dim=3, dispstr="ar1"), u, method="mpl"))
## Maximum likelihood (use 'x', not 'u' !)
fit.ml <- fitCopula(normalCopula(dim=3, dispstr="ar1"), x, method="ml")
summary(fit.ml)

## A t copula with variable df (df.fixed=FALSE)
(tCop <- tCopula(c(0.2,0.4,0.6), dim=3, dispstr="un", df=5))
set.seed(101)
x <- rCopula(n, tCop) # "true" observations (simulated)
## Maximum likelihood (start = (rho[1:3], df))
summary(tc.ml <- fitCopula(tCopula(dim=3, dispstr="un"), x, method="ml",
                           start = c(0,0,0, 10)))
## Maximum pseudo-likelihood (the asymptotic variance cannot be estimated)
u <- pobs(x) # pseudo-observations
tc.mpl <- fitCopula(tCopula(dim=3, dispstr="un"), u, method="mpl",
                    estimate.variance=FALSE, start= c(0,0,0, 10))
summary(tc.mpl)
Classes and summary methods related to copula model fitting.
Objects can be created by calls to fitCopula or fitMvdc, respectively, or to their summary methods.
The “mother class”, "fittedMV", has the slots
estimate: numeric, the estimated parameters.
var.est: numeric, variance matrix estimate of the parameter estimator. See note below.
loglik: numeric, log likelihood evaluated at the maximizer.
nsample: numeric, integer representing the sample size.
method: character, method of estimation.
fitting.stats: a list, currently containing the numeric convergence code from optim, the counts, message, and all the control arguments explicitly passed to optim(). Since copula version 1.0-1, it also keeps information about parameter transformations, currently needed only for mixCopula fits with free weights.
In addition, the "fitCopula" class has a slot
copula: the fitted copula, of class "copula".
whereas "fitMvdc" has
mvdc: the fitted distribution, of class "mvdc".
Classes "fitCopula" and "fitMvdc" extend class "fittedMV" directly.
signature(object = "fitMvdc"): ...
signature(object = "fitCopula"): ...
Further, there are S3 methods (class "fittedMV") for coef(), vcov() and logLik(); see fitMvdc.
Genest, C., Ghoudi, K., and Rivest, L.-P. (1995). A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82, 543–552.
Computing non-parametric estimators of the (matrix of) tail-dependence coefficients.
fitLambda(u, method = c("Schmid.Schmidt", "Schmidt.Stadtmueller", "t"),
          p = 1/sqrt(nrow(u)), lower.tail = TRUE, verbose = FALSE, ...)
u |
|
method |
the method with which the tail-dependence coefficients are computed:
|
p |
(small) cut-off parameter in |
lower.tail |
|
verbose |
a |
... |
additional arguments passed to the underlying functions
(at the moment only to |
As seen in the examples, be careful when using these nonparametric estimators: they may not perform well (depending on p, and in general). After all, tail dependence is defined via a limit, and finite sample sizes may not capture such limits well.
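As a base-R illustration of this caveat, consider a simple empirical conditional-probability estimator in the spirit of the Schmidt-Stadtmueller approach (not the package's exact implementation):

```r
## Lower tail-dependence coefficient estimated by P(U2 <= p | U1 <= p).
set.seed(5)
n <- 1e5; theta <- 2; p <- 0.02
## Clayton(theta) sample via the conditional distribution method:
u1 <- runif(n); w <- runif(n)
u2 <- (u1^(-theta) * (w^(-theta/(1 + theta)) - 1) + 1)^(-1/theta)
lam.hat  <- mean(u1 <= p & u2 <= p) / p  # empirical estimate at cut-off p
lam.true <- 2^(-1/theta)                 # Clayton: lambda_L = 2^(-1/theta)
c(estimate = lam.hat, true = lam.true)
## moving p closer to 0 trades bias against variance
```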
The matrix of pairwise coefficients of tail dependence; for method = "t", a list with components Lambda (the matrix of pairwise tail-dependence coefficients), P (the matrix of pairwise estimated correlation coefficients) and Nu (the matrix of pairwise estimated degrees of freedom).
Jaworski, P., Durante, F., Härdle, W. K., and Rychlik, T. (2010). Copula Theory and Its Applications. Springer, Lecture Notes in Statistics – Proceedings.
Schmid, F., Schmidt, R. (2007). Multivariate conditional versions of Spearman's rho and related measures of tail dependence. Journal of Multivariate Analysis 98, 1123–1140. doi:10.1016/j.jmva.2006.05.005
Schmidt, R., Stadtmueller, U. (2006). Non-parametric Estimation of Tail Dependence. Scandinavian Journal of Statistics 33(2), 307–335. doi:10.1111/j.1467-9469.2005.00483.x
lambda() computes the true (lower and upper) tail-dependence coefficients for a given copula.
n <- 10000 # sample size
p <- 0.01  # cut-off

## Bivariate case
d <- 2
cop <- claytonCopula(2, dim = d)
set.seed(271)
U <- rCopula(n, copula = cop) # generate observations (unrealistic)
(lam.true <- lambda(cop)) # true tail-dependence coefficients lambda
(lam.C <- c(lower = fitLambda(U, p = p)[2,1],
            upper = fitLambda(U, p = p, lower.tail = FALSE)[2,1])) # estimate lambdas
## => pretty good
U. <- pobs(U) # pseudo-observations (realistic)
(lam.C. <- c(lower = fitLambda(U., p = p)[2,1],
             upper = fitLambda(U., p = p, lower.tail = FALSE)[2,1])) # estimate lambdas
## => The pseudo-observations do have an effect...

## Higher-dimensional case
d <- 5
cop <- claytonCopula(2, dim = d)
set.seed(271)
U <- rCopula(n, copula = cop) # generate observations (unrealistic)
(lam.true <- lambda(cop)) # true tail-dependence coefficients lambda
(Lam.C <- list(lower = fitLambda(U, p = p),
               upper = fitLambda(U, p = p, lower.tail = FALSE))) # estimate Lambdas
## => Not too good
U. <- pobs(U) # pseudo-observations (realistic)
(Lam.C. <- list(lower = fitLambda(U., p = p),
                upper = fitLambda(U., p = p, lower.tail = FALSE))) # estimate Lambdas
## => Performance not too great here in either case
Fitting copula-based multivariate distributions
("mvdc"
) to multivariate data,
estimating both the marginal and the copula parameters.
If you assume nonparametric margins, in other words take the empirical distributions of all margins, you can use fitCopula(*, pobs(x)) instead.
loglikMvdc(param, x, mvdc)
fitMvdc(data, mvdc, start, optim.control = list(), method = "BFGS",
        lower = -Inf, upper = Inf,
        estimate.variance = fit$convergence == 0, hideWarnings = TRUE)
## S3 method for class 'fittedMV'
coef(object, SE = FALSE, orig = TRUE, ...)
## S3 method for class 'fittedMV'
logLik(object, ...)
## S3 method for class 'fittedMV'
vcov(object, orig = TRUE, ...)
param |
a vector of parameter values. When specifying parameters for
|
x |
a data matrix. |
mvdc |
a |
data |
a data matrix. |
start |
a vector of starting values for |
optim.control |
a list of controls to be passed to |
method |
the method for |
lower , upper
|
bounds on each parameter, passed to
|
estimate.variance |
logical; if true (as by default, if the optimization converges), the asymptotic variance is estimated. |
hideWarnings |
logical indicating if warning messages from
likelihood maximization, e.g., from evaluating at invalid parameter
values, should be suppressed (via |
object |
an R object of class |
SE |
for the |
orig |
|
... |
potentially further arguments to methods. |
The return value of loglikMvdc() is the log likelihood evaluated for the given value of param.
The return value of fitMvdc() is an object of class "fitMvdc" (see there), containing slots (among others!):
estimate |
the estimate of the parameters. |
var.est |
large-sample (i.e., asymptotic) variance estimate of the parameter
estimator (filled with |
mvdc |
the fitted multivariate distribution, see
|
The summary() method for "fitMvdc" objects returns an S3 “class” "summary.fitMvdc", simply a list with components method, loglik, and convergence, all three from the corresponding slots of the "fitMvdc" objects, and
coefficients |
a matrix of estimated coefficients, standard errors, t values and p-values. |
User-defined marginal distributions can be used as long as the "{dpq}" functions are defined. See vignette("AR_Clayton", package="copula").
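For instance, a hypothetical margin "myexp" (a shifted exponential; all names here are made up for illustration) becomes usable once its d, p and q functions exist on the search path:

```r
## Define {dpq}myexp; mvdc() then accepts margins = "myexp".
dmyexp <- function(x, lambda, shift = 0) dexp(x - shift, rate = lambda)
pmyexp <- function(q, lambda, shift = 0) pexp(q - shift, rate = lambda)
qmyexp <- function(p, lambda, shift = 0) qexp(p, rate = lambda) + shift
## then, e.g. (assuming the copula package is loaded):
## mv <- mvdc(claytonCopula(2), margins = c("myexp", "myexp"),
##            paramMargins = list(list(lambda = 2),
##                                list(lambda = 1, shift = 1)))
```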
When covariates are available for the marginal distributions or for the copula, one can construct the log-likelihood function and feed it to optim() to estimate all the parameters.
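A hedged sketch of that approach (the model and all names are hypothetical): margin 1 is exponential with a log-linear covariate effect, the dependence is a Gaussian copula, and optim() maximizes the joint log-likelihood:

```r
## Joint likelihood: marginal log-densities + Gaussian copula log-density.
set.seed(7)
n <- 400
z  <- runif(n)                          # covariate
y1 <- rexp(n, rate = exp(0.5 + 1 * z))  # covariate-dependent margin
y2 <- rexp(n, rate = 2)
negll <- function(par) {
  b0 <- par[1]; b1 <- par[2]; r2 <- exp(par[3]); rho <- tanh(par[4])
  ## clamp CDF values away from 0/1 to keep qnorm() finite:
  u <- pmin(pmax(pexp(y1, exp(b0 + b1 * z)), 1e-12), 1 - 1e-12)
  v <- pmin(pmax(pexp(y2, r2),               1e-12), 1 - 1e-12)
  x <- qnorm(u); yq <- qnorm(v)
  lc <- -0.5 * log(1 - rho^2) +         # Gaussian copula log-density
        (2 * rho * x * yq - rho^2 * (x^2 + yq^2)) / (2 * (1 - rho^2))
  -sum(lc + dexp(y1, exp(b0 + b1 * z), log = TRUE) +
            dexp(y2, r2,               log = TRUE))
}
fit <- optim(c(0, 0, 0, 0), negll, method = "BFGS")
fit$par[1:2]  # roughly (0.5, 1) if all goes well
```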
Finally, note that some of the fitting functions generate error messages because invalid parameter values are tried during the optimization process (see optim). This has become rarer since 2013, notably for likelihood-based methods (as the likelihood is now set to -Inf rather than giving an error).
Previously, loglikMvdc() had an argument hideWarnings; nowadays, use suppressWarnings(..) if you are sure you do not want to see warnings.
mvdc and the "mvdc" class; further, Copula, fitCopula, gofCopula.
For fitting univariate marginals, fitdistr().
G3 <- gumbelCopula(3, dim=2)
gMvd2 <- mvdc(G3, c("exp","exp"),
              param = list(list(rate=2), list(rate=4)))
## with identical margins:
gMvd.I <- mvdc(G3, "exp", param = list(rate=3), marginsIdentical=TRUE)
(Xtras <- copula:::doExtras()) # determine whether examples will be extra (long)
n <- if(Xtras) 10000 else 200 # sample size (realistic vs short for example)
set.seed(11)
x <- rMvdc(n, gMvd2)
## Default: hideWarnings = FALSE .. i.e. show warnings here
fit2 <- fitMvdc(x, gMvd2, start = c(1,1, 2))
fit2
confint(fit2)
summary(fit2) # slightly more
## The estimated, asymptotic var-cov matrix [was used for confint()]:
vcov(fit2)
## with even more output for the "identical margin" case:
fitI <- fitMvdc(x, gMvd.I, start = c(3, 2),
                optim.control = list(trace = TRUE, REPORT = 2))
summary(fitI)
coef(fitI, SE = TRUE)
stopifnot(is.matrix(coef(fitI, SE = TRUE)),
          is.matrix(print( confint(fitI) )) )
## a wrong starting value can already be *the* problem:
f2 <- try(fitMvdc(x, gMvd.I, start = c(1, 1),
                  optim.control = list(trace = TRUE, REPORT = 2)))
##--> Error in optim( ... ) : non-finite finite-difference value [2]
##==> "Solution" : Using a more robust (but slower) optim() method:
fI.2 <- fitMvdc(x, gMvd.I, start = c(1, 1), method = "Nelder",
                optim.control = list(trace = TRUE))
fI.2
It is sometimes useful to hold some components of a copula parameter vector fixed, whereas the others are “free” and will be estimated, e.g., by fitCopula.
The first two functions set or modify the “fixedness”, whereas isFree(), isFreeP() and nParam() are utilities enquiring about the “fixedness” of the parameters (of a copula).
fixParam(param, fixed = TRUE)
fixedParam(copula) <- value
isFreeP(param)
## S4 method for signature 'copula'
isFree(copula)
## and specific '*Copula' methods
## S4 method for signature 'copula'
nParam(copula, freeOnly = FALSE)
## and specific '*Copula' methods
param |
numeric parameter vector |
fixed , value
|
logical vector of the same length as |
copula |
a |
freeOnly |
logical (scalar) indicating if only free parameters should be counted or all. |
fixParam(param) returns a numeric vector with attribute "fixed" (a logical, either TRUE or a vector of the same length as param) indicating which components of param are to be held fixed.
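In plain base R, the marking described above amounts to an attribute on the parameter vector (illustrative only; fixParam() and isFreeP() encapsulate exactly this):

```r
## Mark the first component of a parameter vector as fixed:
param <- c(rho.1 = .6, rho.2 = .3, rho.3 = .2)
attr(param, "fixed") <- c(TRUE, FALSE, FALSE)  # what fixParam(param, fixed) does
## isFreeP(param) then amounts to:
free <- if (is.null(fx <- attr(param, "fixed")))
          rep(TRUE, length(param)) else !fx
free  # only rho.2 and rho.3 would be estimated
```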
fixedParam<-, a generic function, returns a "copula" object with a partly fixed parameter (slot), i.e., corresponding to fixParam() above.
fitCopula for fitting; for t copulas, tCopula(*, df.fixed=TRUE) now uses parameter fixing for "df".
setTheta() for setting or changing the non-fixed parameter values.
nc1 <- normalCopula(dim = 3, fixParam(c(.6,.3,.2), c(TRUE, FALSE, FALSE)),
                    dispstr = "un")
nc1
nc13 <- nc12 <- nc1
fixedParam(nc12) <- c(TRUE, TRUE, FALSE) ; nc12
fixedParam(nc13) <- c(TRUE, FALSE, TRUE) ; nc13
set.seed(17); x <- rCopula(100, nc1)
summary(f.13 <- fitCopula(nc13, x, method = "ml"))
f.13@copula # 'rho.2' is estimated; the others kept fixed
## Setting to 'FALSE' (removes the "fixed" parts completely):
nc0 <- nc13; fixedParam(nc0) <- FALSE
nc0
stopifnot(is.null(attributes(nc0@parameters)))
Three years of daily prices (from July 2003 to July 2006) of crude oil and natural gas. These data should be very close to those analysed in Grégoire, Genest and Gendron (2008).
data(gasoil, package="copula")
A data frame of 762 daily prices from 2003 to 2006.
date: date (of class Date).
oil: daily price of crude oil.
gas: daily price of natural gas.
Grégoire, V., Genest, C., and Gendron, M. (2008) Using copulas to model price dependence in energy markets. Energy Risk 5(5), 58–64.
data(gasoil)
## Log Scaled Oil & Gas Prices :
lattice::xyplot(oil + gas ~ date, data = gasoil, auto.key = TRUE,
                type = c("l","r"),
                scales = list(y = list(log = TRUE), equispaced.log = FALSE))
Methods to evaluate the generator function, the inverse generator function, and derivatives of the inverse of the generator function for Archimedean copulas. For extreme-value copulas, the “Pickands dependence function” plays the role of a generator function.
psi(copula, s)
iPsi(copula, u, ...)
diPsi(copula, u, degree=1, log=FALSE, ...)
A(copula, w)
dAdu(copula, w)
copula |
an object of class |
u , s , w
|
numerical vector at which these functions are to be evaluated. |
... |
further arguments for specific families. |
degree |
the degree of the derivative (defaults to 1). |
log |
logical indicating if the |
psi() and iPsi() are, respectively, the generator function and its inverse for an Archimedean copula; see pnacopula for the definition and more details.
diPsi() computes (currently only the first two) derivatives of iPsi().
A(), the “Pickands dependence function”, can be seen as the generator function of an extreme-value copula. For instance, in the bivariate case, we have the following result (see, e.g., Gudendorf and Segers 2010):

A bivariate copula C is an extreme-value copula if and only if

C(u, v) = (uv)^A(log(v) / log(uv)), (u, v) in (0, 1]^2 \ {(1, 1)},

where A : [0, 1] -> [1/2, 1] is convex and satisfies max(t, 1 - t) <= A(t) <= 1 for all t in [0, 1].

In the d-variate case, there is a similar characterization, except that this time, the Pickands dependence function A is defined on the (d - 1)-dimensional unit simplex.
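These constraints can be checked numerically with the A() method; e.g., for a Gumbel-Hougaard copula (an illustrative sketch, with the parameter value 2 chosen arbitrarily):

```r
## Verify max(t, 1-t) <= A(t) <= 1 on a grid, for a Gumbel copula
gc <- gumbelCopula(2)
t  <- seq(0, 1, by = 0.01)
At <- A(gc, t)
stopifnot(At <= 1 + 1e-12, At >= pmax(t, 1 - t) - 1e-12)
```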
dAdu()
returns a data.frame containing the 1st and 2nd
derivative of A()
.
Gudendorf, G. and Segers, J. (2010). Extreme-value copulas. In Copula theory and its applications, Jaworski, P., Durante, F., Härdle, W. and Rychlik, W., Eds. Springer-Verlag, Lecture Notes in Statistics, 127–146, doi:10.1007/978-3-642-12465-5; preprint at https://arxiv.org/abs/0911.1015.
Nonparametric estimators for A are available, see An.
## List the available methods (and their definitions):
showMethods("A")
showMethods("iPsi", incl=TRUE)
Get one of our "acopula" family objects (see acopula-families) by name.
Named strings for “translation” between different names and forms of Archimedean copulas.
getAcop (family, check = TRUE)
getAname(family, objName = FALSE)
.ac.shortNames
.ac.longNames
.ac.objNames
.ac.classNames
family |
either a |
check |
logical indicating whether the class of the return value
should be checked to be |
objName |
logical indicating that the name of the R
object should be returned, instead of the family name, e.g.,
|
getAcop() returns an "acopula" family object, typically one of our predefined ones.

getAname() returns a character string, the name of an "acopula" family object.

.ac.longNames etc. are named string constants, useful in programming for all our (five) standard Archimedean families.
Our predefined acopula-families
;
the class definition "acopula"
.
getAcop("Gumbel")

## different ways of getting the same "acopula" family object:
stopifnot(## Joe (three ways):
          identical(getAcop("J"), getAcop("Joe")),
          identical(getAcop("J"), copJoe),
          ## Frank (yet another two different ways):
          identical(getAcop(frankCopula()), copFrank),
          identical(getAcop("frankCopula"), copFrank))

stopifnot(
  identical(getAname(claytonCopula()), getAname("C")),
  identical(getAname(copClayton), "Clayton"),
  identical(getAname("J"), "Joe"),
  identical(getAname(amhCopula(), TRUE), "copAMH"),
  identical(getAname(joeCopula(), TRUE), "copJoe")
)

.ac.shortNames
.ac.longNames
.ac.objNames
.ac.classNames
An (S4) generic function and methods providing a typically cheap way to get valid parameters for the copula, given the data. This is used, e.g., in fitCopula() when start is not specified.
getIniParam(copula, data, default, named = TRUE, ...)
copula |
a |
data |
an |
default |
a parameter vector of correct length, to be used when no method is available or the method does “not work”. |
named |
|
... |
optional further arguments to underlying methods. |
a numeric vector of correct length, say param, which should, e.g., “work” in loglikCopula(param, u = data, copula).
signature(copula = "parCopula")
Close to a default
method (as class "parCopula"
contains most
copulas), currently mainly trying to use a version of
fitCopula(*, method = "itau")
(itself based on moment
matching iTau()
).
signature(copula = "mixCopula")
a relatively simple method, for the copula parameters, trying
getIniParam(cop[[k]])
for each component, and using equal weights w[k]
.
getTheta()
gets such a vector from a copula
object;
fitCopula
, loglikCopula
.
# TODO !
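In the absence of packaged examples, a minimal sketch of the intended use (the sample size and the Clayton family are illustrative choices, not part of the package's own examples):

```r
## Cheap starting value, as fitCopula() would obtain it when 'start'
## is not specified
set.seed(17)
u <- pobs(rCopula(200, claytonCopula(2)))
start <- getIniParam(claytonCopula(), data = u)
fitCopula(claytonCopula(), u, start = start)
```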
Get the parameter (vector) (
theta
) of
a copula, see setTheta
for more background.
getTheta(copula, freeOnly = TRUE, attr = FALSE, named = attr)
copula |
|
freeOnly |
logical indicating that only non-fixed aka “free” parameters are to be returned (as vector). |
attr |
logical indicating if |
named |
logical if the resulting parameter vector should have
|
parameter vector of the copula, a numeric
vector,
possibly with names and other attributes (depending on the attr
and named
arguments).
signature(copula = "parCopula")
a default method,
returning numeric(0).
signature(copula = "copula")
..
signature(copula = "acopula")
..
signature(copula = "khoudrajiCopula")
..
signature(copula = "mixCopula")
..
signature(copula = "rotCopula")
..
signature(copula = "Xcopula")
..
setTheta
, its inverse.
showMethods("getTheta")
getTheta(setTheta(copClayton, 0.5)) # is 0.5
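For copulas with more than one (possibly fixed) parameter, the effect of freeOnly and named can be seen, e.g., with a t copula (a sketch; the exact parameter names are determined by the copula object):

```r
tc <- tCopula(0.5, df = 4)        # 'df' is a free parameter by default
getTheta(tc, named = TRUE)        # free parameters, with names
tcf <- tCopula(0.5, df = 4, df.fixed = TRUE)
getTheta(tcf, freeOnly = FALSE, named = TRUE) # includes the fixed 'df'
```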
Tools for computing a graphical goodness-of-fit (GOF) test based on pairwise Rosenblatt transformed data.
pairwiseCcop()
computes a -
array
which contains pairwise Rosenblatt-transformed data.
pairwiseIndepTest() takes such an array as input and computes a (d, d)-matrix of test results from pairwise tests of independence (as by indepTest()).
pviTest()
can be used to extract the matrix of p-values from
the return matrix of pairwiseIndepTest()
.
gpviTest()
takes such a matrix of p-values and computes a global p-value with the method provided.
pairwiseCcop(u, copula, ...)
pairwiseIndepTest(cu.u, N=256,
                  iTest = indepTestSim(n, p=2, m=2, N=N,
                                       verbose = idT.verbose, ...),
                  verbose=TRUE, idT.verbose = verbose, ...)
pviTest(piTest)
gpviTest(pvalues, method=p.adjust.methods, globalFun=min)
u |
|
copula |
copula object used for the Rosenblatt transform
( |
... |
additional arguments passed to the internal function
which computes the conditional copulas (for For |
cu.u |
|
N |
argument of |
iTest |
the result of (a version of) |
verbose |
|
idT.verbose |
logical, passed as |
piTest |
|
pvalues |
|
method |
|
globalFun |
|
An (n, d, d)-array cu.u with cu.u[i,j] containing C(u_i | u_j) for i != j and u_i for i = j.
A (d, d)-matrix of lists with test results as returned by indepTest(). The test results correspond to pairwise tests of independence as conducted by indepTest().
A (d, d)-matrix of p-values.
global p-values for the specified methods.
If u are distributed according to (or “perfectly” sampled from) a copula, p-values of GOF tests for that copula should be uniformly distributed in [0, 1].
Hofert and Mächler (2014),
see pairsRosenblatt
.
pairsRosenblatt
for where these tools are used, including
demo(gof_graph)
for examples.
## demo(gof_graph)
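Beyond demo(gof_graph), the basic workflow can be sketched as follows (computationally heavy, as indepTestSim() simulates the null distribution; the small N is for illustration only):

```r
if(FALSE) { # sketch only; see demo(gof_graph) for a full example
  d <- 3
  cop <- gumbelCopula(2, dim = d)
  u <- pobs(rCopula(100, cop))
  cu.u <- pairwiseCcop(u, copula = cop)  # (n, d, d)-array
  pw <- pairwiseIndepTest(cu.u, N = 32)  # (d, d)-matrix of test results
  pv <- pviTest(pw)                      # matrix of p-values
  gpviTest(pv)                           # global p-values
}
```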
gnacopula() conducts a goodness-of-fit test for the given (H0-)copula cop based on the (copula-)data u.
NOTE: gnacopula()
is deprecated, call gofCopula()
instead.
gnacopula(u, cop, n.bootstrap,
          estim.method = eval(formals(enacopula)$method),
          include.K=TRUE, n.MC=0,
          trafo=c("Hering.Hofert", "Rosenblatt"),
          method=eval(formals(gofTstat)$method), verbose=TRUE, ...)
u |
|
cop |
|
n.bootstrap |
positive integer specifying the number of bootstrap replicates. |
estim.method |
|
include.K |
logical indicating whether the last component, involving the
Kendall distribution function |
n.MC |
parameter |
trafo |
a |
method |
a |
verbose |
if |
... |
additional arguments passed to |
The function gnacopula()
performs a parametric bootstrap for the
goodness-of-fit test specified by trafo
and method
. The
transformation given by trafo
specifies the multivariate
transformation which is first applied to the (copula-) data u
(typically, the pseudo-observations are used); see
htrafo()
or cCopula()
for more details. The
argument method
specifies the particular goodness-of-fit test
carried out, which is either the Anderson-Darling test for the
univariate standard uniform distribution (for method="AnChisq"
or
method="AnGamma"
) in a one-dimensional setup or the tests
described in Genest et al. (2009) for the
multivariate standard uniform distribution directly in a multivariate
setup. As estimation method, the method provided by
estim.method
is used.
Note that a finite-sample correction is made when computing p-values;
see gofCopula()
for details.
A word of warning: Do work carefully with the variety of different
goodness-of-fit tests that can be performed with gnacopula()
.
For example, among the possible estimation methods at hand, only
MLE is known to be consistent (under conditions to be verified).
Furthermore, for the tests based on the Anderson-Darling test
statistic, it is theoretically not clear whether the parametric bootstrap
converges. Consequently, the results obtained should be treated with
care. Moreover, several estimation methods are known to be prone to
numerical errors (see Hofert et al. (2013)) and are thus not
recommended to be used in the parametric bootstrap. A warning is given if
gnacopula()
is called with a method not being MLE.
gnacopula returns an R object of class "htest". This object contains a list with the bootstrap results including the components

p.value: the bootstrapped p-value;

statistic: the value of the test statistic computed for the data u;

data.name: the name of u;

method: a character describing the goodness-of-fit test applied;

estimator: the estimator computed for the data u;

bootStats: a list with component estimator containing the estimators for all bootstrap replications and component statistic containing the values of the test statistic for each bootstrap replication.
Genest, C., Rémillard, B., and Beaudoin, D. (2009). Goodness-of-fit tests for copulas: A review and a power study. Insurance: Mathematics and Economics 44, 199–213.
Rosenblatt, M. (1952), Remarks on a Multivariate Transformation, The Annals of Mathematical Statistics 23, 3, 470–472.
Hering, C. and Hofert, M. (2011), Goodness-of-fit tests for Archimedean copulas in large dimensions, submitted.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
gofTstat()
for the implemented test statistics,
htrafo()
and cCopula()
involved and
K()
for the Kendall distribution function.
gofCopula()
for other (parametric bootstrap) based
goodness-of-fit tests.
The goodness-of-fit tests are based, by default, on the empirical
process comparing the empirical copula with a parametric estimate of
the copula derived under the null hypothesis, the default test
statistic, "Sn", being the Cramér-von Mises functional
defined in Equation (2) of Genest, Rémillard and Beaudoin (2009). In
that case, approximate p-values for the test statistic can be obtained
either using a parametric bootstrap (see references two and
three) or by means of a faster multiplier approach (see
references four and five).
Alternative test statistics can be used, in particular if a parametric bootstrap is employed.
The principal function is gofCopula() which, depending on simulation, either calls gofPB() or gofMB().
## Generic [and "rotCopula" method] ------ Main function ------
gofCopula(copula, x, ...)

## S4 method for signature 'copula'
gofCopula(copula, x, N = 1000,
          method = c("Sn", "SnB", "SnC", "Rn"),
          estim.method = c("mpl", "ml", "itau", "irho", "itau.mpl"),
          simulation = c("pb", "mult"),
          test.method = c("family", "single"),
          verbose = interactive(), ties = NA,
          ties.method = c("max", "average", "first", "last", "random", "min"),
          fit.ties.meth = eval(formals(rank)$ties.method), ...)

## (Deprecated) internal 'helper' functions : ---
gofPB(copula, x, N, method = c("Sn", "SnB", "SnC"),
      estim.method = c("mpl", "ml", "itau", "irho", "itau.mpl"),
      trafo.method = if(method == "Sn") "none" else c("cCopula", "htrafo"),
      trafoArgs = list(), test.method = c("family", "single"),
      verbose = interactive(), useR = FALSE, ties = NA,
      ties.method = c("max", "average", "first", "last", "random", "min"),
      fit.ties.meth = eval(formals(rank)$ties.method), ...)

gofMB(copula, x, N, method = c("Sn", "Rn"),
      estim.method = c("mpl", "ml", "itau", "irho"),
      test.method = c("family", "single"),
      verbose = interactive(), useR = FALSE,
      m = 1/2, zeta.m = 0, b = 1/sqrt(nrow(x)),
      ties.method = c("max", "average", "first", "last", "random", "min"),
      fit.ties.meth = eval(formals(rank)$ties.method), ...)
copula |
object of class |
x |
a data matrix that will be transformed to pseudo-observations
using |
N |
number of bootstrap or multiplier replications to be used to obtain approximate realizations of the test statistic under the null hypothesis. |
method |
a |
estim.method |
a |
simulation |
a string specifying the resampling method for
generating approximate realizations of the test statistic under the null
hypothesis; can be either |
test.method |
a |
verbose |
a logical specifying if progress of the parametric bootstrap
should be displayed via |
... |
for |
trafo.method |
only for the parametric bootstrap ( |
trafoArgs |
only for the parametric bootstrap. A
|
useR |
logical indicating whether an R or C implementation is used. |
ties.method |
string specifying how ranks should be computed,
except for fitting, if there are ties in any of the coordinate
samples of |
fit.ties.meth |
string specifying how ranks should be computed
when fitting by maximum pseudo-likelihood if there are ties in any
of the coordinate samples of |
ties |
only for the parametric bootstrap. Logical indicating
whether a version of the parametric boostrap adapted to the
presence of ties in any of the coordinate samples of |
m , zeta.m
|
only for the multiplier with |
b |
only for the multiplier. |
If the parametric bootstrap is used, the dependence parameters of the hypothesized copula family can be estimated by any estimation method available for the family, up to a few exceptions. If the multiplier is used, any of the rank-based methods can be used in the bivariate case, but only maximum pseudo-likelihood estimation can be used in the multivariate (multiparameter) case.
The price to pay for the higher computational efficiency of the multiplier is more programming work as certain partial derivatives need to be computed for each hypothesized parametric copula family. When estimation is based on maximization of the pseudo-likelihood, these have been implemented for six copula families thus far: the Clayton, Gumbel-Hougaard, Frank, Plackett, normal and t copula families. For other families, numerical differentiation based on grad() from package numDeriv is used (and a warning message is displayed).
Although the empirical processes involved in the multiplier and the parametric bootstrap-based test are asymptotically equivalent under the null, the finite-sample behavior of the two tests might differ significantly.
Both for the parametric bootstrap and the multiplier, the approximate p-value is computed as

(0.5 + sum_{k=1}^N 1(T_k >= T)) / (N + 1),

where T and T_k denote the test statistic and the kth bootstrapped test statistic, respectively. This ensures that the approximate p-value is a number strictly between 0 and 1, which is sometimes necessary for further treatments. See Pesarin (2001) for more details.
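As a plain sketch of this correction (with T0 the observed statistic and T a vector of the N bootstrap or multiplier replicates; the names are illustrative, not part of the package API):

```r
## Finite-sample corrected p-value, strictly inside (0, 1)
pval <- function(T0, T) (0.5 + sum(T >= T0)) / (length(T) + 1)
pval(2.5, c(1, 2, 3, 4)) # 2 of 4 replicates are >= 2.5, hence 2.5/5 = 0.5
```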
For the normal and t copulas, several dependence structures can be hypothesized: "ex" for exchangeable, "ar1" for AR(1), "toep" for Toeplitz, and "un" for unstructured (see ellipCopula()). For the t copula, "df.fixed" has to be set to TRUE, which implies that the degrees of freedom are not considered as a parameter to be estimated.
The former argument print.every
is deprecated and not
supported anymore; use verbose
instead.
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
parameter |
estimates of the parameters for the hypothesized copula family. |
These tests were theoretically studied and implemented under the
assumption of continuous margins, which implies that ties in the
component samples occur with probability zero. The presence of ties in
the data might substantially affect the approximate p-values. Through
argument ties
, the user can however select a version of the
parametric bootstrap adapted to the presence of ties. No such adaption
exists for the multiplier for the moment.
Genest, C., Huang, W., and Dufour, J.-M. (2013). A regularized goodness-of-fit test for copulas. Journal de la Société française de statistique 154, 64–77.
Genest, C. and Rémillard, B. (2008). Validity of the parametric bootstrap for goodness-of-fit testing in semiparametric models. Annales de l'Institut Henri Poincare: Probabilites et Statistiques 44, 1096–1127.
Genest, C., Rémillard, B., and Beaudoin, D. (2009). Goodness-of-fit tests for copulas: A review and a power study. Insurance: Mathematics and Economics 44, 199–213.
Kojadinovic, I., Yan, J., and Holmes M. (2011). Fast large-sample goodness-of-fit tests for copulas. Statistica Sinica 21, 841–871.
Kojadinovic, I. and Yan, J. (2011). A goodness-of-fit test for multivariate multiparameter copulas based on multiplier central limit theorems. Statistics and Computing 21, 17–30.
Kojadinovic, I. and Yan, J. (2010). Modeling Multivariate Distributions with Continuous Margins Using the copula R Package. Journal of Statistical Software 34(9), 1–20, https://www.jstatsoft.org/v34/i09/.
Kojadinovic, I. (2017). Some copula inference procedures adapted to the presence of ties. Computational Statistics and Data Analysis 112, 24–41, https://arxiv.org/abs/1609.05519.
Pesarin, F. (2001). Multivariate Permutation Tests: With Applications in Biostatistics. Wiley.
fitCopula()
for the underlying estimation procedure and
gofTstat()
for details on *some* of the available test
statistics.
## The following example is available in batch through
## demo(gofCopula)
n <- 200; N <- 1000 # realistic (but too large for interactive use)
n <- 60;  N <- 200  # (time (and tree !) saving ...)

## A two-dimensional data example ----------------------------------
set.seed(271)
x <- rCopula(n, claytonCopula(3))

## Does the Gumbel family seem to be a good choice (statistic "Sn")?
gofCopula(gumbelCopula(), x, N=N)
## With "SnC", really s..l..o..w.. --- with "SnB", *EVEN* slower
gofCopula(gumbelCopula(), x, N=N, method = "SnC", trafo.method = "cCopula")
## What about the Clayton family?
gofCopula(claytonCopula(), x, N=N)

## Similar with a different estimation method
gofCopula(gumbelCopula (), x, N=N, estim.method="itau")
gofCopula(claytonCopula(), x, N=N, estim.method="itau")

## A three-dimensional example ------------------------------------
x <- rCopula(n, tCopula(c(0.5, 0.6, 0.7), dim = 3, dispstr = "un"))

## Does the Gumbel family seem to be a good choice?
g.copula <- gumbelCopula(dim = 3)
gofCopula(g.copula, x, N=N)
## What about the t copula?
t.copula <- tCopula(dim = 3, dispstr = "un", df.fixed = TRUE)
if(FALSE) ## this is *VERY* slow currently
  gofCopula(t.copula, x, N=N)

## The same with a different estimation method
gofCopula(g.copula, x, N=N, estim.method="itau")
if(FALSE) # still really slow
  gofCopula(t.copula, x, N=N, estim.method="itau")

## The same using the multiplier approach
gofCopula(g.copula, x, N=N, simulation="mult")
gofCopula(t.copula, x, N=N, simulation="mult")
if(FALSE) # not yet possible
  gofCopula(t.copula, x, N=N, simulation="mult", estim.method="itau")
Goodness-of-fit tests for extreme-value copulas based on the empirical process comparing one of the two nonparametric rank-based estimators of the Pickands dependence function studied in Genest and Segers (2009) with a parametric estimate of the Pickands dependence function derived under the null hypothesis. The test statistic is the Cramér-von Mises functional Sn defined in Equation (5) of Genest, Kojadinovic, G. Nešlehová, and Yan (2011). Approximate p-values for the test statistic are obtained using a parametric bootstrap.
gofEVCopula(copula, x, N = 1000, method = c("mpl", "ml", "itau", "irho"),
            estimator = c("CFG", "Pickands"), m = 1000,
            verbose = interactive(),
            ties.method = c("max", "average", "first", "last", "random", "min"),
            fit.ties.meth = eval(formals(rank)$ties.method), ...)
copula |
object of class |
x |
a data matrix that will be transformed to pseudo-observations. |
N |
number of bootstrap samples to be used to simulate realizations of the test statistic under the null hypothesis. |
method |
estimation method to be used to estimate the
dependence parameter(s); can be either |
estimator |
specifies which nonparametric rank-based estimator
of the unknown Pickands dependence function to use; can be either
|
m |
number of points of the uniform grid on [0,1] used to compute the test statistic numerically. |
verbose |
a logical specifying if progress of the bootstrap
should be displayed via |
ties.method |
string specifying how ranks should be computed,
except for fitting, if there are ties in any of the coordinate
samples of |
fit.ties.meth |
string specifying how ranks should be computed
when fitting by maximum pseudo-likelihood if there are ties in any
of the coordinate samples of |
... |
further optional arguments, passed to
|
More details can be found in the second reference.
The former argument print.every
is deprecated and not
supported anymore; use verbose
instead.
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
parameter |
estimates of the parameters for the hypothesized copula family. |
For a given degree of dependence, the most popular bivariate extreme-value copulas are strikingly similar.
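This can be seen by comparing Pickands dependence functions calibrated to the same Kendall's tau (a sketch, assuming iTau() is available for both families):

```r
## Gumbel vs. Galambos Pickands functions at the same Kendall's tau
tau <- 0.5
w <- seq(0.01, 0.99, by = 0.01)
A1 <- A(gumbelCopula  (iTau(gumbelCopula(),   tau)), w)
A2 <- A(galambosCopula(iTau(galambosCopula(), tau)), w)
max(abs(A1 - A2)) # typically very small: the two families nearly coincide
```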
Genest, C. and Segers, J. (2009). Rank-based inference for bivariate extreme-value copulas. Annals of Statistics 37, 2990–3022.
Genest, C. Kojadinovic, I., G. Nešlehová, J., and Yan, J. (2011). A goodness-of-fit test for bivariate extreme-value copulas. Bernoulli 17(1), 253–275.
evCopula
, evTestC
, evTestA
,
evTestK
, gofCopula
, An
.
n <- 100; N <- 1000 # realistic (but too large currently for CRAN checks)
n <- 60;  N <- 200  # (time (and tree !) saving ...)
x <- rCopula(n, claytonCopula(3))

## Does the Gumbel family seem to be a good choice?
gofEVCopula(gumbelCopula(), x, N=N)
## The same with different (and cheaper) estimation methods:
gofEVCopula(gumbelCopula(), x, N=N, method="itau")
gofEVCopula(gumbelCopula(), x, N=N, method="irho")

## The same with different extreme-value copulas
gofEVCopula(galambosCopula(), x, N=N)
gofEVCopula(galambosCopula(), x, N=N, method="itau")
gofEVCopula(galambosCopula(), x, N=N, method="irho")
gofEVCopula(huslerReissCopula(), x, N=N)
gofEVCopula(huslerReissCopula(), x, N=N, method="itau")
gofEVCopula(huslerReissCopula(), x, N=N, method="irho")
gofEVCopula(tevCopula(df.fixed=TRUE), x, N=N)
gofEVCopula(tevCopula(df.fixed=TRUE), x, N=N, method="itau")
gofEVCopula(tevCopula(df.fixed=TRUE), x, N=N, method="irho")
gofBTstat()
computes supposedly Beta distributed test
statistics for checking uniformity of u
on the unit sphere.
gofBTstat(u)
u |
|
An (n, d-1)-matrix where the (i, k)th entry is

B_{ik} = (u_{i1}^2 + ... + u_{ik}^2) / (u_{i1}^2 + ... + u_{id}^2).
Li, R.-Z., Fang, K.-T., and Zhu, L.-X. (1997). Some Q-Q probability plots to test spherical and elliptical symmetry. Journal of Computational and Graphical Statistics 6(4), 435–450.
## generate data on the unit sphere
n <- 360
d <- 5
set.seed(1)
x <- matrix(rnorm(n*d), ncol=d)
U <- x/sqrt(rowSums(x^2))

## compute the test statistics B_k, k in {1,..,d-1}
Bmat <- gofBTstat(U)

## (graphically) check if Bmat[,k] follows a Beta(k/2, (d-k)/2) distribution
qqp <- function(k, Bmat) {
    d <- ncol(Bmat)+1L
    tit <- substitute(plain("Beta")(.A.,.B.)~~ bold("Q-Q Plot"),
                      list(.A. = k/2, .B. = (d-k)/2))
    qqplot2(Bmat[,k],
            qF=function(p) qbeta(p, shape1=k/2, shape2=(d-k)/2),
            main.args=list(text=tit, side=3, cex=1.3, line=1.1, xpd=NA))
}
qqp(1, Bmat=Bmat) # k=1
qqp(3, Bmat=Bmat) # k=3
gofTstat()
computes various goodness-of-fit test statistics
typically used in gofCopula(*, simulation = "pb")
.
gofT2stat()
computes the two-sample goodness of fit test
statistic of Rémillard and Scaillet (2009).
gofTstat(u, method = c("Sn", "SnB", "SnC", "AnChisq", "AnGamma"),
         useR = FALSE, ...)
gofT2stat(u1, u2, useR = FALSE)
u |
|
u1 , u2
|
|
method |
a
|
useR |
logical indicating whether an R or C implementation is used. |
... |
additional arguments passed for computing the different test statistics. |
These functions should be used with care. The different test statistics were implemented (partly) for different purposes and goodness-of-fit tests and should be used only with knowledge about such tests (see the references for more details).
The value of the test statistic, a numeric
.
Genest, C., Rémillard, B., and Beaudoin, D. (2009). Goodness-of-fit tests for copulas: A review and a power study. Insurance: Mathematics and Economics 44, 199–213.
Rosenblatt, M. (1952), Remarks on a Multivariate Transformation, The Annals of Mathematical Statistics 23, 3, 470–472.
Hering, C. and Hofert, M. (2014), Goodness-of-fit tests for Archimedean copulas in high dimensions, Innovations in Quantitative Risk Management.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
Rémillard, B. and Scaillet, O. (2009). Testing for equality between two copulas. Journal of Multivariate Analysis 100, 377–386.
gofCopula()
for goodness-of-fit tests where (some of)
the test statistics of gofTstat()
are used.
## generate data
cop <- archmCopula("Gumbel", param=iTau(gumbelCopula(), 0.5), dim=5)
set.seed(1)
U <- rCopula(1000, cop)

## compute Sn (as is done in a parametric bootstrap, for example)
Uhat <- pobs(U) # pseudo-observations
u <- cCopula(Uhat, copula = cop) # Rosenblatt-transformed data (with correct copula)
gofTstat(u, method = "Sn", copula = cop) # test statistic Sn; requires 'copula' argument
The transformation described in Hering and Hofert (2014), for Archimedean copulas.
htrafo(u, copula, include.K = TRUE, n.MC = 0, inverse = FALSE,
       method = eval(formals(qK)$method), u.grid, ...)
u |
|
copula |
an Archimedean copula specified as
|
include.K |
logical indicating whether the last component, involving the
Kendall distribution function |
n.MC |
parameter |
inverse |
logical indicating whether the inverse of the transformation is returned. |
method |
method to compute |
u.grid |
argument of |
... |
additional arguments passed to |
Given a d-dimensional random vector U following an Archimedean copula
C with generator ψ, Hering and Hofert (2014) showed that the transformed vector
U′ with components

  U′_j = ( Σ_{k=1}^{j} ψ⁻¹(U_k) / Σ_{k=1}^{j+1} ψ⁻¹(U_k) )^j,  j ∈ {1, …, d−1},
  U′_d = K(C(U)),

follows a uniform distribution on [0,1]^d, where K denotes the Kendall
distribution function.
htrafo
applies this transformation row-wise to
u
and thus returns either an (n, d)- or an
(n, d−1)-matrix, depending on whether the last
component,
which involves the (possibly
numerically challenging) Kendall distribution function K,
is used
(
include.K=TRUE
) or not (include.K=FALSE
).
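As a concrete illustration, here is a minimal base-R sketch (not part of the package) of the first d − 1 components of this transformation for a Clayton copula, assuming the generator convention ψ(t) = (1 + t)^(−1/θ), so ψ⁻¹(u) = u^(−θ) − 1:

```r
## Hedged base-R sketch of the Hering-Hofert transformation (Clayton case);
## the copula sample is drawn via the Marshall-Olkin gamma-frailty algorithm.
set.seed(42)
n <- 1e4; d <- 3; theta <- 2
V <- rgamma(n, shape = 1/theta)          # gamma frailty
E <- matrix(rexp(n * d), n, d)
U <- (1 + E / V)^(-1/theta)              # U follows a Clayton(theta) copula
psiInv <- function(u) u^(-theta) - 1     # generator inverse
S <- t(apply(psiInv(U), 1, cumsum))      # row-wise partial sums of psi^{-1}(U_k)
## First d-1 transformed components (the include.K = FALSE analogue):
Uprime <- sapply(1:(d - 1), function(j) (S[, j] / S[, j + 1])^j)
## Under the correct copula, each column is approximately Uniform(0,1)
round(colMeans(Uprime), 2)
```

Since ψ⁻¹(U_k) = E_k/V here, the ratios reduce to sums of exponentials, which makes the uniformity of each column easy to verify by simulation.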
htrafo()
returns an (n, d)- or (n, d−1)-matrix
(depending on whether
include.K
is TRUE
or
FALSE
) containing the transformed input u
.
Hering, C. and Hofert, M. (2014). Goodness-of-fit tests for Archimedean copulas in high dimensions. Innovations in Quantitative Risk Management.
## Sample and build pseudo-observations (what we normally have available)
## of a Clayton copula
tau <- 0.5
theta <- iTau(claytonCopula(), tau = tau)
d <- 5
cc <- claytonCopula(theta, dim = d)
set.seed(271)
n <- 1000
U <- rCopula(n, copula = cc)
X <- qnorm(U) # X now follows a meta-Clayton model with N(0,1) marginals
U <- pobs(X)  # build pseudo-observations
## Graphically check if the data comes from a meta-Clayton model
## with the transformation of Hering and Hofert (2014):
U.H <- htrafo(U, copula = cc) # transform the data
splom2(U.H, cex = 0.2) # looks good
## The same for an 'outer_nacopula' object
cc. <- onacopulaL("Clayton", list(theta, 1:d))
U.H. <- htrafo(U, copula = cc.)
splom2(U.H., cex = 0.2) # looks good
## What about a meta-Gumbel model?
## The parameter is chosen such that Kendall's tau equals (the same) tau
gc <- gumbelCopula(iTau(gumbelCopula(), tau = tau), dim = d)
## Plot of the transformed data (Hering and Hofert (2014)) to see the
## deviations from uniformity
U.H.. <- htrafo(U, copula = gc)
splom2(U.H.., cex = 0.2) # deviations visible
Constructs an independence copula of the given dimension.
indepCopula(dim = 2)
dim |
the dimension of the copula. |
An independence copula object of class "indepCopula"
.
Mathematically, the independence copula is also a special
(boundary) case of, e.g., the classes "archmCopula"
,
"ellipCopula"
, and "evCopula"
.
indep.cop <- indepCopula(3)
x <- rCopula(10, indep.cop)
dCopula(x, indep.cop)
persp(indepCopula(), pCopula)
Independence copula class.
Objects can be created by calls of the form new("indepCopula", ...)
or by function indepCopula()
. Such objects can be
useful as special cases of parametric copulas, bypassing
copula-specific computations such as distribution, density, and sampler.
dimension
:Object of class "numeric"
, dimension
of the copula.
exprdist
:an expression
of length two,
for the “formulas” of the cdf and pdf of the copula.
signature(copula = "indepCopula")
: ...
signature(copula = "indepCopula")
: ...
signature(copula = "indepCopula")
: ...
signature(copula = "indepCopula")
: ...
Class "indepCopula"
directly extends classes
"dimCopula"
and "parCopula"
.
indepCopula
; documentation for classes
dimCopula
and
parCopula
.
getClass("indepCopula")
Multivariate independence test based on the empirical copula process, as proposed by Christian Genest and Bruno Rémillard. The test can be seen as composed of three steps: (i) a simulation step, which consists of simulating the distribution of the test statistics under independence for the sample size under consideration; (ii) the test itself, which consists of computing the approximate p-values of the test statistics with respect to the empirical distributions obtained in step (i); and (iii) the display of a graphic, called a dependogram, enabling the user to understand the type of departure from independence, if any. More details can be found in the articles cited in the reference section.
indepTestSim(n, p, m = p, N = 1000, verbose = interactive())
indepTest(x, d, alpha=0.05)
dependogram(test, pvalues = FALSE, print = FALSE)
n |
sample size when simulating the distribution of the test statistics under independence. |
p |
dimension of the data when simulating the distribution of the test statistics under independence. |
m |
maximum cardinality of the subsets of variables for which a
test statistic is to be computed. It makes sense to consider |
N |
number of repetitions when simulating under independence. |
verbose |
a logical specifying if progress
should be displayed via |
x |
data frame or data matrix containing realizations (one per line) of the random vector whose independence is to be tested. |
d |
object of class |
alpha |
significance level used in the computation of the critical values for the test statistics. |
test |
object of class |
pvalues |
logical indicating whether the dependogram should be drawn from the test statistics or from the corresponding p-values. |
print |
logical indicating whether details should be printed. |
The current (C code) implementation of indepTestSim()
uses (RAM) memory and computation time that grow quickly with the sample
size; this renders it infeasible when n is large.
See the references below for more details, especially Genest and Rémillard (2004).
The former argument print.every
is deprecated and not
supported anymore; use verbose
instead.
The function indepTestSim()
returns an object of class
"indepTestDist"
whose attributes are: sample.size
,
data.dimension
, max.card.subsets
,
number.repetitons
, subsets
(list of the subsets for
which test statistics have been computed), subsets.binary
(subsets in binary 'integer' notation),
dist.statistics.independence
(a N
line matrix containing
the values of the test statistics for each subset and each repetition)
and dist.global.statistic.independence
(a vector of length
N
containing the values of the global Cramér-von Mises test
statistic for each repetition – see Genest et al (2007), p.175).
The function indepTest()
returns an object of class
"indepTest"
whose attributes are: subsets
,
statistics
, critical.values
, pvalues
,
fisher.pvalue
(a p-value resulting from a combination à la
Fisher of the subset statistic p-values), tippett.pvalue
(a p-value
resulting from a combination à la Tippett of the subset
statistic p-values),
alpha
(global significance level of the test), beta
(1 - beta
is the significance level per statistic),
global.statistic
(value of the global Cramér-von Mises
statistic derived directly from the independence empirical copula
process - see Genest et al (2007), p.175) and
global.statistic.pvalue
(corresponding p-value).
Deheuvels, P. (1979). La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance, Acad. Roy. Belg. Bull. Cl. Sci., 5th Ser. 65, 274–292.
Deheuvels, P. (1981) A non parametric test for independence, Publ. Inst. Statist. Univ. Paris. 26, 29–50.
Genest, C. and Rémillard, B. (2004) Tests of independence and randomness based on the empirical copula process. Test 13, 335–369.
Genest, C., Quessy, J.-F., and Rémillard, B. (2006). Local efficiency of a Cramér-von Mises test of independence, Journal of Multivariate Analysis 97, 274–294.
Genest, C., Quessy, J.-F., and Rémillard, B. (2007) Asymptotic local efficiency of Cramér-von Mises tests for multivariate independence. The Annals of Statistics 35, 166–191.
serialIndepTest
,
multIndepTest
,
multSerialIndepTest
.
## Consider the following example taken from
## Genest and Remillard (2004), p 352:
set.seed(2004)
x <- matrix(rnorm(500),100,5)
x[,1] <- abs(x[,1]) * sign(x[,2] * x[,3])
x[,5] <- x[,4]/2 + sqrt(3) * x[,5]/2
## In order to test for independence "within" x, the first step consists
## in simulating the distribution of the test statistics under
## independence for the same sample size and dimension,
## i.e. n=100 and p=5. As we are going to consider all the subsets of
## {1,...,5} whose cardinality is between 2 and 5, we set p=m=5.
## For a realistic N = 1000 (default), this takes a few seconds:
N. <- if(copula:::doExtras()) 1000 else 120
N.
system.time(d <- indepTestSim(100, 5, N = N.))
## For N=1000, 2 seconds (lynne 2015)
## You could save 'd' for future use, via saveRDS()
## The next step consists of performing the test itself (and print its results):
(iTst <- indepTest(x,d))
## Display the dependogram with the details:
dependogram(iTst, print=TRUE)
## We could have tested for a weaker form of independence, for instance,
## by only computing statistics for subsets whose cardinality is between 2
## and 3. Consider for instance the following data:
y <- matrix(runif(500),100,5)
## and perform the test:
system.time( d <- indepTestSim(100,5,3, N=N.) )
iTy <- indepTest(y,d)
iTy
dependogram(iTy, print=TRUE)
Compute an initial interval or initial value for optimization/estimation routines (only a heuristic; if this fails, choose your own interval or value).
initOpt(family, tau.range=NULL, interval = TRUE, u,
        method = c("tau.Gumbel", "tau.mean"), warn = TRUE, ...)
family |
Archimedean family to find an initial interval for. |
tau.range |
numeric vector containing lower and upper admissible
Kendall's tau, or |
interval |
|
u |
matrix of realizations following the copula family specified
by |
method |
a
|
warn |
logical indicating if warnings are printed for
|
... |
additional arguments passed to |
For method="tau.mean"
and interval=FALSE
, the mean of pairwise
sample versions of Kendall's tau is computed as an estimator of the Kendall's
tau of the Archimedean copula family provided. This can be slow, especially
if the dimension is large. Method method="tau.Gumbel"
(the default)
uses the explicit and thus very fast diagonal maximum-likelihood estimator for
Gumbel's family to find initial values. Given this estimator
θ̂, the corresponding Kendall's tau
is τ(θ̂) = 1 − 1/θ̂,
where τ
denotes Kendall's tau of Gumbel's family. This provides an estimator of Kendall's tau
which is typically much faster to evaluate than pairwise Kendall's taus.
Given the estimated ‘amount of concordance’ based on Kendall's tau, one
can obtain an initial value for the provided family by applying
τ⁻¹, that is, the inverse of Kendall's tau of the
family for which the initial value is to be computed. Note that if the
estimated Kendall's tau does not lie in the range of Kendall's tau as provided
by the bivariate vector
tau.range
, the point in tau.range
closest to the estimated Kendall's tau is chosen.
The default (interval=TRUE
) returns a reasonably large initial
interval; see the default of tau.range
in the definition of
initOpt
for the chosen values (in terms of Kendall's tau). These
default values cover a large range of concordance. If this interval is
(still) too small, one can adjust it by providing tau.range
. If it is
too large, a ‘distance to concordance’ can be used to determine
parameter values such that the corresponding Kendall's taus share a certain
distance to the initial value. For more details, see Hofert et al. (2012).
Finally, let us note that for the case interval=TRUE
, u
is not
required.
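To make the tau-inversion idea concrete, here is a hedged base-R sketch (not the package's internal code) of the "tau.mean" approach for Clayton's family, assuming its well-known relation τ(θ) = θ/(θ+2):

```r
## Average pairwise sample Kendall's taus, then invert tau for the family.
## Assumptions: Clayton with tau(theta) = theta/(theta+2); sampling via the
## gamma-frailty (Marshall-Olkin) algorithm.
set.seed(17)
n <- 500; d <- 4; theta <- 2                     # true Clayton parameter
V <- rgamma(n, shape = 1/theta)                  # gamma frailty
U <- (1 + matrix(rexp(n*d), n, d)/V)^(-1/theta)  # Clayton(theta) sample
tau.mat <- cor(U, method = "kendall")            # pairwise sample taus
tau.hat <- mean(tau.mat[lower.tri(tau.mat)])     # their mean
theta.start <- 2*tau.hat/(1 - tau.hat)           # invert tau(theta) = theta/(theta+2)
theta.start                                      # initial value near the true theta
```

The pairwise-tau step is what makes this method slow in high dimensions, motivating the faster "tau.Gumbel" default described above.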
An initial interval (if interval=TRUE
) or initial value which can be used for optimization (for example, for
emle
).
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150.
enacopula
, emle
, edmle
,
emde
, and ebeta
(where initOpt
is
applied to find initial intervals).
## Definition of the function:
initOpt
## Generate some data:
tau <- 0.25
(theta <- copGumbel@iTau(tau)) # 4/3
d <- 20
(cop <- onacopulaL("Gumbel", list(theta,1:d)))
set.seed(1)
n <- 200
U <- rnacopula(n, cop)
## Initial interval:
initOpt("Gumbel") # contains theta
## Initial values:
initOpt("Gumbel", interval=FALSE, u=U) # 1.3195
initOpt("Gumbel", interval=FALSE, u=U, method="tau.mean") # 1.2844
Easy construction of an object of class interval
,
using typical mathematical notation.
interval(ch)
ch |
a character string specifying the interval. |
an interval
object.
the interval
class documentation,
notably its reference to more sophisticated interval classes available
for R.
interval("[0, 1)")
## Two ways to specify open interval borders:
identical(interval("]-1,1["), interval("(-1,1)"))
## infinite :
interval("[0, Inf)")
## arithmetic with scalars works:
4 + 2 * interval("[0, 1.5)") # -> [4, 7)
## str() to look at internals:
str( interval("[1.2, 7]") )
The S4 class
"interval"
is a simple class for numeric
intervals.
"maybeInterval"
is a class union (see
setClassUnion
) of "interval"
and "NULL"
.
Objects can be created by calls of the form new("interval", ...)
,
but typically they are built via interval()
.
.Data
:numeric vector of length two, specifying the interval ranges.
open
:logical
vector of length two,
specifying if the interval is open or closed on the left and
right, respectively.
Class "interval"
extends "numeric"
, from
data part, and "maybeInterval"
, directly.
signature(x = "numeric", table = "interval")
:
check if x
is inside the interval, carefully differentiating
open and closed intervals.
signature(x = "interval")
: ...
signature(object = "interval")
: ...
signature(x = "interval")
: Group methods,
notably range()
, min()
, etc.
There are more sophisticated interval classes, functions and methods, notably in package intervals. We only use this as a simple interface in order to specify our copula functions consistently.
interval
constructs "interval" objects conveniently.
-1:2 %in% interval("(0, Inf)") ## 0 is *not* inside
The Kendall distribution function of an Archimedean copula C is defined by

  K(u) = P(C(U) <= u),  u ∈ [0, 1],

where the d-dimensional random vector
U is distributed according
to the copula
C. Note that the random variable
C(U) is known as
“probability integral transform”; its distribution function
K is equal to the identity if
d = 1, but is non-trivial for
d >= 2.
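For the bivariate Archimedean case, Genest and Rivest (1993) give K(u) in closed form from the generator; here is a hedged base-R check (not package code) for Clayton with φ(t) = (t^(−θ) − 1)/θ, for which K(u) = u − φ(u)/φ′(u) = u + u(1 − u^θ)/θ:

```r
## Compare the closed-form bivariate Clayton Kendall df with the empirical
## distribution of the probability integral transform W = C(U1, U2).
set.seed(1)
n <- 1e5; theta <- 2
V <- rgamma(n, shape = 1/theta)                  # gamma frailty
U <- (1 + matrix(rexp(2*n), n, 2)/V)^(-1/theta)  # Clayton(theta) sample
W <- (rowSums(U^(-theta)) - 1)^(-1/theta)        # W = C(U1, U2)
u <- seq(0.01, 0.99, by = 0.01)
K <- u + u*(1 - u^theta)/theta                   # closed-form Kendall df
K.emp <- ecdf(W)(u)                              # its empirical counterpart
max(abs(K.emp - K))                              # small for large n
```

As θ → 0 the formula recovers the independence case K(u) = u − u log(u), a useful sanity check on the algebra.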
Kn()
computes the empirical Kendall distribution function,
pK()
the distribution function (so K itself),
qK()
the quantile function, dK()
the density, and
rK()
generates random numbers from K for an Archimedean
copula.
Kn(u, x, method = c("GR", "GNZ")) # empirical Kendall distribution function
dK(u, copula, d, n.MC = 0, log.p = FALSE) # density
pK(u, copula, d, n.MC = 0, log.p = FALSE) # df
qK(p, copula, d, n.MC = 0, log.p = FALSE, # quantile function
   method = c("default", "simple", "sort", "discrete", "monoH.FC"),
   u.grid, xtraChecks = FALSE, ...)
rK(n, copula, d) # random number generation
u |
evaluation point(s) (in [0, 1]). |
x |
data (in the |
copula |
|
d |
dimension (not used when |
n.MC |
|
log.p |
|
p |
probabilities or log-probabilities if |
method |
for
For
|
u.grid |
(for |
xtraChecks |
experimental logical indicating if extra
checks should be done before calling |
... |
additional arguments passed to |
n |
sample size for |
For a completely monotone Archimedean generator ψ,

  K(u) = Σ_{k=0}^{d−1} ψ^{(k)}(ψ⁻¹(u)) (−ψ⁻¹(u))^k / k!,  u ∈ [0, 1],

see Barbe et al. (1996). The corresponding density is

  k(u) = (−1)^{d−1} (ψ⁻¹(u))^{d−1} ψ^{(d)}(ψ⁻¹(u)) / ((d−1)! ψ′(ψ⁻¹(u))).
The empirical Kendall distribution function, density, distribution function, quantile function and random number generator.
Currently, the "default"
method of qK()
is fast but
not very accurate, see the ‘Examples’ for more accuracy (with
more CPU effort).
Barbe, P., Genest, C., Ghoudi, K., and Rémillard, B. (1996), On Kendall's Process, Journal of Multivariate Analysis 58, 197–229.
Hofert, M., Mächler, M., and McNeil, A. J. (2012). Likelihood inference for Archimedean copulas in high dimensions under known margins. Journal of Multivariate Analysis 110, 133–150. doi:10.1016/j.jmva.2012.02.019
Genest, C. and Rivest, L.-P. (1993). Statistical inference procedures for bivariate Archimedean copulas. Journal of the American Statistical Association 88, 1034–1043.
Genest, C., G. Nešlehová, J., and Ziegel, J. (2011). Inference in multivariate Archimedean copula models. TEST 20, 223–256.
htrafo
or emde
(where K
is used);
splinefun(*, "monoHC")
for that method.
tau <- 0.5
(theta <- copGumbel@iTau(tau)) # 2
d <- 20
(cop <- onacopulaL("Gumbel", list(theta,1:d)))

## Basic check of the empirical Kendall distribution function
set.seed(271)
n <- 1000
U <- rCopula(n, copula = cop)
X <- qnorm(U)
K.sample <- pCopula(U, copula = cop)
u <- seq(0, 1, length.out = 256)
edfK <- ecdf(K.sample)
plot(u, edfK(u), type = "l", ylim = 0:1,
     xlab = quote(italic(u)), ylab = quote(K[n](italic(u)))) # simulated
K.n <- Kn(u, x = X)
lines(u, K.n, col = "royalblue3") # Kn

## Difference at 0
edfK(0) # edf of K at 0
K.n[1]  # K_n(0); this is > 0 since K.n is the edf of a discrete distribution
## => therefore, Kn(K.sample, x = X) is not uniform
plot(Kn(K.sample, x = X), ylim = 0:1)
## Note: Kn(0) -> 0 for n -> Inf

## Compute Kendall distribution function
u <- seq(0,1, length.out = 255)
Ku    <- pK(u, copula = cop@copula, d = d) # exact
Ku.MC <- pK(u, copula = cop@copula, d = d, n.MC = 1000) # via Monte Carlo
stopifnot(all.equal(log(Ku),
                    pK(u, copula = cop@copula, d = d, log.p=TRUE))) # rel.err 3.2e-16

## Build sample from K
set.seed(1)
n <- 200
W <- rK(n, copula = cop)

## Plot empirical distribution function based on W
## and the corresponding theoretical Kendall distribution function
## (exact and via Monte Carlo)
plot(ecdf(W), col = "blue", xlim = 0:1, verticals=TRUE,
     main = quote("Empirical"~ F[n](C(U)) ~ "and its Kendall distribution" ~ K(u)),
     do.points = FALSE, asp = 1)
abline(0,1, lty = 2); abline(h = 0:1, v = 0:1, lty = 3, col = "gray")
lines(u, Ku.MC, col = "red")   # not quite monotone
lines(u, Ku,    col = "black") # strictly monotone:
stopifnot(diff(Ku) >= 0)
legend(.25, .75, expression(F[n], K[MC](u), K(u)),
       col=c("blue" , "red", "black"), lty = 1, lwd = 1.5, bty = "n")

if(require("Rmpfr")) { # pK() now also works with high precision numbers:
  uM <- mpfr(0:255, 99)/256
  if(FALSE) { # not yet, now fails in polyG() :
    KuM <- pK(uM, copula = cop@copula, d = d)
    ## debug(copula:::.pK)
    debug(copula:::polyG)
  }
}# if( Rmpfr )

## Testing qK
pexpr <- quote( 0:63/63 ); p <- eval(pexpr)
d <- 10
cop <- onacopulaL("Gumbel", list(theta = 2, 1:d))
system.time(qK0  <- qK(p, copula = cop@copula, d = d)) # "default" - fast
system.time(qK1  <- qK(p, copula= cop@copula, d=d, method = "simple"))
system.time(qK1. <- qK(p, copula= cop@copula, d=d, method = "simple", tol = 1e-12))
system.time(qK2  <- qK(p, copula= cop@copula, d=d, method = "sort"))
system.time(qK2. <- qK(p, copula= cop@copula, d=d, method = "sort", tol = 1e-12))
system.time(qK3  <- qK(p, copula= cop@copula, d=d, method = "discrete", u.grid = 0:1e4/1e4))
system.time(qK4  <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = 0:5e2/5e2))
system.time(qK4. <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = 0:5e2/5e2, tol = 1e-12))
system.time(qK5  <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = 0:5e3/5e3))
system.time(qK5. <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = 0:5e3/5e3, tol = 1e-12))
system.time(qK6  <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = (0:5e3/5e3)^2))
system.time(qK6. <- qK(p, copula= cop@copula, d=d, method = "monoH.FC", u.grid = (0:5e3/5e3)^2, tol = 1e-12))

## Visually they all coincide :
cols <- adjustcolor(c("gray50", "gray80", "light blue", "royal blue",
                      "purple3", "purple4", "purple"), 0.6)
matplot(p, cbind(qK0, qK1, qK2, qK3, qK4, qK5, qK6), type = "l",
        lwd = 2*7:1, lty = 1:7, col = cols,
        xlab = bquote(p == .(pexpr)), ylab = quote({K^{-1}}(u)),
        main = "qK(p, method = *)")
legend("topleft", col = cols, lwd = 2*7:1, lty = 1:7, bty = "n", inset = .03,
       legend= paste0("method= ", sQuote(c("default", "simple", "sort",
                                           "discrete(1e4)", "monoH.FC(500)",
                                           "monoH.FC(5e3)", "monoH.FC(*^2)"))))

## See they *are* inverses (but only approximately!):
eqInv <- function(qK) all.equal(p, pK(qK, cop@copula, d=d), tol=0)
eqInv(qK0 ) # "default"         0.03   worst
eqInv(qK1 ) # "simple"          0.0011 - best
eqInv(qK1.) # "simple",  e-12   0.00000 (8.73 e-13) !
eqInv(qK2 ) # "sort"            0.0013 (close)
eqInv(qK2.) # "sort",    e-12   0.00000 (7.32 e-12)
eqInv(qK3 ) # "discrete"        0.0026
eqInv(qK4 ) # "monoH.FC(500)"   0.0095
eqInv(qK4.) # "m.H.FC(5c)e-12"  0.00963
eqInv(qK5 ) # "monoH.FC(5e3)"   0.001148
eqInv(qK5.) # "m.H.FC(5k)e-12"  0.000989
eqInv(qK6 ) # "monoH.FC(*^2)"   0.001111
eqInv(qK6.) # "m.H.FC(*^2)e-12" 0.00000 (1.190 e-09)

## and ensure the differences are not too large
stopifnot( all.equal(qK0, qK1, tol = 1e-2) # !
         , all.equal(qK1, qK2, tol = 1e-4)
         , all.equal(qK2, qK3, tol = 1e-3)
         , all.equal(qK3, qK4, tol = 1e-3)
         , all.equal(qK4, qK0, tol = 1e-2) # !
         )
stopifnot(all.equal(p, pK(qK0, cop@copula, d=d), tol = 0.04))
Creates an object representing a copula constructed using
Khoudraji's device (Khoudraji, 1995). The resulting R object
is either of class "khoudrajiBivCopula"
,
"khoudrajiExplicitCopula"
or "khoudrajiCopula"
.
In the bivariate case, given two copulas C₁ and C₂,
Khoudraji's device consists of defining a copula whose c.d.f. is given
by:

  C(u₁, u₂) = C₁(u₁^{1−a₁}, u₂^{1−a₂}) C₂(u₁^{a₁}, u₂^{a₂}),

where a₁ and a₂
are shape parameters in [0, 1].
The construction principle (see also Genest et al. 1998) is a special case of that considered in Liebscher (2008).
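The device is easy to implement directly; here is a hedged base-R sketch (hypothetical helper names, not the package's implementation) combining the independence copula with a Clayton c.d.f., which also verifies that uniform margins are preserved:

```r
## Khoudraji's device for the bivariate case:
## C(u1,u2) = C1(u1^(1-a1), u2^(1-a2)) * C2(u1^a1, u2^a2)
pIndep   <- function(u1, u2) u1 * u2                 # C1: independence cdf
pClayton <- function(u1, u2, theta)                  # C2: Clayton cdf
  (u1^(-theta) + u2^(-theta) - 1)^(-1/theta)
pKhoudraji <- function(u1, u2, theta, a)             # a = c(a1, a2) in [0,1]
  pIndep(u1^(1 - a[1]), u2^(1 - a[2])) *
  pClayton(u1^a[1], u2^a[2], theta)
## Uniform margins are preserved: C(u, 1) = u and C(1, u) = u
u <- seq(0.1, 0.9, by = 0.2)
all.equal(pKhoudraji(u, 1, theta = 6, a = c(0.4, 0.95)), u)  # TRUE
```

The margin check follows from C₁(u^{1−a₁}, 1) C₂(u^{a₁}, 1) = u^{1−a₁} u^{a₁} = u; unequal shapes a₁ ≠ a₂ are what make the resulting copula non-exchangeable.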
khoudrajiCopula(copula1 = indepCopula(), copula2 = indepCopula(dim = d),
                shapes = rep(NA_real_, dim(copula1)))
copula1 , copula2
|
each a |
shapes |
|
If the argument copulas are bivariate, an object of class
"khoudrajiBivCopula"
will be constructed. If
they are exchangeable and d-dimensional with d > 2, and if
they have explicit p.d.f. and c.d.f. expressions, an object of class
"khoudrajiExplicitCopula"
will be
constructed. For the latter two classes, density evaluation is implemented,
and fitting and goodness-of-fit testing can be attempted. If one of the argument copulas does not have explicit p.d.f. and
c.d.f. expressions, or is not exchangeable, an object of class
"khoudrajiCopula"
will be constructed, for which
density evaluation is not possible.
A new object of class "khoudrajiBivCopula"
in
dimension two, or of class
"khoudrajiExplicitCopula"
or
"khoudrajiCopula"
when d > 2.
Genest, C., Ghoudi, K., and Rivest, L.-P. (1998), Discussion of "Understanding relationships using copulas", by Frees, E., and Valdez, E., North American Actuarial Journal 3, 143–149.
Khoudraji, A. (1995), Contributions à l'étude des copules et à la modélisation des valeurs extrêmes bivariées, PhD thesis, Université Laval, Québec, Canada.
Liebscher, E. (2008), Construction of asymmetric multivariate copulas, Journal of Multivariate Analysis 99, 2234–2250.
## A bivariate Khoudraji-Clayton copula
kc <- khoudrajiCopula(copula2 = claytonCopula(6),
                      shapes = c(0.4, 0.95))
class(kc) # "kh..._Biv_Copula"
kc
contour(kc, dCopula, nlevels = 20, main = "dCopula(<khoudrajiBivCopula>)")

## A Khoudraji-Clayton copula with second shape parameter fixed
kcf <- khoudrajiCopula(copula2 = claytonCopula(6),
                       shapes = fixParam(c(0.4, 0.95), c(FALSE, TRUE)))
kcf. <- setTheta(kcf, c(3, 0.2)) # (change *free* param's only)
validObject(kcf) & validObject(kcf.)

## A "nested" Khoudraji bivariate copula
kgkcf <- khoudrajiCopula(copula1 = gumbelCopula(3),
                         copula2 = kcf,
                         shapes = c(0.7, 0.25))
kgkcf # -> 6 parameters (1 of 6 is 'fixed')
contour(kgkcf, dCopula, nlevels = 20, main = "dCopula(<khoudrajiBivC.(nested)>)")

(Xtras <- copula:::doExtras()) # determine whether examples will be extra (long)
n <- if(Xtras) 300 else 64 # sample size (realistic vs short for example)
u <- rCopula(n, kc)
plot(u)

## For likelihood (or fitting), specify the "free" (non-fixed) param's:
##             C1:  C2c  C2s1 sh1  sh2
loglikCopula(c(3,   6,   0.4, 0.7, 0.25), u = u, copula = kgkcf)

## Fitting takes time (using numerical differentiation) and may be difficult:
## Starting values are required for all parameters
f.IC <- fitCopula(khoudrajiCopula(copula2 = claytonCopula()),
                  start = c(1.1, 0.5, 0.5), data = pobs(u),
                  optim.method = "Nelder-Mead")
summary(f.IC)
confint(f.IC) # (only interesting for reasonable sample size)

## Because of time, don't run these by default :
## Second shape parameter fixed to 0.95
kcf2 <- khoudrajiCopula(copula2 = claytonCopula(),
                        shapes = fixParam(c(NA_real_, 0.95), c(FALSE, TRUE)))
system.time(
  f.ICf <- fitCopula(kcf2, start = c(1.1, 0.5), data = pobs(u),
                     optim.method = "Nelder-Mead")
) # ~ 7-8 sec
confint(f.ICf) # !
coef(f.ICf, SE=TRUE)

## With a different optimization method
system.time(
  f.IC2 <- fitCopula(kcf2, start = c(1.1, 0.5), data = pobs(u),
                     optim.method = "BFGS")
)
printCoefmat(coef(f.IC2, SE=TRUE), digits = 3) # w/o unuseful extra digits

if(Xtras >= 2) { # really S..L..O..W... --------
  ## GOF example
  optim.method <- "Nelder-Mead" # try "BFGS" as well
  gofCopula(kcf2, x = u, start = c(1.1, 0.5), optim.method = optim.method)
  gofCopula(kcf2, x = u, start = c(1.1, 0.5), optim.method = optim.method,
            sim = "mult")
  ## The goodness-of-fit tests should hold their level
  ## but this would need to be tested
  ## Another example under the alternative
  u <- rCopula(n, gumbelCopula(4))
  gofCopula(kcf2, x = u, start = c(1.1, 0.5), optim.method = optim.method)
  gofCopula(kcf2, x = u, start = c(1.1, 0.5), optim.method = optim.method,
            sim = "mult")
}## ------ end { really slow gofC*() } --------

## Higher-dimensional constructions
## A three dimensional Khoudraji-Clayton copula
kcd3 <- khoudrajiCopula(copula1 = indepCopula(dim=3),
                        copula2 = claytonCopula(6, dim=3),
                        shapes = c(0.4, 0.95, 0.95))
n <- if(Xtras) 1000 else 100 # sample size (realistic vs short for example)
u <- rCopula(n, kcd3)
splom2(u)
v <- matrix(runif(15), 5, 3)
dCopula(v, kcd3)

## A four dimensional Khoudraji-Normal copula
knd4 <- khoudrajiCopula(copula1 = indepCopula(dim=4),
                        copula2 = normalCopula(.9, dim=4),
                        shapes = c(0.4, 0.95, 0.95, 0.95))
knd4
stopifnot(class(knd4) == "khoudrajiCopula")
u <- rCopula(n, knd4)
splom2(u)
## TODO :
## dCopula(v, knd4) ## not implemented
"khoudrajiCopula"
and its SubclassesThe virtual class "asymCopula"
of (conceptually) all
asymmetric copulas and its ‘subclass’ "asym2copula"
of
those which are constructed from two copulas.
More specifically, the class "khoudrajiCopula"
and its two
subclasses "khoudrajiBivCopula"
and "khoudrajiExplicitCopula"
represent copulas constructed using Khoudraji's device from two copulas of
the same dimension; see khoudrajiCopula()
for more details.
Objects are typically created via khoudrajiCopula(...)
.
As these classes extend "copula"
, they have all
its slots: dimension
, parameters
, param.names
,
param.lowbnd
, param.upbnd
, and fullname
.
The classes "khoudrajiCopula" and "khoudrajiBivCopula" have extra slots (essentially the two component copulas and the shape parameters; see khoudrajiCopula()).

In addition to these, the class "khoudrajiExplicitCopula" has the slots

exprdist: an expression, ...

derExprs1: an expression of length d, ...

derExprs2: an expression of length d, ...
When possible, methods are defined at the "khoudrajiCopula" class level. The method dCopula, for instance, cannot however be implemented at that level; in addition, its implementation differs for the "khoudrajiBivCopula" and "khoudrajiExplicitCopula" classes.
Genest, C., Ghoudi, K., and Rivest, L.-P. (1998), Discussion of "Understanding relationships using copulas", by Frees, E., and Valdez, E., North American Actuarial Journal 3, 143–149.
Khoudraji, A. (1995), Contributions à l'étude des copules et à la modélisation des valeurs extrêmes bivariées, PhD thesis, Université Laval, Québec, Canada.
Liebscher, E. (2008), Construction of asymmetric multivariate copulas, Journal of Multivariate Analysis 99, 2234–2250.
showClass("khoudrajiCopula")# two subclasses ## all methods applicable to these subclasses: (meths <- sapply(names(getClass("khoudrajiCopula")@subclasses), function(CL) methods(class = CL), simplify=FALSE))
showClass("khoudrajiCopula")# two subclasses ## all methods applicable to these subclasses: (meths <- sapply(names(getClass("khoudrajiCopula")@subclasses), function(CL) methods(class = CL), simplify=FALSE))
Compute log(1 +/- exp(-a)) Numerically Optimally

Compute f(a) = log(1 - exp(-a)) and, respectively, g(x) = log(1 + exp(x)), quickly and numerically accurately.
log1mexp(a, cutoff = log(2))
log1pexp(x, c0 = -37, c1 = 18, c2 = 33.3)
a |
numeric vector of positive values |
x |
numeric vector |
cutoff |
positive number; |
c0 , c1 , c2
|
cutoffs for |
f(a) == log(1 - exp(-a)) == log1p(-exp(-a)) == log(-expm1(-a))
or
g(x) == log(1 + exp(x)) == log1p(exp(x))
computed accurately and quickly
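A small numeric sketch of why these mathematically equivalent formulas differ in floating point (illustrative only; exact printed values depend on the platform's math library):

```r
a <- 1e-20
log(1 - exp(-a)) # -Inf : exp(-a) rounds to 1, catastrophic cancellation
log(-expm1(-a))  # about -46.05, accurate for small a
a <- 50
log1p(-exp(-a))  # accurate for large a
log1mexp(a)      # switches between the two branches at 'cutoff' = log(2)
```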
Martin Mächler (2012).
Accurately Computing log(1 - exp(-|a|));
https://CRAN.R-project.org/package=Rmpfr/vignettes/log1mexp-note.pdf.
a <- 2^seq(-58, 10, length = 256)
fExpr <- expression(
    log(1 - exp(-a)),
    log(-expm1(-a)),
    log1p(-exp(-a)),
    log1mexp(a))
names(fExpr) <- c("DEF", "expm1", "log1p", "F")
str(fa <- do.call(cbind, as.list(fExpr)))
head(fa) # expm1() works here
tail(fa) # log1p() works here

## graphically:
lwd <- 1.5*(5:2); col <- adjustcolor(1:4, 0.4)
op <- par(mfcol=c(1,2), mgp = c(1.25, .6, 0), mar = .1+c(3,2,1,1))
matplot(a,  fa, type = "l", log = "x",  col=col, lwd=lwd)
legend("topleft", fExpr, col=col, lwd=lwd, lty=1:4, bty="n")
# expm1() & log1mexp() work here
matplot(a, -fa, type = "l", log = "xy", col=col, lwd=lwd)
legend("left", paste("-", fExpr), col=col, lwd=lwd, lty=1:4, bty="n")
# log1p() & log1mexp() work here
par(op)

curve(log1pexp, -10, 10, asp=1)
abline(0,1, h=0,v=0, lty=3, col="gray")

## Cutoff c1 for log1pexp() -- not often "needed":
curve(log1p(exp(x)) - log1pexp(x), 16, 20, n=2049)
## need for *some* cutoff:
x <- seq(700, 720, by=2)
cbind(x, log1p(exp(x)), log1pexp(x))

## Cutoff c2 for log1pexp():
curve((x+exp(-x)) - x, 20, 40, n=1025)
curve((x+exp(-x)) - x, 33.1, 33.5, n=1025)
Indemnity payment and allocated loss adjustment expense from an insurance company.
data(loss, package="copula")
A data frame with 1500 observations of the following 4 variables:
loss
a numeric vector of loss amounts, up to the limit.
alae
a numeric vector of the corresponding allocated loss adjustment expense.
limit
a numeric vector of limit (-99 means no limit).
censored
1 means censored (limit reached) and 0 otherwise.
Frees, E. and Valdez, E. (1998). Understanding relationships using copulas. North American Actuarial Journal 2, 1–25.
data(loss)
The marginal copula of a copula C is simply the restriction of C to a subset of the coordinates (directions).
margCopula(copula, keep)
copula |
a copula object (currently of normal, t, or Archimedean family). |
keep |
logical vector (of length dim(copula)) indicating which margins to keep. |
The marginal copula of a copula is needed in practical data analysis when one or more of the components of some multivariate observations is missing. For normal/t/Archimedean copulas, the marginal copulas can be easily obtained. For a general copula, this may not be an easy problem.
The current implementation only supports normal/t/Archimedean
copulas. margCopula is a generic function with methods for the
different copula classes.
The marginal copula of the specified margin(s).
tc <- tCopula(8:2 / 10, dim = 8, dispstr = "toep")
margCopula(tc, c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE))

nc <- normalCopula(.8, dim = 8, dispstr = "ar1")
mnc <- margCopula(nc, c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE, TRUE, FALSE))
mnc7 <- margCopula(nc, (1:8) != 1)
stopifnot(dim(nc) == 8, dim(mnc) == 4, dim(mnc7) == 7)

gc <- gumbelCopula(2, dim = 8)
margCopula(gc, c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE))
sinc(x) computes the sinc function sinc(x) = sin(x)/x for x != 0 and sinc(0) = 1, such that sinc() is continuous, also at x = 0.

A..Z(x, a) computes Zolotarev's function A(x) to the power 1-a.
sinc(x)
A..Z(x, alpha, I.alpha = 1 - alpha)
x |
numeric vector of evaluation points. |
alpha |
parameter in (0,1]. |
I.alpha |
must be == 1 - alpha (possibly computed more accurately). |
For more details about Zolotarev's function, see, for example, Devroye (2009).
A..Z(x, alpha) is Zolotarev's function to the power 1 - alpha,
defined as

A~(x, a) = sin(a*x)^a * sin((1-a)*x)^(1-a) / sin(x),

where a is alpha.
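As an illustration of the sinc() definition above, a naive direct version (a sketch only; the package's sinc() handles small |x| more carefully than this):

```r
## naive sinc: sin(x)/x, continuously extended by 1 at x = 0
sinc0 <- function(x) ifelse(x == 0, 1, sin(x)/x)
sinc0(0)            # 1
sinc0(pi/2) - 2/pi  # essentially 0
```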
Devroye, L. (2009) Random variate generation for exponentially and polynomially tilted stable distributions, ACM Transactions on Modeling and Computer Simulation 19, 18, 1–20.
retstable
internally makes use of these functions.
curve(sinc, -15, 25); abline(h=0, v=0, lty=2)
curve(A..Z(x, 0.25), xlim = c(-4,4),
      main = "Zolotarev's function A(x) ^ 1-alpha")
p2P() creates a matrix from a given vector of parameters. P2p() creates a numeric vector from a given matrix, currently useful for elliptical copulas.

getSigma() returns the d x d symmetric matrix Sigma which is called "Rho" as well, written P (the capital Greek letter rho!), and hence sometimes erroneously pronounced "Pee". Note that getSigma() works for all elliptical copulas and uses p2P() for the "unstructured" case, dispstr = "un".

extremePairs() identifies pairs with the largest (or smallest, or both) entries in a symmetric matrix.
p2P(param, d = floor(1 + sqrt(2*length(param))))
P2p(P)
getSigma(copula)
extremePairs(x, n = 6, method = c("largest", "smallest", "both"),
             use.names = FALSE)
param |
a parameter vector. |
d |
dimension of the resulting (d x d) matrix. |
P |
a (correlation) matrix, as produced by p2P(). |
copula |
an elliptical copula, i.e., an object (extending)
class "ellipCopula". |
x |
a symmetric matrix. |
n |
the number of pairs with smallest (or largest) values to be displayed. |
method |
a character string, one of "largest", "smallest", or "both". |
use.names |
A logical indicating whether the dimnames of x should be used. |
These auxiliary functions are often used when working with elliptical copulas.
p2P: a symmetric matrix with ones on the diagonal and the values of param filled column-wise below the diagonal (which corresponds to row-wise filling above the diagonal).

P2p: vector of column-wise below-diagonal entries of P (equal to the row-wise above-diagonal entries in case of a symmetric matrix).

getSigma: matrix as from p2P() for all cases of elliptical copulas.

extremePairs: a data.frame consisting of three columns (row (index or name), col (index or name), value).
ellipCopula
, tCopula
, normalCopula
.
## display the two simple definitions:
p2P
P2p

param <- (2:7)/10
tC <- tCopula(param, dim = 4, dispstr = "un", df = 3)
## consistency of the three functions :
P <- p2P(param) # (using the default 'd')
stopifnot(identical(param, P2p(P)),
          identical(P, getSigma(tC)))

## Toeplitz case:
(tCt <- tCopula((2:6)/10, dim = 6, disp = "toep"))
(rhoP <- tCt@getRho(tCt))
stopifnot(identical(getSigma(tCt),
                    toeplitz(c(1, rhoP))))

## "AR1" case:
nC.7 <- normalCopula(0.8, dim = 7, dispstr = "ar1")
(Sar1.7 <- getSigma(nC.7))
0.8^(0:(7-1)) # 1 0.8 0.64 0.512 ..
stopifnot(all.equal(Sar1.7, toeplitz(0.8^(0:(7-1)))))
A mixture of m copulas C_1, ..., C_m of dimension d, with weights w_1, ..., w_m (non-negative, summing to one), is itself a d-dimensional copula, with cumulative distribution function

C(u) = sum_{k=1}^m  w_k C_k(u)

and (probability) density function

c(u) = sum_{k=1}^m  w_k c_k(u),

where C_k are the CDFs and c_k are the densities of the m component copulas, k = 1, ..., m.
mixCopula(coplist, w = NULL)
coplist |
a list of copulas, all of the same dimension d. |
w |
numeric vector of length m with non-negative weights summing to one, or NULL (for equal weights). |
It is easy to see that the tail dependencies lambda() and Spearman's rank correlation rho() can be computed as the corresponding mixtures of the individual measures.
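This linearity can be checked numerically. A small sketch (assuming rho() methods are available for the chosen component copulas; not part of the package's documented examples):

```r
library(copula)
c1 <- gumbelCopula(2); c2 <- claytonCopula(3)
w  <- c(0.3, 0.7)
m  <- mixCopula(list(c1, c2), w)
## Spearman's rho of the mixture equals the weighted sum of component rhos:
all.equal(rho(m), w[1]*rho(c1) + w[2]*rho(c2))
```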
khoudrajiCopula
, rotCopula
also create new
copula models from existing ones.
mC <- mixCopula(list(gumbelCopula(2.5, dim=3),
                     claytonCopula(pi, dim=3),
                     tCopula(0.7, dim=3)),
                c(2,2,4)/8)
mC
stopifnot(dim(mC) == 3)
set.seed(17)
uM <- rCopula(600, mC)
splom2(uM, main = "mixCopula( (gumbel, clayton, t-Cop) )")
d.uM <- dCopula(uM, mC)
p.uM <- pCopula(uM, mC)

## mix a Gumbel with a rotated Gumbel (with equal weights 1/2):
mGG <- mixCopula(list(gumbelCopula(2), rotCopula(gumbelCopula(1.5))))
rho(mGG)   # 0.57886
lambda(mGG)# both lower and upper tail dependency

loglikCopula(c(2.5, pi, rho.1=0.7, df = 4, w = c(2,2,4)/8),
             u = uM, copula = mC)

## define (profiled) log-likelihood function of two arguments (df, rho) :
ll.df <- Vectorize(function(df, rho)
    loglikCopula(c(2.5, pi, rho.1=rho, df=df, w = c(2,2,4)/8), uM, mC))
(df. <- 1/rev(seq(1/8, 1/2, length=21)))# in [2, 8] equidistant in 1/. scale
ll. <- ll.df(df., rho = (rh1 <- 0.7))
plot(df., ll., type = "b",
     main = "loglikCopula((.,.,rho = 0.7, df, ..), u, <mixCopula>)")

if(!exists("Xtras")) Xtras <- copula:::doExtras() ; cat("Xtras: ", Xtras,"\n")
if (Xtras) withAutoprint({
  Rhos <- seq(0.55, 0.70, by = 0.01)
  ll.m <- matrix(NA, nrow=length(df.), ncol=length(Rhos))
  for(k in seq_along(Rhos))
    ll.m[,k] <- ll.df(df., rho = Rhos[k])
  tit <- "loglikelihood(<tCop>, true param. for rest)"
  persp(df., Rhos, ll.m, phi=30, theta = 50, ticktype="detailed", main = tit)
  filled.contour(df., Rhos, ll.m, xlab="df", ylab = "rho", main = tit)
})

## fitCopula() example -----------------------------------------------------

## 1) with "fixed" weights :
(mNt <- mixCopula(list(normalCopula(0.95), tCopula(-0.7)),
                  w = c(1, 2) / 3))
set.seed(1452) ; U <- pobs(rCopula(1000, mNt))
(m1 <- mixCopula(list(normalCopula(), tCopula()), w = mNt@w))
getTheta(m1, freeOnly = TRUE, attr = TRUE)
getTheta(m1, named=TRUE)
isFree(m1) # all of them; --> now fix the weights :
fixedParam(m1) <- fx <- c(FALSE, FALSE, FALSE, TRUE, TRUE)
stopifnot(identical(isFree(m1), !fx))
if(Xtras) withAutoprint({ ## time
  system.time( # ~ 16 sec (nb-mm4) :
    fit <- fitCopula(m1, start = c(0, 0, 10), data = U)
  )
  fit
  summary(fit) #-> incl 'Std.Error' (which seems small for rho1 !)
})

## 2) with "free" weights (possible since copula 1.0-0):
(mNt2 <- mixCopula(list(normalCopula(0.9), tCopula(-0.8)),
                   w = c(1, 3) / 4))
set.seed(1959) ; U2 <- pobs(rCopula(2000, mNt2))
if(Xtras) withAutoprint({ ## time
  m2 <- mixCopula(list(normalCopula(), tCopula()), w = mNt@w)
  system.time( # ~ 13.5 sec (lynne) :
    f2 <- fitCopula(m2, start = c(0, 0, 10, c(1/2, 1/2)), data = U2)
  )
  f2
  summary(f2) # NA for 'Std. Error' as did *not* estimate.variance
  summary(f2, orig=FALSE) # default 'orig=TRUE': w-scale; whereas
  coef(f2, orig=FALSE)    # 'orig=FALSE' => shows 'l-scale' instead
})
"mixCopula"
of Copula MixturesThe class "mixCopula"
is the class of all finite mixtures of
copulas.
These are given by (a list of) “arbitrary” copulas, and
their respective
non-negative probability weights.
Objects are typically created by mixCopula()
.
w: Object of class "mixWeights", basically a non-negative numeric vector of length, say m, which sums to one.

cops: Object of class "parClist", a list of (parametrized) copulas, "parCopula".
Class "parCopula"
, directly.
Class "Copula"
, by class "parCopula", distance 2.
dim, signature(x = "mixCopula"): dimension of the copula.

rho, signature(x = "mixCopula"): Spearman's rho of copula x.

lambda, signature(x = "mixCopula"): lower and upper tail dependencies (lambda_L, lambda_U) of the mixture copula.
As the probability weights must sum to one (1), which is part
of the validity (see validObject) of an object of class
"mixWeights", the number of “free” parameters inherently
is (at most) one less than the number of mixture components m.
Because of that, it does not make sense to fix (see
fixParam
or fixedParam<-
)
all but one of the weights: Either all are fixed, or at least two must
be free. Further note, that the definition of free or fixed
parameters, and the meaning of the methods (for mixCopula
) of
getTheta
, setTheta
and
fixedParam<-
will probably change in a next release of
package copula, where it is planned to use a reparametrization
better suited for fitCopula
.
mixCopula
for creation and examples.
showClass("mixCopula")
showClass("mixCopula")
Computes Marshall-Olkin copulas in the bivariate case.
moCopula(param = NA_real_, dim = 2L)
param |
numeric vector of the two parameters, with values in [0,1]. |
dim |
the dimension of the copula. |
moCopula()
is the constructor for objects of class
moCopula
.
Marshall-Olkin copulas are only implemented for dim = 2L
.
The "moCopula"
class, its mathematical
definition, etc.
alpha <- c(0.2, 0.7)
MO <- moCopula(alpha)
tau(MO) # 0.18
lambda(MO)
stopifnot(all.equal(lambda(MO), c(lower = 0, upper = 0.2)))
wireframe2  (MO, FUN = pCopula) # if you look carefully, you can see the kink
contourplot2(MO, FUN = pCopula)
set.seed(271)
plot(rCopula(1000, MO))
The Marshall-Olkin copula class.
The (2-dimensional) "MO" copula with parameter alpha = (alpha1, alpha2) is (i.e., its CDF is)

C(u1, u2) = min( u1^(1-alpha1) * u2,  u1 * u2^(1-alpha2) ).

Consequently, the density is undefined on a curve (in [0,1]^2),
namely for the points (u1, u2) where the two
expressions in the above min(., .)
are equal, u1^(1-alpha1) * u2 = u1 * u2^(1-alpha2). It is easy to see that this is equivalent to u1^alpha1 = u2^alpha2.
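The CDF formula can be checked directly against pCopula(); a small sketch (illustrative, not part of the package's documented examples; both expressions should agree):

```r
library(copula)
a <- c(0.2, 0.7); u <- c(0.3, 0.6)
pCopula(u, moCopula(a))                             # CDF from the package
min(u[1]^(1 - a[1]) * u[2], u[1] * u[2]^(1 - a[2])) # direct evaluation
```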
Objects can be created by new("moCopula", ...)
but are
typically produced by moCopula(...)
.
dimension: Numeric (scalar), the dimension of the copula.

exprdist: a length-two expression with expressions for the CDF and PDF of the copula.

parameters: numeric vector of the two parameter values in [0,1].

param.names: "character" vector of length two.

param.lowbnd: numeric vector of two values in [0,1].

param.upbnd: numeric vector of two values in [0,1].

fullname: (deprecated; do not use!)
Typical copula methods work, see "moCopula"
and use methods(class = "moCopula")
.
Class "moCopula"
extends class "copula"
directly.
Nelsen, R. B. (2006), An introduction to Copulas, Springer, New York.
moCopula
for constructing them;
copula-class
.
moCopula()@exprdist[["cdf"]] # a simple definition
methods(class = "moCopula")
contourplot2(moCopula(c(.1, .8)), pCopula, main = "moCopula((0.1, 0.8))")
Xmo <- rCopula(5000, moCopula(c(.2, .5)))
try( # gives an error, as there is no density (!):
  loglikCopula(c(.1, .2), Xmo, moCopula())
)
plot(moCopula(c(.9, .2)), n = 10000, xaxs="i", yaxs="i",
     # opaque color (for "density effect"):
     pch = 16, col = adjustcolor("black", 0.3))
Analog of the independence test based on the empirical copula process
proposed by Christian Genest and Bruno Rémillard (see
indepTest
) for random vectors. The main
difference comes from the fact that critical values and p-values are
obtained through the bootstrap/permutation methodology, since, here,
test statistics are not distribution-free.
multIndepTest(x, d, m = length(d), N = 1000, alpha = 0.05, verbose = interactive())
x |
data frame ( |
d |
dimensions of the random vectors whose realizations are given
in |
m |
maximum cardinality of the subsets of random vectors for
which a test statistic is to be computed. It makes sense to consider
|
N |
number of bootstrap/permutation samples. |
alpha |
significance level used in the computation of the critical values for the test statistics. |
verbose |
a logical specifying if progress
should be displayed via |
See the references below for more details, especially the last one.
The function "multIndepTest"
returns an object of class
"indepTest"
whose attributes are: subsets
,
statistics
, critical.values
, pvalues
,
fisher.pvalue
(a p-value resulting from a combination à la
Fisher of the subset statistic p-values), tippett.pvalue
(a p-value
resulting from a combination à la Tippett of the subset
statistic p-values), alpha
(global significance level of the
test), beta
(1 - beta
is the significance level per
statistic), global.statistic
(value of the global Cramér-von
Mises statistic derived directly from the independence empirical
copula process - see the statistic I_n
in the last reference) and
global.statistic.pvalue
(corresponding p-value).
The former argument print.every
is deprecated and not
supported anymore; use verbose
instead.
Deheuvels, P. (1979). La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance, Acad. Roy. Belg. Bull. Cl. Sci., 5th Ser. 65, 274–292.
Deheuvels, P. (1981), A non parametric test for independence, Publ. Inst. Statist. Univ. Paris. 26, 29–50.
Genest, C. and Rémillard, B. (2004), Tests of independence and randomness based on the empirical copula process. Test 13, 335–369.
Genest, C., Quessy, J.-F., and Rémillard, B. (2006). Local efficiency of a Cramer-von Mises test of independence, Journal of Multivariate Analysis 97, 274–294.
Genest, C., Quessy, J.-F., and Rémillard, B. (2007), Asymptotic local efficiency of Cramér-von Mises tests for multivariate independence. The Annals of Statistics 35, 166–191.
Kojadinovic, I. and Holmes, M. (2009), Tests of independence among continuous random vectors based on Cramér-von Mises functionals of the empirical copula process. Journal of Multivariate Analysis 100, 1137–1154.
indepTest
,
serialIndepTest
,
multSerialIndepTest
,
dependogram
.
## Consider the following example taken from
## Kojadinovic and Holmes (2008):

n <- 100

## Generate data
y <- matrix(rnorm(6*n), n, 6)
y[,1] <- y[,2]/2 + sqrt(3)/2*y[,1]
y[,3] <- y[,4]/2 + sqrt(3)/2*y[,3]
y[,5] <- y[,6]/2 + sqrt(3)/2*y[,5]

nc <- normalCopula(0.3, dim=3)
x <- cbind(y, rCopula(n, nc), rCopula(n, nc))
x[,1] <- abs(x[,1]) * sign(x[,3] * x[,5])
x[,2] <- abs(x[,2]) * sign(x[,3] * x[,5])
x[,7] <- x[,7] + x[,10]
x[,8] <- x[,8] + x[,11]
x[,9] <- x[,9] + x[,12]

## Dimensions of the random vectors
d <- c(2,2,2,3,3)

## Run the test
test <- multIndepTest(x, d)
test

## Display the dependogram
dependogram(test, print=TRUE)
Analog of the serial independence test based on the empirical
copula process proposed by Christian Genest and Bruno Rémillard (see
serialIndepTest
) for multivariate time
series. The main difference comes from the fact that critical values
and p-values are obtained through the bootstrap/permutation
methodology, since, here, test statistics are not distribution-free.
multSerialIndepTest(x, lag.max, m = lag.max+1, N = 1000, alpha = 0.05, verbose = interactive())
x |
data frame or matrix of multivariate continuous time series whose serial independence is to be tested. |
lag.max |
maximum lag. |
m |
maximum cardinality of the subsets of 'lags' for
which a test statistic is to be computed. It makes sense to consider
|
N |
number of bootstrap/permutation samples. |
alpha |
significance level used in the computation of the critical values for the test statistics. |
verbose |
a logical specifying if progress
should be displayed via |
See the references below for more details, especially the last one.
The former argument print.every
is deprecated and not
supported anymore; use verbose
instead.
The function "multSerialIndepTest"
returns an object of class
"indepTest"
whose attributes are: subsets
,
statistics
, critical.values
, pvalues
,
fisher.pvalue
(a p-value resulting from a combination à la
Fisher of the subset statistic p-values), tippett.pvalue
(a p-value
resulting from a combination à la Tippett of the subset
statistic p-values), alpha
(global significance level of the
test), beta
(1 - beta
is the significance level per statistic),
global.statistic
(value of the global Cramér-von Mises
statistic derived directly from
the independence empirical copula process - see the statistic I_n in the last
reference) and global.statistic.pvalue
(corresponding p-value).
Deheuvels, P. (1979) La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance. Acad. Roy. Belg. Bull. Cl. Sci., 5th Ser. 65, 274–292.
Deheuvels, P. (1981) A non parametric test for independence. Publ. Inst. Statist. Univ. Paris 26, 29–50.
Genest, C. and Rémillard, B. (2004) Tests of independence and randomness based on the empirical copula process. Test 13, 335–369.
Ghoudi, K., Kulperger, R., and Rémillard, B. (2001) A nonparametric test of serial independence for times series and residuals. Journal of Multivariate Analysis 79, 191–218.
Kojadinovic, I. and Yan, J. (2011) Tests of multivariate serial independence based on a Möbius decomposition of the independence empirical copula process. Annals of the Institute of Statistical Mathematics 63, 347–373.
serialIndepTest, indepTest, multIndepTest, dependogram
## A multivariate time series {minimal example for demo purposes}
d <- 2
n <- 100 # sample size *and* "burn-in" size
param <- 0.25
A <- matrix(param, d, d) # the bivariate AR(1)-matrix
set.seed(17)
ar <- matrix(rnorm(2*n * d), 2*n, d) # used as innovations
for (i in 2:(2*n)) ar[i,] <- A %*% ar[i-1,] + ar[i,]
## drop burn-in:
x <- ar[(n+1):(2*n),]
## Run the test
test <- multSerialIndepTest(x, 3)
test
## Display the dependogram
dependogram(test, print=TRUE)
Density, distribution function, and random generator for a multivariate
distribution via copula and parametric margins.
For likelihood and fitting these distributions to data, see fitMvdc.
mvdc(copula, margins, paramMargins, marginsIdentical = FALSE,
     check = TRUE, fixupNames = TRUE)
dMvdc(x, mvdc, log=FALSE)
pMvdc(x, mvdc)
rMvdc(n, mvdc)
copula | an object of class "copula". |
margins | a character vector specifying the marginal distributions. |
paramMargins | a list whose components are lists of named components giving the parameter values of the marginal distributions; see Details. |
marginsIdentical | logical variable restricting the marginal distributions to be identical. |
check | logical indicating to apply quick checks about existence of the margins' distribution functions. |
fixupNames | logical indicating if the parameters of the margins should get automatic names. |
mvdc | an object of class "mvdc". |
x | a numeric vector of length the copula dimension, say d, or a matrix with that many columns. |
log | logical indicating if the log density should be returned. |
n | number of observations to be generated. |
The character strings in argument margins are used to construct
density, distribution, and quantile function names. For example,
"norm" can be used to specify a marginal normal distribution, because
dnorm, pnorm, and qnorm are all available.
A user-defined distribution, for example "fancy", can be used as
margin provided that dfancy, pfancy, and qfancy are available.
Each component of argument paramMargins is a list with named
components which are used to specify the parameters of the marginal
distributions. For example, the list
paramMargins = list(list(mean = 0, sd = 2), list(rate = 2))
specifies that the first margin is normal with mean 0 and standard deviation 2, and the second margin is exponential with rate 2.
mvdc() constructs an object of class "mvdc".
dMvdc() gives the density, pMvdc() gives the cumulative
distribution function, and rMvdc() generates random variates.
mvdc(), fitMvdc(), etc., are only for parametric margins.
If you do not want to model all margins parametrically, use the
standard copula approach: transform the data by their
empirical margins via pobs() and model the copula
alone, e.g., using fitCopula(), i.e., conceptually, using
fitCopula(.., pobs(x))
ellipCopula, archmCopula; the classes mvdc and copula.
## construct a bivariate distribution whose marginals
## are normal and exponential respectively, coupled
## together via a normal copula
mv.NE <- mvdc(normalCopula(0.75), c("norm", "exp"),
              list(list(mean = 0, sd = 2), list(rate = 2)))
dim(mv.NE)
mv.NE # using its print() / show() method

persp  (mv.NE, dMvdc, xlim = c(-4, 4), ylim=c(0, 2), main = "dMvdc(mv.NE)")
persp  (mv.NE, pMvdc, xlim = c(-4, 4), ylim=c(0, 2), main = "pMvdc(mv.NE)")
contour(mv.NE, dMvdc, xlim = c(-4, 4), ylim=c(0, 2))

# Generate (bivariate) random numbers from that, and visualize
x.samp <- rMvdc(250, mv.NE)
plot(x.samp)
summary(fx <- dMvdc(x.samp, mv.NE))
summary(Fx <- pMvdc(x.samp, mv.NE))
op <- par(mfcol=c(1,2))
pp <- persp(mv.NE, pMvdc, xlim = c(-5,5), ylim=c(0,2),
            main = "pMvdc(mv.NE)", ticktype="detail")
px <- copula:::perspMvdc(x.samp, FUN = F.n, xlim = c(-5, 5), ylim = c(0, 2),
                         main = "F.n(x.samp)", ticktype="detail")
par(op)
all.equal(px, pp) # about 5% difference
"mvdc" is a class representing multivariate distributions
constructed via copula and margins, using Sklar's theorem.
Objects are typically created by mvdc(), or
can be created by calls of the form new("mvdc", ...).
copula: an object of class "copula", specifying the copula.
margins: an object of class "character", specifying the marginal distributions.
paramMargins: an object of class "list", each component of which is a list of named components giving the parameter values of the marginal distributions. See mvdc.
marginsIdentical: an object of class "logical" that, if TRUE, restricts the marginal distributions to be identical; default is FALSE.
signature(x = "mvdc"): ...
signature(x = "mvdc"): the dimension of the distribution; this is the same as dim(x@copula).
signature(x = "mvdc"): ...
signature(object = "mvdc"): quite compactly displays the content of the "mvdc" object.
mvdc, also for examples; for fitting, fitMvdc.
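The slot layout above can be mimicked in plain R with the methods package. This is an illustrative sketch only: "mvdcSketch" is a made-up class name, and the real "mvdc" class in the package additionally validates its contents.

```r
library(methods)
## Re-create the four slots of "mvdc" in a toy class; the 'copula' slot is
## declared "ANY" here since class "copula" lives in the package itself.
setClass("mvdcSketch",
         slots = c(copula = "ANY", margins = "character",
                   paramMargins = "list", marginsIdentical = "logical"))
obj <- new("mvdcSketch", copula = NULL,
           margins = c("norm", "exp"),
           paramMargins = list(list(mean = 0, sd = 2), list(rate = 2)),
           marginsIdentical = FALSE)
obj@margins              # slot access via '@', as in dim(x@copula)
length(obj@paramMargins) # one parameter list per margin
```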
This function provides measurements of user run times for the frailty variables involved in a nested Archimedean copula.
nacFrail.time(n, family, taus, digits = 3, verbose = FALSE)
n | integer specifying the sample size to be used for the random variates. |
family | the Archimedean family (class "acopula"). |
taus | numeric vector of Kendall's tau values. |
digits | number of digits for the output. |
verbose | logical indicating if progress should be displayed. |
A matrix of user run time measurements in milliseconds
(1000*system.time(.)[1]) with m = length(taus) rows.
The first column contains the run times for generating the
V0's. For the submatrix that remains if the first column is
removed, row i (for tau_i) contains the run times
for the V01's for that particular theta0 and all the
admissible theta1's.
The class acopula and our predefined "acopula"
family objects in acopula-families. For some timings on a
standard notebook, see demo(timings) (or the file
‘timings.R’ in the demo folder).
## takes about 7 seconds:
nacFrail.time(10000, "Gumbel", taus = c(0.05, (1:9)/10, 0.95))
## a much smaller set, e.g., for testing in R CMD check:
system.time(print(nacFrail.time(1000, "Gumbel", taus = c(0.5,1,6,9)/10)))
The class of nested Archimedean copulas, "nacopula",
and its specification "outer_nacopula"
differ only by the validation method, which is stricter for the
outer(most) copula (the root copula).
Objects can be created by calls of the form new("nacopula", ...),
which is only intended for experts. Root copulas are typically constructed by
onacopula(.).
copula: an object of class "acopula", denoting the top-level Archimedean copula of the nested Archimedean copula, that is, the root copula.
comp: an integer vector (possibly of length 0) of indices of components in 1:d which are not nested Archimedean copulas. Here, d denotes the dimension of the random vectors under consideration; see the dim() method below.
childCops: a (possibly empty) list of further nested Archimedean copulas (child copulas), that is, objects of class "nacopula". The "nacopula" objects therefore contain "acopula" objects as special cases.
signature(x = "nacopula"): returns the dimension d of the random vector following x.
signature("nacopula"): calls printNacopula for a compact overview of the nested Archimedean copula under consideration.
onacopula for building (outer) "nacopula" objects. For the
class definition of the copula component, see acopula.
## nacopula and outer_nacopula class information
showClass("nacopula")
showClass("outer_nacopula")

## Construct a three-dimensional nested Frank copula with parameters
## chosen such that the Kendall's tau of the respective bivariate margins
## are 0.2 and 0.5.
theta0 <- copFrank@iTau(.2)
theta1 <- copFrank@iTau(.5)
C3 <- onacopula("F", C(theta0, 1, C(theta1, c(2,3))))
C3 # displaying it, using show(C3); see help(printNacopula)

## What is the dimension of this copula?
dim(C3)
## What are the indices of direct components of the root copula?
C3@comp
## What does the list of child nested Archimedean copulas look like?
C3@childCops # only one child for this copula, components 2, 3
Return a matrix of pairwise thetas for a nested Archimedean
copula (nacopula) of dimension d.
nacPairthetas(x)
x | an (outer) nacopula (with thetas set). |
a (d, d) matrix of thetas, say T, where T[j,k] is the parameter
theta of the bivariate Archimedean copula for the component pair
(j, k); the diagonal entries are NA (cf. the example).
the class nacopula (with its dim method).
## test with
options(width=97)
(mm <- rnacModel("Gumbel", d=15, pr.comp = 0.25, order="random"))
stopifnot(isSymmetric(PT <- nacPairthetas(mm)))
round(PT, 2)
## The tau's -- "Kendall's correlation matrix":
round(copGumbel@tau(PT), 2)

## do this several times:
m1 <- rnacModel("Gumbel", d=15, pr.comp = 1/8, order="seq")
stopifnot(isSymmetric(PT <- nacPairthetas(m1)))
m1; PT

m100 <- rnacModel("Gumbel", d= 100, pr.comp = 1/16, order="seq")
system.time(PT <- nacPairthetas(m100)) # how slow {non-optimal algorithm}?
##-- very fast, still!
stopifnot(isSymmetric(PT))
m100

## image(PT) # not ok -- want one color per theta
nt <- length(th0 <- unique(sort(PT[!is.na(PT)])))
th1 <- c(th0[1]/2, th0, 1.25*th0[nt])
ths <- (th1[-1]+th1[-(nt+2)])/2
image(log(PT), breaks = ths, col = heat.colors(nt))

## Nicer and easier:
require(Matrix)
image(as(log(PT),"Matrix"), main = "log( nacPairthetas( m100 ))",
      useAbs=FALSE, useRaster=TRUE, border=NA)
Compute the nesting depth of a nested Archimedean copula, which is the length of the longest branch in the tree representation of the copula, and hence at least one.
nesdepth(x)
x | object of class "nacopula". |
an integer, the nesting depth of the nested Archimedean copula.
An (unnested) Archimedean copula has depth 1.
dim of nacopulas.
F2 <- onacopula("F", C(1.9, 1, C(4.5, c(2,3))))
F2
F3 <- onacopula("Clayton", C(1.5, 3:1, C(2.5, 4:5, C(15, 9:6))))
nesdepth(F2) # 2
nesdepth(F3) # 3
Constructing (outer) nested Archimedean copulas (class
outer_nacopula) is most conveniently done via
onacopula(), using a nested notation.
Slightly less conveniently, but with the option to pass a list
structure, onacopulaL() can be used, typically from inside
another function programmatically.
onacopula (family, nacStructure)
onacopulaL(family, nacList)
nac2list(x)
family | either a character string specifying the Archimedean family or an object of class "acopula". |
nacStructure | a “formula” of the form C(theta0, comp, C(theta1, ...)), describing the nesting structure; see the examples. |
nacList | a list of the form list(theta, comp, list(...)), mirroring the nested structure; see the examples. |
x | an "outer_nacopula" object. |
onacopula[L](): an outer nested Archimedean copula object, that is, of class "outer_nacopula".
nac2list(): a list exactly like the nacList argument to onacopulaL().
Those of the Archimedean families, for example, copGumbel.
The class definitions "nacopula", "outer_nacopula", and "acopula".
## Construct a ten-dimensional Joe copula with parameter such that
## Kendall's tau equals 0.5
theta <- copJoe@iTau(0.5)
C10 <- onacopula("J", C(theta, 1:10))
## Equivalent construction with onacopulaL():
C10. <- onacopulaL("J", list(theta, 1:10))
stopifnot(identical(C10, C10.),
          identical(nac2list(C10), list(theta, 1:10)))

## Construct a three-dimensional nested Gumbel copula with parameters
## such that Kendall's tau of the respective bivariate margins are 0.2
## and 0.5.
theta0 <- copGumbel@iTau(.2)
theta1 <- copGumbel@iTau(.5)
C3 <- onacopula("G", C(theta0, 1, C(theta1, c(2,3))))
## Equivalent construction with onacopulaL():
str(NAlis <- list(theta0, 1, list(list(theta1, c(2,3)))))
C3. <- onacopulaL("Gumbel", NAlis)
stopifnot(identical(C3, C3.))

## An exercise: assume you got the copula specs as character string:
na3spec <- "C(theta0, 1, C(theta1, c(2,3)))"
na3call <- str2lang(na3spec) # == parse(text = na3spec)[[1]]
C3.s <- onacopula("Gumbel", na3call)
stopifnot(identical(C3, C3.s))

## Good error message if the component ("coordinate") indices are wrong
## or do not match:
err <- try(onacopula("G", C(theta0, 2, C(theta1, c(3,2)))))

## Compute the probability of falling in [0,.01]^3 for this copula
pCopula(rep(.01,3), C3)
## Compute the probability of falling in the cube [.99,1]^3
prob(C3, rep(.99, 3), rep(1, 3))

## Construct a 6-dimensional, partially nested Gumbel copula of the form
## C_0(C_1(u_1, u_2), C_2(u_3, u_4), C_3(u_5, u_6))
theta <- 2:5
copG <- onacopulaL("Gumbel",
                   list(theta[1], NULL,
                        list(list(theta[2], c(1,2)),
                             list(theta[3], c(3,4)),
                             list(theta[4], c(5,6)))))
set.seed(1)
U <- rCopula(5000, copG)
pairs(U, pch=".", gap=0,
      labels = as.expression(lapply(1:dim(copG),
                                    function(j) bquote(italic(U[.(j)])))))
Build a new Archimedean copula by applying the outer power transformation to a given Archimedean copula.
opower(copbase, thetabase)
opower(copbase, thetabase)
copbase | a "base" copula, that is, a copula of class "acopula". |
thetabase | the fixed parameter of copbase. |
a new acopula object, namely the outer power copula
based on the provided copula family copbase with fixed
parameter thetabase. The transform introduces a parameter
theta, so one obtains a parametric Archimedean family object as
return value.
The environment of all function slots contains objects
cOP (which is the outer power copula itself), copbase,
and thetabase.
Hofert, M. (2010), Sampling Nested Archimedean Copulas with Applications to CDO Pricing, Suedwestdeutscher Verlag fuer Hochschulschriften AG & Co. KG.
The class acopula and our predefined "acopula" family objects in acopula-families.
## Construct an outer power Clayton copula with parameter thetabase such
## that Kendall's tau equals 0.2
thetabase <- copClayton@iTau(0.2)
opC <- opower(copClayton, thetabase) # "acopula" obj. (unspecified theta)

## Construct a 3d nested Archimedean copula based on opC, that is, a nested
## outer power Clayton copula. The parameters theta are chosen such that
## Kendall's tau equals 0.4 and 0.6 for the outer and inner sector,
## respectively.
theta0 <- opC@iTau(0.4)
theta1 <- opC@iTau(0.6)
opC3d <- onacopulaL(opC, list(theta0, 1, list(list(theta1, 2:3))))
## or opC3d <- onacopula(opC, C(theta0, 1, C(theta1, c(2,3))))

## Compute the corresponding lower and upper tail-dependence coefficients
rbind(theta0 = c(lambdaL = opC@lambdaL(theta0),
                 lambdaU = opC@lambdaU(theta0)), # => opC3d has upper tail dependence
      theta1 = c(lambdaL = opC@lambdaL(theta1),
                 lambdaU = opC@lambdaU(theta1))) # => opC3d has upper tail dependence

## Sample opC3d
n <- 1000
U <- rnacopula(n, opC3d)

## Plot the generated vectors of random variates of the nested outer
## power Clayton copula.
splom2(U)

## Construct such random variates "by hand"
## (1) draw V0 and V01
V0  <- opC@V0 (n, theta0)
V01 <- opC@V01(V0, theta0, theta1)
## (2) build U
U <- cbind(opC@psi(rexp(n)/V0,  theta0),
           opC@psi(rexp(n)/V01, theta1),
           opC@psi(rexp(n)/V01, theta1))
A version of the graphics package's pairs(),
particularly useful for visualizing dependence in multivariate
(copula) data.
pairs2(x, labels = NULL, labels.null.lab = "U", ...)
x | a numeric matrix (or data.frame) of data. |
labels | the variable names, typically unspecified. |
labels.null.lab | the character string used to construct default labels ("U1", "U2", ...) if labels is NULL. |
... | further arguments, passed to pairs(). |
splom2() for a similar function based on splom().
## Create a 100 x 7 matrix of random variates from a t distribution
## with four degrees of freedom and plot the generated data
U <- matrix(rt(700, df = 4), ncol = 7)
pairs2(U, pch = ".")
pairsColList() creates a list containing
information about colors for a given matrix of (approximate aka
“pseudo”) p-values. These colors are used in
pairsRosenblatt() for visualizing a graphical
goodness-of-fit test based on pairwise Rosenblatt transformed data.
pairsRosenblatt(cu.u, pvalueMat = pviTest(pairwiseIndepTest(cu.u)),
                method = c("scatter", "QQchisq", "QQgamma",
                           "PPchisq", "PPgamma", "none"),
                g1, g2, col = "B&W.contrast",
                colList = pairsColList(pvalueMat, col=col),
                main = NULL,
                sub = gpviString(pvalueMat, name = "pp-values"),
                panel = NULL, do.qqline = TRUE,
                keyOpt = list(title="pp-value", rug.at=pvalueMat), ...)

pairsColList(P, pdiv = c(1e-04, 0.001, 0.01, 0.05, 0.1, 0.5),
             signif.P = 0.05, pmin0 = 1e-05, bucketCols = NULL,
             fgColMat = NULL, bgColMat = NULL, col = "B&W.contrast",
             BWcutoff = 170,
             bg.col = c("ETHCL", "zurich", "zurich.by.fog",
                        "baby", "heat", "greenish"),
             bg.ncol.gap = floor(length(pdiv)/3),
             bg.col.bottom = NULL, bg.col.top = NULL, ...)
cu.u | an array of pairwise Rosenblatt-transformed observations, as produced by pairwiseCcop() (see the examples). |
pvalueMat | matrix of (pseudo) p-values; by default computed from cu.u via pviTest(pairwiseIndepTest(cu.u)). |
method | the plot method; note that these methods merely set g1, g2, and panel accordingly. |
g1 | |
g2 | |
colList | list of color information, as returned by pairsColList(). |
main | title. |
sub | sub-title with a smart default containing a global (p)p-value. |
panel | a panel function, as for pairs(). |
do.qqline | logical; if TRUE, a Q-Q line is drawn for the Q-Q methods. |
keyOpt | argument passed to .pairsCond(), a list of options for the color key. |
... | additional arguments passed to .pairsCond(). |
P | matrix of (pseudo) p-values. |
pdiv | numeric vector of strictly increasing p-values in (0,1) that determine the “buckets” for the background colors of bgColMat. |
signif.P | significance level (must be an element of pdiv). |
pmin0 | a small positive number (default 1e-5). |
bucketCols | |
fgColMat | matrix of foreground colors. |
bgColMat | matrix of background colors. |
col | foreground color (defaults to "B&W.contrast", which switches black/white according to BWcutoff). |
BWcutoff | number in (0, 255) for switching the foreground color if col = "B&W.contrast". |
bg.col | color scheme for the background colors. |
bg.ncol.gap | number of colors left out as "gap" for color buckets below/above signif.P. |
bg.col.bottom | |
bg.col.top | |
Extra arguments of pairsRosenblatt() are passed to
.pairsCond(); these notably may include key, true
by default, which draws a color key for the colors used as panel
background encoding (pseudo) p-values.
pairsColList() is basically an auxiliary function to specify
the colors used in the graphical goodness-of-fit test as
conducted by pairsRosenblatt(). The latter is described in
detail in Hofert and Mächler (2014).
See also demo(gof_graph).
pairsRosenblatt(): invisibly, the result of .pairsCond().
pairsColList(): a named list with components containing the color information described above.
Hofert, M. and Mächler, M. (2014) A graphical goodness-of-fit test for dependence models in higher dimensions; Journal of Computational and Graphical Statistics, 23(3), 700–716. doi:10.1080/10618600.2013.812518
pairwiseCcop() for the tools behind the scenes.
demo(gof_graph) for examples.
## 2-dim example {d = 2} ===============
## "t" copula with 2.2 degrees of freedom; and (pairwise) tau = 0.5
nu <- 2.2 # degrees of freedom
## Define the multivariate distribution
tCop <- ellipCopula("t", param=iTau(ellipCopula("t", df=nu), tau = 0.5),
                    dim=2, df=nu)
set.seed(19)
X <- qexp(rCopula(n = 400, tCop))

## H0 (wrongly): a Normal copula, with correct tau
copH0 <- ellipCopula("normal", param=iTau(ellipCopula("normal"), tau = 0.5))

## create array of pairwise copH0-transformed data columns
cu.u <- pairwiseCcop(pobs(X), copula = copH0)
## compute pairwise matrix of p-values and corresponding colors
pwIT <- pairwiseIndepTest(cu.u, N=200) # (d,d)-matrix of test results
round(pmat <- pviTest(pwIT), 3) # pick out p-values
## .286 and .077
pairsRosenblatt(cu.u, pvalueMat = pmat)

### A shortened version of demo(gof_graph) -------------------------------
N <- 32 ## too small, for "testing"; realistically, use a larger one:
if(FALSE) N <- 100

## 5d Gumbel copula ##########
n <- 250 # sample size
d <- 5 # dimension
family <- "Gumbel" # copula family
tau <- 0.5
set.seed(17)
## define and sample the copula (= H0 copula), build pseudo-observations
cop <- getAcop(family)
th <- cop@iTau(tau) # correct parameter value
copH0 <- onacopulaL(family, list(th, 1:d)) # define H0 copula
U. <- pobs(rCopula(n, cop=copH0))

## create array of pairwise copH0-transformed data columns
cu.u <- pairwiseCcop(U., copula = copH0)
## compute pairwise matrix of p-values and corresponding colors
pwIT <- pairwiseIndepTest(cu.u, N=N, verbose=interactive()) # (d,d)-matrix
round(pmat <- pviTest(pwIT), 3) # pick out p-values
## Here (with seed=1): no significant ones, smallest = 0.0603

## Plots ---------------------
## plain (too large plot symbols here)
pairsRosenblatt(cu.u, pvalueMat=pmat, pch=".")
## with title, no subtitle
pwRoto <- "Pairwise Rosenblatt transformed observations"
pairsRosenblatt(cu.u, pvalueMat=pmat, pch=".", main=pwRoto, sub=NULL)
## two-line title including expressions, and centered
title <- list(paste(pwRoto, "to test"),
              substitute(italic(H[0]:C~~bold("is Gumbel with"~~tau==tau.)),
                         list(tau.=tau)))
line.main <- c(4, 1.4)
pairsRosenblatt(cu.u, pvalueMat=pmat, pch=".",
                main=title, line.main=line.main, main.centered=TRUE)
## Q-Q plots -- can, in general, better detect outliers
pairsRosenblatt(cu.u, pvalueMat=pmat, method="QQchisq", cex=0.2)
Methods for the function persp to draw perspective plots (of
two-dimensional distributions from package copula).
## S4 method for signature 'Copula'
persp(x, FUN, n.grid = 26, delta = 0,
      xlab = "u1", ylab = "u2",
      zlab = deparse(substitute(FUN))[1], zlim = NULL,
      theta = -30, phi = 30, expand = 0.618,
      ticktype = "detail", ...)

## S4 method for signature 'mvdc'
persp(x, FUN, xlim, ylim, n.grid = 26,
      xlab = "x1", ylab = "x2",
      zlab = deparse(substitute(FUN))[1],
      theta = -30, phi = 30, expand = 0.618,
      ticktype = "detail", ...)
x | a "Copula" or "mvdc" object. |
FUN | the function to be plotted, e.g., dCopula, pCopula, dMvdc, or pMvdc. |
n.grid | the number of grid points used in each dimension. This can be a vector of length two, giving the number of grid points used in x- and y-direction, respectively; the function FUN is evaluated on the resulting grid. |
delta | a small number in [0, 1/2). |
xlim, ylim | the x- and y-limits, numeric vectors of length 2 (for the "mvdc" method). |
xlab, ylab, zlab, zlim, theta, phi, expand, ticktype, ... | arguments for (the default method of) persp(). |
invisibly, a list with the following components:
x, y | the numeric vectors of grid coordinates, as passed to persp.default. |
z | the matrix of FUN values evaluated on the grid. |
persp | the viewing transformation matrix, as returned by persp.default. |
Perspective plots for both "copula" and "mvdc" objects; see x in the
Arguments section.
The contour-methods for drawing contour lines of the same functions.
persp(claytonCopula(2), pCopula, main = "CDF of claytonCopula(2)")
persp( frankCopula(1.5), dCopula, main = "Density of frankCopula(1.5)")
persp( frankCopula(1.5), dCopula, main = "c_[frank(1.5)](.)", zlim = c(0,2))

## Examples with negative tau:
(th1 <- iTau(amhCopula(), -0.1))
persp(amhCopula(th1), dCopula)
persp(amhCopula(th1), pCopula, ticktype = "simple") # no axis ticks
persp( frankCopula(iTau( frankCopula(), -0.1)), dCopula)
persp(claytonCopula(iTau(claytonCopula(), -0.1)), dCopula)

##
cCop.2 <- function(u, copula, ...) cCopula(u, copula, ...)[,2]
persp( amhCopula(iTau( amhCopula(), -0.1)), cCop.2, main="cCop(AMH...)[,2]")
persp( frankCopula(iTau( frankCopula(), -0.1)), cCop.2, main="cCop(frankC)[,2]")
## and Clayton also looks "the same" ...

## MVDC Examples ------------------------------------
mvNN <- mvdc(gumbelCopula(3), c("norm", "norm"),
             list(list(mean = 0, sd = 1), list(mean = 1)))
persp(mvNN, dMvdc, xlim=c(-2, 2), ylim=c(-1, 3), main = "Density")
persp(mvNN, pMvdc, xlim=c(-2, 2), ylim=c(-1, 3), main = "Cumulative Distr.")
Constructs a Plackett copula (class "plackettCopula") with its corresponding parameter.
plackettCopula(param)
param |
a number (numeric vector of length one) specifying the non-negative parameter. |
A Plackett copula object of class "plackettCopula"
.
Plackett, R. L. (1965). A Class of Bivariate Distributions. Journal of the American Statistical Association 60, 516–522.
The "plackettCopula"
class;
ellipCopula
, archmCopula
.
plackett.cop <- plackettCopula(param=2) lambda(plackett.cop) # 0 0 : no tail dependencies ## For really large param values (here, 1e20 and Inf are equivalent): set.seed(1); Xe20 <- rCopula(5000, plackettCopula(1e20)) set.seed(1); Xinf <- rCopula(5000, plackettCopula(Inf)) stopifnot(all.equal(Xe20, Xinf))
The Plackett copula class.
Objects can be created by new("plackettCopula", ...)
but are
typically produced by plackettCopula(alpha)
.
dimension
:Numeric (scalar), the dimension of the copula.
exprdist
:a length two expression
with
expressions for the CDF and PDF of the copula.
parameters
:a number (numeric vector of length one) specifying the non-negative parameter.
param.names
:the "character"
string "alpha"
.
param.lowbnd
:the number 0
.
param.upbnd
:the number Inf
.
fullname
:(deprecated; do not use!)
Typical copula methods work, see "plackettCopula"
and use methods(class = "plackettCopula")
.
Class "plackettCopula"
extends class "copula"
directly.
Nelsen, R. B. (2006), An introduction to Copulas, Springer, New York.
str(plackettCopula()) plackettCopula()@exprdist[["cdf"]] methods(class = "plackettCopula") contourplot2(plackettCopula(7), pCopula) wireframe2(plackettCopula(5), dCopula, main= "plackettCopula(5)")
Methods for plot()
to draw a scatter plot of a random
sample from bivariate distributions from package copula.
## S4 method for signature 'Copula,ANY' plot(x, n, xlim = 0:1, ylim = 0:1, xlab = quote(U[1]), ylab = quote(U[2]), main = NULL, ...) ## S4 method for signature 'mvdc,ANY' plot(x, n, xlim = NULL, ylim = NULL, xlab = quote(X[1]), ylab = quote(X[2]), ...)
x |
a bivariate |
n |
when |
xlim , ylim
|
the x- and y-axis limits. |
xlab , ylab
|
the x- and y-axis labels. |
main |
the main title; when true, shows the call's |
... |
additional arguments passed to |
invisible().
splom2()
for a scatter-plot matrix based on
splom()
.
## For 2-dim. 'copula' objects ------------------------- ## Plot uses n copula samples n <- 1000 # sample size set.seed(271) # (reproducibility) plot(tCopula(-0.8, df = 1.25), n = n) # automatic main title! nu <- 3 # degrees of freedom tau <- 0.5 # Kendall's tau th <- iTau(tCopula(df = nu), tau) # corresponding parameter cop <- tCopula(th, df = nu) # define 2-d copula object plot(cop, n = n) ## For 2-dim. 'mvdc' objects --------------------------- mvNN <- mvdc(cop, c("norm", "norm"), list(list(mean = 0, sd = 1), list(mean = 1))) plot(mvNN, n = n)
For a (nested) Archimedean copula (an object x of class nacopula), pCopula(u, x) (or, currently still, the deprecated pnacopula(x, u)) evaluates the copula x at the given vector or matrix u.
## S4 method for signature 'matrix,nacopula' pCopula(u, copula, ...) ## *Deprecated*: pnacopula(x, u)
copula , x
|
(nested) Archimedean copula of dimension |
u |
a |
... |
unused: potential optional arguments passed from and to methods. |
The value of an Archimedean copula C with generator psi at u is given by

C(u) = psi(psi^{-1}(u_1) + ... + psi^{-1}(u_d)),  u in [0, 1]^d.

The value of a nested Archimedean copula is defined similarly. Note that a d-dimensional copula is called nested Archimedean if it is an Archimedean copula with arguments possibly replaced by other nested Archimedean copulas.
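As an illustration of the plain (non-nested) Archimedean case, the following sketch (in Python, not part of the package; function names are hypothetical) evaluates C(u) = psi(psi^{-1}(u_1) + ... + psi^{-1}(u_d)) using the Clayton generator psi(t) = (1 + t)^(-1/theta):

```python
def p_archm(u, psi, ipsi):
    """Plain Archimedean copula: C(u) = psi(sum_j ipsi(u_j))."""
    return psi(sum(ipsi(uj) for uj in u))

# Clayton generator and its inverse (theta > 0 assumed)
theta = 2.0
psi = lambda t: (1.0 + t) ** (-1.0 / theta)   # psi(t) = (1+t)^(-1/theta)
ipsi = lambda w: w ** (-theta) - 1.0          # psi^{-1}(w) = w^(-theta) - 1
```

For the bivariate Clayton copula this reproduces the familiar closed form (u^(-theta) + v^(-theta) - 1)^(-1/theta); at u = (1, ..., 1) the value is psi(0) = 1, as for any copula.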
A numeric in [0, 1] which is the copula evaluated at u. (Currently not parallelized.)
pCopula(u, copula) is a generic function with methods for all our copula classes, see pCopula.
## Construct a three-dimensional nested Joe copula with parameters ## chosen such that the Kendall's tau of the respective bivariate margins ## are 0.2 and 0.5. theta0 <- copJoe@iTau(.2) theta1 <- copJoe@iTau(.5) C3 <- onacopula("J", C(theta0, 1, C(theta1, c(2,3)))) ## Evaluate this copula at the vector u u <- c(.7,.8,.6) pCopula(u, C3) ## Evaluate this copula at the matrix v v <- matrix(runif(300), ncol=3) pCopula(v, C3) ## Back-compatibility check stopifnot(identical( pCopula (u, C3), suppressWarnings( pnacopula(C3, u))), identical( pCopula (v, C3), suppressWarnings( pnacopula(C3, v))))
Compute the pseudo-observations for the given data matrix.
pobs(x, na.last = "keep", ties.method = eval(formals(rank)$ties.method), lower.tail = TRUE)
x |
|
na.last |
string passed to |
ties.method |
|
lower.tail |
|
Given n realizations x_i = (x_{i1}, ..., x_{id}), i in {1, ..., n}, of a random vector X, the pseudo-observations are defined via u_{ij} = r_{ij}/(n+1) for i in {1, ..., n} and j in {1, ..., d}, where r_{ij} denotes the rank of x_{ij} among x_{1j}, ..., x_{nj}. When there are no ties in any of the coordinate samples of x, the pseudo-observations can thus also be computed by component-wise applying the marginal empirical distribution functions to the data and scaling the result by n/(n+1). This asymptotically negligible scaling factor is used to force the variates to fall inside the open unit hypercube, for example, to avoid problems with density evaluation at the boundaries. Note that pobs(, lower.tail=FALSE) simply returns 1-pobs().
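The rank construction can be sketched as follows (a minimal Python illustration, assuming continuous data without ties; the package's pobs(), with its ties.method argument, is the reference implementation):

```python
def pobs(x):
    """Pseudo-observations: column-wise ranks scaled by 1/(n+1).

    x is a list of n rows, each of length d; ties are not handled here
    (assumption: continuous data, so ties occur with probability zero).
    """
    n, d = len(x), len(x[0])
    u = [[0.0] * d for _ in range(n)]
    for j in range(d):
        col = [row[j] for row in x]
        order = sorted(range(n), key=lambda i: col[i])  # indices in increasing order
        for r, i in enumerate(order, start=1):          # r = rank of x[i][j]
            u[i][j] = r / (n + 1)
    return u
```

By construction every u_{ij} lies strictly inside (0, 1), which is exactly the boundary-avoidance property described above.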
matrix (or vector) of the same dimensions as x containing the pseudo-observations.
## Simple definition of the function: pobs ## Draw from a multivariate normal distribution d <- 10 set.seed(1) P <- Matrix::nearPD(matrix(pmin(pmax(runif(d*d), 0.3), 0.99), ncol=d))$mat diag(P) <- rep(1, d) n <- 500 x <- MASS::mvrnorm(n, mu = rep(0, d), Sigma = P) ## Compute pseudo-observations (should roughly follow a Gauss ## copula with correlation matrix P) u <- pobs(x) plot(u[,5],u[,10], xlab=quote(italic(U)[1]), ylab=quote(italic(U)[2])) ## All components: pairwise plot pairs(u, gap=0, pch=".", labels = as.expression( lapply(1:d, function(j) bquote(italic(U[.(j)]))) ))
Compute the polylogarithm function Li_s(z), initially defined as the power series

Li_s(z) = sum(k = 1..Inf; z^k / k^s)

for |z| < 1, and then more generally (by analytic continuation) for other z. Currently, mainly the case of negative integer s is well supported, as that is used for some of the Archimedean copula densities.

For s = 2, Li_2(z) is also called 'dilogarithm' or "Spence's function". The "default" method uses the dilog or complex_dilog function from package gsl, respectively, when s = 2.

Also compute the Debye_n functions, for n = 1 and n = 2, in a slightly more general manner than the gsl package functions debye_1 and debye_2 (which cannot deal with non-finite x).
polylog(z, s, method = c("default", "sum", "negI-s-Stirling", "negI-s-Eulerian", "negI-s-asymp-w"), logarithm = FALSE, is.log.z = FALSE, is.logmlog = FALSE, asymp.w.order = 0, n.sum) debye1(x) debye2(x)
z |
numeric or complex vector |
s |
complex number; current implementation is aimed at
|
method |
a string specifying the algorithm to be used. |
logarithm |
logical specifying whether log(Li(.)) is returned instead of Li(.) |
is.log.z |
logical; if TRUE, the specified |
is.logmlog |
logical; if TRUE, the specified argument |
asymp.w.order |
currently only default is implemented. |
n.sum |
for |
x |
numeric vector, may contain |
Almost entirely taken from https://en.wikipedia.org/wiki/Polylogarithm:

For nonpositive integer values of the polylogarithm order s, explicit expressions are obtained by repeated application of the operator z * d/dz to Li_1(z) = -log(1 - z):

Li_0(z) = z/(1-z),  Li_{-1}(z) = z/(1-z)^2,  etc.

Accordingly, the polylogarithm reduces to a ratio of polynomials in z, and is therefore a rational function of z, for all nonpositive integer orders. The general case may be expressed as a finite sum:

Li_{-n}(z) = (z * d/dz)^n (z/(1-z)) = sum(k = 0..n; k! S(n+1, k+1) (z/(1-z))^(k+1)),  n = 0, 1, 2, ...,

where S(n, k) are the Stirling numbers of the second kind. Equivalent formulae applicable to negative integer orders are (Wood 1992, § 6)

Li_{-n}(z) = 1/(1-z)^(n+1) * sum(k = 0..n-1; A(n, k) z^(n-k)),  n = 1, 2, 3, ...,

where A(n, k) are the Eulerian numbers; see also Eulerian.
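The finite Stirling-number sum for negative integer orders (the idea behind polylog()'s "negI-s-Stirling" method) can be sketched in a few lines of Python (illustrative only, with hypothetical helper names):

```python
from math import factorial

def stirling2(n, k, _memo={}):
    """Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if k == n:
        return 1
    if k == 0 or k > n:
        return 0
    if (n, k) not in _memo:
        _memo[(n, k)] = k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
    return _memo[(n, k)]

def polylog_neg_int(z, n):
    """Li_{-n}(z), integer n >= 0, via the finite Stirling-number sum
    Li_{-n}(z) = sum_{k=0}^{n} k! S(n+1, k+1) (z/(1-z))^{k+1}."""
    w = z / (1.0 - z)
    return sum(factorial(k) * stirling2(n + 1, k + 1) * w ** (k + 1)
               for k in range(n + 1))
```

Sanity checks against the closed forms: Li_0(z) = z/(1-z), Li_{-1}(z) = z/(1-z)^2, and Li_{-2}(z) = z(1+z)/(1-z)^3.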
numeric/complex vector as z, or x, respectively.
Wikipedia (2011) Polylogarithm, https://en.wikipedia.org/wiki/Polylogarithm.
Wood, D. C. (June 1992). The Computation of Polylogarithms. Technical Report 15-92. Canterbury, UK: University of Kent Computing Laboratory. https://www.cs.kent.ac.uk/pubs/1992/110/.
Apostol, T. M. (2010), "Polylogarithm", in the NIST Handbook of Mathematical Functions, https://dlmf.nist.gov/25.12
Lewin, L. (1981). Polylogarithms and Associated Functions. New York: North-Holland. ISBN 0-444-00550-1.
For Debye functions: Lewin (1981) above, and
Wikipedia (2014) Debye function,
https://en.wikipedia.org/wiki/Debye_function.
The polylogarithm is used in MLE for some Archimedean copulas; see
emle
;
The Debye functions are used for tau
or
rho
computations of the Frank copula.
## The dilogarithm, polylog(z, s = 2) = Li_2(.) -- mathematically defined on C \ [1, Inf) ## so x -> 1 is a limit case: polylog(z = 1, s = 2) ## in the limit, should be equal to pi^2 / 6 ## Default method uses GSL's dilog(): rLi2 <- curve(polylog(x, 2), -5, 1, n= 1+ 6*64, col=2, lwd=2) abline(c(0,1), h=0,v=0:1, lty=3, col="gray40") ## "sum" method gives the same for |z| < 1 and large number of terms: ii <- which(abs(rLi2$x) < 1) stopifnot(all.equal(rLi2$y[ii], polylog(rLi2$x[ii], 2, "sum", n.sum = 1e5), tolerance = 1e-15)) z1 <- c(0.95, 0.99, 0.995, 0.999, 0.9999) L <- polylog( z1, s=-3,method="negI-s-Euler") # close to Inf LL <- polylog( log(z1), s=-3,method="negI-s-Euler",is.log.z=TRUE) LLL <- polylog(log(-log(z1)),s=-3,method="negI-s-Euler",is.logmlog=TRUE) all.equal(L, LL) all.equal(L, LLL) p.Li <- function(s.set, from = -2.6, to = 1/4, ylim = c(-1, 0.5), colors = c("orange","brown", palette()), n = 201, ...) { s.set <- sort(s.set, decreasing = TRUE) s <- s.set[1] # <_ for auto-ylab curve(polylog(x, s, method="negI-s-Stirling"), from, to, col=colors[1], ylim=ylim, n=n, ...) abline(h=0,v=0, col="gray") for(is in seq_along(s.set)[-1]) curve(polylog(x, s=s.set[is], method="negI-s-Stirling"), add=TRUE, col = colors[is], n=n) s <- rev(s.set) legend("bottomright",paste("s =",s), col=colors[2-s], lty=1, bty="n") } ## yellow is unbearable (on white): palette(local({p <- palette(); p[p=="yellow"] <- "goldenrod"; p})) ## Wikipedia page plot (+/-): p.Li(1:-3, ylim= c(-.8, 0.6), colors = c(2:4,6:7)) ## and a bit more: p.Li(1:-5) ## For the range we need it: ccol <- c(NA,NA, rep(palette(),10)) p.Li(-1:-20, from=0, to=.99, colors=ccol, ylim = c(0, 10)) ## log-y scale: p.Li(-1:-20, from=0, to=.99, colors=ccol, ylim = c(.01, 1e7), log = "y", yaxt = "n") if(require(sfsmisc)) eaxis(2) else axis(2)
Evaluate a univariate polynomial at x
(typically a vector), that is,
compute, for a given vector of coefficients coef
, the polynomial
coef[1] + coef[2]*x + ... + coef[p+1]*x^p
.
polynEval(coef, x)
coef |
numeric vector. If a vector, |
x |
numeric vector or array. |
The stable Horner rule is used for evaluation.
Using the C code speeds up the already fast R code available in
polyn.eval()
in package sfsmisc.
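For illustration, the Horner scheme can be sketched in a few lines (Python here; the package's polynEval() is implemented in C and is the reference):

```python
def polyn_eval(coef, x):
    """Evaluate coef[0] + coef[1]*x + ... + coef[p]*x**p by Horner's rule.

    Accepts a scalar or a list/tuple x (applied element-wise, mirroring
    how polynEval() recycles over an array).
    """
    if isinstance(x, (list, tuple)):
        return [polyn_eval(coef, xi) for xi in x]
    r = 0.0
    for c in reversed(coef):  # fold from the highest-degree coefficient down
        r = r * x + c
    return r
```

Horner's rule uses one multiplication and one addition per coefficient and is numerically stable, which is why it is the standard choice for dense polynomial evaluation.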
numeric vector or array, with the same dimensions as x, containing the polynomial values.
For a much more sophisticated treatment of polynomials, use the
polynom
package (for example, evaluation can be done via
predict.polynomial
).
polynEval(c(1,-2,1), x = -2:7) # (x - 1)^2 polynEval(c(0, 24, -50, 35, -10, 1), x = matrix(0:5, 2,3)) # 5 zeros!
Print a compact overview of a nested Archimedean copula, that is, an
object of class "nacopula"
.
Calling printNacopula explicitly allows one to customize the printing behavior. Otherwise, the show() method calls printNacopula with default arguments only.
printNacopula(x, labelKids=NA, deltaInd=, indent.str="", digits=getOption("digits"), width=getOption("width"), ...)
x |
an R object of class |
labelKids |
logical specifying if child copulas should be labeled;
If |
deltaInd |
by how much should each child be indented more
than its parent? (non-negative integer). The default is three
with |
indent.str |
a |
digits , width
|
number of significant digits, and desired print
width; see |
... |
potentially further arguments, passed to methods. |
invisibly, x
.
C8 <- onacopula("F", C(1.9, 1, list(K1 = C(5.7, c(2,5)), abc= C(5.0, c(3,4,6), list(L2 = C(11.5, 7:8)))))) C8 # -> printNacopula(C8) printNacopula(C8, delta=10) printNacopula(C8, labelKids=TRUE)
Compute the probability of a d-dimensional random vector U distributed according to a given copula x to fall in a hypercube (l, u], where l and u denote the lower and upper corners of the hypercube, respectively.
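This probability can be written by inclusion-exclusion over the 2^d corners of the hypercube: P(l < U <= u) = sum over corners a (with a_j in {l_j, u_j}) of (-1)^{#{j : a_j = l_j}} C(a). A Python sketch (illustrative only; C is any copula CDF taking a tuple, and the independence copula serves as a stand-in):

```python
import math
from itertools import product

def prob_hypercube(C, l, u):
    """P(l < U <= u) by inclusion-exclusion over the 2^d hypercube corners.

    Assumes l[j] < u[j] strictly in every coordinate.
    """
    d = len(l)
    s = 0.0
    for corner in product(*zip(l, u)):          # all 2^d corners
        sign = (-1) ** sum(corner[j] == l[j] for j in range(d))
        s += sign * C(corner)
    return s

# stand-in copula: independence, Pi(u) = prod_j u_j
indep = lambda v: math.prod(v)
```

For the independence copula and d = 2 this reduces to the area of the rectangle, e.g. (0.3 - 0.2) * (0.6 - 0.4) = 0.02.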
prob(x, l, u)
x |
copula of dimension |
l , u
|
|
A numeric in [0, 1] which is the probability P(l < U <= u).
pCopula(.)
.
## Construct a three-dimensional nested Joe copula with parameters ## chosen such that the Kendall's tau of the respective bivariate margins ## are 0.2 and 0.5. theta0 <- copJoe@iTau(.2) theta1 <- copJoe@iTau(.5) C3 <- onacopula("J", C(theta0, 1, C(theta1, c(2,3)))) ## Compute the probability of a random vector distributed according to ## this copula to fall inside the cube with lower point l and upper ## point u. l <- c(.7,.8,.6) u <- c(1,1,1) prob(C3, l, u) ## ditto for a bivariate normal copula with rho = 0.8 : prob(normalCopula(0.8), c(.2,.4), c(.3,.6))
A Q-Q plot (possibly) with rugs and pointwise approximate (via the Central Limit Theorem) two-sided 1 - alpha confidence intervals.
qqplot2(x, qF, log = "", qqline.args = if(log=="x" || log=="y") list(untf=TRUE) else list(), rug.args = list(tcl=-0.6*par("tcl")), alpha = 0.05, CI.args = list(col="gray40"), CI.mtext = list(text=paste0("Pointwise asymptotic ", 100*(1-alpha), "% confidence intervals"), side=4, cex=0.6*par("cex.main"), adj=0, col="gray40"), main = quote(bold(italic(F)~~"Q-Q plot")), main.args = list(text=main, side=3, line=1.1, cex=par("cex.main"), font=par("font.main"), adj=par("adj"), xpd=NA), xlab = "Theoretical quantiles", ylab = "Sample quantiles", file="", width=6, height=6, ...)
x |
|
qF |
(theoretical) quantile function against which the Q-Q plot is created. |
log |
|
qqline.args |
argument |
rug.args |
argument |
alpha |
significance level. |
CI.args |
argument |
CI.mtext |
argument |
main |
title (can be an expression; use "" for no title). |
main.args |
argument |
xlab |
x axis label. |
ylab |
y axis label. |
file |
file name including the extension “.pdf”. |
width |
width parameter of |
height |
height parameter of |
... |
additional arguments passed to |
See the source code for how the confidence intervals are constructed precisely.
invisible()
.
plot()
for the underlying plot function,
qqline()
for how the Q-Q line is implemented,
rug()
for how the rugs are constructed,
lines()
for how the confidence intervals are drawn, and
mtext()
for how the title and information about the
confidence intervals is printed.
pdf()
for plotting to pdf.
n <- 250 df <- 7 set.seed(1) x <- rchisq(n, df=df) ## Q-Q plot against the true quantiles (of a chi^2_3 distribution) qqplot2(x, qF = function(p) qchisq(p, df=df), main = substitute(bold(italic(chi[NU])~~"Q-Q Plot"), list(NU=df))) ## in log-log scale qqplot2(x, qF = function(p) qchisq(p, df=df), log="xy", main = substitute(bold(italic(chi[NU])~~"Q-Q Plot"), list(NU=df))) ## Q-Q plot against wrong quantiles (of an Exp(1) distribution) qqplot2(x, qF=qexp, main = quote(bold(Exp(1)~~"Q-Q Plot")))
Test for assessing the radial symmetry of the underlying multivariate copula based on the empirical copula. The test statistic is a multivariate extension of the definition adopted in the first reference. An approximate p-value for the test statistic is obtained by means of an appropriate bootstrap which can take the presence of ties in the component series of the data into account; see the second reference.
radSymTest(x, N = 1000, ties = NA)
x |
a data matrix that will be transformed to pseudo-observations. |
N |
number of bootstrap iterations to be used to simulate realizations of the test statistic under the null hypothesis. |
ties |
logical; if |
More details are available in the second reference.
An object of class
htest
which is a list,
some of the components of which are
statistic |
value of the test statistic. |
p.value |
corresponding approximate p-value. |
Genest, C. and Nešlehová, J. G. (2014). On tests of radial symmetry for bivariate copulas. Statistical Papers 55, 1107–1119.
Kojadinovic, I. (2017). Some copula inference procedures adapted to the presence of ties. Computational Statistics and Data Analysis 112, 24–41, https://arxiv.org/abs/1609.05519.
exchTest
, exchEVTest
, gofCopula
.
## Data from radially symmetric copulas radSymTest(rCopula(200, frankCopula(3))) radSymTest(rCopula(200, normalCopula(0.7, dim = 3))) ## Data from non radially symmetric copulas radSymTest(rCopula(200, claytonCopula(3))) radSymTest(rCopula(200, gumbelCopula(2, dim=3)))
Five years of daily log-returns (from 1996 to 2000) of Intel (INTC), Microsoft (MSFT) and General Electric (GE) stocks. These data were analysed in Chapter 5 of McNeil, Frey and Embrechts (2005).
data(rdj, package="copula")
data(rdj, package="copula")
A data frame of 1262 daily log-returns from 1996 to 2000.
Date
the date, of class "Date"
.
INTC
daily log-return of the Intel stock
MSFT
daily log-return of the Microsoft stock
GE
daily log-return of the General Electric
McNeil, A. J., Frey, R., and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques, Tools. Princeton University Press.
data(rdj) str(rdj)# 'Date' is of class "Date" with(rdj, { matplot(Date, rdj[,-1], type = "o", xaxt = "n", ylim = .15* c(-1,1), main = paste("rdj - data; n =", nrow(rdj))) Axis(Date, side=1) }) legend("top", paste(1:3, names(rdj[,-1])), col=1:3, lty=1:3, bty="n") x <- rdj[, -1] # '-1' : not the Date ## a t-copula (with a vague initial guess of Rho entries) tCop <- tCopula(rep(.2, 3), dim=3, dispstr="un", df=10, df.fixed=TRUE) ft <- fitCopula(tCop, data = pobs(x)) ft ft@copula # the fitted t-copula as tCopula object system.time( g.C <- gofCopula(claytonCopula(dim=3), as.matrix(x), simulation = "mult") ) ## 5.3 sec system.time( g.t <- gofCopula(ft@copula, as.matrix(x), simulation = "mult") ) ## 8.1 sec
Generating random variates of an exponentially tilted stable distribution of the form

S~(alpha, 1, (cos(alpha*pi/2) V0)^(1/alpha), V0 * 1_{alpha = 1}, h * 1_{alpha != 1}; 1)

with parameters alpha in (0, 1], V0 in (0, Inf), and h in [0, Inf), and corresponding Laplace-Stieltjes transform

exp(-V0 ((h + t)^alpha - h^alpha)),  t in [0, Inf);

see the references for more details about this distribution.
retstable(alpha, V0, h = 1, method = NULL) retstableR(alpha, V0, h = 1)
alpha |
parameter in |
V0 |
vector of values in |
h |
parameter in |
method |
a character string denoting the method to use, currently
either |
retstableR
is a pure R version of "MH"
, however, not as fast as
retstable
(implemented in C, based on both methods) and therefore not
recommended in simulations when run time matters.
A vector of variates from the exponentially tilted stable distribution described above.
Devroye, L. (2009) Random variate generation for exponentially and polynomially tilted stable distributions, ACM Transactions on Modeling and Computer Simulation 19, 18, 1–20.
Hofert, M. (2011) Efficiently sampling nested Archimedean copulas, Computational Statistics & Data Analysis 55, 57–70.
Hofert, M. (2012), Sampling exponentially tilted stable distributions, ACM Transactions on Modeling and Computer Simulation 22, 1.
rstable1
for sampling stable distributions.
## Draw random variates from an exponentially tilted stable distribution ## with given alpha, V0, and h = 1 alpha <- .2 V0 <- rgamma(200, 1) rETS <- retstable(alpha, V0) ## Distribution plot the random variates -- log-scaled hist(log(rETS), prob=TRUE) lines(density(log(rETS)), col=2) rug (log(rETS))
rF01Frank:
Generate a vector of random variates V01 ~ F01 with Laplace-Stieltjes transform

((1 - (1 - exp(-t) (1 - exp(-theta1)))^(theta0/theta1)) / (1 - exp(-theta0)))^V0

for the given realizations V0 of Frank's F0 and the parameters theta0, theta1 in (0, Inf) such that theta0 <= theta1. This distribution appears on sampling nested Frank copulas. The parameter rej is used to determine the cut-off point of the two algorithms that are involved in sampling F01: depending on rej, either a rejection from F01 of Joe is applied (see rF01Joe; the meaning of the parameter approx is explained below), or a sum is sampled with a logarithmic envelope for each summand.

rF01Joe:
Generate a vector of random variates V01 ~ F01 with Laplace-Stieltjes transform

(1 - (1 - exp(-t))^alpha)^V0

for the given realizations V0 of Joe's F0 and the parameter alpha in (0, 1]. This distribution appears on sampling nested Joe copulas. Here, alpha = theta0/theta1, where theta0, theta1 in [1, Inf) are such that theta0 <= theta1, and V01 = sum(i = 1..V0; S_i) with S_i independent Sibuya(alpha) variates. The parameter approx denotes the largest number V0 of summands in the sum-representation of V01 before an asymptotic (stable) approximation is used to sample V01.
rF01Frank(V0, theta0, theta1, rej, approx) rF01Joe(V0, alpha, approx)
V0 |
a vector of random variates from |
theta0 , theta1 , alpha
|
parameters
|
rej |
parameter value as described above. |
approx |
parameter value as described above. |
A vector of positive integers of length n containing the generated random variates.
Hofert, M. (2011). Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis 55, 57–70.
rFFrank
, rFJoe
, rSibuya
,
and rnacopula
.
## Sample n random variates V0 ~ F0 for Frank and Joe with parameter ## chosen such that Kendall's tau equals 0.2 and plot histogram n <- 1000 theta0.F <- copFrank@iTau(0.2) V0.F <- copFrank@V0(n,theta0.F) hist(log(V0.F), prob=TRUE); lines(density(log(V0.F)), col=2, lwd=2) theta0.J <- copJoe@iTau(0.2) V0.J <- copJoe@V0(n,theta0.J) hist(log(V0.J), prob=TRUE); lines(density(log(V0.J)), col=2, lwd=2) ## Sample corresponding V01 ~ F01 for Frank and Joe and plot histogram ## copFrank@V01 calls rF01Frank(V0, theta0, theta1, rej=1, approx=10000) ## copJoe@V01 calls rF01Joe(V0, alpha, approx=10000) theta1.F <- copFrank@iTau(0.5) V01.F <- copFrank@V01(V0.F,theta0.F,theta1.F) hist(log(V01.F), prob=TRUE); lines(density(log(V01.F)), col=2, lwd=2) theta1.J <- copJoe@iTau(0.5) V01.J <- copJoe@V01(V0.J,theta0.J,theta1.J) hist(log(V01.J), prob=TRUE); lines(density(log(V01.J)), col=2, lwd=2)
Generate a vector of variates V ~ F from the distribution function F with Laplace-Stieltjes transform

(1 - (1 - exp(-t) (1 - exp(-theta1)))^(theta0/theta1)) / (1 - exp(-theta0))

for Frank, or

1 - (1 - exp(-t))^(theta0/theta1)

for Joe, respectively, where theta0 and theta1 denote two parameters of Frank (that is, theta in (0, Inf)) and Joe (that is, theta in [1, Inf)) satisfying theta0 <= theta1.
rFFrank(n, theta0, theta1, rej) rFJoe(n, alpha)
n |
number of variates from F. |
theta0 |
parameter theta0. |
theta1 |
parameter theta1. |
rej |
method switch for the generation; see rF01Frank(). |
alpha |
parameter alpha = theta0/theta1 in (0, 1]. |
rFFrank(n, theta0, theta1, rej) calls
rF01Frank(rep(1,n), theta0, theta1, rej, 1), and
rFJoe(n, alpha) calls rSibuya(n, alpha).
numeric vector of random variates of length n.
rF01Frank and rF01Joe, also for references;
rSibuya and rnacopula.
## Simple definition of the functions:
rFFrank
rFJoe
Generating random variates from a Log(p) distribution with probability mass function
p_k = p^k / (-log(1-p) * k), k in {1, 2, ...},
where p in (0, 1). The implemented algorithm is the
one named “LK” in Kemp (1981).
rlog(n, p, Ip = 1 - p)
n |
sample size, that is, length of the resulting vector of random variates. |
p |
parameter p in (0, 1). |
Ip |
1 - p; may be specified instead of p, notably when p is very close to 1, for improved accuracy. |
For documentation and didactical purposes, rlogR is a pure-R
implementation of rlog. However, rlogR is not as fast as
rlog (the latter being implemented in C).
A vector of positive integers of length n containing the
generated random variates.
In the copula package, the Log(p) distribution is needed only
for generating Frank copula observations, namely in
copFrank@V0(), where p = 1 - exp(-theta), i.e.,
p = -expm1(-theta) and Ip = exp(-theta).
For large theta it would be desirable to pass
-theta to rlog() instead of p.
This has not yet been implemented.
Kemp, A. W. (1981), Efficient Generation of Logarithmically Distributed Pseudo-Random Variables, Journal of the Royal Statistical Society: Series C (Applied Statistics) 30, 3, 249–253.
## Sample n random variates from a Log(p) distribution and plot a
## "histogram"
n <- 1000
p <- .5
X <- rlog(n, p)
table(X) ## distribution on the integers {1, 2, ..}
## ==> The following plot is more reasonable than a hist(X, prob=TRUE) :
plot(table(X)/n, type="h", lwd=10, col="gray70")

## case closer to numerical boundary:
lV <- log10(V <- rlog(10000, Ip = 2e-9))# Ip = exp(-theta) <==> theta ~= 20
hV <- hist(lV, plot=FALSE)
dV <- density(lV)
## Plot density and histogram on log scale with nice axis labeling & ticks:
plot(dV, xaxt="n", ylim = c(0, max(hV$density, dV$y)),
     main = "Density of [log-transformed] Log(p), p=0.999999..")
abline(h=0, lty=3); rug(lV)
lines(hV, freq=FALSE, col = "light blue"); lines(dV)
rx <- range(pretty(par("usr")[1:2]))
sx <- outer(1:9, 10^(max(0,rx[1]):rx[2]))
axis(1, at=log10(sx), labels= FALSE, tcl = -0.3)
axis(1, at=log10(sx[1,]), labels= formatC(sx[1,]), tcl = -0.75)
Randomly construct a nested Archimedean copula model.
rnacModel(family, d, pr.comp, rtau0 = function() rbeta(1, 2,4),
          order = c("random", "each", "seq"), digits.theta = 2)
family |
the Archimedean family |
d |
integer >=2; the dimension |
pr.comp |
probability of a direct component on each level |
rtau0 |
a function generating a (random) Kendall's tau, from which the
copula parameters theta are derived. |
order |
string indicating how the component IDs are selected. |
digits.theta |
integer specifying the number of digits to round the theta values. |
an object of class "outer_nacopula".
rnacopula for generating n d-dimensional
observations from an (outer) nacopula, e.g., from
the result of rnacModel().
## Implicitly tests the function {with validity of outer_nacopula ..}
set.seed(11)
for(i in 1:40) {
  m1 <- rnacModel("Gumbel", d=sample(20:25, 1), pr.comp = 0.3,
                  rtau0 = function() 0.25)
  m2 <- rnacModel("Joe", d=3, pr.comp = 0.1, order="each")
  mC <- rnacModel("Clayton", d=20, pr.comp = 0.3,
                  rtau0 = function() runif(1, 0.1, 0.5))
  mF <- rnacModel("Frank", d=sample(20:25, 1), pr.comp = 0.3, order="seq")
}
Random number generation for nested Archimedean copulas (of class
outer_nacopula, specifically), aka
sampling nested Archimedean copulas: generate n
random vectors of dimension d (= dim(copula)).
rnacopula(n, copula, x, ...)
n |
integer specifying the sample size, that is, the number of
copula-distributed random vectors to generate. |
copula |
an R object of class "outer_nacopula". |
x |
only for back compatibility: former name of copula. |
... |
possibly further arguments for the given copula family. |
The generation happens by calling rnchild() on
each child copula (which itself recursively descends the tree implied
by the nested Archimedean structure). The algorithm is based on a
mixture representation of the generic distribution functions
F0 and F01 and is presented in
McNeil (2008) and Hofert (2011a). Details about how to efficiently sample the
distribution functions F0 and F01 can be found in
Hofert (2010), Hofert (2012), and Hofert and Mächler (2011).
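The non-nested special case of this mixture representation is the classical Marshall-Olkin construction U = psi(E/V0), sketched here in base R for a plain (one-level) Clayton copula. This is an illustration of the idea, not the package's sampler:

```r
## Marshall-Olkin sampling of a d-dimensional Clayton copula:
## frailty V0 ~ F0 = Gamma(1/theta, 1), E_ij ~ Exp(1), U = psi(E/V0)
set.seed(7)
n <- 1000; d <- 3; theta <- 2
V0 <- rgamma(n, shape = 1/theta)     # frailty with Laplace-Stieltjes transform psi
E  <- matrix(rexp(n * d), n, d)      # iid unit exponentials
U  <- (1 + E / V0)^(-1/theta)        # psi(t) = (1 + t)^(-1/theta), Clayton generator
```

Kendall's tau of a bivariate Clayton margin is theta/(theta + 2), here 0.5, which the sample version reproduces approximately; the nested algorithm replaces V0 by variates V01 ~ F01 within each child copula.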
An n × d numeric matrix containing the generated vectors of
random variates from the nested Archimedean copula object copula.
McNeil, A. J. (2008). Sampling nested Archimedean copulas. Journal of Statistical Computation and Simulation 78, 6, 567–581.
Hofert, M. (2010). Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis 55, 57–70.
Hofert, M. (2012), A stochastic representation and sampling algorithm for nested Archimedean copulas. Journal of Statistical Computation and Simulation, 82, 9, 1239–1255.
Hofert, M. (2012). Sampling exponentially tilted stable distributions. ACM Transactions on Modeling and Computer Simulation 22, 1 (3rd article).
Hofert, M. and Mächler, M. (2011). Nested Archimedean Copulas Meet R: The nacopula Package. Journal of Statistical Software 39, 9, 1–20.
rnchild
; classes "nacopula"
and
"outer_nacopula"
; see also onacopula()
.
rnacModel
creates random nacopula models, i.e.,
the input copula
for rnacopula(n, copula)
.
Further, those of the Archimedean families, for example,
copGumbel
.
## Construct a three-dimensional nested Clayton copula with parameters
## chosen such that the Kendall's tau of the respective bivariate margins
## are 0.2 and 0.5 :
C3 <- onacopula("C", C(copClayton@iTau(0.2), 1,
                       C(copClayton@iTau(0.5), c(2,3))))
C3

## Sample n vectors of random variates from this copula. This involves
## sampling exponentially tilted stable distributions
n <- 1000
U <- rnacopula(n, C3)

## Plot the drawn vectors of random variates
splom2(U)
Method for generating vectors of random variates from nested Archimedean copulas which are child copulas.
rnchild(x, theta0, V0, ...)
x |
an "nacopula" object, typically a child copula of an "outer_nacopula". |
theta0 |
the parameter (vector) of the parent Archimedean copula
which contains x as a child copula. |
V0 |
a numeric vector of random variates V0 ~ F0 from the parent copula. |
... |
possibly further arguments for the given copula family. |
The generation is done recursively, descending the tree implied by the
nested Archimedean structure. The algorithm is based on a mixture
representation and requires sampling V01 ~ F01
given random variates V0 ~ F0. Calling
"rnchild" directly is only intended for experts. The typical call of
this function takes place through rnacopula().
a list with components
U |
a matrix of random variates for the components of the child copula x. |
indcol |
an integer vector of the component indices (columns) to which U corresponds. |
rnacopula
, also for the references.
Further, classes "nacopula"
and
"outer_nacopula"
; see also onacopula()
.
## Construct a three-dimensional nested Clayton copula with parameters
## chosen such that the Kendall's tau of the respective bivariate margins
## are 0.2 and 0.5.
theta0 <- copClayton@iTau(.2)
theta1 <- copClayton@iTau(.5)
C3 <- onacopula("C", C(theta0, 1, C(theta1, c(2,3))))

## Sample n random variates V0 ~ F0 (a Gamma(1/theta0,1) distribution)
n <- 1000
V0 <- copClayton@V0(n, theta0)

## Given these variates V0, sample the child copula, that is, the bivariate
## nested Clayton copula with parameter theta1
U23 <- rnchild(C3@childCops[[1]], theta0, V0)

## Now build the three-dimensional vectors of random variates by hand
U1 <- copClayton@psi(rexp(n)/V0, theta0)
U <- cbind(U1, U23$U)

## Plot the vectors of random variates from the three-dimensional nested
## Clayton copula
splom2(U)
Constructs a “reflected” or “rotated” copula from an initial copula and a vector of logicals indicating which dimensions to “flip”.
rotCopula(copula, flip = TRUE)
copula |
an object of class "copula". |
flip |
logical vector of length d (or a single logical, recycled) indicating which margins are flipped; the default TRUE flips all margins, yielding the survival copula. |
A “rotated” or “reflected” copula object of class "rotCopula".
Slots of a "rotCopula" object:
copula
:Object of class "copula"
.
flip
:logical
vector of length d
(the copula dimension) specifying which margins are flipped;
corresponds to the
flip
argument of rotCopula()
.
dimension
:the copula dimension d, an integer.
parameters
:numeric
vector specifying
the parameters.
param.lowbnd
, and param.upbnd
:numeric
vector of the same length as parameters
, specifying
(component wise) bounds for each of the parameters.
param.names
:character
vector (of same
length as parameters
) with parameter names.
fullname
:deprecated; a character
string describing the
rotated copula.
When an even number of margins are flipped, the resulting copula
can be seen as a rotated version of copula. In the
other cases, e.g., flip = c(FALSE, TRUE) in 2d, it is rather a
reflected or “mirrored” copula.
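The effect of a single flip can be checked on the sample level without the package: if (U1, U2) is a sample from a copula, then (1 - U1, U2) is a sample from the version with flip = c(TRUE, FALSE), and Kendall's tau changes sign exactly (a base-R sketch with an arbitrary positively dependent sample):

```r
set.seed(1)
n <- 2000
z <- rnorm(n)                                # common factor -> positive dependence
u <- pnorm(cbind(z + rnorm(n)/2, z + rnorm(n)/2))
tau  <- cor(    u[,1], u[,2], method = "kendall")
tauF <- cor(1 - u[,1], u[,2], method = "kendall")  # flip the first margin
c(tau = tau, flipped = tauF)                 # flipped tau equals -tau (no ties)
```

Flipping one margin turns every concordant pair into a discordant one and vice versa, which is why the sample tau is negated exactly for continuous (tie-free) data.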
fitCopula
for fitting such copulas to data,
gofCopula
for goodness-of-fit tests for such copulas.
## Two-dimensional examples: First the Clayton(3) survival copula:
survC <- rotCopula(claytonCopula(3)) # default: flip = 'all TRUE'
contourplot2(survC, dCopula)

## Now, a reflected Clayton copula:
r10C <- rotCopula(claytonCopula(3), flip = c(TRUE, FALSE))
contourplot2(r10C, dCopula, nlevels = 20, main = "dCopula(<rotCopula>)")
contourplot2(r10C, pCopula, nlevels = 20, main = "pCopula(<rotCopula>)")
rho(r10C)
tau(r10C) # -0.6

n <- 1000
u <- rCopula(n, r10C)
rho.n <- cor(u[,1], u[,2], method = "spearman")
tau.n <- cor(u[,1], u[,2], method = "kendall")

## "Calibration"
rc. <- rotCopula(claytonCopula(), flip = c(TRUE, FALSE))
iRho(rc., rho.n)
iTau(rc., tau.n)

## Fitting
fitCopula(rc., data = pobs(u), method = "irho")
fitCopula(rc., data = pobs(u), method = "itau")
fitCopula(rc., data = pobs(u), method = "mpl")

## Goodness-of-fit testing -- the first, parametric bootstrap, is *really* slow
## Not run:
gofCopula(rc., x = u)
gofCopula(rc., x = u, simulation = "mult")

## A four-dimensional example: a rotated Frank copula
rf <- rotCopula(frankCopula(10, dim = 4),
                flip = c(TRUE, FALSE, TRUE, FALSE))
n <- 1000
u <- rCopula(n, rf)
splom2(u)
pCopula(c(0.6,0.7,0.6,0.8), rf)
C.n(cbind(0.6,0.7,0.6,0.8), u)

## Fitting
(rf. <- rotCopula(frankCopula(dim=4), flip = c(TRUE, FALSE, TRUE, FALSE)))
## fitCopula(rf., data = pobs(u), method = "irho")
## FIXME above: not related to rotCopula but frankCopula
fitCopula(rf., data = pobs(u), method = "itau")
fitCopula(rf., data = pobs(u), method = "mpl")

## Goodness-of-fit testing (first ("PB") is really slow, see above):
## Not run:
gofCopula(rf., x = u)
gofCopula(rf., x = u, simulation = "mult") # takes 3.7 sec [lynne, 2015]
Given a matrix of iid multivariate data from a meta-elliptical or
meta-Archimedean model, RSpobs() computes pseudo-observations
of the radial part R and the vector S, which
follows a uniform distribution on the unit sphere (for elliptical
copulas) or the unit simplex (for Archimedean copulas). These
quantities can be used for (graphical) goodness-of-fit tests, for example.
RSpobs(x, do.pobs = TRUE, method = c("ellip", "archm"), ...)
x |
an (n, d)-matrix of multivariate observations. |
do.pobs |
logical indicating whether pobs() is applied to x first. |
method |
character string indicating the assumed model: "ellip" for meta-elliptical, "archm" for meta-Archimedean. |
... |
additional arguments passed to the implemented methods. These can be,
for example, a marginal quantile function qQg when method = "ellip". |
The construction of the pseudo-observations of the radial part and the uniform distribution on the unit sphere/simplex is described in Genest, Hofert and Nešlehová (2014).
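The decomposition itself is elementary: in the spherical special case (identity dispersion, zero location) it reduces to R = ||X|| and S = X/R, as this base-R sketch illustrates (RSpobs additionally standardizes the margins and handles the general elliptical and Archimedean cases):

```r
set.seed(42)
n <- 500; d <- 3
Y <- matrix(rnorm(n * d), n, d)   # spherically symmetric sample
R <- sqrt(rowSums(Y^2))           # radial part; here R^2 ~ chi^2_d
S <- Y / R                        # angular part, points on the unit sphere
```

Every row of S has Euclidean norm 1, and for standard normal data the squared radial part is chi-squared with d degrees of freedom, which is exactly the reference distribution used in Q-Q plot 1b) of the example below.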
A list with components R (an n-vector
containing the pseudo-observations
of the radial part) and S (an (n, d)-matrix
containing the pseudo-observations of the
uniform distribution (on the unit sphere/simplex)).
Genest, C., Hofert, M., G. Nešlehová, J., (2014). Is the dependence Archimedean, elliptical, or what? To be submitted.
pobs()
for computing the “classical”
pseudo-observations.
set.seed(100)
n <- 250 # sample size
d <- 5   # dimension
nu <- 3  # degrees of freedom

## Build a mean vector and a dispersion matrix,
## and generate multivariate t_nu data:
mu <- rev(seq_len(d)) # d, d-1, .., 1
L <- diag(d) # identity in dim d
L[lower.tri(L)] <- 1:(d*(d-1)/2)/d # Cholesky factor (diagonal > 0)
Sigma <- crossprod(L) # pos.-def. dispersion matrix (*not* covariance of X)
X <- rep(mu, each=n) + mvtnorm::rmvt(n, sigma=Sigma, df=nu) # multiv. t_nu data
## note: this is *wrong*: mvtnorm::rmvt(n, mean=mu, sigma=Sigma, df=nu)

## compute pseudo-observations of the radial part and uniform distribution
## once for 1a), once for 1b) below
RS.t <- RSpobs(X, method="ellip", qQg=function(p) qt(p, df=nu)) # 'correct'
RS.norm <- RSpobs(X, method="ellip", qQg=qnorm) # for testing 'wrong' distribution
stopifnot(length(RS.norm$R) == n, length(RS.t$R) == n,
          dim(RS.norm$S) == c(n,d), dim(RS.t$S) == c(n,d))

## 1) Graphically testing the radial part

## 1a) Q-Q plot of R against the correct quantiles
qqplot2(RS.t$R, qF=function(p) sqrt(d*qf(p, df1=d, df2=nu)),
        main.args=list(text=
            substitute(bold(italic(F[list(d.,nu.)](r^2/d.))~~"Q-Q Plot"),
                       list(d.=d, nu.=nu)),
            side=3, cex=1.3, line=1.1, xpd=NA))

## 1b) Q-Q plot of R against the quantiles of F_R for a multivariate normal
##     distribution
qqplot2(RS.norm$R, qF=function(p) sqrt(qchisq(p, df=d)),
        main.args=list(text=
            substitute(bold(italic(chi[D_]) ~~ "Q-Q Plot"), list(D_=d)),
            side=3, cex=1.3, line=1.1, xpd=NA))

## 2) Graphically testing the angular distribution

## auxiliary function
qqp <- function(k, Bmat) {
  d <- ncol(Bmat) + 1
  qqplot2(Bmat[,k],
          qF = function(p) qbeta(p, shape1=k/2, shape2=(d-k)/2),
          main.args=list(text=
              substitute(plain("Beta")(s1,s2) ~~ bold("Q-Q Plot"),
                         list(s1 = k/2, s2 = (d-k)/2)),
              side=3, cex=1.3, line=1.1, xpd=NA))
}

## 2a) Q-Q plot of the 'correct' angular distribution
##     (Bmat[,k] should follow a Beta(k/2, (d-k)/2) distribution)
Bmat.t <- gofBTstat(RS.t$S)
qqp(1, Bmat=Bmat.t) # k=1
qqp(3, Bmat=Bmat.t) # k=3

## 2b) Q-Q plot of the 'wrong' angular distribution
Bmat.norm <- gofBTstat(RS.norm$S)
qqp(1, Bmat=Bmat.norm) # k=1
qqp(3, Bmat=Bmat.norm) # k=3

## 3) Graphically check independence between radial part and B_1 and B_3

## 'correct' distributions (multivariate t)
plot(pobs(cbind(RS.t$R, Bmat.t[,1])), # k = 1
     xlab=quote(italic(R)), ylab=quote(italic(B)[1]),
     main=quote(bold("Rank plot between"~~italic(R)~~"and"~~italic(B)[1])))
plot(pobs(cbind(RS.t$R, Bmat.t[,3])), # k = 3
     xlab=quote(italic(R)), ylab=quote(italic(B)[3]),
     main=quote(bold("Rank plot between"~~italic(R)~~"and"~~italic(B)[3])))

## 'wrong' distributions (multivariate normal)
plot(pobs(cbind(RS.norm$R, Bmat.norm[,1])), # k = 1
     xlab=quote(italic(R)), ylab=quote(italic(B)[1]),
     main=quote(bold("Rank plot between"~~italic(R)~~"and"~~italic(B)[1])))
plot(pobs(cbind(RS.norm$R, Bmat.norm[,3])), # k = 3
     xlab=quote(italic(R)), ylab=quote(italic(B)[3]),
     main=quote(bold("Rank plot between"~~italic(R)~~"and"~~italic(B)[3])))
Generate random numbers of the stable distribution
S(alpha, beta, gamma, delta; 1) with characteristic
exponent alpha in (0, 2],
skewness beta in [-1, 1],
scale gamma in [0, Inf), and
location delta in R; see Nolan (2010) for
the parameterization with pm = 1. The case
gamma = 0 is understood as the unit jump at delta.
rstable1(n, alpha, beta, gamma = 1, delta = 0, pm = 1)
n |
an integer, the sample size. |
alpha |
characteristic exponent alpha in (0, 2]. |
beta |
skewness beta in [-1, 1]. |
gamma |
scale gamma in [0, Inf). |
delta |
location delta in R. |
pm |
0 or 1, denoting which parametrization (as by Nolan) is used. |
We use the approach of John Nolan for generating random variates of
stable distributions. The function rstable1
provides two basic
parametrizations, by default,
pm = 1
, the so called “S”, “S1”, or “1”
parameterization. This is the parameterization used by Samorodnitsky and
Taqqu (1994), and is a slight modification of Zolotarev's (A)
parameterization. It is the form with the simplest expression of the
characteristic function; see Nolan (2010, p. 8).
pm = 0
is the “S0” parameterization: based on the (M)
representation of Zolotarev for an alpha stable distribution with
skewness beta. Unlike the Zolotarev (M) parameterization, gamma and
delta are straightforward scale and shift parameters. This
representation is continuous in all 4 parameters.
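For the symmetric case beta = 0 in the pm = 1 parameterization, the underlying Chambers-Mallows-Stuck (1976) construction takes a particularly simple form, sketched here in base R (rstableSym is a hypothetical helper for illustration; the package handles the general skewed case):

```r
## Chambers-Mallows-Stuck sampler for S(alpha, beta = 0; 1):
rstableSym <- function(n, alpha) {
  stopifnot(0 < alpha, alpha <= 2)
  Theta <- runif(n, -pi/2, pi/2)   # uniform angle
  W <- rexp(n)                     # unit exponential, independent of Theta
  sin(alpha * Theta) / cos(Theta)^(1/alpha) *
    (cos((1 - alpha) * Theta) / W)^((1 - alpha)/alpha)
}

set.seed(2)
x <- rstableSym(1e5, 2)  # alpha = 2: S(2, 0; 1) = N(0, 2)
```

For alpha = 2 the formula collapses to 2*sin(Theta)*sqrt(W), a N(0, 2) variate, which gives a quick sanity check on the sampler; for alpha = 1 it reduces to a standard Cauchy.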
A numeric vector of length n containing the
generated random variates.
Chambers, J. M., Mallows, C. L., and Stuck, B. W. (1976), A Method for Simulating Stable Random Variables, J. Amer. Statist. Assoc. 71, 340–344.
Nolan, J. P. (2012), Stable Distributions—Models for Heavy Tailed Data, Birkhaeuser, in progress.
Samorodnitsky, G. and Taqqu, M. S. (1994), Stable Non-Gaussian Random Processes, Stochastic Models with Infinite Variance, Chapman and Hall, New York.
rstable, which also allows the
pm = 2 parametrization and provides further functionality for
stable distributions.
# Generate and plot a series of stable random variates
set.seed(1953)
r <- rstable1(n = 1000, alpha = 1.9, beta = 0.3)
plot(r, type = "l", main = "stable: alpha=1.9 beta=0.3",
     col = "steelblue"); grid()
hist(r, "Scott", prob = TRUE, ylim = c(0,0.3),
     main = "Stable S(1.9, 0.3; 1)")
lines(density(r), col="red2", lwd = 2)
safeUroot(), as a “safe” version of
uniroot(), searches for a root (that is, a zero) of the
function f with respect to its first argument.
“Safe” means searching for the correct interval
= c(lower, upper) if sign(f(x)) does not satisfy the
requirements at the interval end points; see the ‘Details’
section.
safeUroot(f, interval, ...,
          lower = min(interval), upper = max(interval),
          f.lower = f(lower, ...), f.upper = f(upper, ...),
          Sig = NULL, check.conv = FALSE,
          tol = .Machine$double.eps^0.25, maxiter = 1000, trace = 0)
f |
the function for which the root is sought. |
interval |
interval = c(lower, upper) within which to search for the root. |
... |
additional named or unnamed arguments to be passed
to f. |
lower , upper
|
lower and upper endpoint of search interval. |
f.lower , f.upper
|
function value at lower or upper endpoint, respectively. |
Sig |
desired sign of f to the right of the root, or NULL; see the ‘Details’ section. |
check.conv |
logical indicating whether a convergence warning of the
underlying uniroot should be turned into an error. |
tol |
the desired accuracy, that is, convergence tolerance. |
maxiter |
maximal number of iterations. |
trace |
number determining tracing; larger values give more output. |
If it is known how f changes sign at the root
x0, that is, if the function is increasing or decreasing there,
Sig can be specified, typically as +1 or -1,
to require sign(f(x)) = Sig to the right of the root.
In that case, the search interval must
be such that Sig * f(lower) < 0 and
Sig * f(upper) > 0.
Otherwise, by default, when Sig = NULL, the search interval
must satisfy f(lower) * f(upper) <= 0.
In both cases, when the requirement is not satisfied, safeUroot()
tries to enlarge the interval until the requirement is satisfied.
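The enlargement idea can be pictured in a few lines of base R; safeUrootSketch below is a hypothetical, simplified stand-in (no Sig handling, fixed doubling of the interval width) for the actual implementation:

```r
## Widen the interval until f changes sign, then hand over to uniroot():
safeUrootSketch <- function(f, interval, maxgrow = 60) {
  lo <- min(interval); hi <- max(interval)
  flo <- f(lo); fhi <- f(hi)
  it <- 0
  while (flo * fhi > 0 && it < maxgrow) {  # no sign change yet: enlarge
    w <- hi - lo
    lo <- lo - w; flo <- f(lo)
    hi <- hi + w; fhi <- f(hi)
    it <- it + 1
  }
  if (flo * fhi > 0) stop("no sign change found after enlarging the interval")
  uniroot(f, c(lo, hi))
}

f2 <- function(x) exp(-x) * (x - 12)  # both end values negative on c(0, 2)
r <- safeUrootSketch(f2, c(0, 2))     # uniroot() alone would fail here
```

On c(0, 2) both endpoint values of f2 are negative, so plain uniroot() errors out; after a few doublings the sketch finds a sign change and locates the root at 12. Recent versions of base R offer a related facility via uniroot(..., extendInt=).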
A list with four components, root, f.root,
iter and estim.prec; see uniroot.
f1 <- function(x) (121 - x^2)/(x^2+1)
f2 <- function(x) exp(-x)*(x - 12)
try(uniroot(f1, c(0,10)))
try(uniroot(f2, c(0,2)))
##--> error: f() .. end points not of opposite sign
## whereas safeUroot() simply first enlarges the search interval:
safeUroot(f1, c(0,10), trace=1)
safeUroot(f2, c(0,2), trace=2)

## no way to find a zero of a positive function:
try( safeUroot(exp, c(0,2), trace=TRUE) )

## Convergence checking :
safeUroot(sinc, c(0,5), maxiter=4) #-> "just" a warning
try( # an error, now with check.conv=TRUE
  safeUroot(sinc, c(0,5), maxiter=4, check.conv=TRUE)
)
Computes the serial independence test based on the empirical copula
process as proposed in Ghoudi et al. (2001) and Genest and Rémillard (2004).
The test, which is the serial analog of indepTest, can be
seen as composed of three steps:
(i) a simulation step, which consists in simulating the distribution of the test statistics under serial independence for the sample size under consideration;
(ii) the test itself, which consists in computing the approximate p-values of the test statistics with respect to the empirical distributions obtained in step (i);
(iii) the display of a graphic, called a dependogram, enabling the user to understand the type of departure from serial independence, if any.
More details can be found in the articles cited in the reference section.
serialIndepTestSim(n, lag.max, m=lag.max+1, N=1000, verbose = interactive())
serialIndepTest(x, d, alpha=0.05)
n |
length of the time series when simulating the distribution of the test statistics under serial independence. |
lag.max |
maximum lag. |
m |
maximum cardinality of the subsets of 'lags' for which a test
statistic is to be computed. It makes sense to consider m <= lag.max + 1. |
N |
number of repetitions when simulating under serial independence. |
verbose |
a logical specifying if progress
should be displayed via a text progress bar. |
x |
numeric vector containing the time series whose serial independence is to be tested. |
d |
an object of class "serialIndepTestDist" as returned by serialIndepTestSim(). |
alpha |
significance level used in the computation of the critical values for the test statistics. |
See the references below for more details, especially the third and fourth ones.
The function serialIndepTestSim() returns an object of S3 class
"serialIndepTestDist" with list components sample.size,
lag.max, max.card.subsets, number.repetitions,
subsets (list of the subsets for which test statistics have
been computed), subsets.binary (subsets in binary 'integer'
notation), dist.statistics.independence (an N-row matrix
containing the values of the test statistics for each subset and each
repetition) and dist.global.statistic.independence (a vector of
length N containing the values of the serial version of the
global Cramér-von Mises test statistic for each repetition; see
last reference, p. 175).
The function serialIndepTest() returns an object of S3 class
"indepTest" with list components subsets,
statistics, critical.values, pvalues,
fisher.pvalue (a p-value resulting from a combination à la
Fisher of the subset statistic p-values), tippett.pvalue (a
p-value resulting from a combination à la Tippett of the subset
statistic p-values), alpha (global significance level of the
test), beta (1 - beta is the significance level per
statistic), global.statistic (value of the global Cramér-von
Mises statistic derived directly from the serial independence
empirical copula process; see last reference, p. 175) and
global.statistic.pvalue (corresponding p-value).
The former argument print.every is deprecated and no
longer supported; use verbose instead.
Deheuvels, P. (1979). La fonction de dépendance empirique et ses propriétés: un test non paramétrique d'indépendance, Acad. Roy. Belg. Bull. Cl. Sci., 5th Ser. 65:274–292.
Deheuvels, P. (1981), A non parametric test for independence, Publ. Inst. Statist. Univ. Paris. 26:29–50.
Ghoudi, K., Kulperger, R. J., and Rémillard, B. (2001). A nonparametric test of serial independence for time series and residuals. Journal of Multivariate Analysis 79, 191–218.
Genest, C. and Rémillard, B. (2004). Tests of independence and randomness based on the empirical copula process. Test 13, 335–369.
Genest, C., Quessy, J.-F., and Rémillard, B. (2006) Local efficiency of a Cramer-von Mises test of independence. Journal of Multivariate Analysis 97, 274–294.
Genest, C., Quessy, J.-F., and Rémillard, B. (2007) Asymptotic local efficiency of Cramér-von Mises tests for multivariate independence. The Annals of Statistics 35, 166–191.
indepTest
,
multIndepTest
,
multSerialIndepTest
,
dependogram
## AR 1 process
ar <- numeric(200)
ar[1] <- rnorm(1)
for (i in 2:200) ar[i] <- 0.5 * ar[i-1] + rnorm(1)
x <- ar[101:200]

## In order to test for serial independence, the first step consists
## in simulating the distribution of the test statistics under
## serial independence for the same sample size, i.e. n=100.
## As we are going to consider lags up to 3, i.e., subsets of
## {1,...,4} whose cardinality is between 2 and 4 containing {1},
## we set lag.max=3. This may take a while...
d <- serialIndepTestSim(100,3)

## The next step consists in performing the test itself:
test <- serialIndepTest(x,d)

## Let us see the results:
test

## Display the dependogram:
dependogram(test,print=TRUE)

## NB: In order to save d for future use, the saveRDS() function can be used.
Set or change the parameter (
theta
) of
a copula. The name ‘theta’ derives from its use in (nested)
Archimedean copulas, where x
is of class
"acopula"
or
"outer_nacopula"
. This is used for constructing
copula models with specified parameter, as, for example, in
onacopula()
, or also gofCopula
.
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)

## S4 method for signature 'acopula,ANY'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)
## S4 method for signature 'copula,ANY'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)
## S4 method for signature 'Xcopula,ANY'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)
## S4 method for signature 'outer_nacopula,numeric'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)
## S4 method for signature 'khoudrajiCopula,ANY'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE, ...)
## S4 method for signature 'mixCopula,ANY'
setTheta(x, value, na.ok = TRUE, noCheck = FALSE, freeOnly = TRUE,
         treat.negative = c("set.0", "warn.set0", "stop"), ...)
x |
an R object of class |
value |
parameter value or vector, |
... |
further arguments for methods. |
na.ok |
logical indicating if |
noCheck |
logical indicating if parameter constraint checks should be skipped. |
freeOnly |
logical indicating that only non-fixed, i.e.,
“free” parameters are to be set. If true, as by default,
|
treat.negative |
a |
an R object of the same class as x
, with the main parameter (vector)
(often called theta
) set to value
.
The “inverse” function, a “getter” method, is
getTheta()
.
myC <- setTheta(copClayton, 0.5)
myC

## Frank copula with Kendall's tau = 0.8 :
(myF.8 <- setTheta(copFrank, iTau(copFrank, tau = 0.8)))

## negative theta is ok for dim = 2 :
myF <- setTheta(copFrank, -2.5, noCheck=TRUE)
myF@tau(myF@theta) # -0.262

myT <- setTheta(tCopula(df.fixed=TRUE), 0.7)
stopifnot(all.equal(myT, tCopula(0.7, df.fixed=TRUE),
                    check.environment=FALSE, tolerance=0))

(myT2 <- setTheta(tCopula(dim=3, df.fixed=TRUE), 0.7))
## Setting 'rho' and 'df' --- for default df.fixed=FALSE :
(myT3 <- setTheta(tCopula(dim=3), c(0.7, 4)))
Methods for function show
in package copula.
see Copula
.
(see fitMvdc
) and
(see fitCopula
): these call the
(hidden) print
method, with its default argument.
Using print()
instead allows setting digits
, e.g.
see fitCopula
.
The Sibuya(alpha) distribution, with parameter alpha in (0, 1], can be
defined by its Laplace transform
psi(t) = 1 - (1 - exp(-t))^alpha, t >= 0,
its distribution function
F(n) = 1 - 1/(n * B(n, 1 - alpha))
(where B denotes the beta function) or its probability
mass function
p_n = choose(alpha, n) * (-1)^(n - 1),
where n = 1, 2, ....
pSibuya
evaluates the distribution function.
dSibuya
evaluates the probability mass function.
rSibuya
generates random variates from
with
the algorithm described in Hofert (2011), Proposition 3.2.
dsumSibuya
gives the probability mass function of the
n-fold convolution of Sibuya variables, that is, the sum of n
independent Sibuya random variables,
S = X_1 + ... + X_n, where
X_i ~ Sibuya(alpha).
A closed form of this probability mass function is derived in Hofert (2010, p. 99).
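Since a single Sibuya variable is its own 1-fold convolution, a quick consistency check between dSibuya() and dsumSibuya() can be sketched as follows (assuming the copula package is attached):

```r
library(copula)

x <- 1:10
alpha <- 0.4
## the n = 1 case of the convolution must reduce to the Sibuya pmf itself
stopifnot(all.equal(dsumSibuya(x, n = 1, alpha = alpha),
                    dSibuya(x, alpha)))
```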
rSibuya(n, alpha)
dSibuya(x, alpha, log=FALSE)
pSibuya(x, alpha, lower.tail=TRUE, log.p=FALSE)

dsumSibuya(x, n, alpha,
           method=c("log", "direct", "diff", "exp.log",
                    "Rmpfr", "Rmpfr0", "RmpfrM", "Rmpfr0M"),
           mpfr.ctrl = list(minPrec = 21, fac = 1.25, verbose=TRUE),
           log=FALSE)
n |
for |
alpha |
parameter in |
x |
vector of |
log , log.p
|
|
lower.tail |
|
method |
character string specifying which computational method is to be applied. Implemented are:
|
mpfr.ctrl |
for |
The Sibuya distribution has no finite moments; in particular, its mean and variance are infinite.
For documentation and didactic purposes, rSibuyaR
is a pure-R
implementation of rSibuya
, of course slower than rSibuya
as the latter is implemented in C.
Note that the sum to evaluate for dsumSibuya
is numerically
highly challenging, already for small
values of x and n,
and it should therefore be used with care. It may require high-precision
arithmetic which can be accessed with
method="Rmpfr"
(and the
Rmpfr package).
A vector of positive integers of
length n
containing the generated random variates.
a vector of
probabilities of the same length as x
.
a vector of probabilities, positive if and only if
x >= n
and of the same length as x
(or n
if
that is longer).
Hofert, M. (2010). Sampling Nested Archimedean Copulas with Applications to CDO Pricing. Südwestdeutscher Verlag fuer Hochschulschriften AG & Co. KG.
Hofert, M. (2011). Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis 55, 57–70.
rFJoe
and rF01Joe
(where rSibuya
is
applied).
## Sample n random variates from a Sibuya(alpha) distribution and plot a
## histogram
n <- 1000
alpha <- .4
X <- rSibuya(n, alpha)
hist(log(X), prob=TRUE); lines(density(log(X)), col=2, lwd=2)
SMI.12
contains the close prices of all 20 constituents of
the Swiss Market Index (SMI) from 2011-09-09 to 2012-03-28.
data(SMI.12)
SMI.12
is conceptually a multivariate time series,
here simply stored as numeric
matrix
,
where the rownames
are dates (of week days).
The format is:
num [1:141, 1:20] 16.1 15.7 15.7 16.1 16.6 ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:141] "2011-09-09" "2011-09-12" "2011-09-13" "2011-09-14" ...
  ..$ : chr [1:20] "ABBN" "ATLN" "ADEN" "CSGN" ...
... from 2011-09-09 to 2012-03-28
lSMI
is the list of the original data (before NA
“imputation”).
The data was drawn from Yahoo! Finance.
data(SMI.12)
## maybe  head(SMI.12)
str(D.12 <- as.Date(rownames(SMI.12)))
summary(D.12)
matplot(D.12, SMI.12, type="l", log = "y",
        main = "The 20 SMI constituents (2011-09 -- 2012-03)",
        xaxt="n", xlab = "2011 / 2012")
Axis(D.12, side=1)

if(FALSE) { ##--- This worked up to mid 2012, but no longer ---
  begSMI <- "2011-09-09"
  endSMI <- "2012-03-28"
  ##-- read *public* data ------------------------------
  stopifnot(require(zoo), # -> to access all the zoo methods
            require(tseries))
  symSMI <- c("ABBN.VX","ATLN.VX","ADEN.VX","CSGN.VX","GIVN.VX","HOLN.VX",
              "BAER.VX","NESN.VX","NOVN.VX","CFR.VX", "ROG.VX", "SGSN.VX",
              "UHR.VX", "SREN.VX","SCMN.VX","SYNN.VX","SYST.VX","RIGN.VX",
              "UBSN.VX","ZURN.VX")
  lSMI <- sapply(symSMI, function(sym)
      get.hist.quote(instrument = sym, start = begSMI, end = endSMI,
                     quote = "Close", provider = "yahoo", drop=TRUE))
  ## check if stock data have the same length for each company.
  sapply(lSMI, length)
  ## "concatenate" all:
  SMIo <- do.call(cbind, lSMI)
  ## and fill in the NAs :
  SMI.12 <- na.fill(SMIo, "extend")
  colnames(SMI.12) <- sub("\\.VX", "", colnames(SMI.12))
  SMI.12 <- as.matrix(SMI.12)
}##---- --- original download

zoo.there <- "package:zoo" %in% search()
if(zoo.there || require("zoo")) {
  stopifnot(identical(SMI.12,
    local({ S <- as.matrix(na.fill(do.call(cbind, lSMI), "extend"))
            colnames(S) <- sub("\\.VX", "", colnames(S)); S })))
  if(!zoo.there) detach("package:zoo")
}
Methods splom2()
to draw scatter-plot matrices of
(random samples of) distributions from package copula.
## S4 method for signature 'matrix'
splom2(x, varnames = NULL, varnames.null.lab = "U", xlab = "",
       col.mat = NULL, bg.col.mat = NULL, ...)
## ditto an identical 'data.frame' method

## S4 method for signature 'Copula'
splom2(x, n, ...)
## S4 method for signature 'mvdc'
splom2(x, n, varnames.null.lab = "X", ...)
x |
a |
n |
when |
varnames |
the variable names, typically unspecified. |
varnames.null.lab |
the |
xlab |
the x-axis label. |
col.mat |
a matrix of colors (or one color) for the plot symbols;
if |
bg.col.mat |
a matrix of colors for the background (the default is the
setting as obtained from
|
... |
additional arguments passed to the underlying
|
From splom()
, an R object of class "trellis"
.
pairs2()
for a similar function (for matrices and data
frames) based on pairs()
.
The lattice-based cloud2-methods
for 3D data, and
wireframe2-methods
and
contourplot2-methods
for functions.
## For 'matrix' objects
## Create an n x d matrix of random variates from a t copula
## with four degrees of freedom and plot the generated data
n <- 1000  # sample size
d <- 3     # dimension
nu <- 4    # degrees of freedom
tau <- 0.5 # Kendall's tau
th <- iTau(tCopula(df = nu), tau)    # corresponding parameter
cop <- tCopula(th, dim = d, df = nu) # define copula object
set.seed(271)
U <- rCopula(n, copula = cop)
splom2(U)

## For 'copula' objects
set.seed(271)
splom2(cop, n = n) # same as above

## For 'rotCopula' objects: ---> Examples in rotCopula

## For 'mvdc' objects
mvNN <- mvdc(cop, c("norm", "norm", "exp"),
             list(list(mean = 0, sd = 1), list(mean = 1), list(rate = 2)))
splom2(mvNN, n = n)
Compute Eulerian numbers and Stirling numbers of the first and second
kind, possibly vectorized for all k “at once”.
Stirling1(n, k)
Stirling2(n, k, method = c("lookup.or.store", "direct"))
Eulerian (n, k, method = c("lookup.or.store", "direct"))

Stirling1.all(n)
Stirling2.all(n)
Eulerian.all (n)
n |
positive integer ( |
k |
integer in |
method |
for |
Eulerian numbers: A(n,k), the number of permutations of 1,2,...,n with exactly
k ascents (or exactly k descents).
Stirling numbers of the first kind: s(n,k), i.e., (-1)^(n-k) times
the number of permutations of 1,2,...,n with exactly k cycles.
Stirling numbers of the second kind: S(n,k) is the number of ways of
partitioning a set of n elements into k non-empty subsets.
A(n,k), s(n,k), or S(n,k), respectively.
Eulerian.all(n)
is the same as sapply(0:(n-1), Eulerian, n=n)
(for n >= 1),
Stirling1.all(n)
is the same as sapply(1:n, Stirling1, n=n)
,
and Stirling2.all(n)
is the same as sapply(1:n, Stirling2, n=n)
,
but more efficient.
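The stated equivalences can be verified directly; this small sketch assumes the copula package is attached:

```r
library(copula)

n <- 9
## the *.all() functions agree with element-wise sapply() calls
stopifnot(all.equal(Eulerian.all(n),  sapply(0:(n-1), Eulerian,  n = n)),
          all.equal(Stirling1.all(n), sapply(1:n,     Stirling1, n = n)),
          all.equal(Stirling2.all(n), sapply(1:n,     Stirling2, n = n)))
```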
For typical double precision arithmetic, Eulerian*(n, *),
Stirling1*(n, *), and Stirling2*(n, *) all overflow (to Inf)
once n is sufficiently large; the exact overflow thresholds
differ between the three families.
Eulerians:
NIST Digital Library of Mathematical Functions, 26.14: https://dlmf.nist.gov/26.14
Stirling numbers:
Abramowitz and Stegun 24,1,4 (p. 824-5 ; Table 24.4, p.835); Closed Form : p.824 "C."
NIST Digital Library of Mathematical Functions, 26.8: https://dlmf.nist.gov/26.8
Stirling1(7,2)
Stirling2(7,3)
Stirling1.all(9)
Stirling2.all(9)
Compute Kendall's Tau of an Ali-Mikhail-Haq ("AMH") or Joe Archimedean
copula with parameter theta
. In both cases, analytical
expressions are available, but numerical alternatives are needed in some cases.
tauAMH()
:Analytically, given as
tau(theta) = 1 - 2*(theta + (1 - theta)^2 * log(1 - theta)) / (3 * theta^2)
for theta != 0;
numerically, care has to be taken when theta -> 0,
avoiding accuracy loss already, for example, for theta as
large as theta = 0.001.
tauJoe()
:Analytically, Kendall's tau is given by an infinite sum;
the infinite sum can be expressed by three digamma
(psigamma) function terms.
tauAMH(theta)
tauJoe(theta, method = c("hybrid", "digamma", "sum"), noTerms=446)
theta |
numeric vector with values in |
method |
string specifying the method for |
noTerms |
the number of summation terms for the |
tauAMH()
:For small theta,
we use Taylor series
approximations of up to order 7,
where we found that dropping the last two terms (i.e., only using 5 terms
of the order-7 Taylor polynomial) is actually numerically
advantageous.
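To see why care is needed near zero, one can compare tauAMH() with a naive evaluation of the closed form tau(theta) = 1 - 2*(theta + (1-theta)^2*log(1-theta))/(3*theta^2); this is only an illustrative sketch:

```r
library(copula)

tauNaive <- function(th) 1 - 2*(th + (1-th)^2 * log(1-th)) / (3*th^2)
th <- c(1e-12, 1e-8, 1e-4, 1e-3, 0.1)
## the naive formula loses accuracy for small theta (cancellation),
## whereas tauAMH() switches to a Taylor approximation there
cbind(theta = th, naive = tauNaive(th), tauAMH = tauAMH(th))
```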
tauJoe()
:The "sum" method simply replaces the infinite sum by a finite
sum (with noTerms terms). The more accurate or faster methods
use analytical summation formulas, using the digamma,
aka psi, function; see, e.g.,
https://en.wikipedia.org/wiki/Digamma_function#Series_formula.
The smallest sensible value, i.e.,
th
for which
tauJoe(th) == -1
is easily determined via
str(uniroot(function(th) tauJoe(th)-(-1), c(0.1, 0.3), tol = 1e-17), digits=12)
to be 0.2387339899
.
a vector of the same length as theta, with values
for tauAMH: tau(theta), computed
numerically accurately, to at least around 12 decimal digits;
for tauJoe: in [-1, 1].
acopula-families
, and their class definition,
"acopula"
. etau()
for
method-of-moments estimators based on Kendall's tau.
tauAMH(c(0, 2^-40, 2^-20))
curve(tauAMH, 0, 1)
curve(tauAMH, -1, 1)  # negative taus as well
curve(tauAMH, 1e-12, 1, log="xy") # linear, tau ~= 2/9*theta in the limit

curve(tauJoe, 1, 10)
curve(tauJoe, 0.2387, 10) # negative taus (*not* valid for Joe: no 2-monotone psi()!)
These data consist of log concentrations of 7 chemical elements in 655 water samples collected near Grand Junction, CO (from the Montrose quadrangle of Western Colorado). Concentrations were measured for the following elements: Uranium (U), Lithium (Li), Cobalt (Co), Potassium (K), Cesium (Cs), Scandium (Sc), and Titanium (Ti).
data(uranium, package="copula")
A data frame with 655 observations of the following 7 variables:
U
(numeric) log concentration of Uranium.
Li
(numeric) log concentration of Lithium.
Co
(numeric) log concentration of Cobalt.
K
(numeric) log concentration of Potassium.
Cs
(numeric) log concentration of Cesium.
Sc
(numeric) log concentration of Scandium.
Ti
(numeric) log concentration of Titanium.
Cook, R. D. and Johnson, M. E. (1986) Generalized Burr-Pareto-logistic distributions with applications to a uranium exploration data set. Technometrics 28, 123–131.
data(uranium)
Computing antithetic variates or Latin hypercube samples.
rAntitheticVariates(u)
rLatinHypercube(u, ...)
u |
a |
... |
additional arguments passed to the underlying |
rAntitheticVariates()
takes any copula sample U, builds
1 - U, and returns the two matrices in the form of an array; this
can be used for the variance-reduction method of (componentwise)
antithetic variates.
rLatinHypercube()
takes any copula sample,
shifts its marginal ranks minus 1 by standard uniforms and then
divides by the sample size in order to obtain a Latin hypercube
sample.
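The rank construction just described can be sketched directly; latinHypercubeSketch() below is a hypothetical helper, ignoring the ties.method argument that rLatinHypercube() forwards to rank():

```r
latinHypercubeSketch <- function(u) {
    n <- nrow(u)
    ## (marginal rank - 1 + Uniform(0,1)) / n, columnwise
    (apply(u, 2, rank) - 1 + matrix(runif(length(u)), nrow = n)) / n
}
```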
rAntitheticVariates()
an array
of dimension (n, d, 2)
, say
r
, where r[,,1]
contains the original sample u
and r[,,2]
contains
the sample 1-u
.
rLatinHypercube()
matrix
of the same
dimensions as u
.
Cambou, M., Hofert, M. and Lemieux, C. (2016). Quasi-random numbers for copula models. Statistics and Computing, 1–23.
Packham, N. and Schmidt, W. M. (2010). Latin hypercube sampling with dependence and applications in finance. Journal of Computational Finance 13(3), 81–111.
### 1 Basic plots ##############################################################

## Generate data from a Gumbel copula
cop <- gumbelCopula(iTau(gumbelCopula(), tau = 0.5))
n <- 1e4
set.seed(271)
U <- rCopula(n, copula = cop)

## Transform the sample to a Latin Hypercube sample
U.LH <- rLatinHypercube(U)

## Plot
## Note: The 'variance-reducing property' is barely visible, but that's okay
layout(rbind(1:2))
plot(U,    xlab = quote(U[1]), ylab = quote(U[2]), pch = ".", main = "U")
plot(U.LH, xlab = quote(U[1]), ylab = quote(U[2]), pch = ".", main = "U.LH")
layout(1) # reset layout

## Transform the sample to an antithetic variate sample
U.AV <- rAntitheticVariates(U)
stopifnot(identical(U.AV[,,1],   U),
          identical(U.AV[,,2], 1-U))

## Plot original sample and its corresponding (componentwise) antithetic variates
layout(rbind(1:2))
plot(U.AV[,,1], xlab = quote(U[1]), ylab = quote(U[2]), pch=".", main= "U")
plot(U.AV[,,2], xlab = quote(U[1]), ylab = quote(U[2]), pch=".", main= "1 - U")
layout(1) # reset layout

### 2 Small variance-reduction study for exceedance probabilities ##############

## Auxiliary function for approximately computing P(U_1 > u_1,..., U_d > u_d)
## by Monte Carlo simulation based on pseudo-random numbers, Latin hypercube
## sampling and quasi-random numbers.
sProb <- function(n, copula, u) {
    d <- length(u)
    stopifnot(n >= 1, inherits(copula, "Copula"), 0 < u, u < 1,
              d == dim(copula))
    umat <- rep(u, each = n)
    ## Pseudo-random numbers
    U <- rCopula(n, copula = copula)
    PRNG <- mean(rowSums(U > umat) == d)
    ## Latin hypercube sampling (based on the recycled 'U')
    U. <- rLatinHypercube(U)
    LHS <- mean(rowSums(U. > umat) == d)
    ## Quasi-random numbers
    U.. <- cCopula(sobol(n, d = d, randomize = TRUE),
                   copula = copula, inverse = TRUE)
    QRNG <- mean(rowSums(U.. > umat) == d)
    ## Return
    c(PRNG = PRNG, LHS = LHS, QRNG = QRNG)
}

## Simulate the probabilities of falling in (u_1,1] x ... x (u_d,1]
library(qrng) # for quasi-random numbers

(Xtras <- copula:::doExtras()) # determine whether examples will be extra (long)
B <- if(Xtras) 500 else 100  # number of replications
n <- if(Xtras) 1000 else 200 # sample size
d <- 2  # dimension; note: for d > 2, the true value depends on the seed
nu <- 3 # degrees of freedom
th <- iTau(tCopula(df = nu), tau = 0.5)      # correlation parameter
cop <- tCopula(param = th, dim = d, df = nu) # t copula
u <- rep(0.99, d) # lower-left endpoint of the considered cube

set.seed(42) # for reproducibility
true <- prob(cop, l = u, u = rep(1, d)) # true exceedance probability
system.time(res <- replicate(B, sProb(n, copula = cop, u = u)))

## "abbreviations":
PRNG <- res["PRNG",]
LHS  <- res["LHS" ,]
QRNG <- res["QRNG",]

## Compute the variance-reduction factors and % improvements
vrf  <- var(PRNG) / var(LHS)  # variance reduction factor w.r.t. LHS
vrf. <- var(PRNG) / var(QRNG) # variance reduction factor w.r.t. QRNG
pim  <- (var(PRNG) - var(LHS)) / var(PRNG) * 100 # improvement w.r.t. LHS
pim. <- (var(PRNG) - var(QRNG))/ var(PRNG) * 100 # improvement w.r.t. QRNG

## Boxplot
boxplot(list(PRNG = PRNG, LHS = LHS, QRNG = QRNG), notch = TRUE,
        main = substitute("Simulated exceedance probabilities" ~
                          P(bold(U) > bold(u)) ~~ "for a" ~ t[nu.] ~ "copula",
                          list(nu. = nu)),
        sub = sprintf(
  "Variance-reduction factors and %% improvements: %.1f (%.0f%%), %.1f (%.0f%%)",
            vrf, pim, vrf., pim.))
abline(h = true, lty = 3) # true value
mtext(sprintf("B = %d replications with n = %d and d = %d", B, n, d), side = 3)
Generic function and methods wireframe2()
to draw
(lattice) wireframe
(aka
“perspective”) plots of two-dimensional distributions from
package copula.
## S4 method for signature 'matrix'
wireframe2(x, xlim = range(x[,1], finite = TRUE),
           ylim = range(x[,2], finite = TRUE),
           zlim = range(x[,3], finite = TRUE),
           xlab = NULL, ylab = NULL, zlab = NULL,
           alpha.regions = 0.5,
           scales = list(arrows = FALSE, col = "black"),
           par.settings = standard.theme(color = FALSE),
           draw.4.pCoplines = FALSE, ...)
## _identical_ method for 'data.frame' as for 'matrix'

## S4 method for signature 'Copula'
wireframe2(x, FUN, n.grid = 26, delta = 0,
           xlim = 0:1, ylim = 0:1, zlim = NULL,
           xlab = quote(u[1]), ylab = quote(u[2]),
           zlab = list(deparse(substitute(FUN))[1], rot = 90),
           draw.4.pCoplines = identical(FUN, pCopula), ...)
## S4 method for signature 'mvdc'
wireframe2(x, FUN, n.grid = 26, xlim, ylim, zlim = NULL,
           xlab = quote(x[1]), ylab = quote(x[2]),
           zlab = list(deparse(substitute(FUN))[1], rot = 90), ...)
x |
a |
xlim , ylim , zlim
|
the x-, y- and z-axis limits. |
xlab , ylab , zlab
|
the x-, y- and z-axis labels. |
scales |
|
FUN |
the |
n.grid |
the number of grid points used in each dimension. This
can be a vector of length two, giving the number of grid points used
in x- and y-direction, respectively; the function |
delta |
a small number in |
draw.4.pCoplines |
logical indicating if the 4 known border
segments of a copula distribution function, i.e., |
alpha.regions , par.settings , ...
|
additional arguments passed to
the underlying |
An object of class “trellis” as returned by wireframe()
.
Wireframe plots for objects of class "matrix"
,
"data.frame"
, "Copula"
or
"mvdc"
.
The persp-methods
for drawing perspective plots via base graphics.
The lattice-based contourplot2-methods
.
## For 'matrix' objects
## The Frechet--Hoeffding bounds W and M
n.grid <- 26
u <- seq(0, 1, length.out = n.grid)
grid <- expand.grid("u[1]" = u, "u[2]" = u)
W <- function(u) pmax(0, rowSums(u)-1) # lower bound W
M <- function(u) apply(u, 1, min)      # upper bound M
x.W <- cbind(grid, "W(u[1],u[2])" = W(grid)) # evaluate W on 'grid'
x.M <- cbind(grid, "M(u[1],u[2])" = M(grid)) # evaluate M on 'grid'
wireframe2(x.W)
wireframe2(x.W, shade = TRUE) # plot of W
wireframe2(x.M, drape = TRUE) # plot of M
## use with custom colors:
myPalette <- colorRampPalette(c("white","darkgreen", "gold", "red"))
myPar <- lattice::standard.theme(region = myPalette(16))
wireframe2(x.W, shade = TRUE, par.settings = myPar)

## For 'Copula' objects
cop <- frankCopula(-4)
wireframe2(cop, pCopula) # the copula
wireframe2(cop, pCopula, shade = TRUE) # ditto, "shaded" (but color=FALSE)
wireframe2(cop, pCopula, shade = TRUE, par.settings=list()) # use lattice default colors
wireframe2(cop, pCopula, shade = TRUE, col.4 = "gray60")    # ditto, "shaded"+grid
wireframe2(cop, pCopula, drape = TRUE, xlab = quote(x[1]))  # adjusting an axis label
wireframe2(cop, dCopula, delta=0.01) # the density
wireframe2(cop, dCopula, shade = TRUE, par.settings=list()) # use lattice default colors
wireframe2(cop, dCopula) # => the density is set to 0 on the margins
wireframe2(cop, function(u, copula) dCopula(u, copula, log=TRUE),
           shade = TRUE, par.settings = myPar,
           zlab = list(quote(log(c(u[1],u[2]))), rot=90),
           main = "dCopula(.., log=TRUE)")

## For 'mvdc' objects
mvNN <- mvdc(gumbelCopula(3), c("norm", "norm"),
             list(list(mean = 0, sd = 1), list(mean = 1)))
wireframe2(mvNN, dMvdc, xlim=c(-2, 2), ylim=c(-1, 3))
wireframe2(mvNN, dMvdc, xlim=c(-2, 2), ylim=c(-1, 3),
           shade=TRUE, par.settings=list())
k
-fold cross-validation. Computes the leave-one-out cross-validation criterion (or a
k
-fold version of it) for the hypothesized parametric copula
family using, by default, maximum pseudo-likelihood estimation.
The leave-one-out criterion is a cross-validated log-likelihood; it is
defined in equation (42) of Grønneberg and Hjort
(2014). When computed for several
parametric copula families, it is thus meaningful to select the family
maximizing the criterion.
For k < n, where n is the sample size, the k-fold version is an approximation of the leave-one-out criterion that uses k randomly chosen, (almost) equally sized data blocks instead of n. When n is large, k-fold cross-validation is considerably faster (if k is "small" compared to n).
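As an illustrative sketch of what the leave-one-out criterion measures, the following naive R loop refits the copula n times, each time leaving out one observation, and sums the held-out log-densities. This is for exposition only (assuming the copula package is attached); xvCopula() implements the criterion directly, and its exact handling of pseudo-observations per fold may differ from this simplification:

```r
library(copula)
set.seed(42)
n <- 40                            # tiny sample to keep the refitting loop fast
U <- rCopula(n, claytonCopula(2))
## naive leave-one-out cross-validated log-likelihood:
## sum over i of log c(U_i; theta fitted without observation i)
xv <- sum(vapply(seq_len(n), function(i) {
  fit <- fitCopula(claytonCopula(), pobs(U[-i, , drop = FALSE]), method = "mpl")
  dCopula(pobs(U)[i, , drop = FALSE], claytonCopula(coef(fit)), log = TRUE)
}, numeric(1)))
xv
```

Comparing this hand-rolled sum across candidate families mimics the model selection that xvCopula() automates.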
xvCopula(copula, x, k = NULL, verbose = interactive(),
         ties.method = eval(formals(rank)$ties.method), ...)
copula |
an object of class "Copula" representing the hypothesized copula family. |
x |
a data matrix that will be transformed to pseudo-observations. |
k |
the number of data blocks; if NULL (the default), leave-one-out cross-validation is performed, i.e., k equals the sample size n. |
verbose |
a logical indicating if progress of the cross validation
should be displayed via txtProgressBar(). |
ties.method |
string specifying how ranks should be computed if
there are ties in any of the coordinate samples of x; passed to pobs(). |
... |
additional arguments passed to fitCopula(). |
A real number equal to the cross-validation criterion multiplied by the sample size.
Note that k-fold cross-validation with k < n shuffles the rows of x
prior to forming the blocks. The result thus depends on the value of
the random seed.
The default estimation method is maximum pseudo-likelihood estimation,
but this can be changed if necessary, along with all the other
arguments of fitCopula().
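Since the trailing arguments are forwarded to fitCopula(), one can, for instance, switch the estimator. A quick sketch (whether a given method is available depends on the copula family):

```r
library(copula)
set.seed(17)
x <- rCopula(100, claytonCopula(2))
## 'method' is forwarded to fitCopula(); "itau" = inversion of Kendall's tau,
## typically much faster than the default maximum pseudo-likelihood ("mpl")
xvCopula(claytonCopula(), x, k = 5, method = "itau")
```

This can noticeably speed up model selection runs over many candidate families, at the price of using a different (moment-based) estimator inside each fold.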
Grønneberg, S., and Hjort, N.L. (2014) The copula information criteria. Scandinavian Journal of Statistics 41, 436–459.
fitCopula()
for the underlying estimation procedure and
gofCopula()
for goodness-of-fit tests.
## A two-dimensional data example ----------------------------------
x <- rCopula(200, claytonCopula(3))

## Model (copula) selection -- takes time: each fits 200 copulas to 199 obs.
xvCopula(gumbelCopula(), x)
xvCopula(frankCopula(), x)
xvCopula(joeCopula(), x)
xvCopula(claytonCopula(), x)
xvCopula(normalCopula(), x)
xvCopula(tCopula(), x)
xvCopula(plackettCopula(), x)

## The same with 5-fold cross-validation [to save time ...]
set.seed(1) # k-fold is random (for k < n) !
xvCopula(gumbelCopula(), x, k=5)
xvCopula(frankCopula(), x, k=5)
xvCopula(joeCopula(), x, k=5)
xvCopula(claytonCopula(), x, k=5)
xvCopula(normalCopula(), x, k=5)
xvCopula(tCopula(), x, k=5)
xvCopula(plackettCopula(),x, k=5)