Penalized precision matrix estimation using the graphical lasso (glasso) algorithm. Consider the case where \(X_{1}, ..., X_{n}\) are iid \(N_{p}(\mu, \Sigma)\) and we are tasked with estimating the precision matrix, denoted \(\Omega \equiv \Sigma^{-1}\). This function solves the following optimization problem:
\(\hat{\Omega}_{\lambda} = \arg\min_{\Omega \in S_{+}^{p}} \left\{ Tr\left(S\Omega\right) - \log \det\left(\Omega \right) + \lambda \left\| \Omega \right\|_{1} \right\}\)
where \(\lambda > 0\) and we define \(\left\|A \right\|_{1} = \sum_{i, j} \left| A_{ij} \right|\).
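As a concrete check of the objective above, the following sketch evaluates \(Tr(S\Omega) - \log\det(\Omega) + \lambda\|\Omega\|_{1}\) for a candidate precision matrix. This is plain R, independent of the `GLASSO` function; the helper name `glasso.obj` is made up for illustration.

```r
# Evaluate the penalized objective Tr(S %*% Omega) - log det(Omega) + lam * ||Omega||_1
# for a candidate precision matrix. `glasso.obj` is a hypothetical helper name,
# not part of the GLASSOO package.
glasso.obj <- function(S, Omega, lam) {
  sum(diag(S %*% Omega)) -
    determinant(Omega, logarithm = TRUE)$modulus[1] +
    lam * sum(abs(Omega))
}

S <- diag(2)       # identity sample covariance
Omega <- diag(2)   # candidate estimate
glasso.obj(S, Omega, lam = 0.1)  # Tr = 2, log det = 0, penalty = 0.2 -> 2.2
```

Minimizing this objective over positive definite `Omega` is exactly the problem `GLASSO` solves; the sketch is only useful for comparing the objective value of two candidate estimates.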
```r
GLASSO(X = NULL, S = NULL, nlam = 10, lam.min.ratio = 0.01, lam = NULL,
       diagonal = FALSE, path = FALSE, crit.out = c("avg", "max"),
       crit.in = c("loss", "avg", "max"), tol.out = 1e-04, tol.in = 1e-04,
       maxit.out = 10000, maxit.in = 10000, adjmaxit.out = NULL, K = 5,
       crit.cv = c("loglik", "AIC", "BIC"), start = c("warm", "cold"),
       cores = 1, trace = c("progress", "print", "none"))
```
| Argument | Description |
|---|---|
| X | option to provide an n x p data matrix. Each row corresponds to a single observation and each column contains n observations of a single feature/variable. |
| S | option to provide a p x p sample covariance matrix (denominator n). If NULL, S is computed from X. |
| nlam | number of tuning parameters (lam) to generate for the penalty term. Defaults to 10. |
| lam.min.ratio | smallest tuning parameter, provided as a fraction of the maximum tuning parameter. Defaults to 0.01. |
| lam | option to provide positive tuning parameters for the penalty term. If provided, nlam and lam.min.ratio are ignored. Defaults to NULL. |
| diagonal | option to penalize the diagonal elements of the estimated precision matrix (\(\Omega\)). Defaults to FALSE. |
| path | option to return the regularization path. This option should be used with extreme care if the dimension is large. If set to TRUE, cores must be set to 1 and errors and optimal tuning parameters will be based on the full sample. Defaults to FALSE. |
| crit.out | criterion for convergence in the outer (blockwise) loop: "avg" requires the average absolute parameter change to be less than tol.out, while "max" requires the maximum absolute change to be less than tol.out. Defaults to "avg". |
| crit.in | criterion for convergence in the inner (lasso) loop: "loss" requires the relative change in the objective to be less than tol.in, while "avg" and "max" are defined as in crit.out. Defaults to "loss". |
| tol.out | convergence tolerance for outer (blockwise) loop. Defaults to 1e-4. |
| tol.in | convergence tolerance for inner (lasso) loop. Defaults to 1e-4. |
| maxit.out | maximum number of iterations for outer (blockwise) loop. Defaults to 1e4. |
| maxit.in | maximum number of iterations for inner (lasso) loop. Defaults to 1e4. |
| adjmaxit.out | adjusted maximum number of iterations. During cross validation this option allows the user to adjust the maximum number of iterations after the first tuning parameter has converged; intended to reduce computation time when paired with warm starts. Defaults to NULL. |
| K | specify the number of folds for cross validation. Defaults to 5. |
| crit.cv | cross validation criterion ("loglik", "AIC", or "BIC"). Defaults to "loglik". |
| start | specify a "warm" or "cold" start for cross validation. Defaults to "warm". |
| cores | option to run CV in parallel; specify the number of cores. Defaults to 1. |
| trace | option to display progress of CV. Choose one of "progress", "print", or "none". Defaults to "progress". |
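One common convention for how `nlam` and `lam.min.ratio` interact (an assumption here, following the glmnet-style grid construction, not the package's verified internals) is a log-spaced grid running from the largest off-diagonal element of S down to `lam.min.ratio` times that value:

```r
# Hypothetical sketch of how nlam and lam.min.ratio might generate a tuning grid;
# `lam.grid` is an illustrative helper, not part of the GLASSOO package.
lam.grid <- function(S, nlam = 10, lam.min.ratio = 0.01) {
  lam.max <- max(abs(S[upper.tri(S)]))  # largest off-diagonal |S_ij|
  10^seq(log10(lam.max), log10(lam.min.ratio * lam.max), length.out = nlam)
}

S <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
lam.grid(S)  # 10 log-spaced values from 0.5 down to 0.005
```

Supplying `lam` directly bypasses this grid construction entirely, per the argument descriptions above.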
Returns an object of class GLASSOO which includes:

- the function call.
- the number of iterations.
- the optimal tuning parameters.
- the grid of lambda values used for CV.
- the maximum number of iterations for the outer (blockwise) loop.
- the maximum number of iterations for the inner (lasso) loop.
- the estimated penalized precision matrix.
- the estimated covariance matrix from the penalized precision matrix (inverse of Omega).
- an array containing the solution path, ordered by ascending lambda values.
- the minimum average cross validation error (cv.crit) for the optimal parameters.
- the average cross validation error (cv.crit) across all folds.
- the cross validation errors (cv.crit).
For details on the implementation of 'GLASSOO', see the vignette https://mgallow.github.io/GLASSOO/.
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 'Sparse inverse covariance estimation with the graphical lasso.' Biostatistics 9.3 (2008): 432-441.
Banerjee, Onureena, El Ghaoui, Laurent, and d'Aspremont, Alexandre. 2008. 'Model Selection through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.' Journal of Machine Learning Research 9: 485-516.
Tibshirani, Robert. 1996. 'Regression Shrinkage and Selection via the Lasso.' Journal of the Royal Statistical Society. Series B (Methodological). JSTOR: 267-288.
Meinshausen, Nicolai and Buhlmann, Peter. 2006. 'High-Dimensional Graphs and Variable Selection with the Lasso.' The Annals of Statistics. JSTOR: 1436-1462.
Witten, Daniela M, Friedman, Jerome H, and Simon, Noah. 2011. 'New Insights and Faster Computations for the Graphical Lasso.' Journal of Computational and Graphical Statistics. Taylor and Francis: 892-900.
Tibshirani, Robert, Bien, Jacob, Friedman, Jerome, Hastie, Trevor, Simon, Noah, Taylor, Jonathan, and Tibshirani, Ryan J. 2012. 'Strong Rules for Discarding Predictors in Lasso-Type Problems.' Journal of the Royal Statistical Society: Series B (Statistical Methodology). Wiley Online Library 74 (2): 245-266.
Ghaoui, Laurent El, Viallon, Vivian, and Rabbani, Tarek. 2010. 'Safe Feature Elimination for the Lasso and Sparse Supervised Learning Problems.' arXiv preprint arXiv: 1009.4219.
Osborne, Michael R, Presnell, Brett, and Turlach, Berwin A. 2000. 'On the Lasso and its Dual.' Journal of Computational and Graphical Statistics. Taylor and Francis 9 (2): 319-337.
Rothman, Adam. 2017. 'STAT 8931 notes on an algorithm to compute the Lasso-penalized Gaussian likelihood precision matrix estimator.'
```r
# generate data from a sparse matrix
# first compute covariance matrix
S = matrix(0.7, nrow = 5, ncol = 5)
for (i in 1:5){
  for (j in 1:5){
    S[i, j] = S[i, j]^abs(i - j)
  }
}

# generate 100 x 5 matrix with rows drawn from iid N_p(0, S)
set.seed(123)
Z = matrix(rnorm(100*5), nrow = 100, ncol = 5)
out = eigen(S, symmetric = TRUE)
S.sqrt = out$vectors %*% diag(out$values^0.5)
S.sqrt = S.sqrt %*% t(out$vectors)
X = Z %*% S.sqrt

# lasso penalty CV
GLASSO(X)
#>
#> Call: GLASSO(X = X)
#>
#> Iterations:
#> [1] 3
#>
#> Tuning parameter:
#>       log10(lam)    lam
#> [1,]      -1.544  0.029
#>
#> Log-likelihood: -110.16675
#>
#> Omega:
#>          [,1]     [,2]     [,3]     [,4]     [,5]
#> [1,]  2.13226 -1.24669  0.00000  0.00000  0.18709
#> [2,] -1.24669  2.75122 -1.29912 -0.07341  0.00000
#> [3,]  0.00000 -1.29912  2.81744 -1.15682 -0.00113
#> [4,]  0.00000 -0.07341 -1.15682  2.46461 -1.17086
#> [5,]  0.18709  0.00000 -0.00113 -1.17086  1.86326
```