Cryptocurrencies like Bitcoin have become increasingly connected with traditional financial markets. Studying their dynamic relationships and volatility transmission is critical for:
Portfolio diversification
Risk management
Understanding financial contagion.
In this workshop, we combine time series modeling with network analysis to investigate these connections using R. You do not need to download any data, since we will use the quantmod package to retrieve data from Yahoo Finance.
# Load libraries for the workshop
library(dplyr) # Data manipulation (filter, mutate, summarise, etc.)
library(zoo) # Advanced time series handling (irregular dates, merging)
library(quantmod) # Download financial data from Yahoo Finance and handle OHLCV data
library(xts) # Time series objects (especially financial time series)
library(rmgarch) # Estimate multivariate GARCH models (including DCC-GARCH)
library(vars) # Estimate Vector AutoRegression (VAR) models, used for spillover
library(igraph) # Create network graphs based on correlations or connections
library(tidygraph) # Tidy interface to igraph (works well with ggraph)
library(ggplot2) # Create elegant plots based on the grammar of graphics
library(ggraph) # Specialized plotting of network graphs (extending ggplot2)
library(scales) # Pretty axis formatting for ggplot2 (percent, commas, etc.)
library(patchwork) # Combine multiple ggplots into a single layout (stacking plots)
We collect daily prices for:
Bitcoin (BTC-USD)
Gold (GC=F)
S&P500 (^GSPC)
NASDAQ (^IXIC)
Note: Bitcoin trades every calendar day, while stock and futures markets do not; after merging and dropping missing rows, the sample keeps only the days on which all markets traded.
# Define a list of asset tickers to download
tickers <- c("BTC-USD", "GC=F", "^GSPC", "^IXIC")
# BTC-USD: Bitcoin price from Yahoo Finance
# GC=F: Gold Futures price
# ^GSPC: S&P 500 index
# ^IXIC: NASDAQ index
# Download historical daily data from Yahoo Finance
getSymbols(tickers, src = "yahoo", from = "2018-01-01", to = Sys.Date())
## [1] "BTC-USD" "GC=F" "GSPC" "IXIC"
# Data is stored as xts objects in the global environment
# Extract closing prices for each asset (Cl() selects the Close column; use Ad() if adjusted prices are preferred)
btc <- Cl(`BTC-USD`) # Bitcoin closing price
gold <- Cl(`GC=F`) # Gold closing price
sp500 <- Cl(`GSPC`) # S&P500 closing price
nasdaq <- Cl(`IXIC`) # NASDAQ closing price
# Merge all asset prices into one xts object
prices <- na.omit(merge(btc, gold, sp500, nasdaq))
# Remove any missing values with na.omit()
# Compute daily log returns and rename
returns <- na.omit(diff(log(prices)))
colnames(returns) <- c("Bitcoin", "Gold", "SP500", "NASDAQ")
# Log returns are more statistically stable and stationarity-friendly for modeling
✅ We use log returns because financial time series are better behaved statistically after log-differencing.
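As an optional check (an illustrative sketch, not part of the main workflow), we can verify that the log returns are close to stationary with an Augmented Dickey-Fuller test. This assumes the tseries package is installed; it is not among the libraries loaded above.
# Optional: ADF stationarity check on the log returns (requires the tseries package, not loaded above)
library(tseries)
# The null hypothesis is a unit root, so small p-values point towards stationarity
apply(returns, 2, function(x) adf.test(as.numeric(x))$p.value)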
The Dynamic Conditional Correlation Generalized Autoregressive Conditional Heteroskedasticity model (DCC-GARCH) is an econometric framework developed to jointly model time-varying volatility and time-varying correlations across multiple financial assets. It was proposed by Engle (2002) as a computationally feasible extension of multivariate GARCH models.
Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics, 20(3), 339-350.
The GARCH (Generalized Autoregressive Conditional Heteroskedasticity) framework models how an asset’s conditional variance evolves over time, capturing features such as volatility clustering — the empirical observation that periods of high volatility tend to be followed by high volatility, and low volatility by low volatility.
In simple terms, GARCH describes how the volatility of each individual asset changes over time as a function of its own past shocks and past volatility.
While GARCH captures time-varying volatility at the univariate level, it does not, by itself, model how the relationships between assets change.
Dynamic Conditional Correlation (DCC) extends the framework by allowing the conditional correlations between asset returns to evolve over time as well.
Specifically, DCC separates the modeling into two stages:
Stage 1: fit a univariate GARCH model to each asset to obtain its conditional volatility and standardized residuals.
Stage 2: model the conditional correlation matrix of these standardized residuals, allowing it to evolve over time.
The dynamic correlation matrix is updated recursively, capturing periods of financial contagion, decoupling, or clustering, which are important phenomena in financial markets.
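For reference, here is a minimal sketch of the recursions behind these two stages, following Engle (2002): h_{i,t} is asset i's conditional variance, z_t the vector of standardized residuals (each return shock divided by its conditional standard deviation), \bar{Q} their unconditional covariance, a and b the DCC parameters (with a + b < 1), and R_t the dynamic correlation matrix.
h_{i,t} = \omega_i + \alpha_i \varepsilon_{i,t-1}^2 + \beta_i h_{i,t-1}
Q_t = (1 - a - b)\,\bar{Q} + a\, z_{t-1} z_{t-1}^{\top} + b\, Q_{t-1}
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}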
Flexibility: It allows for individual asset volatilities to behave differently (heteroskedasticity) while dynamically modeling how asset relationships change (correlation dynamics).
Computational Efficiency: Compared to full multivariate GARCH models (like VECH or BEKK), DCC-GARCH dramatically reduces the number of parameters to estimate, making it feasible for portfolios of many assets.
Financial Relevance: DCC-GARCH is crucial for risk management (e.g., Value-at-Risk aggregation), portfolio optimization, hedging strategies, and understanding financial contagion and systemic risk.
The DCC-GARCH model jointly captures time-varying second moments (volatilities) and conditional correlations in a multivariate setting. By disentangling the dynamics of volatilities from the dynamics of correlations, it offers a tractable yet rich framework to study the evolution of asset return interdependencies over time.
# 1. Specify the univariate GARCH(1,1) model for each asset
uspec <- ugarchspec(
mean.model = list(armaOrder = c(0,0)), # No ARMA terms in the mean (pure returns)
variance.model = list(model = "sGARCH", # Standard GARCH model for volatility
garchOrder = c(1,1)), # GARCH(1,1) structure (common and effective)
distribution.model = "norm" # Assume normal distribution for errors
)
# 2. Specify the DCC (Dynamic Conditional Correlation) model
spec_dcc <- dccspec(
uspec = multispec(replicate(4, uspec)), # Replicate the univariate GARCH spec for 4 assets
dccOrder = c(1,1), # DCC(1,1): first-order dynamics in conditional correlation
distribution = "mvnorm" # Multivariate normal distribution for joint returns
)
# 3. Fit the DCC-GARCH model to the return data
dcc_fit <- dccfit(spec_dcc, data = returns)
# This estimates both time-varying volatilities and time-varying correlations
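Before moving on, a few optional one-liners (illustrative, not required for the rest of the workshop) help verify what was estimated:
# Optional checks on the fitted model
dcc_fit # Print the estimation summary (univariate GARCH and DCC parameters)
coef(dcc_fit) # All estimated coefficients in one vector
dim(rcor(dcc_fit)) # Conditional correlations: 4 x 4 x number of days
dim(sigma(dcc_fit)) # Conditional volatilities: number of days x 4 assets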
A DCC-GARCH network is a visualization and analysis tool where assets (nodes) are connected based on the dynamic conditional correlations estimated from a DCC-GARCH model. Instead of just looking at correlation matrices over time, the network approach helps reveal structural patterns — who is connected to whom, how strong those relationships are, and how these relationships evolve.
# 1. Extract the full dynamic conditional correlation array from DCC-GARCH fit
correlations_array <- rcor(dcc_fit)
# correlations_array: 3-dimensional array (asset x asset x time)
# 2. Compute the time-averaged correlation matrix
avg_correlations <- apply(correlations_array, c(1,2), mean)
# Average across the time dimension to summarize overall correlations
# 3. Set diagonal elements (self-correlation) to 0
diag(avg_correlations) <- 0
# We are only interested in cross-asset correlations
# 4. Create an igraph network object from the correlation matrix
g <- graph_from_adjacency_matrix(
avg_correlations,
weighted = TRUE,
mode = "undirected",
diag = FALSE
)
# Edges represent the magnitude (and sign) of correlations between assets
# 5. Convert igraph to tidygraph and customize edge properties
tg <- as_tbl_graph(g) %>%
activate(edges) %>%
mutate(
edge_sign = ifelse(weight > 0, "positive", "negative"), # Assign edge type based on correlation sign
edge_color = ifelse(weight > 0, "positive", "negative"), # Color edges by sign (blue = positive, red = negative)
edge_width = abs(weight) * 8 # Edge thickness proportional to correlation strength
)
# 6. Plot the network using ggraph
ggraph(tg, layout = "fr") + # Fruchterman-Reingold layout: forces nodes apart nicely
geom_edge_link(aes(width = edge_width, color = edge_color), alpha = 0.8) + # Draw edges with color and width
geom_node_point(size = 8, color = "skyblue") + # Draw nodes (assets)
geom_node_text(aes(label = name), vjust = 1.8, size = 5) + # Label nodes
scale_edge_color_manual(
values = c(positive = "blue", negative = "red"), # Blue = positive, Red = negative
labels = c(positive = "Positive Correlation", negative = "Negative Correlation"),
name = "Correlation Sign" # Legend title
) +
theme_void() + # Clean background
ggtitle("DCC-GARCH Network: Bitcoin, Gold, SP500, NASDAQ") +
theme(plot.title = element_text(hjust = 0.5, size = 16)) + # Center title
guides(edge_color = guide_legend(title = "Correlation Sign"), edge_width = "none") # "none" replaces the deprecated FALSE
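To put a number on which asset is most connected in this network, one simple option (a minimal sketch using igraph's strength(), i.e., the weighted degree) is to sum the absolute average correlations attached to each node:
# Weighted degree (strength): sum of absolute average correlations per asset
node_strength <- strength(g, weights = abs(E(g)$weight))
sort(node_strength, decreasing = TRUE)
# Larger values indicate assets that are, on average, more strongly connected to the rest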
Let’s zoom into the Bitcoin-NASDAQ relationship over time.
# 1. Extract the time series of dynamic conditional correlation between Bitcoin and NASDAQ
dcc_bt_nasdaq <- correlations_array[1,4,]
# This selects:
# - 1st row (Bitcoin)
# - 4th column (NASDAQ)
# - all time points
# from the 3D correlation array
# 2. Convert the correlation series to a zoo time series object
dcc_bt_nasdaq_ts <- zoo(dcc_bt_nasdaq, order.by = index(returns))
# Attach proper date index (same as returns data)
# 3. Plot the dynamic conditional correlation over time
plot(dcc_bt_nasdaq_ts, type = "l", lwd = 2,
main = "Dynamic Conditional Correlation: Bitcoin vs NASDAQ",
ylab = "Correlation", xlab = "Time")
abline(h = 0, col = "gray", lty = 2)
# Add a horizontal line at 0 to visually separate positive and negative correlations
✅ We can see whether Bitcoin and NASDAQ become more or less correlated during different market events.
DCC-GARCH shows correlation shifts over time, which can be useful for portfolio management.
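For example, we can compare the average Bitcoin-NASDAQ correlation across sub-periods; the date ranges below are purely illustrative cut-offs, not the only sensible choice.
# Average dynamic correlation in two illustrative sub-periods
mean(window(dcc_bt_nasdaq_ts, start = as.Date("2019-01-01"), end = as.Date("2019-12-31"))) # pre-COVID year
mean(window(dcc_bt_nasdaq_ts, start = as.Date("2020-03-01"), end = as.Date("2020-06-30"))) # COVID stress period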
Volatility spillovers refer to the phenomenon where shocks or changes in volatility in one market or asset transmit to another market or asset.
In other words, instability is contagious: when asset A becomes more volatile, asset B tends to become more volatile as well, beyond what would be expected from asset B's own fundamentals.
Volatility spillovers are grounded in theories of interconnected markets, information transmission, and investor behavior under uncertainty:
Market integration: When markets are linked (through trade, finance, or sentiment), shocks propagate.
Information flows: New information in one market can affect expectations in related markets.
Herding and panic: Behavioral reactions can amplify volatility transmission, even across unrelated assets.
It is important to motivate the connections economically, since volatility spillover analysis is a statistical exercise that does not establish causality. The academic roots often cited include:
Engle's ARCH and GARCH frameworks for time-varying volatility, which supply the conditional volatilities and let us examine how conditional correlations change when volatility spikes.
Diebold and Yilmaz (2014), who extended their earlier work on returns connectedness (2009, 2012) to focus specifically on volatility spillovers, using forecast error variance decompositions (FEVD) in vector autoregressions (VARs).
Diebold, F. X., & Yilmaz, K. (2009). Measuring financial asset return and volatility spillovers, with application to global equity markets. The Economic Journal, 119(534), 158-171.
Diebold, F. X., & Yilmaz, K. (2012). Better to give than to receive: Predictive directional measurement of volatility spillovers. International Journal of Forecasting, 28(1), 57-66.
Diebold, F. X., & Yılmaz, K. (2014). On the network topology of variance decompositions: Measuring the connectedness of financial firms. Journal of Econometrics, 182(1), 119-134.
Let’s use the Diebold-Yilmaz (2014) method. Because we extract conditional volatilities from a DCC-GARCH model, and estimate a VAR on volatilities, the spillover we are measuring is volatility spillover — not return spillover.
# 1. Extract the conditional volatilities (standard deviations) from the DCC-GARCH model
cond_vol <- sigma(dcc_fit)
# cond_vol is a matrix: rows = time points, columns = assets
# 2. Rename the columns for easier interpretation
colnames(cond_vol) <- c("Bitcoin", "Gold", "SP500", "NASDAQ")
# 3. Convert to a data frame
vol_df <- as.data.frame(cond_vol)
# 4. Estimate a VAR model on the volatilities
var_model <- VAR(vol_df, p = 2, type = "const")
# p = 2 lags, include a constant term
# This models how today's volatility depends on past volatility shocks
# 5. Perform Forecast Error Variance Decomposition (FEVD)
fevd_result <- fevd(var_model, n.ahead = 10)
# Decompose forecast variance into contributions from own and others' shocks (10-step ahead horizon)
# 6. Build the FEVD matrix manually
fevd_matrix <- matrix(NA, 4, 4)
for (i in 1:4) {
fevd_matrix[i,] <- fevd_result[[i]][10,]
}
# Each row: target asset
# Each column: source asset
# Element (i,j): percentage of asset i's volatility explained by shocks from asset j
# 7. Set the matrix row and column names
colnames(fevd_matrix) <- rownames(fevd_matrix) <- c("Bitcoin", "Gold", "SP500", "NASDAQ")
# 8. Normalize rows so each row sums to 100%
spillover_table <- fevd_matrix / rowSums(fevd_matrix) * 100
# Rows now represent percentage shares
# 9. Compute the Total Spillover Index (TSI)
tsi <- (sum(spillover_table) - sum(diag(spillover_table))) / sum(spillover_table) * 100
# TSI measures the overall amount of cross-market volatility transmission (off-diagonal contribution)
# 10. Output the spillover table and TSI
round(spillover_table, 2) # Spillover Table: who influences whom
## Bitcoin Gold SP500 NASDAQ
## Bitcoin 97.43 0.00 1.21 1.35
## Gold 3.02 88.39 8.53 0.07
## SP500 7.98 1.83 89.01 1.17
## NASDAQ 7.42 1.04 86.42 5.12
tsi # Total Spillover Index: overall interconnectedness
## [1] 30.01218
✅ Shocks to volatility in one asset can cause increases in volatility in another asset. The Total Spillover Index measures overall connectedness of the system. Of course, we can do this by sub-periods as well.
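Beyond the total index, the same table yields the Diebold-Yilmaz directional measures. A minimal sketch (using the row = target, column = source convention from above) of how much each asset receives from and transmits to the others:
# Directional spillovers derived from the normalized table (row = target, column = source)
from_others <- rowSums(spillover_table) - diag(spillover_table) # volatility received FROM all other assets
to_others <- colSums(spillover_table) - diag(spillover_table) # volatility transmitted TO all other assets
net_spillover <- to_others - from_others # positive = net transmitter, negative = net receiver
round(cbind(From = from_others, To = to_others, Net = net_spillover), 2)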
We can compute a rolling spillover index to capture how spillovers change over time. (This will take a bit of time to run…)
# 1. Set the rolling window size
window_size <- 120
# 120 days ≈ 6 months — balances between smoothing and detecting changes
# 2. Initialize dimensions
n_obs <- nrow(vol_df) # Total number of observations (days)
n_assets <- ncol(vol_df) # Number of assets (Bitcoin, Gold, SP500, NASDAQ)
# 3. Prepare an empty vector to store spillover index values
spillover_series <- rep(NA, n_obs - window_size)
# 4. Rolling window estimation
for (i in 1:(n_obs - window_size)) {
# 4.1 Extract a rolling window of volatility data
window_data <- vol_df[i:(i + window_size - 1), ]
# 4.2 Fit a VAR model within the window
var_model <- VAR(window_data, p = 2, type = "const")
# 4.3 Forecast Error Variance Decomposition (FEVD)
fevd_result <- fevd(var_model, n.ahead = 10)
# 4.4 Build FEVD matrix for the window
fevd_matrix <- matrix(NA, n_assets, n_assets)
for (j in 1:n_assets) {
fevd_matrix[j,] <- fevd_result[[j]][10,] # Use 10-step ahead FEVD
}
# 4.5 Normalize rows so each row sums to 100
fevd_matrix <- fevd_matrix / rowSums(fevd_matrix) * 100
# 4.6 Calculate Total Spillover Index for this window
spillover_series[i] <- (sum(fevd_matrix) - sum(diag(fevd_matrix))) / sum(fevd_matrix) * 100
}
# 5. Create a time series object for the rolling spillovers
spillover_xts <- xts(spillover_series, order.by = index(cond_vol)[(window_size+1):n_obs])
# 6. Plot the Rolling Total Volatility Spillover Index
plot(spillover_xts, type = "l", col = "darkblue", lwd = 2,
main = "Rolling Total Volatility Spillover Index",
ylab = "Spillover (%)", xlab = "Time")
# Add a horizontal red line for the mean spillover
abline(h = mean(spillover_series, na.rm = TRUE), col = "red", lty = 2)
✅ Rolling spillovers allow us to identify crisis periods where connectedness spikes.
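One simple way to flag such episodes (an illustrative rule of thumb rather than a formal test) is to list the dates on which the rolling index exceeds its mean by one standard deviation:
# Flag windows where the rolling spillover index is unusually high (mean + 1 standard deviation)
threshold <- mean(spillover_series, na.rm = TRUE) + sd(spillover_series, na.rm = TRUE)
high_days <- index(spillover_xts)[as.numeric(spillover_xts) > threshold]
range(high_days) # First and last flagged dates
length(high_days) # Number of flagged days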
We can also study who is influencing whom over time.
# 1. Extract conditional volatilities for Bitcoin and NASDAQ
btc_vol <- cond_vol[,1] # Bitcoin volatility
nasdaq_vol <- cond_vol[,4] # NASDAQ volatility
# 2. Merge the two volatilities into a single object
vol_pair <- na.omit(merge(btc_vol, nasdaq_vol))
colnames(vol_pair) <- c("Bitcoin", "NASDAQ")
# Clean and name the series
# 3. Initialize parameters for rolling window analysis
n_obs <- nrow(vol_pair) # Total number of observations
btc_to_nasdaq <- rep(NA, n_obs - window_size) # Empty vector to store results
nasdaq_to_btc <- rep(NA, n_obs - window_size)
# 4. Rolling window estimation of directional spillovers
for (i in 1:(n_obs - window_size)) {
# 4.1 Extract rolling window slice
window_data <- vol_pair[i:(i + window_size - 1), ]
# 4.2 Fit VAR(2) model
var_model <- VAR(window_data, p = 2, type = "const")
# 4.3 Forecast Error Variance Decomposition (FEVD)
fevd_result <- fevd(var_model, n.ahead = 10)
# 4.4 Build FEVD matrix
fevd_matrix <- matrix(NA, 2, 2)
for (j in 1:2) {
fevd_matrix[j,] <- fevd_result[[j]][10,]
}
# 4.5 Normalize rows to sum to 100
fevd_matrix <- fevd_matrix / rowSums(fevd_matrix) * 100
# 4.6 Extract directional spillovers
btc_to_nasdaq[i] <- fevd_matrix[2,1] # Bitcoin's impact on NASDAQ
nasdaq_to_btc[i] <- fevd_matrix[1,2] # NASDAQ's impact on Bitcoin
}
# 5. Create xts time series objects for plotting
btc_nasdaq_xts <- xts(btc_to_nasdaq, order.by = index(vol_pair)[(window_size+1):n_obs])
nasdaq_btc_xts <- xts(nasdaq_to_btc, order.by = index(vol_pair)[(window_size+1):n_obs])
# 6. Combine into a tidy data frame for ggplot2
spillover_df <- data.frame(
Date = index(btc_nasdaq_xts),
BTC_to_NASDAQ = coredata(btc_nasdaq_xts),
NASDAQ_to_BTC = coredata(nasdaq_btc_xts)
)
# 7. Plot 1: Bitcoin ➔ NASDAQ Spillover
p1 <- ggplot(spillover_df, aes(x = Date, y = BTC_to_NASDAQ)) +
geom_line(color = "blue", size = 1) +
labs(title = "Directional Spillover: Bitcoin ➔ NASDAQ",
x = "Time", y = "Spillover (%)") +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5))
# 8. Plot 2: NASDAQ ➔ Bitcoin Spillover
p2 <- ggplot(spillover_df, aes(x = Date, y = NASDAQ_to_BTC)) +
geom_line(color = "red", size = 1) +
labs(title = "Directional Spillover: NASDAQ ➔ Bitcoin",
x = "Time", y = "Spillover (%)") +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5))
# 9. Stack the two plots vertically using patchwork
p1 / p2
✅ This shows dynamic leadership between Bitcoin and NASDAQ volatility.
Spillovers from Bitcoin to NASDAQ are non-trivial, especially around 2020 (early COVID shock), 2021 (crypto boom and bust), and late 2022 to 2023 (FTX crash, broader crypto winter).
Spillovers from NASDAQ to Bitcoin are much smaller, typically below 10-20%, with occasional spikes above 20-30%, but these are rarer.
When studying financial connectedness, it is important to distinguish between price movements and risk transmission.
Return spillover analysis focuses on how price changes in one asset directly influence price changes in another asset. In this framework, we are interested in information transmission across markets:
Leadership and lagging behavior between assets,
Predictability of returns,
How shocks to fundamentals or sentiment in one market propagate into others.
Analyzing return spillovers is especially valuable when:
Studying market integration or segmentation,
Identifying information flows during normal times,
Exploring price discovery dynamics across assets,
Understanding crypto vs traditional asset relationships.
Unlike volatility spillovers, which focus on risk transmission, return spillovers map the transmission of economic shocks and trading information.
# 1. Extract returns for Bitcoin and NASDAQ
btc_return <- returns[,1] # Bitcoin returns
nasdaq_return <- returns[,4] # NASDAQ returns
# 2. Merge the two return series into a single object
return_pair <- na.omit(merge(btc_return, nasdaq_return))
colnames(return_pair) <- c("Bitcoin", "NASDAQ")
# Clean and name the series
# 3. Initialize parameters for rolling window analysis
n_obs <- nrow(return_pair) # Total number of observations
btc_to_nasdaq <- rep(NA, n_obs - window_size) # Empty vector to store results
nasdaq_to_btc <- rep(NA, n_obs - window_size)
# 4. Rolling window estimation of directional return spillovers
for (i in 1:(n_obs - window_size)) {
# 4.1 Extract rolling window slice
window_data <- return_pair[i:(i + window_size - 1), ]
# 4.2 Fit VAR(2) model
var_model <- VAR(window_data, p = 2, type = "const")
# 4.3 Forecast Error Variance Decomposition (FEVD)
fevd_result <- fevd(var_model, n.ahead = 10)
# 4.4 Build FEVD matrix
fevd_matrix <- matrix(NA, 2, 2)
for (j in 1:2) {
fevd_matrix[j,] <- fevd_result[[j]][10,]
}
# 4.5 Normalize rows to sum to 100
fevd_matrix <- fevd_matrix / rowSums(fevd_matrix) * 100
# 4.6 Extract directional return spillovers
btc_to_nasdaq[i] <- fevd_matrix[2,1] # Bitcoin's return impact on NASDAQ
nasdaq_to_btc[i] <- fevd_matrix[1,2] # NASDAQ's return impact on Bitcoin
}
# 5. Create xts time series objects for plotting
btc_nasdaq_xts <- xts(btc_to_nasdaq, order.by = index(return_pair)[(window_size+1):n_obs])
nasdaq_btc_xts <- xts(nasdaq_to_btc, order.by = index(return_pair)[(window_size+1):n_obs])
# 6. Combine into a tidy data frame for ggplot2
spillover_df <- data.frame(
Date = index(btc_nasdaq_xts),
BTC_to_NASDAQ = coredata(btc_nasdaq_xts),
NASDAQ_to_BTC = coredata(nasdaq_btc_xts)
)
# 7. Plot 1: Bitcoin ➔ NASDAQ Return Spillover
p1 <- ggplot(spillover_df, aes(x = Date, y = BTC_to_NASDAQ)) +
geom_line(color = "blue", size = 1) +
labs(title = "Directional Return Spillover: Bitcoin ➔ NASDAQ",
x = "Time", y = "Spillover (%)") +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5))
# 8. Plot 2: NASDAQ ➔ Bitcoin Return Spillover
p2 <- ggplot(spillover_df, aes(x = Date, y = NASDAQ_to_BTC)) +
geom_line(color = "red", size = 1) +
labs(title = "Directional Return Spillover: NASDAQ ➔ Bitcoin",
x = "Time", y = "Spillover (%)") +
theme_minimal() +
theme(plot.title = element_text(hjust = 0.5))
# 9. Stack the two plots vertically using patchwork
p1 / p2
Return spillovers from Bitcoin ➔ NASDAQ are higher, especially during shocks.
Return spillovers from NASDAQ ➔ Bitcoin remain low (< 10%); NASDAQ's return shocks have a limited effect on Bitcoin returns overall.
Both return and volatility spillovers spike around March 2020 (COVID crash) and late 2022 (crypto crises). But volatility spillover remains elevated longer after shocks, while return spillover decays more quickly. Returns absorb new information quickly, but volatility (risk) takes longer to normalize.
Today, volatility spillover analysis is more popular in academic and professional research, especially in studies of systemic risk. Volatility spillovers dominate most contagion, financial stability, and crypto risk research papers.
In this workshop, we explored how to use R for analyzing cryptocurrency and traditional assets through dynamic correlation and volatility spillover frameworks.
Specifically, we learned how to:
Download and prepare financial and crypto data from Yahoo Finance,
Estimate DCC-GARCH models to track time-varying correlations,
Visualize relationships between Bitcoin, Gold, SP500, and NASDAQ,
Measure total volatility spillovers using the Diebold-Yilmaz framework,
Compare transmission roles between Bitcoin and NASDAQ over time.
⚠️ Important: These Tools Measure Connectivity, Not Causality ⚠️
It is crucial to emphasize that DCC-GARCH and volatility spillover analysis reveal statistical connectedness, but do not establish causality.
High correlation does not mean one asset causes movements in another.
Spillover indexes tell us that shocks propagate between markets — but not the underlying reasons why.
No causal inference can be drawn from simple correlation-based frameworks.
Thus, while these tools are essential first steps in uncovering market interdependence, they should be complemented with economic reasoning and causal modeling approaches (e.g., Structural VARs, Granger causality tests, or external instrumental variables) when making claims about economic mechanisms.
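As an illustration of one such complementary step, the vars package loaded above provides a Granger causality test on a fitted VAR. The sketch below re-estimates the full-sample volatility VAR and tests whether Bitcoin (respectively NASDAQ) volatility Granger-causes the volatility of the remaining assets; like the spillover indices, this captures predictive, not structural, causality.
# Illustrative Granger causality tests on the volatility VAR (predictive causality only)
var_vol <- VAR(vol_df, p = 2, type = "const") # Re-estimate the full-sample volatility VAR
causality(var_vol, cause = "Bitcoin")$Granger # H0: Bitcoin volatility does not Granger-cause the other volatilities
causality(var_vol, cause = "NASDAQ")$Granger # H0: NASDAQ volatility does not Granger-cause the other volatilities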