Overview and intent
Value at Risk (VaR) estimates the loss that will not be exceeded, at a chosen confidence level, over a specified horizon, giving a single, comparable summary of market risk across portfolios and desks. It answers the question: over horizon T, with confidence level p, what is the worst loss not exceeded under normal market conditions? It is widely used for risk limits, economic capital, and regulatory reporting.
Historical background
- VaR’s institutionalization followed the early 1990s derivatives growth and high-profile losses; J.P. Morgan’s RiskMetrics (1994) popularized variance–covariance VaR and made methodologies and data freely available.
- Regulatory adoption came via the Basel market risk framework (the 1996 Market Risk Amendment), which permitted internal models for capital with multiplier-based add-ons tied to backtesting performance.
- Post-2008, critiques of VaR’s tail blindness led regulators to adopt Expected Shortfall (ES) for market risk capital (Basel FRTB), while VaR remains prevalent for internal limits and communication.
Definitions of VaR
- Quantile definition: For loss random variable L over horizon T, VaR at confidence p is the p-quantile of L, i.e., the smallest x such that P(L ≤ x) ≥ p.
- Interpretive form: “One-day 99% VaR = ₹X” means there is a 1% chance of losing more than ₹X in one day (under model assumptions and with no position changes).
- Reporting formats: absolute currency VaR, percentage VaR, and scaled metrics (e.g., per ₹100 crore notional) over daily/10‑day horizons at 95% or 99% confidence.
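The quantile definition above can be illustrated with a minimal sketch, assuming simulated one-day losses (all figures and the loss distribution are hypothetical, not drawn from the text):

```python
import numpy as np

def var_quantile(losses, p):
    """VaR at confidence p: the smallest x with P(L <= x) >= p,
    i.e. the p-quantile of the loss distribution."""
    return float(np.quantile(losses, p))

# Hypothetical sample of 10,000 one-day losses in ₹ (normal for illustration).
rng = np.random.default_rng(42)
losses = rng.normal(0.0, 1_000_000, 10_000)

var_99 = var_quantile(losses, 0.99)
# Interpretive check: roughly 1% of days show a loss exceeding the 99% VaR.
exceed_rate = float(np.mean(losses > var_99))
```

By construction the exceedance rate sits near 1 − p, which is exactly the property backtesting later verifies.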
Assumptions for VaR calculation
- Holding period and no-trading: positions are held constant over the horizon, with mark-to-market valuation.
- Distributional assumptions: parametric VaR assumes returns are elliptically distributed (often normal or Student-t), that the series is stationary, and that volatility and correlation are stable over the estimation window.
- Data representativeness: historical simulation assumes past return distributions reflect the future and that sampling captures relevant regimes.
- Linearity and mapping: variance–covariance methods often linearize nonlinear payoffs (delta-normal) or include higher Greeks (delta-gamma).
- Liquidity and exit costs: classic VaR ignores market impact, funding, and widening bid-ask, unless explicitly modeled (e.g., liquidity-adjusted VaR).
Building blocks of VaR
- Risk factors: rates, FX, equity prices, credit spreads, volatilities, and basis factors mapped to positions.
- Sensitivities/valuation: full revaluation or factor-based revaluation using Greeks and curve/key-rate buckets.
- Covariance structure: volatilities and correlations (or full scenarios) calibrated from rolling windows with decay or robust estimators.
- Horizon and scaling: daily vs 10‑day horizons; square-root-of-time scaling is approximate and sensitive to autocorrelation/volatility clustering.
- Confidence level: commonly 95% or 99%, balancing stability and tail awareness.
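The horizon-scaling and confidence-level choices above can be sketched with stdlib tools only (the ₹1,000,000 one-day VaR is a hypothetical input):

```python
from math import sqrt
from statistics import NormalDist

def scale_var(one_day_var, horizon_days):
    """Square-root-of-time scaling. Only approximate: it assumes i.i.d.
    returns, and autocorrelation or volatility clustering breaks it."""
    return one_day_var * sqrt(horizon_days)

def z_score(p):
    """Standard normal quantile used in parametric VaR
    (e.g. p = 0.99 gives roughly 2.326, p = 0.95 roughly 1.645)."""
    return NormalDist().inv_cdf(p)

var_1d = 1_000_000.0                # hypothetical 1-day 99% VaR in ₹
var_10d = scale_var(var_1d, 10)     # 10-day approximation: ×√10 ≈ ×3.162
```

The √10 factor is the regulatory convention for moving from 1-day to 10-day horizons, with the caveat stated in the list above.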
VaR methodology
- Variance–covariance (parametric, delta-normal): assumes linear exposures and multivariate normal returns; VaR = z_p × σ_portfolio, with σ from covariance matrix and weights/sensitivities.
- Delta–gamma (quadratic): extends to convex instruments (options), approximating P&L via first and second derivatives; may assume normal or use Cornish–Fisher adjustment.
- Historical simulation: applies empirical historical factor moves to today’s portfolio via full revaluation; the VaR is the chosen percentile of the simulated P&L distribution.
- Filtered/hybrid historical: scales historical returns by current volatility estimates (e.g., GARCH) to reflect volatility clustering and regime changes.
- Monte Carlo simulation: simulates risk-factor paths under an assumed stochastic model (e.g., GBM, Heston, jump-diffusion, copulas), with full revaluation and quantile extraction.
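A minimal sketch contrasting the first and third methods above, variance-covariance versus historical simulation, on the same hypothetical two-factor book (factor covariances, exposures, and sample size are all illustrative assumptions):

```python
from statistics import NormalDist

import numpy as np

def parametric_var(weights, cov, p=0.99):
    """Variance-covariance (delta-normal) VaR: z_p × portfolio sigma,
    with sigma from the covariance matrix and position weights."""
    sigma = float(np.sqrt(weights @ cov @ weights))
    return NormalDist().inv_cdf(p) * sigma

def historical_var(pnl, p=0.99):
    """Historical simulation VaR: the p-quantile of simulated losses (−P&L)."""
    return float(np.quantile(-np.asarray(pnl), p))

# Hypothetical book: 1,000 days of joint returns on two risk factors.
rng = np.random.default_rng(0)
true_cov = [[1e-4, 5e-5], [5e-5, 4e-4]]
returns = rng.multivariate_normal([0.0, 0.0], true_cov, 1000)
w = np.array([2_000_000.0, 1_000_000.0])   # ₹ exposure per factor

var_param = parametric_var(w, np.cov(returns, rowvar=False))
var_hist = historical_var(returns @ w)
# On (near-)normal data the two methods should roughly agree;
# fat tails or nonlinearity would pull them apart.
```

The agreement here is by construction; the methodology comparison below explains when each method breaks down.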
Comparison of methodologies
- Variance–covariance: fast, transparent, easy to attribute; struggles with fat tails, skew, and nonlinear payoffs without enhancements.
- Historical simulation: model-light and intuitive; sensitive to window choice and may understate risk if the sample lacks stressed episodes.
- Filtered historical: adapts to current volatility; adds complexity in model calibration and may still miss tail dependence.
- Monte Carlo: flexible for path-dependent and nonlinear exposures; computationally intensive and model-dependent (garbage-in, garbage-out).
- Delta–gamma: pragmatic for options with moderate convexity; accuracy falls in large moves or with strong skew/vol smiles unless re-marked to implied dynamics.
Advantages and disadvantages
- Variance–covariance
  - Pros: simple, fast, analytically tractable, useful for large linear books and limit systems.
  - Cons: normality and linearity assumptions; poor tail fit; correlation instability.
- Historical simulation
  - Pros: few distributional assumptions; captures empirical dependence; straightforward to explain.
  - Cons: backward-looking; window bias; procyclical (low VaR in calm periods, high VaR in stress).
- Filtered historical
  - Pros: reflects changing volatility; better conditional risk estimates.
  - Cons: model risk in filters; parameter instability.
- Monte Carlo
  - Pros: handles complex payoffs, dynamic hedges, and multiple regimes; can include jumps and stochastic vol.
  - Cons: heavy computation; model specification and calibration risk.
- Delta–gamma/Cornish–Fisher
  - Pros: improves over linear-normal for convex books at modest cost.
  - Cons: approximation error under large shocks and strong nonlinearity.
Limitations of VaR
- Tail blindness: VaR does not describe the magnitude of losses beyond the chosen quantile; Expected Shortfall (ES) addresses this by averaging tail losses.
- Non-subadditivity under some distributions: VaR can violate subadditivity (one of the coherence axioms), so a combined portfolio may show higher VaR than the sum of its parts, complicating aggregation; ES is coherent.
- Procyclicality: risk may look low in tranquil periods and spike in stress, creating cyclical leverage and deleveraging dynamics.
- Model and data risk: dependence on window, distributional assumptions, and mapping; structural breaks and regime shifts can invalidate estimates.
- Liquidity and funding gaps: classical VaR ignores market depth, liquidation costs, and margin dynamics; liquidity-adjusted VaR or add-ons are needed.
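The tail-blindness point above can be made concrete by computing VaR and ES side by side on a fat-tailed sample (the Student-t loss distribution is an illustrative assumption):

```python
import numpy as np

def var_es(losses, p=0.99):
    """Return (VaR, ES). VaR is the p-quantile of losses; ES is the
    average of losses at or beyond VaR, so ES >= VaR by construction."""
    losses = np.asarray(losses)
    var = float(np.quantile(losses, p))
    tail = losses[losses >= var]        # losses VaR says nothing about
    return var, float(tail.mean())

# Hypothetical fat-tailed losses: Student-t with 4 degrees of freedom.
rng = np.random.default_rng(7)
losses = rng.standard_t(df=4, size=50_000)

var_99, es_99 = var_es(losses)
# The gap es_99 - var_99 is exactly the tail information VaR discards.
```

For a normal distribution this gap is modest; the fatter the tail, the larger ES is relative to VaR, which is why regulators moved to ES under FRTB.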
Extreme Value Theory (EVT)
- Objective: model the distribution of extreme losses (tails) using Peaks-Over-Threshold (POT) with Generalized Pareto Distribution (GPD) or block maxima with Generalized Extreme Value (GEV).
- Use in VaR/ES: fit the tail beyond a high threshold, then extrapolate to high quantiles (e.g., 99.5%, 99.9%) where historical data are sparse.
- Practical considerations: threshold selection via mean excess plots; declustering for serial dependence; parameter uncertainty and stability checks.
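A POT sketch under stated assumptions: it uses `scipy.stats.genpareto` to fit exceedances over a 95th-percentile threshold (the threshold choice, sample, and heavy-tailed Student-t data are all illustrative, and real use would add the diagnostics listed above):

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(losses, u, p=0.999):
    """POT/GPD tail VaR: fit a GPD to exceedances over threshold u, then
    extrapolate the p-quantile via
    VaR_p = u + (beta/xi) * (((n/Nu) * (1 - p)) ** (-xi) - 1)."""
    losses = np.asarray(losses)
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)   # shape xi, scale beta
    n, nu = len(losses), len(exceed)
    return u + (beta / xi) * (((n / nu) * (1 - p)) ** (-xi) - 1)

# Hypothetical heavy-tailed loss sample (Student-t, 3 degrees of freedom).
rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=20_000)

u = float(np.quantile(losses, 0.95))   # mean-excess plots would guide this
var_999 = evt_var(losses, u)
```

The value of EVT shows at quantiles like 99.9%, where only a handful of raw observations exist and the empirical quantile is unreliable.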
Stress testing
- Purpose: complement VaR by examining portfolio resilience under extreme but plausible scenarios and historical episodes (e.g., 2008 credit crunch, 2013 taper tantrum, 2020 COVID selloff).
- Types: historical stresses, hypothetical macro/market shocks, reverse stress tests that identify scenarios causing breach of risk appetite or capital limits.
- Implementation: define shocks at key risk factors and revalue fully; include liquidity, basis, and funding stresses; report P&L, breakevens, and vulnerabilities.
Backtesting of VaR models
- VaR breach (exception) counting: compare realized daily P&L to daily VaR; the exception rate should approximate 1−p over time (e.g., ~1% for 99% VaR).
- Statistical tests: Kupiec’s proportion of failures (POF) for unconditional coverage; Christoffersen tests for independence and conditional coverage; traffic-light frameworks for model governance.
- Remediation: if exceptions cluster or exceed tolerance, recalibrate windows/parameters, enhance factor mapping, or move to richer methodologies; document model changes and governance.
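Kupiec's POF test above can be sketched in a few lines of stdlib Python; the exception counts are hypothetical, and under the null of correct coverage the likelihood ratio is chi-square with 1 degree of freedom (5% critical value ≈ 3.84):

```python
import math

def kupiec_pof(n_obs, n_exceptions, p=0.99):
    """Kupiec proportion-of-failures likelihood ratio for unconditional
    coverage. LR above ~3.84 rejects correct coverage at the 5% level."""
    alpha = 1 - p                  # expected exception rate (e.g. 1%)
    x, n = n_exceptions, n_obs
    if x in (0, n):                # degenerate cases: observed rate 0 or 1
        return -2 * n * (math.log(1 - alpha) if x == 0 else math.log(alpha))
    pi_hat = x / n                 # observed exception rate
    log_l0 = x * math.log(alpha) + (n - x) * math.log(1 - alpha)
    log_l1 = x * math.log(pi_hat) + (n - x) * math.log(1 - pi_hat)
    return -2 * (log_l0 - log_l1)

lr_ok = kupiec_pof(250, 3)    # ~3 exceptions in a 250-day year at 99% VaR
lr_bad = kupiec_pof(250, 10)  # 10 exceptions: evidence of underestimation
```

Note that POF only checks the exception count; the Christoffersen tests mentioned above are still needed to detect clustering of exceptions.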
Implementation tips for practitioners
- Choose horizon and confidence aligned to the use case: trading limits may use 1-day 99% VaR; capital planning may use a 10-day horizon.
- Use key risk factor mapping with regular factor review; avoid over-aggregation that hides basis and curve risks.
- Complement VaR with ES, sensitivities (DV01/VaR per bp), scenario libraries, and liquidity metrics; integrate model risk and parameter uncertainty.
- Maintain robust data pipelines: outlier handling with justification, regime-aware windows, and periodic benchmarking against external datasets.
- Governance: independent validation, periodic backtesting with exception analysis, and senior oversight with documented risk appetite and escalation paths.
Short glossary
- VaR: quantile of loss over T at confidence p.
- ES (CVaR): average loss conditional on exceeding VaR.
- POF test: checks whether exception frequency matches model confidence.
- EVT: statistical framework for modeling tails via GPD/GEV.
- Stress test: scenario-based loss assessment beyond typical VaR assumptions.
Risk Management articles related to Module ‘C’ of the CAIIB elective paper.