Value at risk (VaR) is a statistical technique that measures the level of financial risk in a portfolio over a specific time frame. For example, if a firm states that it has a 1% one-week value at risk of $5 million, this means that in any given week there is a 1% chance the firm loses $5 million or more. In other words, in roughly 1 out of every 100 weeks the firm would expect a loss of at least $5 million. VaR can be viewed as a summary of how far portfolio value can fall during “normal” market movements. Although the calculation draws on risk metrics such as volatility, VaR is stated in terms of losses because it is a risk measure: the computed figure tells you how much risk a portfolio is exposed to over a given time frame.

This makes VaR very important for institutions managing risk. VaR has become popular because it is prospective, in contrast to retrospective risk metrics such as historical volatility: it quantifies market risk while trades are still on the books. Despite these benefits, VaR is limited in that, when the VaR threshold is exceeded, it gives no information about the severity of the loss beyond it.
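The weekly example above can be sketched with a historical-simulation estimate of VaR (a minimal sketch; the function name and the simulated return series are illustrative, not from the source):

```python
import numpy as np

def historical_var(returns, alpha=0.99):
    """One-period VaR via historical simulation: the loss level
    exceeded with probability at most (1 - alpha)."""
    losses = -np.asarray(returns, dtype=float)  # convert P&L to losses
    return np.quantile(losses, alpha)           # empirical alpha-quantile of losses

# Illustrative data: 1,000 simulated weekly portfolio returns
rng = np.random.default_rng(0)
weekly_returns = rng.normal(0.001, 0.02, size=1000)
var_99 = historical_var(weekly_returns, alpha=0.99)
print(f"99% one-week VaR (as a fraction of portfolio value): {var_99:.4f}")
```

A positive result is a loss: multiplying it by the portfolio's dollar value gives the dollar VaR quoted in statements like the $5 million example above.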

Mathematical Definition

Given a confidence level α in (0, 1), the value at risk of a portfolio at level α over the time period t is the smallest number l in R such that the probability that the loss L over t exceeds l is no more than 1 − α:

VaRα(L) = inf{l ∈ R : P(L > l) ≤ 1 − α} = inf{l ∈ R : P(L ≤ l) ≥ α}
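For a discrete (empirical) loss distribution, the infimum in this definition can be computed directly by scanning the empirical CDF (a sketch; `var_from_losses` is a hypothetical helper, and the five equally likely losses are made-up data):

```python
import numpy as np

def var_from_losses(losses, alpha):
    """VaR_alpha = inf{ l : P(L <= l) >= alpha }, computed exactly
    for an empirical loss distribution with equally likely outcomes."""
    losses = np.sort(np.asarray(losses, dtype=float))
    cdf = np.arange(1, len(losses) + 1) / len(losses)  # P(L <= losses[i])
    idx = np.searchsorted(cdf, alpha)                  # first index with cdf >= alpha
    return losses[idx]

losses = [1.0, 2.0, 3.0, 4.0, 5.0]          # five equally likely losses
print(var_from_losses(losses, 0.80))         # P(L <= 4) = 0.8 >= 0.8, so VaR = 4
```

Because the loss distribution here is discrete, VaR jumps between outcomes: raising α from 0.80 to 0.81 moves the answer from 4.0 to 5.0.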

History of Value at Risk

The first VaR measure ever published was probably that of Leavens (1945). He did not explicitly identify a VaR metric, but he referred repeatedly to the “spread between probable losses and gains”. Later, Markowitz (1952) and Roy (1952) independently published VaR measures that were surprisingly similar. Their work essentially amounts to optimizing reward for a given level of risk.

The 1970s and 1980s brought sweeping changes to markets and technology. These advances, together with the catalytic effect of the 1987 stock market crash, created the need to implement and calculate more accurate risk measures. The first regulatory measure to foreshadow VaR was initiated in 1980 by the Securities and Exchange Commission (SEC), requiring financial services firms to hold enough capital to cushion the losses that would be incurred with 95% confidence over a 30-day interval in each respective asset class.

As markets became more volatile with the growth of derivative products such as options, a modern era of risk measurement began. Given the sheer number and complexity of derivative instruments, the magnitude of the risk in a portfolio is often hard to calculate, which led to demand for portfolio-level quantitative measures of market risk such as VaR. Major financial firms first used VaR in the late 1980s to measure the risk in their portfolios. J.P. Morgan then released its RiskMetrics system in 1994, which drove substantial growth in the use of VaR: for the first time, the measure was exposed beyond a small group of quants.

In 1997, the US SEC ruled that companies must disclose quantitative information about their derivatives activity, and many major banks chose to comply by including VaR information in their financial statements. Basel II, the second Basel Accord, further catalysed the use of VaR.


Properties of Value at Risk

Value at risk has the following general properties.

1) Normalised

VaRα(0) = 0

This means that the risk in holding no assets at all is zero.

2) Relevance

Relevance: if X ≥ 0, then VaRα(X) ≤ 0.

This means that a portfolio that cannot lose money carries no risk.

3) Monotonicity

Monotonicity: if X ≥ Y , then VaRα(X) ≤ VaRα(Y).

This means that if portfolio X has values at least as high as portfolio Y under almost all scenarios, then the risk of X should be no greater than the risk of Y.

4) Positive homogeneity

Positive homogeneity: if λ ≥ 0, then VaRα(λX) = λ VaRα(X).

This means that scaling a portfolio by a nonnegative factor λ scales its risk by the same factor.
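These properties (normalisation, relevance, monotonicity, and positive homogeneity, i.e. VaRα(λX) = λ VaRα(X) for λ ≥ 0) can be checked numerically on an empirical VaR estimator (a sketch; the simulated P&L samples and the λ = 3 scaling factor are illustrative choices, not from the source):

```python
import numpy as np

alpha = 0.95

def var(x, a=alpha):
    """Empirical VaR of a P&L sample x: the a-quantile of the loss -x."""
    return np.quantile(-np.asarray(x, dtype=float), a)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=10_000)   # simulated portfolio P&L
Y = X - 0.5                             # Y is worse than X in every scenario

assert var(np.zeros(100)) == 0.0               # normalised: no assets, no risk
assert var(np.full(100, 2.0)) <= 0             # relevance: X >= 0 implies VaR <= 0
assert var(X) <= var(Y)                        # monotonicity: X >= Y pointwise
assert np.isclose(var(3.0 * X), 3.0 * var(X))  # positive homogeneity with λ = 3
print("all four properties hold on this sample")
```

Note that VaR is famously not subadditive in general, which is why it fails to be a coherent risk measure even though the four properties above hold.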