Theory¶
AJD Model¶
The AJD process is a class of \(n\)-dimensional Markov processes, denoted by \(\boldsymbol{x}(t)\), taking values in some state space \(D\subset \mathbb{R}^n\) and described by the following stochastic differential equation:
\[d\boldsymbol{x}(t) = \boldsymbol{\mu}(\boldsymbol{x}(t))dt + \boldsymbol{\sigma}(\boldsymbol{x}(t))d\boldsymbol{w}(t) + d\boldsymbol{z}(t),\]
where the driving process \(\boldsymbol{w}(t)\) is an \(n\)-dimensional standard Wiener process, and \(\boldsymbol{z}(t)\) is an inhomogeneous Compound Poisson Process (CPP) whose jumps are distributed according to \(F_{\boldsymbol{j}}(\cdot)\) on \(\mathbb{R}^n\) and arrive with intensity \(\lambda(\boldsymbol{x}(t))\), where \(\lambda: D\rightarrow \mathbb{R}_{\geqslant 0}\). The drift \(\boldsymbol{\mu}(\cdot)\), the instantaneous covariance matrix \(\boldsymbol{\sigma}(\cdot)\boldsymbol{\sigma}(\cdot)^T\), and the jump intensity \(\lambda(\cdot)\) all depend affinely on the state vector \(\boldsymbol{x}(t)\), i.e., with the same notation as in Duffie et al. (2000) [1], they are determined by coefficients \((K,H,l)\) as
\(\boldsymbol{\mu}(\boldsymbol{x}) = K_0 + K_1\boldsymbol{x}\), for \(K = (K_0,K_1)\in \mathbb{R}^n\times \mathbb{R}^{n\times n}\).
\((\boldsymbol{\sigma}(\boldsymbol{x})\boldsymbol{\sigma}(\boldsymbol{x})^T)_{ij} = (H_0)_{ij} + (H_1)_{ij}\cdot \boldsymbol{x}\), for \(H=(H_0,H_1)\in \mathbb{R}^{n\times n}\times \mathbb{R}^{n\times n\times n}\).
\(\lambda(\boldsymbol{x}) = l_0 + l_1\cdot \boldsymbol{x}\), for \(l=(l_0,l_1)\in \mathbb{R}\times\mathbb{R}^{n}\).
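To make the affine structure concrete, the following minimal sketch spells out the \((K,H,l)\) coefficients for the Heston SV model introduced in the next section, with state vector \(\boldsymbol{x}(t) = (\log s(t), v(t))\). The parameter values are illustrative assumptions only.

```python
import numpy as np

# Heston SV model (defined in the next section) cast as a 2-dimensional AJD
# with state x = (log s(t), v(t)). Parameter values are assumptions.
mu, k, theta, sigma_v, rho = 0.05, 2.0, 0.04, 0.3, -0.7

# Drift mu(x) = K0 + K1 @ x:
K0 = np.array([mu, k * theta])
K1 = np.array([[0.0, -0.5],    # d log s(t): (mu - v/2) dt
               [0.0, -k]])     # d v(t):     k (theta - v) dt

# Covariance (sigma sigma^T)(x)_{ij} = (H0)_{ij} + (H1)_{ij} . x:
H0 = np.zeros((2, 2))
H1 = np.zeros((2, 2, 2))       # H1[i, j, :] is the vector dotted with x
H1[:, :, 1] = [[1.0, rho * sigma_v],
               [rho * sigma_v, sigma_v**2]]   # everything scales with v

# Jump intensity lambda(x) = l0 + l1 . x: the pure Heston model has no jumps.
l0, l1 = 0.0, np.zeros(2)
```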
AJD models find applications mainly in financial asset valuation and econometric problems, owing to their tractability, so we take the Heston SV model and its AJD extensions as examples. Under the settings of SV models, including the Heston one, only part of the state is observable, because the other part of the state, i.e., the volatility, is usually latent. The partially observed state is no longer Markovian, and the tractability of the Heston SV model has been limited to having a closed-form conditional CF. However, we have found a recursive way to derive the moments and covariances of the Heston SV model and its AJD extensions in closed form.
As a typical AJD process, the Heston SV model serves as the baseline model hereafter, and I will focus on the derivation of moments and covariances for this baseline model.
Heston SV Model¶
The so-called Heston SV model in this package is described by the following SDEs [2]:
\[\begin{aligned} ds(t) &= \mu s(t)dt + \sqrt{v(t)}s(t)dw^s(t),\\ dv(t) &= k(\theta - v(t))dt + \sigma_v\sqrt{v(t)}dw^v(t), \end{aligned}\]
where \(s(t)\) is the asset price at time \(t\), \(v(t)\) is the instantaneous return variance at time \(t\), and \(w^s(t)\) and \(w^v(t)\) are two Wiener processes with correlation \(\rho\). Assume that the initial values \(s(0)\) and \(v(0)\) are independent of each other and also independent of \(w^s(t)\) and \(w^v(t)\). The variance process \(v(t)\) is a CIR process, also called a square-root diffusion. The parameters \(\theta>0\), \(k>0\), and \(\sigma_v>0\) determine the long-run mean, the mean-reversion velocity, and the volatility of the variance process \(v(t)\), respectively, and satisfy the condition \(\sigma_v^2 \le 2k\theta\).
The variance process \(v(t)\) is a Markov process that has a steady-state gamma distribution with mean \(\theta\) and variance \(\theta \sigma_v^2/(2k)\), e.g., see [3] . Without loss of generality, throughout this package we assume that \(v(0)\) is distributed according to the steady-state distribution of \(v(t)\), which implies that \(v(t)\) is strictly stationary and ergodic (see [4] ).
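As a quick numerical check of this stationary law, the snippet below (a minimal sketch assuming numpy; parameter values are assumptions) draws \(v(0)\) from the Gamma distribution with shape \(2k\theta/\sigma_v^2\) and scale \(\sigma_v^2/(2k)\) and verifies its mean and variance.

```python
import numpy as np

# Stationary distribution of the CIR variance process:
# Gamma with shape 2*k*theta/sigma_v**2 and scale sigma_v**2/(2*k),
# hence mean theta and variance theta*sigma_v**2/(2*k).
k, theta, sigma_v = 2.0, 0.04, 0.3   # illustrative values (assumptions)

rng = np.random.default_rng(0)
shape = 2 * k * theta / sigma_v**2
scale = sigma_v**2 / (2 * k)
v0 = rng.gamma(shape, scale, size=1_000_000)

print(v0.mean())   # ~ theta = 0.04
print(v0.var())    # ~ theta*sigma_v**2/(2k) = 0.0009
```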
By Itô's formula, the log price process can be written as
\[d\log s(t) = \left(\mu - \frac{v(t)}{2}\right)dt + \sqrt{v(t)}dw^s(t).\]
Let \(s_i \triangleq s(ih)\) [5], where \(h\) is the length of the sampling interval. The return over the \(i\)th interval, denoted by \(y_i\), is defined as
\[y_i \triangleq \log s_i - \log s_{i-1}.\]
Notations¶
I decompose \(w^s(t)\) as \(w^s(t) = \rho w^v(t) + \sqrt{1-\rho^2}w(t)\), where \(w(t)\) is another Wiener process independent of \(w^v(t)\). For notational simplicity, I define
\[IV_{s,t} \triangleq \int_s^t v(u)du,\quad IV_i \triangleq \int_{(i-1)h}^{ih} v(u)du,\quad IV_{i,t} \triangleq \int_{ih}^{t} v(u)du,\]
and
\[I\!E_{s,t} \triangleq \int_s^t e^{ku}\sqrt{v(u)}dw^v(u),\quad I_{s,t} \triangleq \int_s^t \sqrt{v(u)}dw^v(u),\quad I_{s,t}^{*} \triangleq \int_s^t \sqrt{v(u)}dw(u),\]
where an integer first subscript \(i\) abbreviates the time point \(ih\), e.g., \(I_{i,t} = I_{ih,t}\).
Usually, \(IV_{s,t}, IV_i\), and \(IV_{i,t}\) are referred to as Integrated Variance (volatility).
Then, \(y_i\) can be expressed as
\[y_i = \mu h - \frac{1}{2}IV_i + \rho I_{i-1,ih} + \sqrt{1-\rho^2}\,I_{i-1,ih}^{*}.\]
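Under the stationary start, \(E[IV_i] = \theta h\), so the expression above implies \(E[y_i] = (\mu-\theta/2)h\). The following Monte Carlo sketch (full-truncation Euler scheme; all parameter values and step counts are assumptions) checks this.

```python
import numpy as np

# Monte Carlo check of E[y_i] = (mu - theta/2) * h under the stationary
# start v(0) ~ Gamma(2k*theta/sigma_v^2, sigma_v^2/(2k)).
mu, k, theta, sigma_v, rho = 0.05, 2.0, 0.04, 0.3, -0.7
h, n_steps, n_paths = 1.0, 200, 100_000
dt = h / n_steps

rng = np.random.default_rng(1)
v = rng.gamma(2 * k * theta / sigma_v**2, sigma_v**2 / (2 * k), n_paths)
y = np.zeros(n_paths)                    # y_1 = log s(h) - log s(0)

for _ in range(n_steps):
    dwv = rng.normal(0.0, np.sqrt(dt), n_paths)
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    dws = rho * dwv + np.sqrt(1 - rho**2) * dw   # w^s = rho w^v + sqrt(1-rho^2) w
    vp = np.maximum(v, 0.0)                      # full-truncation Euler for CIR
    y += (mu - 0.5 * vp) * dt + np.sqrt(vp) * dws
    v += k * (theta - vp) * dt + sigma_v * np.sqrt(vp) * dwv

print(y.mean(), (mu - theta / 2) * h)    # both ~ 0.03
```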
The variance (volatility) process \(v(t)\) can be re-written, for \(t\ge s\), as
\[v(t) = \theta + (v(s) - \theta)e^{-k(t-s)} + \sigma_v e^{-kt}I\!E_{s,t},\]
and under the stationary distribution its moment of order \(m\) is given as
\[E[v^m(t)] = \prod_{j=0}^{m-1}\left(\theta + j\frac{\sigma_v^2}{2k}\right).\tag{1}\]
The Integrated Variance can be re-written as
\[IV_{n-1,t} = \theta[t-(n-1)h] + \frac{1}{k}(v_{n-1}-\theta)\left(1-e^{-k[t-(n-1)h]}\right) - \frac{\sigma_v}{k}e^{-kt}I\!E_{n-1,t} + \frac{\sigma_v}{k}I_{n-1,t}.\]
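Equation (1) is easy to implement and check numerically; the helper below (the name `v_moment` is mine, not the package's) compares it against sample moments of the stationary Gamma law. Parameter values are assumptions.

```python
import numpy as np

def v_moment(m, k, theta, sigma_v):
    """E[v(t)^m] under the stationary Gamma law, i.e. equation (1):
    prod_{j=0}^{m-1} (theta + j*sigma_v^2/(2k))."""
    out = 1.0
    for j in range(m):
        out *= theta + j * sigma_v**2 / (2 * k)
    return out

k, theta, sigma_v = 2.0, 0.04, 0.3          # illustrative assumptions
rng = np.random.default_rng(2)
v = rng.gamma(2 * k * theta / sigma_v**2, sigma_v**2 / (2 * k), 1_000_000)

for m in range(1, 5):
    print(m, v_moment(m, k, theta, sigma_v), (v**m).mean())
```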
Moment Derivation¶
Here I discuss how moments and covariances of \(y_n\) can be derived. Define
\[y_{n-1,t} \triangleq \log s(t) - \log s_{n-1},\qquad (n-1)h\le t\le nh,\]
then
\[\bar{y}_{n-1,t} = -\beta_{n-1,t}(v_{n-1}-\theta) + \frac{\sigma_v}{2k}e^{-kt}I\!E_{n-1,t} + \left(\rho-\frac{\sigma_v}{2k}\right)I_{n-1,t} + \sqrt{1-\rho^2}\,I_{n-1,t}^{*},\]
where \(\bar{y}_{n-1,t} = y_{n-1,t} - E[y_{n-1,t}]\) and \(\beta_{n-1,t} = (1-e^{-k[t-(n-1)h]})/(2k)\).
The \(l\)th central moment of \(y_{n-1,t}\), denoted by \(cm_l(y_{n-1,t})\), can be computed based on the following quantities:
\[E\left[\theta^{n_1}v_{n-1}^{n_2}\left(e^{-kt}I\!E_{n-1,t}\right)^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_5}\right],\tag{2}\]
where \(n_i\geq 0\) (\(i=1,2,3,4,5\)) are integers and \(\sum_{i=1}^{5}n_i=l\). I can calculate quantity (2) via a two-step process:
Compute the conditional expectation by fixing \(v_{n-1}\):
\[E[(e^{-kt}I\!E_{n-1,t})^{n_3}I_{n-1,t}^{n_4} I_{n-1,t}^{*n_{5}}|v_{n-1}].\]
Follow with the unconditional expectation with respect to \(v_{n-1}\):
\[E[\theta^{n_1}v_{n-1}^{n_2}E[(e^{-kt}I\!E_{n-1,t})^{n_3}I_{n-1,t}^{n_4} I_{n-1,t}^{*n_{5}}|v_{n-1}]].\]
It will be shown later that the conditional moment \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_{5}}|v_{n-1}]\) is a polynomial in \(v_{n-1}\), which implies that quantity (2) can be expressed as a function of the moments of \(v_{n-1}\). Since equation (1) gives \(v_{n-1}\)'s moment of any order, quantity (2) can be computed as well.
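To illustrate the two-step structure, here is a hypothetical helper (the names `quantity2` and `cond_coeffs` are my illustrative choices, not the package API): step 1 supplies the conditional moment as a polynomial in \(v_{n-1}\), and step 2 integrates it against the stationary moments from equation (1).

```python
def v_moment(m, k, theta, sigma_v):
    """E[v_{n-1}^m] from equation (1)."""
    out = 1.0
    for j in range(m):
        out *= theta + j * sigma_v**2 / (2 * k)
    return out

def quantity2(n1, n2, cond_coeffs, k, theta, sigma_v):
    """Quantity (2): E[theta^n1 * v^n2 * E[(e^{-kt} IE)^n3 I^n4 I*^n5 | v]],
    given the step-1 conditional moment as a polynomial
    sum_m cond_coeffs[m] * v^m in v = v_{n-1}."""
    return theta**n1 * sum(c * v_moment(m + n2, k, theta, sigma_v)
                           for m, c in enumerate(cond_coeffs))

# e.g., with the trivial conditional moment 1 (n3 = n4 = n5 = 0):
print(quantity2(1, 1, [1.0], k=2.0, theta=0.04, sigma_v=0.3))  # theta * E[v] = theta^2
```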
First, I consider \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_{5}}|v_{n-1}]\). I separate \(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_{5}}\) into two parts, \(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}\) and \(I_{n-1,t}^{*n_{5}}\), since they are driven by two different Wiener processes, \(w^v(t)\) and \(w(t)\), respectively. For \(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}\), Itô's formula gives
\[\begin{aligned} d(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}) =\ & n_3I\!E_{n-1,t}^{n_3-1}I_{n-1,t}^{n_4}dI\!E_{n-1,t} + n_4I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4-1}dI_{n-1,t}\\ &+ \tfrac{1}{2}n_3(n_3-1)I\!E_{n-1,t}^{n_3-2}I_{n-1,t}^{n_4}(dI\!E_{n-1,t})^2 + \tfrac{1}{2}n_4(n_4-1)I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4-2}(dI_{n-1,t})^2\\ &+ n_3n_4I\!E_{n-1,t}^{n_3-1}I_{n-1,t}^{n_4-1}dI\!E_{n-1,t}dI_{n-1,t}, \end{aligned}\]
where
\[dI\!E_{n-1,t} = e^{kt}\sqrt{v(t)}dw^v(t),\qquad dI_{n-1,t} = \sqrt{v(t)}dw^v(t),\]
and hence
\[(dI\!E_{n-1,t})^2 = e^{2kt}v(t)dt,\qquad (dI_{n-1,t})^2 = v(t)dt,\qquad dI\!E_{n-1,t}dI_{n-1,t} = e^{kt}v(t)dt.\]
Therefore, since the \(dw^v(t)\) terms have zero expectation, the conditional expectation satisfies
\[\begin{aligned} E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}|v_{n-1}] = \int_{(n-1)h}^{t}\Big\{ &\tfrac{1}{2}n_3(n_3-1)e^{2ks}E[I\!E_{n-1,s}^{n_3-2}I_{n-1,s}^{n_4}v(s)|v_{n-1}]\\ &+ \tfrac{1}{2}n_4(n_4-1)E[I\!E_{n-1,s}^{n_3}I_{n-1,s}^{n_4-2}v(s)|v_{n-1}]\\ &+ n_3n_4e^{ks}E[I\!E_{n-1,s}^{n_3-1}I_{n-1,s}^{n_4-1}v(s)|v_{n-1}] \Big\}ds. \end{aligned}\]
Itô process Moments - I¶
If \(v(t)\) is expanded as
\[v(t) = \theta + (v_{n-1}-\theta)e^{-k[t-(n-1)h]} + \sigma_v e^{-kt}I\!E_{n-1,t},\]
then each factor \(E[\,\cdot\;v(s)|v_{n-1}]\) in the integrand above splits as
\[E[I\!E_{n-1,s}^{m}I_{n-1,s}^{j}v(s)|v_{n-1}] = \left[\theta + (v_{n-1}-\theta)e^{-k[s-(n-1)h]}\right]E[I\!E_{n-1,s}^{m}I_{n-1,s}^{j}|v_{n-1}] + \sigma_v e^{-ks}E[I\!E_{n-1,s}^{m+1}I_{n-1,s}^{j}|v_{n-1}],\]
so \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}|v_{n-1}]\) can be expressed as an integral of exponential-polynomial functions of \(s\) multiplied by conditional moments of strictly lower total order, and can therefore be computed recursively in closed form.
Moments of Low Orders¶
Order 1, i.e., \(n_3 + n_4 = 1\).
\((n_3,n_4) = (1,0): E[I\!E_{n-1,t}|v_{n-1}] = 0\)
\((n_3,n_4) = (0,1): E[I_{n-1,t}|v_{n-1}] = 0\)
Order 2, i.e., \(n_3 + n_4 = 2\).
| \((n_3,n_4)\) | Moment \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}|v_{n-1}]\) |
|---|---|
| (2,0) | \(e^{2kt}\frac{1}{2k}\theta + e^{kt+k(n-1)h}\frac{1}{k}(v_{n-1}-\theta) - e^{2k(n-1)h}\left(\frac{1}{k}v_{n-1} - \frac{1}{2k}\theta\right)\) |
| (1,1) | \(e^{kt}\frac{1}{k}\theta + [t-(n-1)h]e^{k(n-1)h}(v_{n-1}-\theta) - e^{k(n-1)h}\frac{1}{k}\theta\) |
| (0,2) | \(- e^{-kt+k(n-1)h}\frac{1}{k}(v_{n-1}-\theta) + [t-(n-1)h]\theta + (v_{n-1}-\theta)\frac{1}{k}\) |
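These three entries can be verified by simulation. The sketch below (Euler discretization; parameter values and step counts are assumptions) fixes \((n-1)h=0\), simulates the CIR process from a given \(v_0\), and compares sample moments of \(I\!E_{0,t}\) and \(I_{0,t}\) with the closed forms.

```python
import numpy as np

# Monte Carlo check of the order-2 conditional moments table, taking
# (n-1)h = 0 so that conditioning is on v(0) = v0.
k, theta, sigma_v = 2.0, 0.04, 0.3
v0, t, n_steps, n_paths = 0.06, 0.5, 2000, 200_000
dt = t / n_steps

rng = np.random.default_rng(3)
v = np.full(n_paths, v0)
IE = np.zeros(n_paths)   # IE_{0,t} = int_0^t e^{ku} sqrt(v) dw^v
I = np.zeros(n_paths)    # I_{0,t}  = int_0^t sqrt(v) dw^v

s = 0.0
for _ in range(n_steps):
    dwv = rng.normal(0.0, np.sqrt(dt), n_paths)
    sq = np.sqrt(np.maximum(v, 0.0))
    IE += np.exp(k * s) * sq * dwv
    I += sq * dwv
    v += k * (theta - v) * dt + sigma_v * sq * dwv
    s += dt

# closed-form entries of the table (with (n-1)h = 0):
m20 = np.exp(2*k*t)*theta/(2*k) + np.exp(k*t)*(v0-theta)/k - (v0/k - theta/(2*k))
m11 = np.exp(k*t)*theta/k + t*(v0-theta) - theta/k
m02 = -np.exp(-k*t)*(v0-theta)/k + t*theta + (v0-theta)/k

print((IE**2).mean(), m20)
print((IE*I).mean(), m11)
print((I**2).mean(), m02)
```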
Itô process Moments - II¶
For \(I_{n-1,t}^{*n_5}\), Itô's formula gives
\[dI_{n-1,t}^{*n_5} = n_5I_{n-1,t}^{*n_5-1}\sqrt{v(t)}dw(t) + \tfrac{1}{2}n_5(n_5-1)I_{n-1,t}^{*n_5-2}v(t)dt.\]
Note that \(d(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4})\,dI_{n-1,t}^{*n_5} = 0\) because \(dw^v(t)dw(t) = 0\). Hence,
\[d(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_5}) = I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}\,dI_{n-1,t}^{*n_5} + I_{n-1,t}^{*n_5}\,d(I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}).\]
Therefore,
\[\begin{aligned} E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_5}|v_{n-1}] = \int_{(n-1)h}^{t}\Big\{ &\tfrac{1}{2}n_5(n_5-1)E[I\!E_{n-1,s}^{n_3}I_{n-1,s}^{n_4}I_{n-1,s}^{*n_5-2}v(s)|v_{n-1}]\\ &+ \tfrac{1}{2}n_3(n_3-1)e^{2ks}E[I\!E_{n-1,s}^{n_3-2}I_{n-1,s}^{n_4}I_{n-1,s}^{*n_5}v(s)|v_{n-1}]\\ &+ \tfrac{1}{2}n_4(n_4-1)E[I\!E_{n-1,s}^{n_3}I_{n-1,s}^{n_4-2}I_{n-1,s}^{*n_5}v(s)|v_{n-1}]\\ &+ n_3n_4e^{ks}E[I\!E_{n-1,s}^{n_3-1}I_{n-1,s}^{n_4-1}I_{n-1,s}^{*n_5}v(s)|v_{n-1}] \Big\}ds, \end{aligned}\]
where the quantities having \(dw(t)\) and \(dw^v(t)\) have been dropped because their expectations are 0.
Hence, after substituting the expansion of \(v(s)\) as before, \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*n_5}|v_{n-1}]\) can be expressed in terms of conditional moments of strictly lower total order, and is therefore computable recursively; in particular, it is a polynomial in \(v_{n-1}\).
It should be noted that \(E[I_{n-1,t}^{*n_5}|v_{n-1}] = 0\) for odd \(n_5\) (conditional on the path of \(v\), \(I_{n-1,t}^{*}\) is centered Gaussian), and that for \(n_5=2\) the Itô isometry gives \(E[I_{n-1,t}^{*2}|v_{n-1}] = E[IV_{n-1,t}|v_{n-1}] = E[I_{n-1,t}^{2}|v_{n-1}]\).
Low Order Moments¶
Order 1, i.e., \(n_3 + n_4 + n_5= 1\).
\((n_3,n_4,n_5) = (1,0,0): E[I\!E_{n-1,t}|v_{n-1}] = 0\).
\((n_3,n_4,n_5) = (0,1,0): E[I_{n-1,t}|v_{n-1}] = 0\).
\((n_3,n_4,n_5) = (0,0,1): E[I_{n-1,t}^{*}|v_{n-1}] = 0\).
Order 2, i.e., \(n_3 + n_4 + n_5= 2\).
\((n_3,n_4,n_5=0)\) reduces to \((n_3,n_4)\), i.e., \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}|v_{n-1}]\).
\((n_3,n_4,n_5=1)\): \(E[I\!E_{n-1,t}^{n_3}I_{n-1,t}^{n_4}I_{n-1,t}^{*}|v_{n-1}] = 0\).
\((n_3,n_4,n_5)=(0,0,2)\) reduces to \((n_3, n_4)=(0,2)\), i.e., \(E[I_{n-1,t}^{2}|v_{n-1}]\).
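Putting the order-2 pieces together yields, for example, the second central moment of \(y_{n-1,t}\). The sketch below (taking \((n-1)h=0\) and \(t=h=1\); parameter values are assumptions) assembles \(cm_2\) from the expansion of \(\bar{y}_{n-1,t}\) and the table above, and checks it against a Monte Carlo estimate of \(var(y_1)\).

```python
import numpy as np

# Assemble cm_2(y_{0,t}) from the order-2 pieces ((n-1)h = 0, stationary v(0))
# and check it by Monte Carlo. Parameter values are assumptions.
mu, k, theta, sigma_v, rho = 0.05, 2.0, 0.04, 0.3, -0.7
t = 1.0

beta = (1 - np.exp(-k * t)) / (2 * k)
a, b = sigma_v / (2 * k), rho - sigma_v / (2 * k)

# unconditional moments from the table entries with E[v0] = theta
# (cross terms involving I* or E[.|v0] = 0 all vanish):
E_IE2 = theta * (np.exp(2 * k * t) - 1) / (2 * k)
E_IEI = theta * (np.exp(k * t) - 1) / k
E_I2 = theta * t                      # = E[I*^2] as noted above

cm2 = (beta**2 * theta * sigma_v**2 / (2 * k)   # beta^2 E[(v0-theta)^2]
       + a**2 * np.exp(-2 * k * t) * E_IE2
       + (b**2 + 1 - rho**2) * E_I2
       + 2 * a * b * np.exp(-k * t) * E_IEI)

# Monte Carlo estimate of var(y) via full-truncation Euler simulation:
n_steps, n_paths = 500, 200_000
dt = t / n_steps
rng = np.random.default_rng(4)
v = rng.gamma(2 * k * theta / sigma_v**2, sigma_v**2 / (2 * k), n_paths)
y = np.zeros(n_paths)
for _ in range(n_steps):
    dwv = rng.normal(0.0, np.sqrt(dt), n_paths)
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    vp = np.maximum(v, 0.0)
    y += (mu - 0.5 * vp) * dt + np.sqrt(vp) * (rho * dwv + np.sqrt(1 - rho**2) * dw)
    v += k * (theta - vp) * dt + sigma_v * np.sqrt(vp) * dwv

print(cm2, y.var())   # the two should agree closely
```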
Note
The two recursive equations above can be used to compute the central moment of any order of \(y_{n-1,t}\) recursively, from lower-order ones to higher-order ones. For example, we can start with the combinations \(\{(n_3,n_4,n_5): l=1\}\), then \(\{(n_3,n_4,n_5): l=2\}\), and so on, where \(n_3+n_4+n_5=l\). The computations are fairly straightforward but computationally intensive, and they are automated in the ajdmom package, as explained in the Program Design page.
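The bookkeeping of the recursion order can be sketched in a few lines; the enumeration below is illustrative only, while the actual symbolic computation is implemented in the ajdmom package (see the Program Design page).

```python
def combinations_of_order(l):
    """All (n3, n4, n5) with n3 + n4 + n5 = l, as used in the recursion."""
    return [(n3, n4, l - n3 - n4)
            for n3 in range(l + 1)
            for n4 in range(l + 1 - n3)]

# lower orders first, so each step only needs already-computed moments:
for l in range(1, 4):
    print(l, combinations_of_order(l))
```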
Covariance Derivation¶
Similarly, we can compute the covariance
\[cov(y_n^{l_1}, y_{n+1}^{l_2}),\]
in which \(y_n = y_{n-1,t}\) with \(t=nh\), where
\[y_{n-1,t} = \mu[t-(n-1)h] - \tfrac{1}{2}IV_{n-1,t} + \rho I_{n-1,t} + \sqrt{1-\rho^2}\,I_{n-1,t}^{*},\]
which also equals \(\overline{y}_{n-1,t} + (\mu -\theta/2)[t-(n-1)h]\).
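The same simulation machinery gives a quick numerical check for the covariance derivation; the sketch below (Euler scheme; parameter values are assumptions) estimates \(cov(y_n, y_{n+1})\) from simulated consecutive returns, which the package instead derives in closed form.

```python
import numpy as np

# Monte Carlo estimate of cov(y_n, y_{n+1}) (the l1 = l2 = 1 case) from
# consecutive simulated returns under the stationary start.
mu, k, theta, sigma_v, rho = 0.05, 2.0, 0.04, 0.3, -0.7
h, n_steps, n_paths = 1.0, 200, 200_000
dt = h / n_steps

rng = np.random.default_rng(5)
v = rng.gamma(2 * k * theta / sigma_v**2, sigma_v**2 / (2 * k), n_paths)
y = np.zeros((2, n_paths))                # y_1 and y_2

for i in range(2):                        # two consecutive intervals of length h
    for _ in range(n_steps):
        dwv = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        vp = np.maximum(v, 0.0)
        y[i] += (mu - 0.5 * vp) * dt + np.sqrt(vp) * (rho * dwv + np.sqrt(1 - rho**2) * dw)
        v += k * (theta - vp) * dt + sigma_v * np.sqrt(vp) * dwv

print(np.cov(y[0], y[1])[0, 1])           # compare with the closed-form result
```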