Time Series 6
Robert Almgren
Nov. 9, 2009

This week we continue our discussion of state space models, focusing on the particle method approach for nonlinear models. Besides its practical application, this will give us some insight into the structure of state space models. (This material and the presentation are taken from René A. Carmona, Statistical Analysis of Financial Data in S-Plus, Springer, 2004, Section 7.6.)

The nonlinear state space model has an evolution equation

$$x_t = F_t(x_{t-1}, w_t). \tag{1}$$

In general, $x_t$ could live in any kind of space, but for simplicity we take $x_t \in \mathbb{R}^n$. Here $w_t$ is a noise vector that could have any distribution, but we take successive values $w_t, w_{t+1}, \dots$ to be independent. We do not observe $x_t$ but only $y_t$, which is given in terms of $x_t$ by

$$y_t = G_t(x_t, u_t), \tag{2}$$

where $u_t$ is another noise term, with arbitrary distribution but serially independent; we take $u_t$ independent of $w_t$.

The filtering problem is to deduce information about $x_t$, given the series of observations through time $t$: $y_t, y_{t-1}, \dots$, which we denote collectively by $\mathcal{Y}_t$. The prediction problem would extrapolate information about $x_{t+1}, x_{t+2}, \dots$ given information only through time $t$, and the smoothing problem would try to retroactively improve the estimates of $x_{t-1}, x_{t-2}, \dots$ using information through $t$. Since $x_t$ is a random variable, "information about it" means constructing some model for its entire distribution

$$\pi_t(x_t) = P(x_t \mid \mathcal{Y}_t),$$

conditional on the observations through time $t$. We want to determine an update formula $\pi_{t-1} \to \pi_t$ incorporating the evolution equation (1) and the new information (2).

In the linear case, it is reasonable to take all distributions to be Gaussian, or at least fully described by their mean and covariance structure. The linear Kalman filter that we discussed earlier is just an update formula for the best mean-square estimator $\hat{x}_t$ and the covariance of its error, $\Sigma_t = \mathrm{Cov}(x_t - \hat{x}_t)$.
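As a concrete illustration, a minimal simulation of the model (1) and (2) might look as follows. The particular choices of $F_t$, $G_t$, and the noise distributions here are invented for the sketch, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, w):
    # hypothetical nonlinear evolution F_t(x_{t-1}, w_t)
    return 0.9 * x + 0.2 * np.sin(x) + w

def G(x, u):
    # hypothetical nonlinear observation G_t(x_t, u_t)
    return np.tanh(x) + u

T = 100
x = np.zeros(T)   # hidden states x_t (never seen by the filter)
y = np.zeros(T)   # observations y_t
for t in range(1, T):
    w = rng.normal(0.0, 0.3)   # evolution noise w_t, serially independent
    u = rng.normal(0.0, 0.2)   # observation noise u_t, independent of w_t
    x[t] = F(x[t - 1], w)
    y[t] = G(x[t], u)
```

The filtering problem is then: given only the array `y`, reconstruct the distribution of each `x[t]`.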
For general nonlinear models, mean and variance are not enough to describe the distribution $\pi_t$.

Particle method

The idea of a particle method is that we represent the density of interest by a collection of particles $x_t^i$, $i = 1, \dots, m$, which will be interpreted as independent samples from the distribution $\pi_t$. From such a sample we can compute most quantities of interest. We can reconstruct the density using a kernel method, or simply as a sum of point masses

$$\pi_t(dx) \approx \frac{1}{m} \sum_{j=1}^m \delta_{x_t^j}(dx),$$

and we can directly estimate averages

$$E f(x_t) \approx \frac{1}{m} \sum_{j=1}^m f(x_t^j).$$

Particle methods have been widely used in fluid dynamics (vortex methods) as well as many other fields.

If we assume that we have a way to determine the initial values $x_0^i$, then the problem is how to do the update step: determine the $\{x_t^i\}_{i=1}^m$ from the $\{x_{t-1}^i\}_{i=1}^m$. The novel feature of state space models, which does not appear in the physical simulation applications, is the incorporation of the observation variable $y_t$ along with the system dynamics.

We do this one step at a time. Let us suppose that $x_{t-1}^1, \dots, x_{t-1}^m$ are independent samples from the distribution $\pi_{t-1}$, incorporating the information $\mathcal{Y}_{t-1} = \{y_{t-1}, y_{t-2}, \dots\}$ through time $t-1$. We set

$$z_t^i = F_t(x_{t-1}^i, w_t^i), \qquad i = 1, \dots, m,$$

with the $w_t^i$ independent samples from the distribution of $w_t$. The values $z_t^i$ are our best estimates for the $x_t^i$ using only the information $\mathcal{Y}_{t-1}$ through time $t-1$. If we had no observations, then we would set $x_t^i = z_t^i$, which would be a Monte Carlo simulation of the dynamics.

With some sloppiness of notation, we write $P(x_{t-1} = x_{t-1}^i \mid \mathcal{Y}_{t-1}) = 1/m$ and $P(x_t = z_t^i \mid \mathcal{Y}_{t-1}) = 1/m$. This means, not that the true value of $x_t$ takes one of the $m$ specific values, but rather that all the samples are equally probable. Our goal is to determine the $x_t^i$ so that $P(x_t = x_t^i \mid \mathcal{Y}_t) = 1/m$. Updating the $z_t^i$ to the $x_t^i$ simply means evolving the conditioning from $\mathcal{Y}_{t-1}$ to $\mathcal{Y}_t$. We incorporate $y_t$ using a maximum-likelihood approach.
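The prediction step $z_t^i = F_t(x_{t-1}^i, w_t^i)$ can be sketched directly. The simple dynamics $F(x, w) = 0.9\,x + w$ and all numerical values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5000

def F(x, w):
    # hypothetical evolution, standing in for F_t
    return 0.9 * x + w

# suppose the x_{t-1}^i are equally probable samples from pi_{t-1}
x_prev = rng.normal(1.0, 0.5, size=m)

# prediction step: z_t^i = F_t(x_{t-1}^i, w_t^i), one fresh noise per particle
w = rng.normal(0.0, 0.3, size=m)
z = F(x_prev, w)

# equal-weight particle estimate of E f(x_t), here with f(x) = x
mean_est = z.mean()   # should be close to 0.9 * 1.0 = 0.9
```

With no observation available, stopping here (setting $x_t^i = z_t^i$) is exactly the Monte Carlo simulation of the dynamics described above.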
Denoting by $Y_t$ the random variable whose observed value is $y_t$, and correspondingly writing $X_t$ for $x_t$ for symmetry, we define

$$\alpha_t^i = P(Y_t = y_t \mid X_t = z_t^i), \qquad i = 1, \dots, m,$$

which is the probability, per unit $dy_t$, that we would have observed $y_t$, conditional on a particular value of $X_t$. To calculate the $\alpha_t^i$, we assume that we can invert the observation equation (2) to write

$$u_t = H_t(x_t, y_t).$$

This is not too difficult in many cases of interest, for example if the noise is additive ($y_t = G_t(x_t) + u_t$) or multiplicative ($y_t = G_t(x_t)\,u_t$). Then from $P_u\,du = P_y\,dy$ we have $P_y = P_u\,|\partial u/\partial y|$, or

$$\alpha_t^i = \psi(u)\,\bigl|\det \partial_y H_t\bigr|,$$

where $\psi(u)$ is the density of $u_t$. In the above expression, $\psi(u)$ is evaluated at $u = H_t(z_t^i, y_t)$, and $\partial_y H_t$ is evaluated at $(z_t^i, y_t)$. Of course, if $u_t$ and $y_t$ are scalars (regardless of the dimension of $x_t$), then this is simply

$$\alpha_t^i = \psi\bigl(H_t(z_t^i, y_t)\bigr)\,\left|\frac{\partial H_t}{\partial y_t}(z_t^i, y_t)\right|.$$

Next we calculate the probability that $X_t = z_t^i$ conditional on $\mathcal{Y}_t$, after which we can adjust the values $z_t^i$ to get the $x_t^i$. Recalling that, loosely speaking, $\mathcal{Y}_t = \mathcal{Y}_{t-1} \cup \{y_t\}$, it is straightforward to calculate

$$P(X_t = z_t^i \mid \mathcal{Y}_t)
= \frac{P(X_t = z_t^i \text{ and } Y_t = y_t \mid \mathcal{Y}_{t-1})}{P(Y_t = y_t \mid \mathcal{Y}_{t-1})}
= \frac{P(Y_t = y_t \mid X_t = z_t^i)\,P(X_t = z_t^i \mid \mathcal{Y}_{t-1})}{\sum_{j=1}^m P(Y_t = y_t \mid X_t = z_t^j)\,P(X_t = z_t^j \mid \mathcal{Y}_{t-1})}
= \frac{\alpha_t^i/m}{\sum_j \alpha_t^j/m}
= \frac{\alpha_t^i}{\sum_j \alpha_t^j}.$$

In the first two lines, we have used standard conditional probability. In the third line, we have used the definition of $\alpha_t^i$. We have also used the assumption that the $x_{t-1}^i$ are equally probable samples from the distribution $\pi_{t-1}$ incorporating the information $\mathcal{Y}_{t-1}$, and hence that the $z_t^i$ are equally probable samples from the distribution of $x_t$ without using the new information $y_t$.

Now, if all the $\alpha_t^i$ were equal, then the above probabilities would be equal and we would be done with this step: by setting $x_t^i = z_t^i$, we would have an equally probable sample from the distribution of $x_t$ conditional on $\mathcal{Y}_t$.
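The weight computation and normalization can be sketched for the additive-noise case $y_t = G(x_t) + u_t$, where $H_t(x, y) = y - G(x)$ and $|\partial H_t/\partial y_t| = 1$. The specific choices here ($G = \tanh$, the noise scale, the observed value) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2000

def G(x):
    # hypothetical scalar observation function (stand-in for G_t)
    return np.tanh(x)

s = 0.2
def psi(u):
    # density of the observation noise u_t, here taken N(0, s^2)
    return np.exp(-u**2 / (2 * s**2)) / (s * np.sqrt(2.0 * np.pi))

z = rng.normal(0.5, 0.4, size=m)   # predicted particles z_t^i
y_obs = np.tanh(0.9)               # one observed value y_t (made up)

# additive noise: H_t(x, y) = y - G(x) and |dH_t/dy_t| = 1, so
# alpha_t^i = psi(y_t - G(z_t^i))
alpha = psi(y_obs - G(z))

# normalized: P(X_t = z_t^i | Y_t) = alpha_t^i / sum_j alpha_t^j
p = alpha / alpha.sum()
```

Particles consistent with the observation receive large `p` values; the next step uses these probabilities to resample.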
In general, the values of the $\alpha_t^i$ tell us how to adjust the $z_t^i$ to get the new particle positions $x_t^i$: sample points $z_t^i$ with large values of $\alpha_t^i$ correspond to large likelihood of $y_t$, and should have more weight in the final distribution. The key idea is that we do not attempt to compute new particle positions. Rather, we resample the existing collection to get the desired probabilities. Specifically, for $i = 1, \dots, m$ we set

$$x_t^i = \begin{cases}
z_t^1 & \text{with probability } \alpha_t^1 \big/ \sum_j \alpha_t^j, \\
\quad\vdots \\
z_t^m & \text{with probability } \alpha_t^m \big/ \sum_j \alpha_t^j.
\end{cases}$$

Of course, the selection is to be made independently for each $i$. In this way, the resulting points $x_t^i$ will be equally probable samples from the distribution $\pi_t$ using the observation $y_t$.

It may seem that we are narrowing our collection of samples at each step. Even if the $\alpha_t^i$ were all equal, in the random resampling procedure we will likely set several $x_t^i$ equal to the same value $z_t^j$, and some $z_t^j$ will not survive into the collection $\{x_t^i\}$. If you were clever, you might be able to figure out some quasi-Monte Carlo procedure to preserve the above distribution while minimizing the uncertainty of the selection. But note that on the next iteration we will use independent samples from the noise $w_{t+1}$, and so values $x_t^i$ that are equal at time $t$ will no longer be equal at time $t+1$.

Example: stochastic volatility

We suppose that we have an asset price $S_t$ evolving according to the continuous-time process

$$\frac{dS_t}{S_t} = \mu\,dt + \sigma_t\,dW_t,$$

where $\sigma_t$ is some volatility process and $W_t$ is a Wiener process. In a discrete-time approximation we can write

$$r_t = \frac{S_{t+\tau} - S_t}{S_t} = \mu\tau + \sigma_t \sqrt{\tau}\,\epsilon_t,$$

where $r_t$ is the return across the time step $\tau = \Delta t$ and $\epsilon_t \sim N(0,1)$. This is the observation equation (2), and our empirically observed data consists of the return series $r_1, r_2, \dots$. The volatility process could be whatever we like, even with hidden multi-dimensional components.
But for simplicity, let us take a simple mean-reverting process

$$d\sigma_t = -\lambda (\sigma_t - \bar{\sigma})\,dt + \gamma\,d\widetilde{W}_t,$$

where $\bar{\sigma}$ is the long-term mean level, $\lambda$ is the rate of mean reversion, $\gamma$ is the volatility of volatility, and $\widetilde{W}_t$ is another Wiener process, which we take to be independent of $W_t$. In a discrete-time approximation we may write

$$\sigma_t = e^{-\lambda\tau}\,\sigma_{t-1} + \bigl(1 - e^{-\lambda\tau}\bigr)\,\bar{\sigma} + \gamma \sqrt{\frac{1 - e^{-2\lambda\tau}}{2\lambda}}\;\tilde{\epsilon}_t \;\approx\; \sigma_{t-1} + \lambda\,(\bar{\sigma} - \sigma_{t-1})\,\tau + \gamma \sqrt{\tau}\,\tilde{\epsilon}_t \quad \text{for small } \tau,$$

where $\tilde{\epsilon}_t \sim N(0,1)$. (I apologize that we are using $t$ both for continuous time and for the discrete index; hopefully in each case the meaning is clear from context.) We assume that we know the values of all the parameters $\bar{\sigma}$, $\lambda$, and $\gamma$. This is our evolution equation.

Our task is to take an observed historical series of returns, and from that generate a sequence of estimates of the entire distribution of $\sigma_t$ at each time, not just its estimated mean $\hat{\sigma}_t$ and variance. In doing this, we assume that we know not only the structure of the above model, but also all the coefficients $\mu$, $\bar{\sigma}$, $\lambda$, and $\gamma$ (the time step $\tau$ is ours to choose). The only problem is to determine the instantaneous value of the state variable $\sigma_t$.

The particle method above can be directly applied, with the hidden variable $x_t \equiv \sigma_t$, the observed variable $y_t \equiv r_t$, the evolution noise $w_t \equiv \tilde{\epsilon}_t$, and the observation noise $u_t \equiv \epsilon_t$. At each time step we maintain a collection $\{\sigma_t^i\}_{i=1}^m$, which represents an equally probable sample from the distribution of $\sigma_t$ conditional on the observations through time $t$. To determine the new values $\{\sigma_t^i\}$ from the previous values $\{\sigma_{t-1}^i\}$, we first determine unconditional values $\tilde{\sigma}_t^i$ using the evolution equation above, with $m$ independent samples $\{\tilde{\epsilon}_t^i\}$.
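The exact one-step discretization can be checked numerically. The parameter values below are invented for illustration; over a long run the path should fluctuate around the long-term mean $\bar{\sigma}$ with stationary standard deviation $\gamma/\sqrt{2\lambda}$ (note that this Gaussian process can go slightly negative, a known limitation of the simple model):

```python
import numpy as np

rng = np.random.default_rng(4)

# assumed parameter values, for illustration only
sigma_bar = 0.2     # long-term mean level
lam = 2.0           # rate of mean reversion
gamma = 0.3         # volatility of volatility
tau = 1.0 / 252     # time step (one trading day, say)

a = np.exp(-lam * tau)
# exact one-step noise scale: gamma * sqrt((1 - e^{-2 lam tau}) / (2 lam))
vol = gamma * np.sqrt((1.0 - np.exp(-2.0 * lam * tau)) / (2.0 * lam))

T = 50_000
sigma = np.empty(T)
sigma[0] = sigma_bar
for t in range(1, T):
    sigma[t] = a * sigma[t - 1] + (1.0 - a) * sigma_bar + vol * rng.normal()

# stationary mean is sigma_bar = 0.2; stationary std is gamma/sqrt(2*lam) = 0.15
```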
Then, to incorporate the new observed return $r_t$, we observe that the inverse of the observation equation $r_t = G_t(\sigma_t, \epsilon_t)$ is

$$\epsilon_t = H_t(\sigma_t, r_t) = \frac{r_t - \mu\tau}{\sigma_t \sqrt{\tau}},$$

whose derivative with respect to the observation variable is $\partial H_t/\partial r_t = 1/(\sigma_t \sqrt{\tau})$, and so the likelihood coefficients evaluated at the unconditional values $\tilde{\sigma}_t^i$ are

$$\alpha_t^i = \Phi\!\left(\frac{r_t - \mu\tau}{\tilde{\sigma}_t^i \sqrt{\tau}}\right) \frac{1}{\tilde{\sigma}_t^i \sqrt{\tau}},$$

where $\Phi(z) = (1/\sqrt{2\pi})\,e^{-z^2/2}$ is the standard normal density. We then determine the conditional particle values $\{\sigma_t^i\}$ as random samples from the unconditional values $\{\tilde{\sigma}_t^i\}$, by sampling with probabilities proportional to the $\alpha_t^i$.

Of course, in practice part of the problem will be to estimate values for all the parameters, especially $\lambda$ and $\gamma$. One technique that is commonly used is to include the parameters in the state equation, and determine their updates along with the state variable. If we assume that $\mu$ and $\bar{\sigma}$ can be estimated in advance (say, by taking the mean and standard deviation of the entire return sequence), then we would use the system of update equations

$$\begin{aligned}
\sigma_t &= e^{-\lambda_{t-1}\tau}\,\sigma_{t-1} + \bigl(1 - e^{-\lambda_{t-1}\tau}\bigr)\,\bar{\sigma} + \gamma_{t-1} \sqrt{\frac{1 - e^{-2\lambda_{t-1}\tau}}{2\lambda_{t-1}}}\;\tilde{\epsilon}_t, \\
\lambda_t &= \lambda_{t-1}, \\
\gamma_t &= \gamma_{t-1}.
\end{aligned}$$

That is, the true values of the coefficients do not change from one step to another; only our estimates of their values do. (It might be simpler and more numerically stable to use the small-$\tau$ approximation above, so that $\lambda$ appears linearly.) We can simply carry along these values as components of the state variable. The likelihood function does not get any more complicated, since it includes only the observation variable. In doing this, we will get not only estimated values for the parameters but also approximations to their distributions.
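Putting the pieces together, the whole filter for the stochastic-volatility example can be sketched as below. Everything numerical (parameter values, particle count, the synthetic data) is invented for illustration, and the clipping of negative volatilities to positive values is an ad hoc device, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(5)

# assumed parameters, all treated as known; invented values for illustration
mu, sigma_bar, lam, gamma = 0.05, 0.2, 2.0, 0.3
tau = 1.0 / 252
m = 2000

a = np.exp(-lam * tau)
vol = gamma * np.sqrt((1.0 - np.exp(-2.0 * lam * tau)) / (2.0 * lam))

def normal_pdf(z):
    return np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

# synthetic data: a "true" volatility path and the returns it generates
T = 200
sig_true = np.empty(T)
r = np.empty(T)
sig_true[0] = sigma_bar
for t in range(T):
    if t > 0:
        sig_true[t] = (a * sig_true[t - 1] + (1.0 - a) * sigma_bar
                       + vol * rng.normal())
    r[t] = mu * tau + sig_true[t] * np.sqrt(tau) * rng.normal()

# particle filter: maintain m equally probable samples of sigma_t
sig = np.full(m, sigma_bar)
sig_hat = np.empty(T)          # filtered mean of sigma_t at each step
for t in range(T):
    # 1. prediction: evolve each particle with its own fresh noise
    z = a * sig + (1.0 - a) * sigma_bar + vol * rng.normal(size=m)
    z = np.abs(z)              # ad hoc: keep volatilities positive
    # 2. likelihood of the observed return under each particle:
    #    alpha_i = Phi((r_t - mu*tau)/(sigma_i*sqrt(tau))) / (sigma_i*sqrt(tau))
    alpha = normal_pdf((r[t] - mu * tau) / (z * np.sqrt(tau))) / (z * np.sqrt(tau))
    # 3. resample with probabilities proportional to alpha
    p = alpha / alpha.sum()
    sig = z[rng.choice(m, size=m, replace=True, p=p)]
    sig_hat[t] = sig.mean()
```

At each step `sig` is the equally weighted particle cloud approximating the conditional distribution of $\sigma_t$, so quantities beyond the mean (quantiles, variance) come for free from the same array.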