Monday 15 September 2014

Particle Physics Software and Financial Analysis Mechanics






Many people ask what the benefits of massive particle physics experiments such as the Large Hadron Collider at CERN actually are. Some go back to the basics and say that fundamental research inevitably generates spin-offs in technology. After all, hospital equipment such as PET scanners and MRI machines would never have found their way into hospitals without the modern detector and superconducting magnet technology used in the massive particle detectors.

However, it is important to note that the technologies used in modern accelerators, such as superconducting magnets and silicon detectors, were not invented by particle physicists; rather, they were developed by materials scientists and engineers.

Particle Physics has benefited from Materials Science in much the same way as Medical Physics has benefited from Particle Physics, simply in a different direction. However, Particle Physics research, often labelled "Blue Sky" research, not only tests the proof of principle of many important technologies but also creates applications in the here and now, the most important of which is new software technology.

So when talking about the benefits of Particle Physics, and more importantly why people should fund it, what are the real benefits?
One of the real benefits of Particle Physics research has been the vast amount of software that has been developed for these projects. Grid computing and data-mining systems have been integral to the whole process from the beginning, as has the ability to develop faster computer programs that analyse vast numbers of variables.

Physicists and engineers have had to examine how they can program computers to perform these tasks as quickly as possible, and this has led them to develop new software frameworks and languages built on existing techniques. One such example is CERN's ROOT framework, which uses the more familiar C++ language as its foundation.



CERN based ROOT on C++ because, even though it is a bit harder to learn than most other languages, it is very fast and powerful. Programming languages are chosen in much the same way you would choose a car: either for safety from crashing or for speed. If Fortran and Visual Basic are stable yet slow languages, equivalent to a family car say, then a C++ program would be like a Ferrari: fast, but it crashes easily.

C++ is the computer language for mathematical models where you need speed. For models with closed-form solutions you are naturally doing fine in almost any language, but when it comes to large-scale Monte Carlo, C++ is really a plus.

Therefore it is not surprising that CERN uses C++ in ROOT, as Monte Carlo simulations are used all the time in theoretical particle physics models. Previously, CERN had used Fortran to run its simulations in the Geant-3 particle physics platform; it adopted C++ for Geant-4, the first of these toolkits to use object-oriented programming. CERN ROOT truly is an amazing piece of physics software with far greater potential than people give it credit for. ROOT works on virtually all operating systems; however, for optimum performance I have found that Red Hat's Fedora is the OS of choice for CERN ROOT, at least in my experience.





Windows is useful for quick jobs and modifications too; however, I find it crashes far too easily and is not ideal if you are planning on working with ROOT for hours on end. Moreover, since the advent of the infamous Windows 8, many would consider Windows a generally poor operating system for any form of high-level work.


Apart from particle physics, Monte Carlo simulations are also an important tool in modern finance, where they are used in models to make predictions, in particular to integrate certain formulas while incorporating random fluctuations as well as boundary conditions.

An elementary example of using the Monte Carlo method is to find the value of π.

Whereas humans can solve integrals symbolically, which is more mathematically complete, computer programs generally solve integrals numerically. To understand how a computer program does this, we need to think back to the basic definition of an integral from fundamental calculus:
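One way to write this definition is as the limit of a sum of rectangle areas:

$$\int_a^b f(x)\,dx \;=\; \lim_{N\to\infty}\sum_{i=1}^{N} f(x_i)\,\Delta x, \qquad \Delta x = \frac{b-a}{N}$$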





The following example generates a solution, using standard integration, for the area under the curve drawn from a polynomial f(x).
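For instance, taking the purely illustrative polynomial f(x) = x², standard integration gives the exact area under the curve between x = 0 and x = 1:

$$\int_0^1 x^2\,dx \;=\; \left[\frac{x^3}{3}\right]_0^1 \;=\; \frac{1}{3}$$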





An integral can also be solved by approximation using a series of boxes drawn under the curve (see figure below); as the number of boxes increases, the approximation gets better and better. This is how a computer program can calculate integrals - it calculates the area of a large number of these boxes and sums them up.
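As a sketch of this idea, the following plain C++ snippet (the function f(x) = x² is again just an example) adds up the areas of the boxes and shows the approximation improving as the number of boxes grows:

// riemann.C -- a minimal sketch of the "sum of boxes" idea for numerical integration
#include <cstdio>

double f(double x) { return x * x; }   // example integrand

double riemannSum(double a, double b, int nBoxes)
{
   double width = (b - a) / nBoxes;     // width of each box
   double sum = 0.0;
   for (int i = 0; i < nBoxes; ++i) {
      double x = a + (i + 0.5) * width; // evaluate f at the centre of each box
      sum += f(x) * width;              // add the area of one box
   }
   return sum;
}

int main()
{
   // the approximation improves as the number of boxes increases
   for (int n = 10; n <= 10000; n *= 10)
      printf("%5d boxes: integral of x^2 from 0 to 1 = %.6f (exact: 0.333333)\n",
             n, riemannSum(0.0, 1.0, n));
   return 0;
}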







In a similar fashion, a value for the mathematical constant π is represented or “modeled” by definite integrals, such as
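One standard choice is the quarter-circle area integral:

$$\pi \;=\; 4\int_0^1 \sqrt{1-x^2}\;dx$$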



we can simplify this by the approximation:



This forms our new representation, or "model", for π as



As a simplifying approximation we can use a Monte Carlo method where we choose points randomly inside of the square. The points should be uniformly distributed, that is, each location inside the square should occur with the same probability. Then the simplifying approximation which allows us to compute π is
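Writing N_inside for the number of points that land inside the circle and N_total for the total number of points generated (notation introduced here just for clarity), the estimate is:

$$\pi \;\approx\; 4\,\frac{N_{\text{inside}}}{N_{\text{total}}}$$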



A simple algorithm to implement this Monte Carlo calculation is:

• Choose the radius of the circle R = 1, so the enclosing square has side 2R = 2 and area (2R)^2 = 4

• Generate a random point inside the square.

–> Use a random number generator to generate a uniform deviate, that is, a real number r in the range 0 < r < 1. The x coordinate of the random point is x = 2r−1.
–> Repeat to find the y coordinate.
–> If x^2 + y^2 < 1 then the point is inside the circle of radius 1, so increment the number of inside points.
–> Increment the total number of points.

• Repeat until the desired number of points have been generated.

• Compute the estimate π ≈ 4 × (number of inside points) / (total number of points).
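A minimal ROOT macro implementing this algorithm could look like the following sketch (illustrative only, not necessarily the exact script used for the plot below):

// pi_mc.C -- Monte Carlo estimate of pi with a scatter plot of the sampled points
#include "TRandom3.h"
#include "TGraph.h"
#include "TCanvas.h"
#include <cstdio>

void pi_mc(int nPoints = 100000)
{
   TRandom3 rng(0);          // seed 0 gives a unique seed each run
   TGraph inside, outside;   // accepted / rejected points
   int nInside = 0;

   for (int i = 0; i < nPoints; ++i) {
      double x = 2.0 * rng.Rndm() - 1.0;   // uniform deviate mapped to [-1, 1]
      double y = 2.0 * rng.Rndm() - 1.0;
      if (x*x + y*y < 1.0) {
         inside.SetPoint(inside.GetN(), x, y);
         ++nInside;
      } else {
         outside.SetPoint(outside.GetN(), x, y);
      }
   }

   double piEstimate = 4.0 * nInside / nPoints;
   printf("pi estimate with %d points: %f\n", nPoints, piEstimate);

   TCanvas *c = new TCanvas("c", "Monte Carlo estimate of pi", 600, 600);
   inside.SetMarkerColor(kRed);
   outside.SetMarkerColor(kBlue);
   inside.Draw("AP");
   outside.Draw("P");
   c->Update();
}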



Implemented in CERN ROOT, we get the following distribution:






All well and good, but what does this have to do with anything beyond abstract maths? It turns out that the notion of a fixed constant whose value is computed from random external counting is very similar to the concept of a priced stock option: a fixed value, but one under the influence of random fluctuations caused by the externalities that every system of transactions inherently has.


SPX Corporation is a Fortune 500 multi-industry manufacturing firm. SPX's business segments serve developing and emerging end markets, such as global infrastructure, process equipment, and diagnostic tool industries.




How do we know this company is lucrative? Simply calling it a "Fortune 500" company should not be the only index we use in measuring a company's worth. If we really wanted to invest in a company, we should see how its worth changes over time.





















From this we can learn a lot about what happened, particularly how the company was affected by the 2008 banking crisis: it suffered a minor collapse and returned to stability, but this did not last and it suffered an even larger crash over the course of the following months. It is noteworthy, however, how quickly it recovered and began to flourish again.

However, we should remember that the banking bailouts promised a large trickle-down of money into companies like these, so was this a result of actual worth, or was the company benefiting in some way from a bailout? President Obama's stimulus package also went into major Fortune 500 corporations, so the overall speed of recovery should not be a de facto litmus test of the company's competitiveness. State intervention in any corporation, whether by bailout or stimulus, is a violation of the capitalist market principles by which the system is supposed to be, at least in theory, self-correcting: the companies which are in fact doing better are allowed to survive, whereas the companies doing worst must die out. Therefore, measuring the price over time is not a clear indicator of a company's value in a true capitalist market system.

With smaller businesses, however, we can usually check the profits that a business makes in a year and use this as an accurate measure of competitiveness: the profit lets us measure its trading power, and a competitive market system, independent of state intervention, is often more apparent in smaller businesses. With bigger businesses, which are often subject to state intervention, we should use a marker which lets us monitor trading power more closely, on a daily, hourly and even minute-by-minute basis.

One way to do this is to see how much the company's stock was traded over time.

Volume–price trend (VPT) (sometimes price–volume trend) is a technical analysis indicator intended to relate price and volume in the stock market. VPT is based on a running cumulative volume that adds or subtracts a multiple of the percentage change in share price trend and current volume, depending upon their upward or downward movements.

$$\text{VPT} = \text{VPT}_\text{prev} + \text{volume} \times \frac{\text{close}_\text{today} - \text{close}_\text{prev}}{\text{close}_\text{prev}}$$
The starting VPT total, i.e. the zero point, is arbitrary; only the shape of the resulting indicator is used, not the actual level of the total.
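To make the definition concrete, here is a small plain C++ sketch (the closing prices and volumes are hypothetical) that computes the running VPT directly from the formula above:

// vpt.C -- running volume-price trend from daily closes and volumes
#include <vector>
#include <cstdio>

std::vector<double> computeVPT(const std::vector<double> &close,
                               const std::vector<double> &volume)
{
   std::vector<double> vpt(close.size(), 0.0);   // the starting point is arbitrary; 0 here
   for (size_t i = 1; i < close.size(); ++i) {
      double pctChange = (close[i] - close[i - 1]) / close[i - 1];
      vpt[i] = vpt[i - 1] + volume[i] * pctChange;
   }
   return vpt;
}

int main()
{
   // hypothetical closing prices and traded volumes for a few days
   std::vector<double> close  = {100.0, 102.0, 101.0, 105.0};
   std::vector<double> volume = {1e6,   1.2e6, 0.8e6, 1.5e6};
   std::vector<double> vpt = computeVPT(close, volume);
   for (size_t i = 0; i < vpt.size(); ++i)
      printf("day %zu: VPT = %.0f\n", i, vpt[i]);
   return 0;
}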

VPT is similar to On-Balance Volume (OBV) in that it is a cumulative, momentum-style indicator which ties together volume with price action. However, the key difference is that the amount of volume added to the total depends upon the relationship of today's close to yesterday's close.
VPT is interpreted in similar ways to OBV. Generally, the idea is that volume is higher on days with a price move in the dominant direction; for example, in a strong uptrend there is more volume on up days than on down days. So, when prices are going up, VPT should be going up too, and when prices make a new rally high, VPT should too. If VPT fails to go past its previous rally high, then this is a negative divergence, suggesting a weak move.

VPT can be considered more accurate than the OBV index in that the amount of volume apportioned each day is directly proportional to the underlying price action: large moves account for large moves in the index, and small moves account for small moves in the index. In this way the VPT can be seen to almost mirror the underlying market action; however, as noted above, divergence can occur, and it is this divergence that is an indicator of possible future price action.

VPT is used to measure the "enthusiasm" of the market. In other words, it is an index that shows how much a stock was traded.


Using ROOT we can make animations showing the price vs. volume movement of SPX Corp over 3 years. A 50-day moving average line is also drawn over the scatter plot. (Note: if the animation stops and you want to see it again, open it in a new window.)



We can see from this that the motion appears truly random and looks very similar to the concept of Brownian motion from physics. Using Monte Carlo simulations, physicists have been able to create Brownian motion simulations on computers to predict the possible random paths of particles.

Stock prices are often modeled as the sum of a deterministic drift, or growth, rate and a random number with a mean of 0 and a variance proportional to the time step dt. This is known as Geometric Brownian Motion, and it is commonly used to model stock price paths. It is defined by the following stochastic differential equation.
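Written out (with the Wiener increment over one time step expressed in terms of a standard normal draw ε, a common convention), the equation is:

$$dS_t = \mu\, S_t\,dt + \sigma\, S_t\,dW_t, \qquad dW_t = \varepsilon\sqrt{dt}$$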


Where



St is the stock price at time t, dt is the time step, μ is the drift, σ is the volatility, Wt is a Wiener process, and ε is a random draw from a normal distribution with a mean of zero and a standard deviation of one.

Hence dSt is the sum of a general trend and a term that represents uncertainty.




We can convert this equation into finite-difference form to perform a computer simulation, which gives
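One common way to write the discretised step, consistent with the symbols defined above, is:

$$S_{t+dt} = S_t\left(1 + \mu\,dt + \sigma\,\varepsilon\sqrt{dt}\right)$$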



Bear in mind that ε is drawn from a normal distribution with a mean of zero and a standard deviation of one.


We can use ROOT to perform such a geometric Brownian motion simulation.


This ROOT script reads in a data file which contains 32 days of closing prices.
The script then takes all 32 days and produces a log-normal histogram which is fit with a Gaussian to get the volatility (σ) and drift (μ), assuming Geometric Brownian Motion (GBM). Once these two parameters are obtained from the data, a simple Monte Carlo model is run to produce 5%, 50%, and 95% CL limits on future price action. On top of the CL contours, 10 world lines of possible future price histories are also drawn.
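A minimal sketch of such a macro is given below. It is illustrative only: the input file name closing_prices.txt is an assumption, and it reproduces only the world lines, not the confidence-level contours of the full script.

// gbm_mc.C -- a minimal geometric-Brownian-motion Monte Carlo sketch in ROOT
#include "TH1D.h"
#include "TF1.h"
#include "TGraph.h"
#include "TCanvas.h"
#include "TRandom3.h"
#include <fstream>
#include <vector>
#include <cmath>
#include <cstdio>

void gbm_mc(const char *fileName = "closing_prices.txt",
            int nDaysAhead = 250, int nWorldLines = 10)
{
   // 1) read the historical closing prices (one value per line)
   std::vector<double> price;
   std::ifstream in(fileName);
   double p;
   while (in >> p) price.push_back(p);
   if (price.size() < 2) { printf("not enough data in %s\n", fileName); return; }

   // 2) histogram the daily log-returns and fit a Gaussian to extract drift and volatility
   TH1D *hLogRet = new TH1D("hLogRet", "daily log-returns", 50, -0.2, 0.2);
   for (size_t i = 1; i < price.size(); ++i)
      hLogRet->Fill(std::log(price[i] / price[i - 1]));
   hLogRet->Fit("gaus", "Q");
   double mu    = hLogRet->GetFunction("gaus")->GetParameter(1); // mean daily log-return
   double sigma = hLogRet->GetFunction("gaus")->GetParameter(2); // daily volatility

   // 3) generate world lines by applying one random daily log-return per step
   TRandom3 rng(0);
   TCanvas *c = new TCanvas("c", "GBM world lines", 800, 600);
   for (int w = 0; w < nWorldLines; ++w) {
      TGraph *g = new TGraph(nDaysAhead);
      double S = price.back();                  // start from the last observed price
      for (int d = 0; d < nDaysAhead; ++d) {
         double eps = rng.Gaus(0.0, 1.0);       // standard normal draw
         S *= std::exp(mu + sigma * eps);       // one day's log-return (dt = 1 trading day)
         g->SetPoint(d, d, S);
      }
      g->Draw(w == 0 ? "AL" : "L");             // first graph defines the axes
   }
   c->Update();
}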





The future price histories can be "hacked" in a new piece of code which examines the world-line histories and computes the probability for a given line to cross a specified price threshold. The next plot shows 1000 world lines.





Red lines are those which never exceed a $350 closing price, and green lines are those which exceed the $350 threshold at least once. For 1000 world lines, the result is that 161 trials had prices exceeding $350 at least once, implying roughly a 16% chance for the closing price to exceed the $350 threshold.
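The counting step itself is simple. A sketch, where paths is assumed to hold the simulated world lines with one vector of prices per trial, could look like this:

// crossing.C -- fraction of simulated world lines that ever cross a price threshold
#include <vector>

double crossingProbability(const std::vector<std::vector<double>> &paths,
                           double threshold = 350.0)
{
   int nCross = 0;
   for (const auto &path : paths) {
      for (double price : path) {
         if (price > threshold) { ++nCross; break; }   // count each world line at most once
      }
   }
   return paths.empty() ? 0.0 : double(nCross) / paths.size();
}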



A computer simulation is not a proof-positive way to predict how the trading power of a business will increase or decrease over the course of a year, but because it quantifies the possibility of growth and decay, it is at least a more reasonable way of prediction than the apparently instinctual way people invest and trade, sometimes depending on betting schemes that they don't fully understand and which carry externalities such as systemic risk.

The management of risk, especially systemic risk, in the financial world was evidently deeply flawed in the 2008 financial crisis. An important part of the problem was that core financial institutions had used a shadowy secondary banking system to hide much of their exposure. Citigroup, Merrill Lynch, HSBC, Barclays Capital and Deutsche Bank had taken on a lot of debt and lent other people's money against desperately poor collateral. Prior to the US banking deregulation and UK privatizations of the 1990s, the exotic forms of risky investment that banks all across the Anglo-American system were engaged in would have been barred under the Glass–Steagall Act of 1933, which kept deposit-taking banks from dabbling in this kind of finance. Banks, credit lenders and building societies would have been less exotic and venture-capitalist, more boring in the eyes of neoliberalization, but would nevertheless have remained stable and solid institutions.

Hence, paying attention to the risks involved in all financial tools is of utmost importance. At the end of the day, all our beautiful graphs, fancy theorems and newest computer models are no more than decorative pieces if the economy is going completely out of whack, as it did in the early 2000s, with essentially 8 trillion dollars in the US alone conjured out of thin air to support the construction and housing boom. Considering the possible hits and misses helps remind us that no form of trade is ever too big to fail, an important lesson for avoiding future crises.

It is ironic to think that just before the great banking deregulation of the mid-1990s to early 2000s, in 1990, the grand old man of modern economics, Harry Markowitz, was finally awarded the Nobel Prize:

                                                         Professor Harry Markowitz


Markowitz' work provided new tools for weighing the risks and rewards of different investments and for valuing corporate stocks and bonds.

In plain English, he developed the tools to balance greed and fear: we want the maximum return with the minimum amount of risk. Our stock portfolio should sit on the "Efficient Frontier", a concept in modern portfolio theory introduced by Markowitz himself and others.

A combination of assets, i.e. a portfolio, is referred to as "efficient" if it has the best possible expected level of return for its level of risk (usually proxied by the standard deviation of the portfolio's return).

Here, every possible combination of risky assets, without including any holdings of the risk-free asset, can be plotted in risk-expected return space, and the collection of all such possible portfolios defines a region in this space. The upward-sloped (positively-sloped) part of the left boundary of this region, a hyperbola, is then called the "efficient frontier". The efficient frontier is then the portion of the opportunity set that offers the highest expected return for a given level of risk, and lies at the top of the opportunity set (the feasible set).







To better quantify the risk we are willing to take, we define a utility function U(x). It describes our "satisfaction" as a function of our total assets x. A common choice is U(x) = 1 - exp(-k*x) (the reason for the exponential form will become clear later).

The parameter k is the risk-aversion factor. For small values of k the satisfaction is small for small values of x, and by increasing x the satisfaction can still be increased significantly. For large values of k, U(x) increases rapidly to 1, so there is little increase in satisfaction for additional dollars earned.

In summary:
small k ==> risk-loving investor
large k ==> risk-averse investor
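To get a feel for the effect of k, one can quickly sketch U(x) for a small and a large risk-aversion factor in ROOT (the values k = 0.2 and k = 2.0 below are just illustrative):

// utility.C -- plot U(x) = 1 - exp(-k*x) for two values of the risk-aversion factor k
#include "TF1.h"
#include "TCanvas.h"

void utility()
{
   TCanvas *c = new TCanvas("c", "utility function", 600, 400);
   TF1 *uLow  = new TF1("uLow",  "1 - exp(-[0]*x)", 0, 10);
   TF1 *uHigh = new TF1("uHigh", "1 - exp(-[0]*x)", 0, 10);
   uLow->SetParameter(0, 0.2);    // small k: risk-loving, satisfaction keeps growing with x
   uHigh->SetParameter(0, 2.0);   // large k: risk-averse, satisfaction saturates quickly
   uHigh->SetLineStyle(2);
   uLow->Draw();
   uHigh->Draw("same");
   c->Update();
}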

Suppose we have, for each of nrStocks stocks, the historical daily returns r = closing_price(n) - closing_price(n-1). Define a vector x of length nrStocks which contains the fraction of our money invested in each stock. We can then calculate the average daily return z of our portfolio and its variance using the covariance matrix Covar:

z = r^T x   and var = x^T Covar x

Assuming that the daily returns follow a Normal distribution N(x), then so will z, with mean r^T x and variance x^T Covar x.

The expected value of the utility function is:

E(U) = Int (1 - exp(-k*z)) N(z) dz = 1 - exp( -k (r^T x - 0.5 k x^T Covar x) )

Its value is maximized by maximizing r^T x - 0.5 k x^T Covar x under the conditions sum(x_i) = 1, meaning we want all our money invested, and x_i >= 0, meaning we cannot "short" a stock.
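To make the objective concrete, here is a small plain C++ sketch (with made-up returns and covariances for three stocks) that evaluates z = r^T x and var = x^T Covar x for a candidate weight vector:

// portfolio_objective.C -- evaluate the ingredients of the portfolio objective
#include <vector>
#include <cstdio>

int main()
{
   const int nrStocks = 3;
   // hypothetical average daily returns r and covariance matrix Covar
   std::vector<double> r = {0.0004, 0.0007, 0.0002};
   std::vector<std::vector<double>> Covar = {
      {0.00010, 0.00002, 0.00001},
      {0.00002, 0.00020, 0.00003},
      {0.00001, 0.00003, 0.00008}};
   std::vector<double> x = {0.5, 0.3, 0.2};   // candidate weights, summing to 1

   double z = 0.0, var = 0.0;
   for (int i = 0; i < nrStocks; ++i) {
      z += r[i] * x[i];                        // z = r^T x
      for (int j = 0; j < nrStocks; ++j)
         var += x[i] * Covar[i][j] * x[j];     // var = x^T Covar x
   }

   double k = 2.0;                             // risk-aversion factor
   printf("expected daily return z = %g, variance = %g\n", z, var);
   printf("objective r^T x - 0.5*k*x^T Covar x = %g\n", z - 0.5 * k * var);
   return 0;
}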


How can we do this? We need to use a technique called quadratic programming.

Let's first review what exactly we mean by "quadratic programming":

We want to minimize the following objective function:

c^T x + ( 1/2 ) x^T Q x    with respect to the vector x

where c is a vector and Q is a symmetric positive-definite matrix.

You might wonder what is so special about this objective which is quadratic in the unknowns.

Well, we have in addition the following boundary conditions on x:

A x =  b
clo <=  C x <= cup
xlo <=    x <= xup  ,

where A and C are arbitrary matrices and the rest are vectors.

Not all these constraints have to be defined. Our example will only use xlo, A and b. This problem could still be handled by a general non-linear minimizer like Minuit by introducing so-called "slack" variables. However, quadp is tailored to objective functions no more complex than quadratic, which allows the use of solving techniques that remain stable even for problems involving, for instance, 500 variables, 100 inequality conditions and 50 equality conditions.

What the quadratic programming package in our computer program will do is:

minimize    c^T x + ( 1/2 ) x^T Q x    
subject to                A x  = b
                  clo <=  C x <= cup
                  xlo <=    x <= xup

What we want is:

  maximize    c^T x - k ( 1/2 ) x^T Q x
  subject to        sum_x x_i = 1
                   0 <= x_i

We have nrStocks weights to determine, with 1 equality and 0 inequality constraints (the simple box boundary condition xlo <= x <= xup does not count).
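Before turning to the full 10-stock problem, it can help to see the structure of the optimization in a toy case. For just two stocks the equality constraint lets us write x2 = 1 - x1, and the quadratic program collapses to a one-dimensional quadratic that can be solved by hand; the sketch below (with made-up numbers) does exactly that, while the real calculation uses the quadp package described above.

// two_stock_qp.C -- closed-form solution of the two-stock portfolio problem
#include <algorithm>
#include <cstdio>

int main()
{
   // hypothetical expected returns and covariance matrix for two stocks
   double c1 = 0.0006, c2 = 0.0003;
   double Q11 = 0.00020, Q22 = 0.00010, Q12 = 0.00004;
   double k = 2.0;                                    // risk-aversion factor

   // set d/dx1 of [ c1*x1 + c2*(1-x1) - (k/2)*(Q11*x1^2 + 2*Q12*x1*(1-x1) + Q22*(1-x1)^2) ] to zero
   double x1 = ((c1 - c2) - k * (Q12 - Q22)) / (k * (Q11 - 2.0 * Q12 + Q22));
   x1 = std::max(0.0, std::min(1.0, x1));             // enforce 0 <= x_i and sum(x_i) = 1
   double x2 = 1.0 - x1;

   printf("optimal weights: x1 = %.3f, x2 = %.3f\n", x1, x2);
   return 0;
}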




For 10 stocks we got the historical daily data for Sep-2000 to Jun-2004:


GE   : General Electric Co
SUNW : Sun Microsystems Inc
QCOM : Qualcomm Inc
BRCM : Broadcom Corp
TYC  : Tyco International Ltd
IBM  : International Business Machines Corp
AMAT : Applied Materials Inc
C    : Citigroup Inc
PFE  : Pfizer Inc
HD   : Home Depot Inc

We calculate the optimal portfolio for risk-aversion factors k = 2.0 and k = 10.0.









Food for thought:

- We assumed that the stock returns have a Normal distribution. Check this assumption by histogramming the stock returns!

- For the expected return in the objective function we used the flat average over a time period. Investment firms will put significant resources into improving the return prediction.

- If you want to trade a significant number of shares, several other considerations have to be taken into account:

+  If you are going to buy, you will drive the price up (so-called "slippage"). This can be taken into account by adding terms to the objective (Google for "slippage optimization").

+  FTC regulations might have to be added to the inequality constraints

- Investment firms do not want to be exposed to the "market" as defined by a broad index like the S&P, and so "hedge" this exposure away. A perfect hedge can be added as an equality constraint; otherwise, add an inequality constraint.




This was just a brief taste of some of the overlapping areas of study that exist between fundamental experimental and theoretical physics research and the world of finance and trade, demonstrating that the two fields, both very different and abstract, can nevertheless be unified to some degree.

I understand that finance is never a universally popular subject among scientists, as most science is critically underfunded worldwide. However, I believe that cosying up to finance, in the same way that science has cosied up to industry, will give us more and more chances to "sing for our supper" and get more funding for fundamental science in the future. Moreover, it may also help us to trade with businesses that are shown, statistically, to be the best partners to trade with, bringing the cost of big projects down and helping science get done at the least possible cost.




(If you want a copy of any of the CERN ROOT codes used to develop the images and graphs above, please contact me by leaving a comment below)

1 comment:

  1. All codes used to generate the images in this article are here on my GitHub: https://github.com/MuonRay/CERN-ROOT-Financial-Mechanics-and-Market-Analysis-Codes/tree/master/ROOTFinancialAnalysis(YahooStockDownloadandPortfolios)
