Programming, Statistics

Python and Zero-Inflated Models

While this is hardly a tutorial, I’ve been spending a good deal of time working with zero-inflated data for a forthcoming project, and have worked with it extensively in the past.

The point of zero-inflated models is that there are ultimately two sources of zeros: zeros can come from the primary count model (usually Poisson), or they can be injected into the total model from some other source. For example, casualties from terror attacks may be zero-inflated. On a given day, either there is no attack (0 casualties), or an attack occurs but no one is hurt (also 0 casualties). However, we may want to model daily casualties as a single process, so we need both parts in one model. (Seminal ZIP paper, my applied paper on ZIP)
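In symbols, a standard ZIP specification (written generically here, not tied to any particular package) mixes a point mass at zero with a Poisson count:

P(Y=0) = \pi + (1-\pi) e^{-\lambda}

P(Y=k) = (1-\pi) \frac{e^{-\lambda} \lambda^k}{k!}, \ k = 1, 2, \dots

where \ln(\lambda) = x\beta governs the Poisson part and the inflation probability \pi is typically modeled with a logit, \text{logit}(\pi) = x\gamma.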

Here's some code creating ZIP data, and then testing the ZIP model in Python to see if it properly recovers the coefficients I am looking for. Indeed, it does. Since the ZIP models in Python are not designed for the same things economists typically look for (e.g., no single table of coefficient estimates and standard errors), this is a good exploration into ZIP models, and a way to look around Python, the language du jour.

ZIP

Load Prerequisites
from matplotlib import pyplot as plt
import numpy as np
import math as math
import pandas as pd
from scipy import stats
import statsmodels.discrete.count_model as reg_models
#help(reg_models)
Generate Mock Data
The key point in zero-inflated (ZI) processes is that there are TWO ways of generating zeros. A zero can be generated either through the zero-inflation (ZI) part or through another process, usually Poisson (P). Common examples include assembly-line failures and the number of crimes in a neighborhood in a given hour. A key challenge here was indexing the arrays in Python appropriately.

np.random.seed(123456789)

N = 100000
x_1= np.random.normal(5,1,size=N)
x_2= np.random.normal(5,1,size=N)
x=pd.DataFrame([x_1,x_2]).T
poisson_part = np.zeros(N)
zi_part = np.zeros(N)

for i, item in enumerate(x_1):
    poisson_part[i] = np.random.poisson(math.exp(0.2*x_1[i]-0.1*x_2[i])) #needed to initialize the test object. Note the poisson parameter is of the form e^(Bx), ln(lambda) = Bx
for i, item in enumerate(x_1):
    zi_part[i] = np.random.logistic(0.3*x_1[i]-0.2*x_2[i]) > 0 #needed to initialize the test object.

y = zip_model_data = poisson_part * zi_part

print(x.iloc[0:10,:]) 
print(poisson_part[0:10])
print(zi_part[0:10,])
print(y[0:10,])
Let’s take a look at the data:
          0         1
0  7.212902  5.344758
1  7.128398  4.654915
2  6.841711  4.715646
3  5.082382  5.770086
4  5.858964  3.736356
5  4.173984  3.839683
6  6.157271  5.560635
7  6.375915  7.051355
8  5.943028  4.532561
9  5.847871  5.960908
[0. 1. 4. 3. 0. 2. 0. 2. 3. 0.]
[1. 0. 1. 0. 1. 0. 1. 1. 1. 1.]
[0. 0. 4. 0. 0. 0. 0. 2. 3. 0.]
Generate the model-object itself.
out=reg_models.ZeroInflatedPoisson(y,x,x, inflation='logit')
Fit it:
fit_regularized=out.fit_regularized(maxiter = 100) #essentially forces convergence by penalizing. Biases estimates.
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 1.479267848993134
            Iterations: 10
            Function evaluations: 12
            Gradient evaluations: 10
Let’s take a look at what our (regularized) fit was:
fit_regularized.params ##notice that these are regularized values, not the true values.
inflate_0   -3.339664
inflate_1   -3.144663
0            0.237893
1           -0.232087
dtype: float64
Let’s try a REAL fit, no regularization, no biasing the estimates.
fit=out.fit(method='bfgs', maxiter = 1000) # May need more than the default 35 iterations, very small number!
..../python3.6/site-packages/statsmodels/base/model.py:508: ConvergenceWarning: Maximum Likelihood optimization failed to converge. Check mle_retvals
  "Check mle_retvals", ConvergenceWarning)
Optimization terminated successfully.
         Current function value: 1.374978
         Iterations: 13
         Function evaluations: 16
         Gradient evaluations: 16
How did it look?
fit.params
inflate_0   -0.305607
inflate_1    0.208904
0            0.197614
1           -0.096841
dtype: float64

Nailed it! Regularization had a noticeable effect on the estimates, but the unregularized fit returns the true coefficients I was attempting to recover.

Note that ZIP in Python does not automatically assume the existence of an intercept term, nor does it automatically calculate the standard errors. I'll be bootstrapping those soon enough.

Programming, Statistics

AI and Economics

I have just released the first version of my AI to play the classic 1998 game StarCraft: Brood War. My goal with this project is to combine economic fundamentals and behavioral tendencies with existing machine learning techniques to create a versatile and lifelike opponent.

This is a very active area of research; indeed, a publication on this topic appeared just yesterday. Further literature is available in the collections listed here, here, and here. Major competitors in this field include Facebook, dozens of PhD projects from major universities, and some of the best private-sector programmers and developers from around the world. I am currently competing in the largest tournament of AI bots, SSCAIT, where you can see my bot play live 24/7 for your viewing pleasure and academic interest.

My AI was also selected for review in the weekly report, and this review gives an excellent demonstration of its macroeconomic leanings. In this match, it makes aggressive and early investment in its long-term workforce, and makes minimal investment in the short-run returns of combat units. The effect is an overwhelming crush of units in the mid-to-late game.

In this next match, I play a similar macro style, but aggressively manage complex cloaking and air attacks at the same time.

Regardless, this AI learns from every game, and every opponent is different, so each match is a unique experience.  Expect lots more fun game play from it soon!

Features of the AI:

  • Uses macroeconomic modeling to evaluate the state of its own internal economy.  Maximizes its total output.
  • Learns over time to tune its economy to match each opponent.  For this, I use a genetic algorithm with a long memory and a low mutation rate to slowly move in the right direction as the bot plays dozens and dozens of games.
  • Uses no preexisting map libraries or open source bots, written entirely from scratch in C++.
  • High performance even in full combat situations with many units on both sides. Few lag-outs or crashes (indeed, all on record are my fault for accidentally uploading the bot’s memory files incorrectly).

You can see my bot’s page directly on SSCAIT here.

Other assorted press about my bot is below:

Programming, Statistics, Teaching Materials

Simulating Optimization Problems and Production Possibility Frontiers

In teaching Behavioral Economics, optimization problems require some intuition, and that intuition can be opaque without calculus literacy. Below is a simulation demonstrating that the process of constrained optimization works. It has the added benefit of showing isoquants (the colored stripes in the image below) and the strict boundary of the efficiency frontier.

A basic constrained optimization problem maximizes utility subject to a budget constraint:

U=f(x_1,x_2)

y=p_1 x_1 + p_2 x_2

I have written code in R to simulate the results for a two-good optimization problem. The example uses U=sqrt(x_1) + sqrt(x_2) as the functional form.

library("plot3D")

p1<-1
p2<-2
y<-10
x1<-runif(25000, min=0, max=y/p1)

#Checking any options inside the constraint or on the constraint
x2<-runif(25000,min=0, max=(y-p1*x1)/p2)

U<-sqrt(x1)+sqrt(x2)
# out<-mesh(x1, x2, U) # not needed for the plots below; mesh() would try to build a full grid from the 25,000-point vectors and exhaust memory
points3D(x1, x2, U, xlab="x1", ylab="x2", zlab="Utility", phi=-90, theta=90)

plot(x1, U)
abline(v=x1[which(U==max(U))], col="red")
x1_star<-x1[which(U==max(U))]
x2_star<-x2[which(U==max(U))]
y-x1_star*p1-x2_star*p2

And it outputs the following plots (with minor variation). Note that the colored bands represent curves of equal utility (the isoquants), and the outer edge of the colored points traces the efficiency frontier.


The actual solution is found by:

U=sqrt(x_1)+sqrt(x_2)

subject to:

y=p_1 x_1+p_2 x_2

The Lagrangian is then:

L=sqrt(x_1)+sqrt(x_2) + \lambda (y - p_1 x_1 - p_2 x_2)

Leading to the first order conditions (derivatives of L):

L_1 : 0.5 x_1^{-0.5} - \lambda p_1=0

L_2: 0.5 x_2^{-0.5} - \lambda p_2=0

L_{\lambda} : y- p_1 x_1 -p_2 x_2 =0

Using these 3 conditions, we can find the equations:

\frac{0.5 x_1^{-0.5} }{0.5 x_2^{-0.5}} = \frac{p_1}{p_2}
y- p_1 x_1 -p_2 x_2 =0

Where, if y=10, p_1=1, p_2=2, we can solve for the theoretical solutions: x_1 = 20/3 \approx 6.67, x_2 = 5/3 \approx 1.67
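Spelling out the algebra behind those numbers:

\sqrt{x_2 / x_1} = \frac{p_1}{p_2} \Rightarrow x_2 = \left( \frac{p_1}{p_2} \right)^2 x_1 = \frac{x_1}{4}

y = p_1 x_1 + p_2 x_2 \Rightarrow 10 = x_1 + 2 \left( \frac{x_1}{4} \right) = 1.5 x_1 \Rightarrow x_1 = \frac{20}{3}, \ x_2 = \frac{5}{3}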

These match the simulated solutions very closely.

Programming, Statistics

The Optimum Pokemon Portfolio and Principal Component Decomposition (PCD) using R

I have very recently completed the Stanford Lagunita online course on Statistical Learning, and Tibshirani & Hastie have taught me a great deal about principal components. No learning is complete without exercises, however, so I have found a wonderful data set that seems popular: the attacks and weaknesses of Pokemon. (I am, admittedly, not a Pokemon player, so I have had to ask others to help me understand some of the intricacies of the game.)

Principal Component Decomposition:

First and foremost, principal component decomposition finds the direction that maximizes variation in the data. That direction is the leading eigenvector of the data's covariance matrix, and the associated eigenvalue measures how much variation lies along it.
For example, if there is a spill of dirt on a white tile floor, the direction of the spill (the leading eigenvector, or first principal component) is the direction along which the dirt is most widely spread, and the eigenvalue is how far it spreads.
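A quick way to see this connection in R (a toy example with simulated data, not the Pokemon chart): the principal directions returned by prcomp() are the eigenvectors of the covariance matrix, and the variance along each direction is the corresponding eigenvalue.

set.seed(1)
toy <- cbind(x1 = rnorm(500), x2 = rnorm(500))
toy[, "x2"] <- toy[, "x1"] + 0.3 * toy[, "x2"]   # "spill the dirt" along a diagonal

pca <- prcomp(toy, center = TRUE, scale. = FALSE)
eig <- eigen(cov(toy))

pca$rotation   # principal directions (columns)
eig$vectors    # the same directions, possibly with flipped signs
pca$sdev^2     # variance along each direction...
eig$values     # ...equals the eigenvalues of the covariance matrix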

After looking at the beautiful charts used in the link above, I realized this data would be very interesting to run a PCD on. Which Pokemon are most similar, and which are most different, in terms of strengths and weaknesses? To find out, we break the chart into its principal components and see in which directions the data is spread out.

Pokemon can vary along 18 dimensions of strengths and weaknesses, since there are 18 types of Pokemon, so there can be up to 18 principal components. We are not sure how many of them are useful without investigation, so the amount of variation captured by each principal component is plotted below. There doesn't appear to be any clear point where the components drop off in usefulness; perhaps the first 3 to 5 capture the most variation.
Rplot.png
Let us now look at the principal components of the Pokemon attack/weakness chart directly. We can visualize them in a biplot, where the arrows show the general attacking direction of each Pokemon type and the black labels show the defending types. The distance from the center of the biplot shows the deviation of that Pokemon type along the principal components, and labels that are close together are more similar than those farther apart.

Rplot01.png

So, for example, Ghost attacks (arrows) are closely aligned with Ghost defence (black label) and Dark defence (black label). In general, the types most different in defence are Fighting and Ghost, which are in turn distinct from Flying and Ground defence. This suggests that if you wanted a Pokemon portfolio that is very resilient to attack, you would want Fighting/Ghost types. If you want a variety of attacks, you might look into Ghost/Normal types or Grass/Electric.

Keep in mind that together these components only explain about 35.5% of the variation in Pokemon types; there are other dimensions along which Pokemon vary. I expected Fire and Water to be more clearly different (and they are quite distinct, heading in opposite directions far from the center!), but they are less distinct than Ghost/Normal.

The Optimum Pokemon Portfolio:

This led me to wonder what type of Pokemon portfolio would be best against the world, something outside the scope of the Statistical Learning course but well within my reach as an economist. Since I don't know what the Pokemon world looks like, I assumed the Pokemon that show up are of a randomly and evenly selected type. (This is a relatively strong assumption; encounters are likely not evenly distributed among the types.) The question is then: what types of Pokemon should we collect to do best against a random encounter, assuming we simply reach into our bag and grab the first Pokemon we see to fight with?

First, I converted the matrix of strengths and weaknesses above into one that describes the strength-weakness gap. That is to say, if Water attacks Fire at 200% effectiveness and defends at 50% effectiveness, a fight between Water and Fire is +150% more effective than a regular Pokemon matchup (say Normal vs. Normal or Ice vs. Ice). Any bonus a Pokemon may have against its own type was discarded, because it would be pointless. The chart for this, much like the wonderful link that got me the data in the first place, is here, where red is bad and blue is good: Rplot12
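In terms of the code at the end of the post, that gap is just the attack multiplier minus the defence multiplier, each measured relative to the neutral 100%; for the Water-versus-Fire example:

attack_multiplier  <- 2.0   # Water attacking Fire: 200% effective
defence_multiplier <- 0.5   # Water defending against Fire: takes 50% damage
(attack_multiplier - 1) - (defence_multiplier - 1)   # = 1.5, i.e. +150%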

Then I averaged the strength-weakness gap over all opponent types for each type of Pokemon, which assumes that the Pokemon are facing an opponent of a random type. According to this, the most effective types of Pokemon are, on average:

Type                              Effectiveness
Steel                               0.22222222
Fire                                0.11111111
Ground                              0.11111111
Fairy                               0.11111111
Water                               0.08333333
Ghost                               0.08333333
Flying                              0.05555556
Electric                            0.00000000
Fighting                            0.00000000
Poison                             -0.02777778
Rock                               -0.02777778
Dark                               -0.02777778
Ice                                -0.08333333
Dragon                             -0.08333333
Normal                             -0.11111111
Psychic                            -0.11111111
Bug                                -0.11111111
Grass                              -0.19444444

That is to say, a Steel Pokemon, against a random opponent, will on average be about 22% more effective. (This is the mean, not the median.) And against a random opponent, a Grass Pokemon is expected to be about 19% less effective than a Fighting Pokemon, which is shockingly low. Amusingly, Normal Pokemon are worse than normal (0) against the average Pokemon.

This does not mean you ONLY want Steel Pokemon, because you could come up against an opponent that is strong against Steel. Nor do you want to entirely avoid Grass Pokemon, since they are very strong against many things that Steel is weak against. It merely means that if you're willing to roll the dice, a Steel Pokemon will probably be your best bet. Trainers do not want to take large risks; trainers are risk averse. You want to maximize your poke-payoff while minimizing how frequently you face negatively stacked fights. The equation for this is:

Maximize: \ \mu' vars - \delta \, vars' \, cov \, vars + \lambda (1 - ones' vars) \quad \text{with respect to} \ vars

Where \mu is your vector of payoffs in the table above, \delta is your risk aversion, cov is the covariance matrix of the differenced pokemon data set, and vars is your portfolio selection which must add up to one hundred percent.
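In the code at the end of the post, this is solved numerically with solve.QP() from the quadprog package, which minimizes (1/2) b'Db - d'b subject to A'b >= b_0, with the budget (\lambda) term handled as an equality constraint. Roughly, the mapping used in the loop below is:

# Sketch of the quadprog mapping (cov_core, mu, and delta are constructed in the
# full code below; this just restates one pass of that loop).
library(quadprog)
Dmat <- 2 * delta * cov_core          # quadratic penalty: delta * vars' cov vars
dvec <- as.vector(mu)                 # linear payoff: mu' vars
Amat <- cbind(1, diag(18))            # first column: weights must sum to 1
bvec <- c(1, rep(0, 18))              # remaining columns: weights must be >= 0
weights <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)$solution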

How risk averse are you?  You could be very risk averse and want to never come across a bad pokemon to fight, or you could love rolling the dice and only want one type of pokemon. So I have plotted the optimal portfolio for many levels of risk-tolerance.  It is a little cluttered, so I have labelled them directly as well as in the legend.

Rplot08

The visualization is indeed a little messy, but as you become more risk averse, you add more Electric, Normal, Fire, and Ice Pokemon (and more!) to help reduce the chance of a bad engagement. To do this, you reduce the weight on Steel, Ground, and Fairy Pokemon, but you don't eliminate them entirely. Almost nothing adds Dragon, Ghost, Rock, or Bug Pokemon; they are nearly completely dominated by other combinations of Pokemon types.

I’ve plotted two interesting portfolios along the spectrum of risk aversion below. They include one with nearly no risk aversion (0.001), and one with high risk aversion (10).


Of course, most importantly of all, regardless of your Pokemon and your interest in being “the very best”, you should still pick the coolest Pokemon and play for fun.

Code is included below:

#Data from: https://github.com/zonination/pokemon-chart/blob/master/chart.csv
#write.csv(chart, file="/home/bsweber/Documents/poke_chart.csv")
poke_chart<-read.csv(file="/home/bsweber/Documents/poke_chart.csv")
poke_chart<-poke_chart[,-1]
library(quadprog)
# library(devtools) 
# install_github("vqv/ggbiplot", force=TRUE)
library(ggbiplot)
library(reshape2)
library(ggplot2)
library(ggrepel)


poke_chart<-as.matrix(poke_chart)
differences <- (poke_chart-1) - (t(poke_chart)-1)
diag(differences)<-0
rownames(differences)<-colnames(differences)
core <- poke_chart
rownames(core)<-colnames(poke_chart)

poke_pcd<-prcomp(core, center=TRUE, scale=TRUE)
plot(poke_pcd, type="l", main="Pokemon PCD")
summary(poke_pcd)
biplot(poke_pcd)

poke_palette<-c("#A8A878", "#EE8130", "#6390F0", "#F7D02C", "#7AC74C", "#96D9D6", "#C22E28", "#A33EA1", "#E2BF65", "#A98FF3", "#F95587", "#A6B91A", "#B6A136", "#735797", "#6F35FC", "#705746", "#B7B7CE", "#D685AD")

ggbiplot(poke_pcd, labels= rownames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')
  #Score plot is for rows, attack data. Loading plot is for columns, defense data.  So bug and fairy have similar attacks (shown by rays), similar defences (shown by points). Ghost and normal have almost identical defences, but different attacks.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(2,3)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(5,6)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(7,8)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.


cov_core<- t(differences-mean(differences)) %*% (differences-mean(differences)) #Make the Cov. Matrix of differences. 
cov_core[order(diag(cov_core), decreasing=TRUE),order(diag(cov_core), decreasing=TRUE)]
ones<-as.matrix(rep(1,18))
vars<-as.matrix(rep(1/18, times=18))
mu<-t(as.matrix(apply(differences/18, 1, sum))) #Average rate of return over 18 pokemon types.

data.frame(mu[,order(t(mu), decreasing=TRUE)]) #Table of Pokemon Types

colnames(mu)<-colnames(core)
delta<- 1  #risk aversion parameter

  out<- matrix(0, nrow=0, ncol=18)
  colnames(out)<-colnames(core)
  for(j in 1:1000){
    delta<-j/100
  Dmat <- cov_core * 2 * delta
  dvec <- mu
  Amat <- cbind(1, diag(18))
  bvec <- c(1, rep(0, 18) )
  qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
  pos_answers<-qp$solution
  names(pos_answers)<-colnames(poke_chart)
  out<-rbind(out, round(pos_answers, digits=3))
  }
  
  df <- data.frame(x=1:nrow(out))
    df.melted <- melt(out)
      colnames(df.melted)<-c("Risk_Aversion", "Pokemon_Type", "Amount_Used")
  df.melted$Risk_Aversion<-df.melted$Risk_Aversion/100
  
 qplot(Risk_Aversion, Amount_Used, data=df.melted, color=Pokemon_Type, geom="path", main="Pokemon % By Risk Aversion") + 
   # ylim(0, 0.175) +
   scale_color_manual(values = poke_palette) +
#   geom_smooth(se=FALSE) +
   geom_text_repel(data=df.melted[df.melted$Risk_Aversion==8.5,], aes(label=Pokemon_Type, size=9, fontface = 'bold'), nudge_y = 0.005, show.legend = FALSE) 

  
  # Another plot that is less appealing
  # matplot(out, type = "l", lty = 1, lwd = 2, col=poke_palette)
  # legend( 'center' , legend = colnames(core), cex=0.8,  pch=19, col=poke_palette)
  pie(head(out, 1), labels= colnames(out), col=poke_palette)
  pie(tail(out, 1), labels= colnames(out), col=poke_palette)
  
  df_1<-data.frame(matrix(out[1,], ncol=1))
  colnames(df_1)<-c("Percentage")
  df_1$Pokemon_Type<-colnames(out)
  
  ggplot(data=df_1, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type))+
    geom_bar(stat="identity", position=position_dodge()) +
    scale_fill_manual(values = poke_palette)+
    ggtitle("Pokemon Portfolio With Almost No Risk Aversion")
  
  df_2<-data.frame(t(tail(out,1)))
  colnames(df_2)<-c("Percentage")
  df_2$Pokemon_Type<-colnames(out)
  
  ggplot(data=df_2, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type))+
    geom_bar(stat="identity", position=position_dodge()) +
    scale_fill_manual(values = poke_palette) +
    ggtitle("Pokemon Portfolio With Very Strong Risk Aversion")
  
  cov_core[order(diag(cov_core), decreasing=TRUE),order(diag(cov_core), decreasing=TRUE)]
 
  melt_diff<-melt(t(differences))
  melt_diff$value<- factor(melt_diff$value)
  N<-nlevels(melt_diff$value)
  
  simplepalette<-colorRampPalette(c("red", "grey", "darkgreen"))
  
  ggplot(data = melt_diff, aes(x=Var1, y=Var2, fill=value) ) + 
    geom_tile()+
    scale_fill_manual(values=simplepalette(9), breaks=levels(melt_diff$value)[seq(1, N, by=1)], name="Net Advantage" )+
    ggtitle("Net Pokemon Combat Advantage")+
    xlab("Opponent") +
    ylab("Pokemon of Choice")
Programming, Statistics, Teaching Materials

Why do we need n-2? An example in R

Below is a simple example showing why we may want (\Sigma \hat{u}^2_i )/ (n-2) as our estimator of \sigma^2, when naive intuition may suggest we only want the simple average of squared residuals, (\Sigma \hat{u}^2_i )/ (n).

To show this in no uncertain terms, I have coded a linear regression by hand in R.  Also embedded in the work below are several rules I follow about writing code. They are rules 0-6.  There are many other rules, since code writing is an art.

####Coding in R
#### Rule 1: Always comment on every few lines of code. It is not unheard of to comment every single line, particularly for new coders, or complex code.
#### You will need to reference your work at a later date, and after about 3 months, the purpose is lost. Also, I need to read it.

#### Rule 2: Define your variables first. Luckily these names are shared for us.
#### For your projects, use names which are clear for your research: (y=crime in Williamsburg, VA, X= Number of jello puddings consumed)

set.seed(1223) #ensures replication. Sets seed of random number generators.
n<-25 #number of samples
x<-2*rnorm(n) #Our x’s come from a random sampling of X’s.
b_0<-10
b_1<-3 #Those cursed jello puddings are associated with increased crime. Linear regression is supportive of association- not causation.
u<-rnorm(n) #We satisfy both independent mean and zero mean assumptions
y<-b_0+b_1*x+u #This is defining our true Y. The true relationship is linear.

plot(x,y) #Rule 0, really. Always check your data.

#### Rule 3: After definitions begin your second stage of work. Probably trimming existing data, etc. Do these in the order they were added.
hat_b_1<-sum( (x-mean(x)) * (y-mean(y)) ) / sum( (x-mean(x))^2 ) #Spaces between any parenthesized section of operations. We need to be able to see which parentheses are which.
hat_b_1 # Rule 4: Indent work which is conceptually subordinate. Indent more as needed. Four spaces=1 tab.
hat_b_0<-mean(y)-hat_b_1*mean(x)
hat_b_0 # Rule 5: Check your work as you go along. For our example, I got 9.89

abline(a=hat_b_0, b=hat_b_1, col="red") #let's add a red line of best fit. And we must see how our plot looks. Repeat rule 0.

hat_y<-hat_b_0+hat_b_1*x
hat_u<-hat_y-y

plot(x,hat_u) # Let’s see our residuals
hist(hat_u) # Let’s see our histogram

#### Rule 6: Keep your final analysis as punchy and short as possible without sacrificing clarity.
#### The mean sum of the squared errors (usually unknown to us as researchers)
sigma_sq<-sum(u^2)/n #this is the value we’re trying to estimate
sigma_sq_naive<-sum(hat_u^2)/n #this is a naive estimation of it
sigma_sq_hat<-sum(hat_u^2)/(n-2) #this turns out to be more accurate, particularly in small samples. If n->infinity this goes away. Try it for yourself!

#R, is this assessment true? Is sigma_sq_hat closer to sigma_sq than our naive estimator? Is it true we need the (-2)?
abs(sigma_sq-sigma_sq_naive) > abs(sigma_sq-sigma_sq_hat) #compare absolute distances; without abs() the comparison is true by construction

Here is one of several plots made by this code, showing a nice linear regression over the data:

Rplot01

Please don’t forget the derivation of why this is true!  This is simply some supportive evidence that it might be true.
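For reference, the source of the n-2 is that the OLS residuals are not free to vary independently; they must satisfy the two first-order conditions of the regression,

\Sigma \hat{u}_i = 0, \quad \Sigma x_i \hat{u}_i = 0

so only n-2 of them carry independent information, and under the standard assumptions

E[\Sigma \hat{u}^2_i] = (n-2)\sigma^2

which is exactly why dividing by n-2 (rather than n) gives an unbiased estimator.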

Programming, Statistics

An Example of Plotting Multiple Time Series (Stock Values) on a Graph in R

I am currently in the process of designing a portfolio to manage investments. While such programs are not best plastered over the internet, a few basic concepts about plotting can be displayed. For example, I have created a rather appealing plot, which demonstrates how to plot multiple time series in a single figure, shown below:
Rplot01
Code is below, including my process to detrend the data. The critical lines are the ones using sample(colors()) to select from the body of available colors at random. This is useful when you may have to generate many plots, potentially without detailed manual supervision, and you are not demanding publication-quality color selection (which is plausible for personal investigative use).

#after obtaining closing prices, you should make sure you clean your inputs. Ensure you know why there are NAs, or you will make a critical error of omission.

closeprice<-log(closeprice)
data<-closeprice[is.finite(rowSums(closeprice)),]

#first-difference

data<-diff(data, lag=1, differences=1)
data<-na.omit(data)

#Check for any remaining trends in data over and above the natural cyclical or time-trending motion of the stocks!
#Detrend based off of the bond, a necessary part of even a basic CAPM portfolio
xhat<-lm(data$TYX.Close~1)$coefficients #intercept-only regression, i.e. the bond's mean return
detrended<-data-xhat #also centers the series
N<-ncol(detrended) #number of series to overlay
plot(index(detrended),detrended[,1],type="l")
for(n in 2:N){

lines(index(detrended),detrended[,n], col=sample(colors(),size=1))

}
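For readers without the original closeprice object (it comes from my own data pull and is not shown), here is a self-contained sketch of the same plotting pattern using simulated prices:

# Simulate a few price series, take log first-differences, subtract a reference
# series' mean (standing in for the bond), and overlay the rest in random colors.
set.seed(42)
N <- 8
prices <- exp(4 + apply(matrix(rnorm(250 * N, sd = 0.01), ncol = N), 2, cumsum))
returns <- diff(log(prices), lag = 1)
xhat <- mean(returns[, 1])              # stand-in for the bond's mean return
detrended <- returns - xhat
plot(seq_len(nrow(detrended)), detrended[, 1], type = "l",
     xlab = "Day", ylab = "Detrended log return")
for (n in 2:N) {
  lines(seq_len(nrow(detrended)), detrended[, n], col = sample(colors(), size = 1))
}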

Programming, Statistics

Music and Math

Many people claim there is a strong correlation between music and math.
Below, I demonstrate that the patterns in music are NOT well predicted by typical statistical approaches.

Methodology:
I have taken a MIDI file of Beethoven's 5th and analyzed the track using several estimation techniques: panel data methods, ARMA models, and extensive non-parametric approaches (polynomial and Fourier series to capture cyclical components). I then use the song's notes and my estimation technique to create a forecast of the following notes, and I play the "forecasted song". (I note that there has been a lot of recent development in this area, and other techniques have been developed and popularized since I wrote this post.)
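As a rough, self-contained sketch of the ARMA-plus-Fourier piece of that idea (simulated note numbers here, not the actual Beethoven MIDI data):

# Fit an ARMA model with Fourier terms for the cyclical component to a simulated
# note sequence, then forecast the next notes -- the "forecasted song".
set.seed(5)
t_obs <- 1:400
notes <- round(60 + 5 * sin(2 * pi * t_obs / 16) + arima.sim(list(ar = 0.6), n = 400))
X <- cbind(sin(2 * pi * t_obs / 16), cos(2 * pi * t_obs / 16))   # Fourier regressors
fit <- arima(notes, order = c(2, 0, 1), xreg = X)
t_new <- 401:432
X_new <- cbind(sin(2 * pi * t_new / 16), cos(2 * pi * t_new / 16))
round(predict(fit, n.ahead = 32, newxreg = X_new)$pred)   # the next 32 forecasted notes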

Result:
After listening, the "forecasted song" does not match the original well. As a consequence, I can state that the mathematical techniques common to forecasting do not predict a song well. Below are several attempts, which I have highlighted:

Caveat:
The R-squared values for these estimations are in fact VERY high, in the high 90s. (Only a few of the coefficients are significant; the data is clearly overfitted in some regressions.) The song in fact falls into the so-called uncanny valley: it deviates only slightly from the actual Beethoven's 5th. However, the ear is strongly cultured to expect perfection in music, and the errors are devastating to us.

Programming

Accidental Art 2

My earlier paper on phased entry, mentioned in "Accidental Art", has been postponed. This postponement is a fortunate side effect of my successful publication in Regional Science and Urban Economics, which has promptly propelled me toward crime economics rather than phased entry. Currently I'm doing some basic modeling on counter-terrorism, which makes me feel like a criminal mastermind. Amusingly enough, there's still some beautiful accidental art being churned out by my model, this time in Matlab!

accidental art 2 accidental art 1

Programming

Using “cbind” and “as.vector”: Computationally intensive commands

As perhaps a mere interesting note, cbind, when combined with as.vector, can be a particularly RAM-intensive pair of commands. I noted that the following script excerpt caused my computer to quickly consume 11GB of RAM on a 300k-entry dataset:

for(j in c(1:200)){
mod.out$coefficients$count[1:80]<-lim.my.df[1:80,j]
mod.out$coefficients$zero[1:80]<-lim.my.df[81:160,j]
a<-predict(mod.out,clip.1)
b<-predict(mod.out,clip.mean)
diff.j<-mean(a-b)
# diff[,paste(i,”.”,j)]<-diff.j
diff<-as.vector(cbind(diff,diff.j))
}

The purpose of this script is to use bootstrapped coefficients to generate an average partial effect between clip.1 and clip.mean. We will later use this to get an estimate of the standard errors of the APE. As it stands, it eats all my RAM quite promptly and causes the computer to crash. The likely culprit: when diff is a plain vector, cbind(diff, diff.j) recycles the scalar diff.j into a full second column, and as.vector() then flattens that matrix, so diff roughly doubles in length on every pass of the loop. The following script, nearly identical, does not have this problem:

for(j in c(1:200)){
mod.out$coefficients$count[1:80]<-lim.my.df[1:80,j]
mod.out$coefficients$zero[1:80]<-lim.my.df[81:160,j]
a<-predict(mod.out,clip.1)
b<-predict(mod.out,clip.mean)
diff.j<-mean(a-b)
# diff[,paste(i,”.”,j)]<-diff.j
diff<-cbind(diff,diff.j)
}
diff<-as.vector(diff)

And this works just fine! In fact, it consumes barely 25% of my RAM.
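To see the mechanism without any of the model objects, here is a toy version of the same pattern (my reading of what goes wrong, under the assumption that diff starts as a plain vector), along with a preallocated alternative that avoids growing the object at all:

# Toy version of the RAM blow-up: cbind() recycles the scalar into a second
# column, and as.vector() then flattens it, doubling the length each pass.
grow <- 0
for (j in 1:20) {          # only 20 passes; a few dozen more would exhaust memory
  grow <- as.vector(cbind(grow, rnorm(1)))
  cat("pass", j, ": length", length(grow), "\n")
}

# Preallocated alternative: write each scalar into position, flatten nothing.
vals <- numeric(20)
for (j in 1:20) {
  vals[j] <- rnorm(1)
}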