Programming, Statistics

AI and Economics

I have just released the first version of an AI that plays the classic 1998 game StarCraft: Brood War.  My goal with this project is to combine economic fundamentals and behavioral tendencies with existing machine learning techniques to create a versatile and lifelike opponent.

This is a very active area of research; indeed, a publication on the topic appeared just yesterday.  Further literature is available in the collections listed here, here, and here.  Major competitors in this field include Facebook, dozens of PhD projects at major universities, and some of the best private-sector programmers and developers from around the world. I am currently competing in the largest tournament of AI bots, SSCAIT, where you can watch my bot play live 24/7 for your viewing pleasure and academic interest.

My AI was also selected for review in the weekly report, and that review gives an excellent demonstration of its macro-economic leanings. In this match, it invests early and aggressively in its long-term workforce while making minimal investment in the short-run returns of combat units.   The effect is an overwhelming crush of units in the mid-to-late game.

 

In this next match, I play a similar macro style, but aggressively manage complex cloaking and air attacks at the same time.

Regardless, this AI learns from every game, and every opponent is different, so each match is a unique experience.  Expect lots more fun gameplay from it soon!

Features of the AI:

  • Uses macroeconomic modeling to evaluate the state of its own internal economy.  Maximizes its total output.
  • Learns over time to tune its economy to match each opponent.  For this, I use a genetic algorithm with a long memory and a low mutation rate that slowly moves in the right direction as the bot plays dozens and dozens of games (a rough sketch of the idea follows this list).
  • Uses no preexisting map libraries or open-source bots; it is written entirely from scratch in C++.
  • High performance even in full combat situations with many units on both sides. Few lag-outs or crashes (indeed, all on record are my fault for accidentally uploading the bot’s memory files incorrectly).
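To make that learning scheme concrete, here is a rough R sketch of the idea; it is an illustration only, not the bot's actual C++ implementation, and the parameter names and the stand-in fitness function are hypothetical.

#Rough sketch only: a genetic-algorithm-style tuner with a long memory and a low mutation rate.
#The parameter names and the evaluate() function are hypothetical stand-ins for real game results.
set.seed(42)

evaluate <- function(params) {
  #Stand-in for "play a game and score the outcome"; peaks at an arbitrary target mix.
  target <- c(worker_share = 0.7, army_timing = 0.4)
  1 - sum((params - target)^2)
}

mutate <- function(params, rate = 0.02) {
  #Low mutation rate: small, rare perturbations, clipped to [0, 1].
  pmin(pmax(params + rnorm(length(params), sd = rate), 0), 1)
}

memory <- data.frame(worker_share = numeric(), army_timing = numeric(), fitness = numeric()) #long memory of every game played
current <- c(worker_share = 0.5, army_timing = 0.5)

for (game in 1:200) {
  candidate <- mutate(current)
  score <- evaluate(candidate)
  memory <- rbind(memory, data.frame(t(candidate), fitness = score))
  if (score >= max(memory$fitness)) current <- candidate #only adopt candidates that beat everything remembered
}

current #the parameter mix drifts slowly toward better-performing values over many games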

You can see my bot’s page directly on SSCAIT here.

Other assorted replays of my bot are below:

Statistics

Behavioral Economics and In-class Exercises

I wanted to take a moment to report on the incredible success I have been having with in-class exercises. Students engage with the content more directly, understanding of the immediate subject matter has improved, and, much to my surprise, interest carries over to subjects beyond the immediate exercise.  That the interest continues for the entire 2-hour class period suggests more than mere amusement is involved; actual interest in the subject has been created. Perhaps the activities have moved students' reference point to a more engaged and interested one.

I would like to share some of the excellent exercises that have been done with our student body.  This is not an exhaustive set of these activities; there are many more available. Many thanks to the Virginia Econ Lab for its excellent work in digitizing some of these classic class exercises.

The first exercise, best used as an opener, introduces the idea that individuals are well described by the rational model, and shows how the class naturally arrives at the results we expect from classical economics.   Some of the class act as suppliers, some as buyers, and random values for the good are assigned. They enter a mock marketplace and trade goods among themselves.   The output of the program is visually very good (although a little too colorful for my tastes), and the learning is very visible.

Possible Output of Supply and Demand Exercise

Total time was roughly 15-20 minutes of execution plus 5-10 minutes of setting up and logging into the computers, setup costs that have shrunk after multiple exercises.

I thought it was best to lead the exercise with:

  • Review of PPFs
  • Refresh the basic supply and demand model.
  • Review of the basic rational model for firms: MC=MR, or equivalently M\pi=0.

I thought the gains from the exercise included:

  • More confidence that the rational-agent model has validity.
    • More interest in rational agents.
  • Students put the idea of rationality into contrast with behavioral models and procedurally rational models.
  • Strong retention and application of supply and demand and of the rational model for firms.

We have since conducted several simulations in which students interacted directly with rational (and procedurally rational) models. I programmed these simulations myself, and they are available here: Simulating Optimization Problems and Production Possibility Frontiers and Simulations of Optimization over Time. I would recommend these to any undergraduate class with easy access to a PC lab; more detailed interaction with the models is suitable for early graduate students. Note that the programs do not generate exactly correct answers, so one can still ask homework questions on the subject.

Most recently, we have conducted auctions to examine how individuals handle different auction schemes.   Rational bidding predicts that individuals bid below their valuation in first-price sealed-bid auctions and bid exactly their valuation in second-price sealed-bid auctions. But people do not always perform to those expectations, and this should be illustrated to students.
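For reference, with n risk-neutral bidders whose valuations are drawn independently from the same uniform distribution, the standard equilibrium predictions are:

b_{first}(v)=\frac{n-1}{n}\,v \qquad\qquad b_{second}(v)=v

That is, shade your bid in the first-price auction, and bid truthfully (a dominant strategy) in the second-price auction.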

Some sample output from this program might look like the following:

Possible Output of Auction Exercise

We would hope it clearly shows variation between the two auction formats, as in the plot above.  The second-price auctions lead to significantly higher prices overall and will often lead to students bidding above their valuation.

I thought it was best to lead the exercise with:

  • Complete review of behavioral ideas, particularly:
    • Transactional Utility
    • Risk Aversion
    • Reference Points
    • Mental Accounting

I thought the gains from the exercise included:

  • Better identification of each auction type:
    • Vickrey
    • English
    • Dutch
  • Students correctly identified explanations for behavior.
  • Students internalized the optimal bid for first-price sealed-bid auctions with uniformly distributed valuations: \frac{n-1}{n} of one's valuation, as in the equilibrium bids noted earlier.
Statistics

Simulations of Optimization over Time

The Behavioral Economics class incorporates a discussion of consumption smoothing: the idea that people prefer gradual changes in consumption over time, so that agents keep a smooth relationship between consumption in one period and the next. In fact, it can be shown (with simple log utility functions) that the ratio between consumption tomorrow (x_{t+1}) and consumption today (x_t) is:

\frac{x_{t+1}}{x_t}=\delta

Where \delta is the "discount factor," the relative value the agent places on tomorrow versus today.
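To see where this comes from (assuming, as in the simulation below, equal prices in each period and no interest rate), consider the two-period problem:

\max_{x_t,\,x_{t+1}} \; \log(x_t) + \delta \log(x_{t+1}) \quad \text{s.t.} \quad p\,x_t + p\,x_{t+1} = y

The first order conditions are \frac{1}{x_t}=\lambda p and \frac{\delta}{x_{t+1}}=\lambda p. Dividing one by the other gives \frac{x_{t+1}}{x_t}=\delta.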

Below is a simulation I designed to help demonstrate this concept to my class.   It shows an agent solving a three-period consumption problem by randomly sampling feasible consumption plans.   We always pause to note how the marginal value of consumption smoothly declines from period to period, while the discounted marginal utility is nearly the same in each period.   (The simulation residuals are surprisingly large, but the result is nevertheless illustrative.)   The output marks the chosen consumption in each period with a red line, with the other candidate consumption points in black.

Possible Output of the Three-Period Consumption Simulation

#Define Terms
delta<-.85
p1<-1
p2<-1
p3<-1
y<-10
#Checking any options inside the constraint or on the constraint
x1<-runif(100000, min=0, max=y/p1)
x2<-runif(100000, min=0, max=(y-p1*x1)/p2)
#Checking only on the constraint.  Assumes no leftover resources.
x3<-y-p1*x1-p2*x2

#Typical Utility Function

U<-log(x1)+ delta * log(x2) + delta^2 * log(x3)
U1<-log(x1)
U2<-log(x2) # undiscounted utility of period 2.
U3<-log(x3) # undiscounted utility of period 3.
par(mfrow=c(1,3))

plot(x1, U1, ylim=c(0,2.5)) #period 1 utility of each candidate plan
abline(v=x1[which(U==max(U))], col="red")
plot(x2, delta*U2, ylim=c(0,2.5)) #discounted period 2 utility
abline(v=x2[which(U==max(U))], col="red")
plot(x3, delta^2*U3, ylim=c(0,2.5)) #discounted period 3 utility
abline(v=x3[which(U==max(U))], col="red")

x1_star<-x1[which(U==max(U))]
x2_star<-x2[which(U==max(U))]
x3_star<-x3[which(U==max(U))]

x1_star
x2_star
x3_star
delta

#Marginal Utility (for log utility, MU = 1/x)
1/x1_star; 1/x2_star; 1/x3_star;

#Discounted Marginal Utility
1/x1_star; delta*1/x2_star; delta^2*1/x3_star; #Discounted marginal utilities are nearly identical.

Programming, Statistics, Teaching Materials

Simulating Optimization Problems and Production Possibility Frontiers

In teaching Behavioral Economics, optimization problems require some intuition, and that intuition can be opaque without calculus literacy.  Below is a simulation demonstrating that the process of constrained optimization works. It has the added benefit of showing isoquants (the colored stripes in the image below) and the strict boundary of the efficiency frontier.

The basic constrained optimization problem is to maximize

U=f(x_1,x_2)

subject to the budget constraint

y=p_1 x_1 + p_2 x_2

I have written R code to simulate the solution to a two-good optimization problem. The example uses U=\sqrt{x_1} + \sqrt{x_2} as the functional form.

library("plot3D")

p1<-1
p2<-2
y<-10
x1<-runif(25000, min=0, max=y/p1)

#Checking any options inside the constraint or on the constraint
x2<-runif(25000, min=0, max=(y-p1*x1)/p2)

U<-sqrt(x1)+sqrt(x2)
points3D(x1, x2, U, xlab="x1", ylab="x2", zlab="Utility", phi=-90, theta=90)

plot(x1, U)
abline(v=x1[which(U==max(U))], col="red") #mark the best sampled bundle
x1_star<-x1[which(U==max(U))]
x2_star<-x2[which(U==max(U))]
y-x1_star*p1-x2_star*p2 #leftover budget at the best sampled bundle

And it outputs the following plots (with minor variation). Note that the colored bands represent utility level curves, the isoquants. The edge of the colored point cloud represents the efficiency frontier.

Slideshow: output plots of the simulation.

 

The actual solution is found by:

U=\sqrt{x_1}+\sqrt{x_2}

subject to:

y=p_1 x_1+p_2 x_2

The Lagrangian is then:

L=\sqrt{x_1}+\sqrt{x_2} + \lambda (y - p_1 x_1 - p_2 x_2)

Leading to the first order conditions (derivatives of L):

L_1 : 0.5 x_1^{-0.5} - \lambda p_1=0

L_2: 0.5 x_2^{-0.5} - \lambda p_2=0

L_{\lambda} : y- p_1 x_1 -p_2 x_2 =0

Using these 3 conditions, we can find the equations:

\frac{0.5 x_1^{-0.5} }{0.5 x_2^{-0.5}} = \frac{p_1}{p_2}
y- p_1 x_1 -p_2 x_2 =0

Dividing the first two conditions gives \sqrt{x_2/x_1} = p_1/p_2, so x_2 = (p_1/p_2)^2 x_1. Substituting into the budget constraint with y=10, p_1=1, p_2=2 yields the theoretical solutions x_1 \approx 6.67 and x_2 \approx 1.67.

These match the simulated solutions very closely.

Programming, Statistics

The Optimum Pokemon Portfolio and Principal Component Decomposition (PCD) using R

I have recently completed the Stanford Lagunita online course on Statistical Learning, and Tibshirani & Hastie have taught me a great deal about principal components.  No learning is complete without exercises, however, so I have found a wonderful data set that seems popular: the attacks and weaknesses of Pokemon.  (I am, admittedly, not a Pokemon player, so I have had to ask others to help me understand some of the intricacies of the game.)

Principal Component Decomposition:

First and foremost, principal component decomposition finds the direction that maximizes variation in the data.  That direction is the leading eigenvector of the data's covariance matrix, and the associated eigenvalue measures how much of the variation lies along it.
For example, if there is a spill of dirt on a white tile floor, the direction of the spill (the first principal component) would be the direction in which the dirt is most widely spread.
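As a small, self-contained illustration of this point (simulated data, not the Pokemon chart): the first principal component reported by prcomp is the leading eigenvector of the covariance matrix, and its squared standard deviation is the corresponding eigenvalue.

set.seed(1)
#Simulated "spill": points spread mostly along one direction.
x <- rnorm(500)
dirt <- cbind(x + rnorm(500, sd = 0.2), 0.5 * x + rnorm(500, sd = 0.2))

pca <- prcomp(dirt, center = TRUE, scale. = FALSE)
eig <- eigen(cov(dirt))

pca$rotation[, 1] #direction of greatest spread: the first principal component
eig$vectors[, 1]  #leading eigenvector of the covariance matrix (same direction, up to sign)
pca$sdev[1]^2     #variance along that direction...
eig$values[1]     #...equals the leading eigenvalue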

After looking at the beautiful charts used in the link above, I realized this would be a very interesting data set to run a PCD on. Which Pokemon are most similar, and which are most different, in terms of strengths and weaknesses? To find out, we break the data into its principal components and see in which directions it is spread out.

Pokemon can vary along 18 dimensions of strengths and weaknesses, since there are 18 types of Pokemon. This means there can be up to 18 principal components. We cannot be sure which principal components are useful without investigation, so the amount of variation captured by each principal component is outlined below. There does not appear to be any clear point where the components drop off in usefulness; perhaps the first 3 to 5 capture most of the variation.
Variation Explained by Each Principal Component
Let us now look at the principal components of the Pokemon attack/weakness chart directly.  We can visualize them in a biplot, where the arrows show the attacking profile of each Pokemon type and the black labels show the defending types.  The distance from the center of the biplot shows how far that type deviates along the first two principal components.  Labels that are close together are more similar than those further apart.

Biplot of the First Two Principal Components

So, for example, Ghost attacks (arrows) are closely aligned with Ghost defence (black label) and Dark defence (black label).  In general, the types most different in defence are Fighting and Ghost, which are again distinct from Flying and Ground defence.  This suggests that if you wanted a Pokemon portfolio that is very resilient to attack, you would want Fighting/Ghost types.  If you want a variety of attacks, you might want to look into Ghost/Normal types or Grass/Electric.

Keep in mind that together these components only explain about 35.5% of the variation in Pokemon types; there are other dimensions along which Pokemon vary.  I expected Fire and Water to be more clearly different (and they are quite distinct, heading in opposite directions for a long distance from the center!), but they are less distinct than Ghost/Normal.

The Optimum Pokemon Portfolio:

This led me to wonder what type of Pokemon portfolio would be best against the world, something outside the scope of the Statistical Learning course but well within my reach as an economist.  Since I don't know what the Pokemon world looks like, I assumed the Pokemon that show up are of a randomly and evenly selected type. (This is a relatively strong assumption; actual Pokemon encounters are likely not evenly distributed among the types.)  The question is then: what types of Pokemon should we collect to do best against a random encounter, assuming we simply reach into our bag and grab the first Pokemon we see to fight with?

First, I converted the matrix of strengths and weaknesses above into one that describes the strength-weakness gap. That is to say, if Water attacks Fire at 200% effectiveness and defends against Fire at 50% effectiveness, a fight between Water and Fire is +150% more effective than a neutral matchup (say Normal vs. Normal or Ice vs. Ice). Any bonus a Pokemon may have against its own type was discarded, because it would be pointless.  The chart for this, much like the wonderful link that got me the data in the first place, is below, where red is bad and blue is good:

Net Pokemon Combat Advantage
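To make the construction concrete, here is a toy three-type version of the same transformation (the full 18-type version appears in the code at the end of the post):

#Toy example with only Water, Fire, and Grass; rows attack, columns defend.
toy <- matrix(c(1,   2,   0.5,   #Water attacking Water, Fire, Grass
                0.5, 1,   2,     #Fire  attacking Water, Fire, Grass
                2,   0.5, 1),    #Grass attacking Water, Fire, Grass
              nrow = 3, byrow = TRUE,
              dimnames = list(c("Water", "Fire", "Grass"), c("Water", "Fire", "Grass")))

gap <- (toy - 1) - (t(toy) - 1) #attack bonus minus defensive weakness
diag(gap) <- 0                  #a type's bonus against itself is discarded
gap                             #gap["Water", "Fire"] is +1.5, the +150% from the example above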

Then I averaged the strength-weakness gap across all 18 possible opponent types for each type of Pokemon, which assumes the opponent's type is random.  According to this, the most effective types of Pokemon are, on average:

Type                              Effectiveness
Steel                               0.22222222
Fire                                0.11111111
Ground                              0.11111111
Fairy                               0.11111111
Water                               0.08333333
Ghost                               0.08333333
Flying                              0.05555556
Electric                            0.00000000
Fighting                            0.00000000
Poison                             -0.02777778
Rock                               -0.02777778
Dark                               -0.02777778
Ice                                -0.08333333
Dragon                             -0.08333333
Normal                             -0.11111111
Psychic                            -0.11111111
Bug                                -0.11111111
Grass                              -0.19444444

That is to say, a Steel Pokemon, against a random opponent, will on average be 22% more effective than a neutral matchup.  (This is the mean, not the median.) And against a random opponent a Grass Pokemon will be expected to be about 19% less effective than a Fighting Pokemon (which sits at exactly zero), shockingly low. Amusingly, Normal Pokemon are worse than normal (0) against the average opponent.

This does not mean you ONLY want Steel Pokemon, because you could come up against an opponent that is strong against Steel. Nor do you want to entirely avoid Grass Pokemon, since they are very strong against many things Steel is weak against. It merely means that if you're willing to roll the dice, a Steel Pokemon will probably be your best bet.  Trainers do not want to take large risks; trainers are risk averse.  You want to maximize your poke-payoff while minimizing how frequently you face negatively stacked fights. The problem to solve is:

\max_{vars} \; \mu^{\top} vars \;-\; \delta \, vars^{\top} cov \, vars \;+\; \lambda \left(1 - \mathbf{1}^{\top} vars\right)

Where \mu is the vector of payoffs in the table above, \delta is your risk aversion, cov is the covariance matrix of the differenced Pokemon data set, \mathbf{1} is a vector of ones, and vars is your portfolio selection, which must add up to one hundred percent.
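In the code at the end of the post this is handed to quadprog's solve.QP, which minimizes \frac{1}{2} b^{\top} D b - d^{\top} b subject to A^{\top} b \ge b_0, treating the first meq constraints as equalities. The maximization above maps onto that form with

D = 2\,\delta \, cov \qquad\qquad d = \mu

and a constraint matrix whose first (equality) row imposes \mathbf{1}^{\top} vars = 1 and whose remaining rows impose vars \ge 0. This is exactly what Dmat <- cov_core * 2 * delta, dvec <- mu, Amat <- cbind(1, diag(18)), bvec <- c(1, rep(0, 18)), and meq = 1 do in the code below.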

How risk averse are you?  You could be very risk averse and want to never come across a bad pokemon to fight, or you could love rolling the dice and only want one type of pokemon. So I have plotted the optimal portfolio for many levels of risk-tolerance.  It is a little cluttered, so I have labelled them directly as well as in the legend.

Pokemon % By Risk Aversion

The visualization is indeed a little messy, but as you become more risk averse, you add more Electric, Normal, Fire, and Ice Pokemon (and others) to reduce the chance of a bad engagement.  In doing so, you reduce the weight placed on Steel, Ground, and Fairy Pokemon, but you do not eliminate them entirely.  Almost nothing adds Dragon, Ghost, Rock, or Bug Pokemon; they are nearly completely dominated by other combinations of Pokemon types.

I’ve plotted two interesting portfolios along the spectrum of risk aversion below. They include one with nearly no risk aversion (0.001), and one with high risk aversion (10).

Slideshow: the two portfolio bar charts.

Of course, most importantly of all, regardless of your Pokemon and your interest in being “the very best”, you should still pick the coolest Pokemon and play for fun.

Code is included below:

#Data from: https://github.com/zonination/pokemon-chart/blob/master/chart.csv
#write.csv(chart, file="/home/bsweber/Documents/poke_chart.csv")
poke_chart<-read.csv(file="/home/bsweber/Documents/poke_chart.csv")
poke_chart<-poke_chart[,-1]
library(quadprog)
# library(devtools) 
# install_github("vqv/ggbiplot", force=TRUE)
library(ggbiplot)
library(reshape2)
library(ggplot2)
library(ggrepel)


poke_chart<-as.matrix(poke_chart)
differences <- (poke_chart-1) - (t(poke_chart)-1)
diag(differences)<-0
rownames(differences)<-colnames(differences)
core <- poke_chart
rownames(core)<-colnames(poke_chart)

poke_pcd<-prcomp(core, center=TRUE, scale=TRUE)
plot(poke_pcd, type="l", main="Pokemon PCD")
summary(poke_pcd)
biplot(poke_pcd)

poke_palette<-c("#A8A878", "#EE8130", "#6390F0", "#F7D02C", "#7AC74C", "#96D9D6", "#C22E28", "#A33EA1", "#E2BF65", "#A98FF3", "#F95587", "#A6B91A", "#B6A136", "#735797", "#6F35FC", "#705746", "#B7B7CE", "#D685AD")

ggbiplot(poke_pcd, labels= rownames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')
  #Score plot is for rows, attack data. Loading plot is for columns, defense data.  So Bug and Fairy have similar attacks (shown by rays), similar defences (shown by points). Ghost and Normal have almost identical defences, but different attacks.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(2,3)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(5,6)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.
ggbiplot(poke_pcd, labels= colnames(core), ellipse = TRUE, circle = TRUE, obs.scale = 1, var.scale = 1, choice=c(7,8)) +
  scale_color_discrete(name = '') +
  theme(legend.direction = 'horizontal', legend.position = 'top')  #Score plot is for rows, attack data. Loading plot is for columns, defense data.


cov_core<- t(differences-mean(differences)) %*% (differences-mean(differences)) #Make the Cov. Matrix of differences. 
cov_core[order(diag(cov_core), decreasing=TRUE),order(diag(cov_core), decreasing=TRUE)]
ones<-as.matrix(rep(1,18))
vars<-as.matrix(rep(1/18, times=18))
mu<-t(as.matrix(apply(differences/18, 1, sum))) #Average rate of return over 18 pokemon types.

data.frame(mu[,order(t(mu), decreasing=TRUE)]) #Table of Pokemon Types

colnames(mu)<-colnames(core)
delta<- 1  #risk aversion parameter

  out<- matrix(0, nrow=0, ncol=18)
  colnames(out)<-colnames(core)
  for(j in 1:1000){
    delta<-j/100
  Dmat <- cov_core * 2 * delta
  dvec <- mu
  Amat <- cbind(1, diag(18))
  bvec <- c(1, rep(0, 18) )
  qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
  pos_answers<-qp$solution
  names(pos_answers)<-colnames(poke_chart)
  out<-rbind(out, round(pos_answers, digits=3))
  }
  
  df <- data.frame(x=1:nrow(out))
    df.melted <- melt(out)
      colnames(df.melted)<-c("Risk_Aversion", "Pokemon_Type", "Amount_Used")
  df.melted$Risk_Aversion<-df.melted$Risk_Aversion/100
  
 qplot(Risk_Aversion, Amount_Used, data=df.melted, color=Pokemon_Type, geom="path", main="Pokemon % By Risk Aversion") + 
   # ylim(0, 0.175) +
   scale_color_manual(values = poke_palette) +
#   geom_smooth(se=FALSE) +
   geom_text_repel(data=df.melted[df.melted$Risk_Aversion==8.5,], aes(label=Pokemon_Type, size=9, fontface = 'bold'), nudge_y = 0.005, show.legend = FALSE) 

  
  # Another plot that is less appealing
  # matplot(out, type = "l", lty = 1, lwd = 2, col=poke_palette)
  # legend( 'center' , legend = colnames(core), cex=0.8,  pch=19, col=poke_palette)
  pie(head(out, 1), labels= colnames(out), col=poke_palette)
  pie(tail(out, 1), labels= colnames(out), col=poke_palette)
  
  df_1<-data.frame(matrix(out[1,], ncol=1))
  colnames(df_1)<-c("Percentage")
  df_1$Pokemon_Type<-colnames(out)
  
  ggplot(data=df_1, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type))+
    geom_bar(stat="identity", position=position_dodge()) +
    scale_fill_manual(values = poke_palette)+
    ggtitle("Pokemon Portfolio With Almost No Risk Aversion")
  
  df_2<-data.frame(t(tail(out,1)))
  colnames(df_2)<-c("Percentage")
  df_2$Pokemon_Type<-colnames(out)
  
  ggplot(data=df_2, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type))+
    geom_bar(stat="identity", position=position_dodge()) +
    scale_fill_manual(values = poke_palette) +
    ggtitle("Pokemon Portfolio With Very Strong Risk Aversion")
  
  cov_core[order(diag(cov_core), decreasing=TRUE),order(diag(cov_core), decreasing=TRUE)]
 
  melt_diff<-melt(t(differences))
  melt_diff$value<- factor(melt_diff$value)
  N<-nlevels(melt_diff$value)
  
  simplepalette<-colorRampPalette(c("red", "grey", "darkgreen"))
  
  ggplot(data = melt_diff, aes(x=Var1, y=Var2, fill=value) ) + 
    geom_tile()+
    scale_fill_manual(values=simplepalette(9), breaks=levels(melt_diff$value)[seq(1, N, by=1)], name="Net Advantage" )+
    ggtitle("Net Pokemon Combat Advantage")+
    xlab("Opponent") +
    ylab("Pokemon of Choice")
Statistics

Multiple Linear Regression in R

Slideshow: 3D scatter plots of the data with the fitted regression surface, viewed from several angles.

In the previous exercise, Why do we need N-2?, I show a simple one-dimensional regression by hand, followed by an examination of sample standard errors.  Below I make more extensive use of R (and an additional package) to plot what linear regression looks like in multiple dimensions. This generates the images above (along with several others), and it illustrates that the regression surface remains flat even in N dimensions: the regression is linear in its coefficients.

As a class exercise, I ask that you consider different pairs of explanatory variables that are functions of one another. What happens if the function is linear? What happens if the function is nonlinear, for example cos(x_1)=x_2? Examine what happens to the surface of your regression compared to the shape of the relationship you are investigating.  Is there a way you can contort the regression estimate into a curved surface to better match?  Why or why not? (A possible starting point for this exercise appears after the code below.)

install.packages("plot3D") # we need 3d plotting
library("plot3D", lib.loc="~/R/win-library/3.1") #Load it into R's current library; the lib.loc path may vary by computer.

set.seed(2343) #ensures replication. Sets seed of random number generators.
n<-25 #number of samples
x_1<-rnorm(n) #Our x's come from a random sampling of X's.
x_2<-rnorm(n)
b_0<-10
b_1<-3 #Those cursed jello puddings are associated with increased crime. Linear regression is supportive of association- not causation.
b_2<-(-3) # But student transit programs are associated with a decline in crime.
u<-rnorm(n)
y<-b_0+b_1*x_1+b_2*x_2+u #This is defining our true Y. The true relationship is linear.

#look at data in each dimension
plot(x_1,y)
plot(x_2,y)
#look at data overall
points3D(x_1,x_2,y,xlab="x_1",ylab="x_2",zlab="y",phi=5) #look at data. phi/theta is tilt.

fit<-lm(y~x_1+x_2)  #fit it with a linear model, regressing y on x_1, x_2

#Make a surface
x_1.pred <- seq(min(x_1), max(x_1), length.out = n)
x_2.pred <- seq(min(x_2), max(x_2), length.out = n)
xy <- expand.grid(x_1=x_1.pred, x_2=x_2.pred)
y.pred <- matrix (nrow = n, ncol = n, data = predict(fit, newdata = data.frame(xy), interval = "prediction"))

summary(fit) #view output of variables.

fitpoints<-predict(fit)  #get predicted points, needed to make a surface.

scatter3D(x_1,x_2,y,xlab="x_1",ylab="x_2",zlab="y",phi=5 , surf=list(x = x_1.pred, y = x_2.pred, z = y.pred, facets = NA, fit = fitpoints)) #look at data. phi/theta is tilt.
scatter3D(x_1,x_2,y,xlab="x_1",ylab="x_2",zlab="y",phi=45, surf=list(x = x_1.pred, y = x_2.pred, z = y.pred, facets = NA, fit = fitpoints)) #From straight on it is a flat plane, residuals are highlighted
scatter3D(x_1,x_2,y,xlab="x_1",ylab="x_2",zlab="y",phi=30, surf=list(x = x_1.pred, y = x_2.pred, z = y.pred, facets = NA, fit = fitpoints)) #From other angles it is clear it is somewhat straight.
scatter3D(x_1,x_2,y,xlab="x_1",ylab="x_2",zlab="y",phi=60, surf=list(x = x_1.pred, y = x_2.pred, z = y.pred, facets = NA, fit = fitpoints)) #look at data. phi/theta is tilt.
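As a possible starting point for the class exercise above (the variable names here are illustrative, not part of the original code): generate one pair of regressors that is perfectly linearly related and one that is nonlinearly related, and compare the fits.

#Hypothetical starting point for the exercise: regressors that are functions of one another.
set.seed(99)
n <- 200
x_1 <- rnorm(n)
x_2_linear <- 2 * x_1  #perfectly linear relationship between the regressors
x_2_nonlin <- cos(x_1) #nonlinear relationship, as in the exercise prompt
y <- 10 + 3 * x_1 - 3 * x_2_nonlin + rnorm(n)

#With x_2 = 2*x_1 the regressors are perfectly collinear, so one coefficient cannot be estimated (lm reports NA).
summary(lm(y ~ x_1 + x_2_linear))

#With x_2 = cos(x_1) the fitted surface is still a flat plane in (x_1, x_2, y),
#even though the data lie only on a curved ribbon of that space.
summary(lm(y ~ x_1 + x_2_nonlin))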

 

Programming, Statistics, Teaching Materials

Why do we need n-2? An example in R

Below is a simple example showing why we may want \left(\sum \hat{u}^2_i\right) / (n-2) as our estimate of \sigma^2, when naive intuition may suggest we only want the simple average of squared residuals, \left(\sum \hat{u}^2_i\right) / n.
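The fact the derivation (referenced at the end of this post) establishes is that, for the simple regression with an intercept and one slope,

E\left[ \sum_{i=1}^{n} \hat{u}^2_i \right] = (n-2)\,\sigma^2

so dividing by n-2 rather than n gives an unbiased estimator; two degrees of freedom are used up estimating \hat{\beta}_0 and \hat{\beta}_1.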

To show this in no uncertain terms, I have coded a linear regression by hand in R.  Also embedded in the work below are several rules I follow about writing code. They are rules 0-6.  There are many other rules, since code writing is an art.

####Coding in R
#### Rule 1: Always comment on every few lines of code. It is not unheard of to comment every single line, particularly for new coders, or complex code.
#### You will need to reference your work at a later date, and after about 3 months, the purpose is lost. Also, I need to read it.

#### Rule 2: Define your variables first. Luckily these names are shared for us.
#### For your projects, use names which are clear for your research: (y=crime in Williamsburg, VA, X= Number of jello puddings consumed)

set.seed(1223) #ensures replication. Sets seed of random number generators.
n<-25 #number of samples
x<-2*rnorm(n) #Our x's come from a random sampling of X's.
b_0<-10
b_1<-3 #Those cursed jello puddings are associated with increased crime. Linear regression is supportive of association- not causation.
u<-rnorm(n) #We satisfy both independent mean and zero mean assumptions
y<-b_0+b_1*x+u #This is defining our true Y. The true relationship is linear.

plot(x,y) #Rule 0, really. Always check your data.

#### Rule 3: After definitions begin your second stage of work. Probably trimming existing data, etc. Do these in the order they were added.
hat_b_1<-sum( (x-mean(x)) * (y-mean(y)) ) / sum( (x-mean(x))^2 ) #Spaces between any parenthesized section of operations. We need to be able to see which parentheses are which.
hat_b_1 # Rule 4: Indent work which is conceptually subordinate. Indent more as needed. Four spaces=1 tab.
hat_b_0<-mean(y)-hat_b_1*mean(x)
hat_b_0 # Rule 5: Check your work as you go along. For our example, I got 9.89

abline(a=hat_b_0, b=hat_b_1, col="red") #let's add a red line of best fit. And we must see how our plot looks. Repeat rule 0.

hat_y<-hat_b_0+hat_b_1*x
hat_u<-y-hat_y #residuals: actual minus fitted

plot(x,hat_u) # Let’s see our residuals
hist(hat_u) # Let’s see our histogram

#### Rule 6: Keep your final analysis as punchy and short as possible without sacrificing clarity.
#### The mean sum of the squared errors (usually unknown to us as researchers)
sigma_sq<-sum(u^2)/n #this is the value we're trying to estimate
sigma_sq_naive<-sum(hat_u^2)/n #this is a naive estimation of it
sigma_sq_hat<-sum(hat_u^2)/(n-2) #this turns out to be more accurate, particularly in small samples. If n->infinity this goes away. Try it for yourself!

#R, is this assessment true? Is sigma_sq_hat a better estimator of sigma_sq than our naive estimator? Is it true we need the (-2)?
abs(sigma_sq-sigma_sq_naive) > abs(sigma_sq-sigma_sq_hat) #compare absolute estimation errors

Here is one of several plots made by this code, showing a nice linear regression over the data:

Scatter plot of the data with the red line of best fit

Please don’t forget the derivation of why this is true!  This is simply some supportive evidence that it might be true.
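To take the "try it for yourself" suggestion in the code comments literally, here is a small Monte Carlo sketch (not part of the original exercise) comparing the two estimators across many repetitions:

#Repeat the experiment many times and compare the two estimators of sigma^2 (true value is 1).
set.seed(1223)
n <- 25
sim_once <- function() {
  x <- 2 * rnorm(n)
  u <- rnorm(n)
  y <- 10 + 3 * x + u
  hat_b_1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
  hat_b_0 <- mean(y) - hat_b_1 * mean(x)
  hat_u <- y - (hat_b_0 + hat_b_1 * x)
  c(naive = sum(hat_u^2) / n, corrected = sum(hat_u^2) / (n - 2))
}
results <- replicate(10000, sim_once())
rowMeans(results) #the corrected estimator averages close to 1; the naive estimator is biased downward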