Below is an exploration of heavy tails using Python, and some of the problems they present for analysis. Heavy-tailed distributions are those with extremely “fat tails”: they assign a much higher likelihood to extreme values than a normal bell curve, or even a log-normal distribution.
Author: regressforward
NYC Traffic Accidents Heatmap
A simple chart in one place.
This figure is a nice-looking variant of one from a paper that was ultimately accepted at the EEJ. The figure itself didn’t make it into the paper, but it is a really good-looking one. It is a heat map of all accidents in NYC from July 2012 through March 2019. Dark areas have few accidents; the brighter areas have more. The roads are highlighted in pink to match. It’s fairly obvious that the bulk of accidents are in Manhattan and only a few are in Staten Island.

Mammen, K., Shim, H. S., & Weber, B. S. (2019). Vision Zero: Speed Limit Reduction and Traffic Injury Prevention in New York City. Eastern Economic Journal, 1-19.
Made in QGIS.
Economics Jobs
As a department, we had a meeting about jobs in economics.
I collected some information on the subject. Out of 31,656 job postings, most summaries are duplicates: the positions may differ, but the first 30 words of the summary are identical. Here are the results from the 638 unique postings for “economics entry level” across several cities.
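As a rough illustration of the de-duplication step (not the actual scraping code), here is a minimal R sketch that keys each posting on the first 30 words of its summary and keeps one posting per key; the postings data frame and its columns are hypothetical:

postings <- data.frame(
  title   = c("Economist I", "Economist II", "Research Analyst"),
  summary = c("We are seeking a detail oriented analyst to support our team ...",
              "We are seeking a detail oriented analyst to support our team ...",
              "Entry level role supporting quantitative research and reporting ..."),
  stringsAsFactors = FALSE
)
# Key each posting by the first 30 words of its summary, then drop duplicate keys.
first_30 <- sapply(strsplit(postings$summary, "\\s+"),
                   function(w) paste(head(w, 30), collapse = " "))
unique_postings <- postings[!duplicated(first_30), ]
nrow(unique_postings) # the real data set reduced 31,656 postings to 638 this way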
Some good takeaways:
- Bachelor’s Degree: You actually have to finish!
- Responsible, professional, experience: Employers expect to get the impression that you care about what you are doing and that you can do it, and they often (unfairly) expect that you have already done it. Note that experience is considered important even for entry-level positions, a complaint commonly repeated by job-seekers in this era.
- Data, analyst, research, quantitative, math, statistics: You’re going to be working with data, and are expected to work with hard numbers. Take your Econometrics and Statistics classes seriously!
A (Quick) Breakdown of SSCAIT Win Rates by AI Race.
Introduction:
Overall, I find that after controlling for individual bot skill level, bots tend to under-perform against Protoss (P) bots and over-perform against Terran (T) bots, and possibly Zerg (Z) bots as well. Disclaimers, methodology, robustness, conclusion to follow.
Disclaimers:
This does not mean that the race P is superior or that the race T is inferior. It only means bots perform worse than their past performance would suggest against P opponents, and better against T opponents. It is entirely possible that the exploits against P bots are harder to identify, or that the bot meta favors strategies played by P (e.g., late-game death-ball style plays). These estimates are also anchored to the average skill level; at the tails of the skill distribution, the results will vary.
Methodology:
First, let’s dump the data from here into Excel -> CSV -> R. I had to manually add the races, though. I’m not really convinced that data formats other than CSV ought to even exist, but that’s just me.
Here’s the meat of it:
- Get each bot’s win rate for each match. (Here it is an average of the match results, which can be either 100%, 50%, or 0%.)
- Assume the bot’s overall win percentage is a good proxy for its skill level. This needs a bit of explanation. It is already known that the Locutus family is strong, for example, and those bots are likely to win most match-ups regardless of race (90%+). We also know they always play Protoss. Let’s try to decompose the win rates into the portion which is approximately due to bot skill level and the portion which is due to race match-ups. We need some proxy for overall bot skill because we are trying to compare the win rates against each race after excluding any effect of skill.
- Estimate the following regression (with no intercept term):

$$\text{match\_pct}_{ij} = \beta \cdot \text{skill\_level}_i + \gamma_P \,\mathbb{1}[\text{opp\_race}_j = P] + \gamma_T \,\mathbb{1}[\text{opp\_race}_j = T] + \gamma_Z \,\mathbb{1}[\text{opp\_race}_j = Z] + \varepsilon_{ij}$$

Where the coefficients of interest are $\gamma_P$, $\gamma_T$, and $\gamma_Z$. Each of these three coefficients represents how much better or worse than its average win percentage a given bot is expected to do against an opponent of the given race. Here are the results:
Call:
lm(formula = match_pct ~ skill_level + factor(opp_race) - 1,
data = melted_data)
Residuals:
Min 1Q Median 3Q Max
-90.911 -27.440 1.096 29.510 93.088
Coefficients:
Estimate Std. Error t value Pr(>|t|)
skill_level 0.99908 0.03827 26.107 <2e-16 ***
factor(opp_race)P -5.57628 2.37514 -2.348 0.0190 *
factor(opp_race)T 4.62700 2.38244 1.942 0.0523 .
factor(opp_race)Z 1.32735 2.47510 0.536 0.5918
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 37.52 on 1976 degrees of freedom
Multiple R-squared: 0.6814, Adjusted R-squared: 0.6808
F-statistic: 1057 on 4 and 1976 DF, p-value: < 2.2e-16
First, we see skill level is strongly significant and nearly exactly 1: if a bot’s overall win percentage goes up by 1 point, we expect its score in each match to go up by about 1 point as well. This is to be expected.
Next, the coefficient on P is -5.6%. That suggests that if a given bot (A) is playing a Protoss opponent, bot A’s match score is about 5.6% lower than its typical skill level would suggest, and this difference is significantly different from 0.
Next, the coefficient on T is +4.6%. That suggests that if a given bot (A) is playing a Terran opponent, bot A’s match score is about 4.6% higher than its typical skill level would suggest, and this difference is also marginally significantly different from 0 (p ≈ 0.05).
The coefficient on Z is positive but small and insignificant. Bots tend to do about average, maybe slightly better than average, against Z opponents. (It may be significantly different from the P or T coefficients, though; since I am dropping the constant, comparing coefficients is outside the scope of this quick post.)
A note about R²: R-squared only measures predictive power. We do not care about predictive power in this context, only about the estimated coefficients. If those coefficients are accurate to the real world, a good deal of predictive accuracy can be sacrificed.
Robustness:
I have also redone this with a logit model, which is slightly more accurate at the far tails. (In a journal article on this subject, we’d probably focus more on this result.) But here we can simply give some examples. The estimated race effect is larger near the bottom of the skill distribution and smaller near the top. Example: given a bot with the skill level of Purplewave (90.09%), the expected win rate moves about +5% when switching from a P opponent to a T opponent. Given a bot with the skill level of SijiaXu (40.91%), its expected win rate moves about +12% when switching from a P to a T opponent.
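For anyone who wants to reproduce those examples, here is a minimal sketch of reading them off the fitted logit specification, assuming the melted_data frame built in the code section at the end of this post (the two skill levels are simply those bots’ observed averages):

# Fit the logit specification from the code section below.
logit_fit <- glm(I(match_pct/100) ~ skill_level + factor(opp_race) - 1,
                 data = melted_data, family = binomial(link = "logit"))
# Hypothetical bots at two points of the skill distribution, each facing P and T opponents.
newdata <- expand.grid(skill_level = c(90.09, 40.91), opp_race = c("P", "T"),
                       stringsAsFactors = FALSE)
newdata$pred_win_rate <- predict(logit_fit, newdata, type = "response") * 100
newdata
# Change in predicted win rate when switching from a P opponent to a T opponent:
subset(newdata, opp_race == "T")$pred_win_rate - subset(newdata, opp_race == "P")$pred_win_rate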
Conclusion:
After controlling for the skill level of individual bots (which is by far the dominant factor in game wins), we find some modest suggestion that race is a relevant factor in match-up performance. In general, bots under-perform against opponents of the race P and over-perform against the race T, but this effect is small at the higher end of the skill distribution.
Code:
library(readr)
library(reshape)
# Get the data - copy/paste from the online table. Note that spaces in names cause some
# rudeness in RStudio with autocomplete; I did a find-and-replace in the CSV beforehand.
SSCAITdata <- as.data.frame(readr::read_csv("C:/Users/Bryan/Desktop/SSCAITdata.csv"))
SSCAITdata <- SSCAITdata[, -c(1, 4, 5)]
# View(SSCAITdata)

melted_data <- reshape::melt(SSCAITdata, id.vars = names(SSCAITdata)[1:3]) # separate the win-losses
melted_data$wins   <- as.numeric(sapply(strsplit(as.character(melted_data$value), "\\-"), "[", 1))
melted_data$losses <- as.numeric(sapply(strsplit(as.character(melted_data$value), "\\-"), "[", 2))
melted_data <- melted_data[!is.na(melted_data$Race), ]

present_elements <- !is.na(SSCAITdata$Race)
melted_data$opp_race <- rep(SSCAITdata$Race[present_elements], each = sum(present_elements)) # make sure the races line up
names(melted_data)[1] <- "own_race"
melted_data <- melted_data[!is.na(melted_data$value), ] # drop games with no opponent
melted_data$match_pct <- melted_data$wins / (melted_data$wins + melted_data$losses) * 100

skill_level <- aggregate(match_pct ~ Bot, data = melted_data, mean)
names(skill_level)[2] <- "skill_level"
melted_data <- merge(melted_data, skill_level, all.x = TRUE)
melted_data$outsized_performance <- melted_data$match_pct - melted_data$skill_level
melted_data$matchup_type <- paste0(melted_data$own_race, "v", melted_data$opp_race)
# View(melted_data)
aggregate(outsized_performance ~ matchup_type, data = melted_data, mean) # compare to below:

# Full own-race by opponent-race interaction:
summary(lm(match_pct ~ skill_level + factor(own_race):factor(opp_race) - 1, data = melted_data))

# With intercept terms, like most people expect:
summary(lm(match_pct ~ skill_level + factor(opp_race), data = melted_data))
summary(lm(match_pct ~ skill_level + I(skill_level^2) + I(skill_level^3) + factor(opp_race), data = melted_data))
summary(glm(I(match_pct/100) ~ skill_level + factor(opp_race), data = melted_data, family = binomial(link = "logit")))

# Without intercept terms, for clean presentation:
summary(lm(match_pct ~ skill_level + factor(opp_race) - 1, data = melted_data))
summary(lm(match_pct ~ skill_level + I(skill_level^2) + I(skill_level^3) + factor(opp_race) - 1, data = melted_data))
summary(glm(I(match_pct/100) ~ skill_level + factor(opp_race) - 1, data = melted_data, family = binomial(link = "logit")))

melted_data$prediction <- predict(glm(I(match_pct/100) ~ skill_level + factor(opp_race) - 1,
                                      data = melted_data, family = binomial(link = "logit")),
                                  melted_data, type = "response")
View(melted_data)
Undermind Interview
Below I talk with a few programming experts about CUNYbot and the Starcraft-BW programming scene, as well as about causal inference in games (forthcoming paper in IEEE-CoG):
CUNYbot is currently undergoing a lot of revisions. The code for many sections has been scuttled and redesigned from the ground up. Much “technical debt” has been paid, and the code is approaching clarity. Speed has improved, and many already-established systems have been incorporated.
Python and Zero-Inflated Models
While this is hardly a tutorial, I’ve been spending a good deal of time working with zero-inflated data for a forthcoming project, and have worked with it extensively in the past.
The point of zero-inflated models is that there are ultimately two sources of zeros: zeros can come from the primary count model (usually Poisson), or they can be injected into the overall model from some other source. For example, casualties from terror attacks may be zero-inflated. Either there is no attack on a particular day (0 casualties), or an attack occurs but no one is hurt (also 0 casualties). However, we may want to model daily casualties as a single model, so we need both parts in one place. (Seminal ZIP paper, my applied paper on ZIP)
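Formally, a standard ZIP model mixes a point mass at zero with a Poisson count, so a zero can come from either part:

$$P(Y_i = 0) = \pi_i + (1 - \pi_i)\,e^{-\lambda_i}, \qquad P(Y_i = k) = (1 - \pi_i)\,\frac{\lambda_i^{k} e^{-\lambda_i}}{k!}, \quad k = 1, 2, \dots$$

with the inflation probability $\pi_i$ typically modeled through a logit in covariates and $\ln \lambda_i$ linear in covariates, which is the structure the simulation below mimics.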
Here’s some code creating ZIP data, and then testing the ZIP model in Python to see if it properly recovers the coefficients I am looking for. Indeed, it does. Since the ZIP models in Python are not designed around the outputs economists typically look for (e.g., no single table of coefficient estimates and standard errors), this is a good exploration into ZIP models, and a way to look around Python, the code du jour.
ZIP
from matplotlib import pyplot as plt
import numpy as np
import math as math
import pandas as pd
from scipy import stats
import statsmodels.discrete.count_model as reg_models
# help(reg_models)
np.random.seed(123456789)
N = 100000
x_1 = np.random.normal(5, 1, size=N)
x_2 = np.random.normal(5, 1, size=N)
x = pd.DataFrame([x_1, x_2]).T
poisson_part = np.zeros(N)  # needed to initialize the test objects
zi_part = np.zeros(N)
for i, item in enumerate(x_1):
    # Poisson part: the Poisson parameter is of the form e^(Bx), i.e. ln(lambda) = Bx
    poisson_part[i] = np.random.poisson(math.exp(0.2*x_1[i] - 0.1*x_2[i]))
for i, item in enumerate(x_1):
    # Zero-inflation part: a logistic draw decides whether the count process is switched on
    zi_part[i] = np.random.logistic(0.3*x_1[i] - 0.2*x_2[i]) > 0
y = zip_model_data = poisson_part * zi_part
print(x.iloc[0:10, :])
print(poisson_part[0:10])
print(zi_part[0:10])
print(y[0:10])
out = reg_models.ZeroInflatedPoisson(y, x, x, inflation='logit')
fit_regularized = out.fit_regularized(maxiter=100)  # essentially forces convergence by penalizing; biases the estimates
fit_regularized.params  # notice these are regularized values, not the true values; the ordinal ranking of the coefficients is about right, though
fit = out.fit(method='bfgs', maxiter=1000)  # may need more than the default 35 iterations, a very small number!
fit.params
Nailed it! Regularization had a noticeable effect on the estimates, but the unregularized fit recovers essentially the true coefficients I was attempting to recover.
Note that ZIP in python does not automatically assume the existence of an intercept term. Nor does it automatically calculate the standard errors. I’ll be bootstrapping those soon enough.
AI and Economics
I have just released my first version of an AI to play the classic 1998 Starcraft: Broodwar. My goal with this project is to combine economic fundamentals and behavioral tendencies with existing machine learning techniques to create a versatile and lifelike opponent.
This is a very active area of research; indeed, a publication was made on this topic yesterday. Further literature is available in the collections listed here, here, and here. Major competitors in this field include Facebook, dozens of PhD projects from major universities, and some of the best private-sector programmers and developers from around the world. I am currently competing in the largest tournament of AI bots, SSCAIT, where you can see my bot play live 24/7 for your viewing pleasure and academic interest.
My AI was also selected for review in the weekly report, and this review gives an excellent demonstration of its macro-economic leanings. In this match, it makes aggressive and early investment into its long-term workforce, and makes minimal investment in the short-run returns of combat units. The effect is an overwhelming crush of units in the mid-to-late game.
In this next match, I play a similar macro style, but aggressively manage complex cloaking and air attacks at the same time.
Regardless, this AI learns from every game, and every opponent is different, so each match is a unique experience. Expect lots more fun game play from it soon!
Features of the AI:
- Uses macroeconomic modeling to evaluate the state of its own internal economy. Maximizes its total output.
- Learns over time to tune its economy to match each opponent. For this, I use a genetic algorithm with a long memory and a low mutation rate to slowly move in the right direction as the bot plays dozens and dozens of games.
- Uses no preexisting map libraries or open source bots, written entirely from scratch in C++.
- High performance even in full combat situations with many units on both sides. Few lag-outs or crashes (indeed, all on record are my fault for accidentally uploading the bot’s memory files incorrectly).
You can see my bot’s page directly on SSCAIT here.
Other assorted press about my bot is below:
Behavioral Economics and In-class Exercises
I wanted to spend a moment to report on the incredible success I have been having with in-class exercises. Students have been engaging the content more directly, understanding of the immediate subject matter has improved, and, much to my surprise, interest continues on to subjects beyond the immediate exercise. This continuation of interest for the entire two-hour class period suggests more than mere amusement is involved; actual interest in the subject has been created. Perhaps the activities have moved their reference point to a more engaged and interested one.
I would like to share some of the excellent exercises that have been done with our student body. This is not an exhaustive set of these activities; there are many more available. Many thanks to the Virginia Econ Lab for its excellent work in digitizing some of these classic class exercises.
The first exercise, best used as an opener, introduces the idea that individuals are well described by the rational model, and shows how the class naturally arrives at the results we expect from classical economics. Some of the class act as suppliers, some as buyers, and random values for the good are assigned. They enter a mock marketplace and trade the goods among themselves. The program’s visual output is very good (although a little too colorful for my taste), and the learning is very visible.
Total time was maybe 15-20 minutes of execution and 5-10 minutes of setting up and logging into the computers, costs that have fallen with repeated exercises.
I thought it was best to lead the exercise with:
- Review of PPFs
- Refresh basic supply and demand model.
- Review of basic rational model for firms
I thought the gains from the exercise included:
- More confidence in the validity of the rational-agent model.
- More interest in rational agents.
- Students put the idea of rationality into contrast with behavioral models and procedurally rational models.
- Strong retention and application of supply and demand and of the rational model for firms.
We have, since then, conducted several simulations where students interacted directly with rational (and procedurally rational) models. I have programmed these simulations myself and they are available here: Simulating Optimization Problems and Production Possibility Frontiers and Simulations of Optimization over Time. I would recommend these to any undergraduate body that has easy access to a PC lab, and more detailed interaction with the models is suitable for early graduate students. I note that the programs do not generate exact correct answers, so one can still ask homework questions on the subject.
Most recently, we have conducted auctions to examine how individuals handle different auction schemes. Rational results suggest that individuals bid under their valuation in first-price sealed-bid auctions, and bid exactly their valuation in second-price sealed-bid auctions. But people do not always perform to those expectations, and that should be illustrated to students.
Some sample output from this program might look like below:
We would hope it would clearly show variation between the two auctions, like the plot above. The second-price auctions lead to significantly higher prices overall, and will often lead to students bidding higher than their valuation.
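For contrast, here is a minimal R sketch of the rational benchmark (not the classroom program itself): risk-neutral bidders with independent uniform valuations shade their bids to (n-1)/n of their valuation in the first-price auction and bid truthfully in the second-price auction. Under that benchmark the two formats raise the same revenue on average, which is part of what makes the classroom deviations interesting.

set.seed(1)
n_bidders  <- 4
n_auctions <- 1000
first_price <- replicate(n_auctions, {
  v    <- runif(n_bidders)                 # private valuations
  bids <- (n_bidders - 1) / n_bidders * v  # equilibrium bid shading
  max(bids)                                # winner pays their own bid
})
second_price <- replicate(n_auctions, {
  v <- runif(n_bidders)                    # truthful bidding is optimal,
  sort(v, decreasing = TRUE)[2]            # so the winner pays the second-highest valuation
})
mean(first_price); mean(second_price)      # both are close to (n-1)/(n+1) = 0.6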
I thought it was best to lead the exercise with:
- Complete review of behavioral ideas, particularly:
- Transactional Utility
- Risk Aversion
- Reference Points
- Mental Accounting
I thought the gains from the exercise included:
- Better identification among the auction types:
- Vickrey
- English
- Dutch
- Students correctly identified explanations for behavior.
- Students internalized the optimal bidding value for first-price sealed-bid auctions over uniform distributions (see the formula below).
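For reference, with $n$ bidders whose valuations are drawn independently from a uniform distribution, the symmetric equilibrium bid in the first-price sealed-bid auction is

$$b^{*}(v) = \frac{n-1}{n}\,v,$$

so bidding a fixed fraction just below one’s valuation is the rational benchmark there, while bidding one’s valuation exactly is optimal in the second-price (Vickrey) auction.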
Simulations of Optimization over Time
The Behavioral Economics class incorporates a discussion of consumption smoothing, the idea that people prefer gradual changes in consumption over time, so agents maintain a smooth relationship between consumption in one period and the next. In fact, it can be shown (with simple log utility functions and constant prices) that the ratio between consumption tomorrow ($c_{t+1}$) and consumption today ($c_t$) is:

$$\frac{c_{t+1}}{c_t} = \delta$$

Where $\delta$ is the “discount rate”, the relative value the agent places on tomorrow versus today.
Below is a simulation I designed to help demonstrate this concept to my class. It shows an agent optimizing a three-period consumption problem. We always pause to note how the marginal value of consumption declines smoothly every period, and that the discounted marginal utility is nearly the same each period. (The simulation residuals are surprisingly large, but the result is nevertheless illustrative.) The simulation output marks the chosen consumption in each period with a red line, against a good sampling of the other feasible consumption points in black.
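Concretely, with utility $\log(c_1) + \delta \log(c_2) + \delta^2 \log(c_3)$ and all prices equal to one, the first-order conditions equate the discounted marginal utilities across periods:

$$\frac{1}{c_1} = \frac{\delta}{c_2} = \frac{\delta^2}{c_3} = \lambda,$$

so $c_2 = \delta c_1$ and $c_3 = \delta^2 c_1$. With $y = 10$ and $\delta = 0.85$ as in the code below, the exact optimum is roughly $c_1 \approx 3.89$, $c_2 \approx 3.30$, and $c_3 \approx 2.81$, which is the benchmark the simulated red lines should land near.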
#Define terms
delta <- 0.85
p1 <- 1
p2 <- 1
p3 <- 1
y  <- 10
#Checking any options inside the constraint or on the constraint
x1 <- runif(100000, min = 0, max = y/p1)
x2 <- runif(100000, min = 0, max = (y - p1*x1)/p2)
#Checking only on the constraint. Assumes no leftover resources.
x3 <- y - p1*x1 - p2*x2

#Typical utility function
U  <- log(x1) + delta * log(x2) + delta^2 * log(x3)
U1 <- log(x1)              # utility of period 1
U2 <- delta * log(x2)      # discounted utility of period 2
U3 <- delta^2 * log(x3)    # discounted utility of period 3

par(mfrow = c(1, 3))
plot(x1, U1, ylim = c(0, 2.5))
abline(v = x1[which(U == max(U))], col = "red")
plot(x2, U2, ylim = c(0, 2.5))
abline(v = x2[which(U == max(U))], col = "red")
plot(x3, U3, ylim = c(0, 2.5))
abline(v = x3[which(U == max(U))], col = "red")

x1_star <- x1[which(U == max(U))]
x2_star <- x2[which(U == max(U))]
x3_star <- x3[which(U == max(U))]
x1_star
x2_star
x3_star
delta

#Marginal utility (the marginal utility of log consumption is 1/c)
1/x1_star; 1/x2_star; 1/x3_star
#Discounted marginal utility
1/x1_star; delta * 1/x2_star; delta^2 * 1/x3_star # discounted marginal utilities are nearly identical
Simulating Optimization Problems and Production Possibility Frontiers
In teaching Behavioral Economics, optimization problems require some intuition, and that intuition can be opaque without calculus literacy. Below is a simulation to demonstrate that the process of constrained optimization works. It has the added benefit of showing isoquants (the colored stripes in the image below) and the strict boundary condition of the efficiency frontier.
Basic constrained optimization problems are of the form:

$$\max_{x_1, x_2} U(x_1, x_2) \quad \text{subject to} \quad p_1 x_1 + p_2 x_2 \le y$$

I have made code in R to simulate the results for a two-good optimization problem. The example uses $U(x_1, x_2) = \sqrt{x_1} + \sqrt{x_2}$ as the functional form.
library("plot3D")
p1 <- 1
p2 <- 2
y  <- 10
#Checking any options inside the constraint or on the constraint
x1 <- runif(25000, min = 0, max = y/p1)
x2 <- runif(25000, min = 0, max = (y - p1*x1)/p2)
U  <- sqrt(x1) + sqrt(x2)
out <- mesh(x1, x2, U) # full grid of candidate points (not used below)
points3D(x1, x2, U, xlab = "x1", ylab = "x2", zlab = "Utility", phi = -90, theta = 90)
plot(x1, U)
abline(v = x1[which(U == max(U))], col = "red")
x1_star <- x1[which(U == max(U))]
x2_star <- x2[which(U == max(U))]
y - x1_star*p1 - x2_star*p2 # leftover budget at the simulated optimum
And it outputs the following plots (with minor variation). Note that the colored bands represent utility curves, i.e., isoquants. The outer edge of the colored points represents the efficiency frontier.
The actual solution is found by:

$$\max_{x_1, x_2} \; \sqrt{x_1} + \sqrt{x_2}$$

subject to:

$$p_1 x_1 + p_2 x_2 = y$$

The Lagrangian is then:

$$\mathcal{L} = \sqrt{x_1} + \sqrt{x_2} + \lambda\,(y - p_1 x_1 - p_2 x_2)$$

Leading to the first-order conditions (derivatives of $\mathcal{L}$):

$$\frac{1}{2\sqrt{x_1}} = \lambda p_1, \qquad \frac{1}{2\sqrt{x_2}} = \lambda p_2, \qquad p_1 x_1 + p_2 x_2 = y$$

Using these three conditions, we can find the equations:

$$x_2 = \left(\frac{p_1}{p_2}\right)^{2} x_1, \qquad x_1^{*} = \frac{p_2\, y}{p_1\,(p_1 + p_2)}$$

Where, if $p_1 = 1$, $p_2 = 2$, and $y = 10$, then we can solve for the theoretical solutions $x_1^{*} = 20/3 \approx 6.67$ and $x_2^{*} = 5/3 \approx 1.67$.
These theoretical values indeed match the simulated solutions very closely.