Below is an exploration of heavy tails using Python, and some of the problems they present for analysis. Heavy-tailed distributions have extremely "fat tails": a much higher likelihood of extreme values than a normal bell curve, or even a log-normal distribution.


# NYC Traffic Accidents Heatmap

A simple chart in one place.

This figure was a nice-looking variant made for a paper that was ultimately accepted at the EEJ. The figure itself didn't make it in, but it *is* a really good-looking one. It is a heat map of all accidents in NYC from July 2012 through March 2019: black areas have few accidents, brighter areas have more. The roads are highlighted in pink to match. It's fairly obvious that the bulk of accidents are in Manhattan, with only a few in Staten Island.

Mammen, K., Shim, H. S., & Weber, B. S. (2019). Vision Zero: Speed Limit Reduction and Traffic Injury Prevention in New York City. *Eastern Economic Journal*, 1-19.

Made in QGIS.

# A (Quick) Breakdown of SSCAIT Win Rates by AI Race.

**Introduction:**

Overall, I find that *after controlling for individual bot skill level*, bots tend to under-perform against Protoss (P) bots and over-perform against Terran (T) bots, and possibly Zerg (Z) bots as well. Disclaimers, methodology, robustness, conclusion to follow.

**Disclaimers:**

This does not mean that race P is superior or race T inferior. It only means bots perform worse against P opponents than their past performance would suggest, and better against T. It is entirely possible that exploits against P bots are harder to identify, or that the bot meta favors strategies played by P (e.g., late-game death-ball style plays). This prediction is also honed to the average skill level; at the tails of the skill distribution, these results will vary.

**Methodology:**

First, let's dump the data from here into Excel -> CSV -> R. I had to add the races manually, though. I'm not really convinced that data formats other than CSV ought to even exist, but that's just me.

Here’s the meat of it:

- Get each bot's overall win rate across its matches. (Here it is an average of the match results, each of which is 100%, 50%, or 0%.)
- Assume this win percentage is a good proxy for the bot's overall skill level. This needs a bit of explanation. It is already known that the Locutus family is strong, for example, and those bots are likely to win most match-ups regardless of race (90%+). We also know they always play Protoss. Let's try to decompose the win rates into the portion which is approximately due to bot skill level and the portion which is due to race match-ups. We need some proxy for overall skill level since we are trying to *compare the win rates of each race after excluding any effect of skill*.
- Estimate the following:

match_pct = β · skill_level + γ_P · 1{opponent is P} + γ_T · 1{opponent is T} + γ_Z · 1{opponent is Z} + ε

Where the coefficients of interest are γ_P, γ_T, and γ_Z. Each of these three coefficients represents how a given bot is expected to perform against an opponent of that race, relative to the bot's average win percentage. Here are the results:

```
Call:
lm(formula = match_pct ~ skill_level + factor(opp_race) - 1,
    data = melted_data)

Residuals:
    Min      1Q  Median      3Q     Max
-90.911 -27.440   1.096  29.510  93.088

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
skill_level        0.99908    0.03827  26.107   <2e-16 ***
factor(opp_race)P -5.57628    2.37514  -2.348   0.0190 *
factor(opp_race)T  4.62700    2.38244   1.942   0.0523 .
factor(opp_race)Z  1.32735    2.47510   0.536   0.5918
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 37.52 on 1976 degrees of freedom
Multiple R-squared:  0.6814,  Adjusted R-squared:  0.6808
F-statistic:  1057 on 4 and 1976 DF,  p-value: < 2.2e-16
```

First, we see skill level is strongly significant and almost exactly 1: if a bot's overall win percentage is 1% higher, we expect its score in each match to be about 1% higher. This is to be expected.

Next, the coefficient on P is -5.6%. That suggests if a given bot (A) is playing a Protoss opponent, bot A's overall match score is about 5.6% lower than its typical skill level would suggest, and this difference is significantly different from 0.

Next, the coefficient on T is +4.6%. That suggests if a given bot (A) is playing a Terran opponent, bot A's overall match score is about 4.6% higher than its typical skill level would suggest, and this difference is also significantly different from 0.

The coefficient on Z is positive but small and insignificant. Bots tend to do about average, maybe slightly better than average, against Z opponents. (It may be significantly different from P or T, though; I am dropping the constant, and comparing coefficients is outside the scope of this quick post.)

A note about R²: R-squared only measures predictive power, and we do not care about predictive power in this context, only the estimated coefficients. If the coefficients are accurate to the real world, a good deal of predictive accuracy can be sacrificed.

**Robustness:**

I have also redone this with a logit model, which is slightly more accurate in the far tails. (In a journal article on this subject we'd probably focus more on this result, but here we can simply give some examples.) The estimated race effect is larger near the bottom of the skill distribution and smaller near the top. Example: given a bot with the skill level of Purplewave (90.09%), the expected win rate moves about +5% when switching from a P opponent to a T opponent. Given a bot with the skill level of SijiaXu (40.91%), its expected win rate moves about +12% when switching from a P to a T opponent.
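To see why the effect shrinks at the top of the skill distribution, note that a fixed race effect on the log-odds scale translates into a smaller probability change near 90% than near 50%. A quick sketch (the 0.5 log-odds shift is a hypothetical stand-in for the estimated P-to-T gap, not the fitted coefficient):

```python
from scipy.special import expit, logit

race_shift = 0.5  # hypothetical P-to-T gap on the log-odds scale, for illustration

for base in (0.9009, 0.4091):  # skill levels of Purplewave and SijiaXu
    moved = expit(logit(base) + race_shift)
    print(f"baseline {base:.2%} -> {moved:.2%} (change {moved - base:+.2%})")
```

With these hypothetical numbers the same shift moves the stronger bot by roughly four points and the weaker bot by roughly twelve: the same compression pattern as the logit results described above.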

**Conclusion:**

After controlling for skill level of individual bots (which is the vast and overwhelming factor in game wins), we find some modest suggestion that race is a relevant factor in matchup performance. In general bots under-perform against opponents of the race P and over-perform against the race T, but this factor is small at the higher end of the skill distribution.

**Code:**

```
library(readr)
library(reshape)

# Get that data - copy/paste from online table. Note spaces in names cause some
# rudeness in RStudio with autocomplete; I did a find-and-replace in the CSV beforehand.
SSCAITdata <- as.data.frame(readr::read_csv("C:/Users/Bryan/Desktop/SSCAITdata.csv"))
SSCAITdata <- SSCAITdata[, -c(1, 4, 5)]
# View(SSCAITdata)

# Separate the win-losses
melted_data <- reshape::melt(SSCAITdata, id.vars = names(SSCAITdata)[1:3])
melted_data$wins <- as.numeric(sapply(strsplit(as.character(melted_data$value), "\\-"), "[", 1))
melted_data$losses <- as.numeric(sapply(strsplit(as.character(melted_data$value), "\\-"), "[", 2))
melted_data <- melted_data[!is.na(melted_data$Race), ]

present_elements <- !is.na(SSCAITdata$Race)
melted_data$opp_race <- rep(SSCAITdata$Race[present_elements], each = sum(present_elements)) # make sure the races line up
names(melted_data)[1] <- "own_race"

melted_data <- melted_data[!is.na(melted_data$value), ] # drop games with no opponent.
melted_data$match_pct <- melted_data$wins / (melted_data$wins + melted_data$losses) * 100

skill_level <- aggregate(match_pct ~ Bot, data = melted_data, mean)
names(skill_level)[2] <- "skill_level"
melted_data <- merge(melted_data, skill_level, all.x = TRUE)
melted_data$outsized_performance <- melted_data$match_pct - melted_data$skill_level
melted_data$matchup_type <- paste0(melted_data$own_race, "v", melted_data$opp_race)
# View(melted_data)

aggregate(outsized_performance ~ matchup_type, data = melted_data, mean) # compare to below:
summary(lm(match_pct ~ skill_level + factor(own_race):factor(opp_race) - 1, data = melted_data))

# With intercept terms, like most people expect:
summary(lm(match_pct ~ skill_level + factor(opp_race), data = melted_data))
summary(lm(match_pct ~ skill_level + I(skill_level^2) + I(skill_level^3) + factor(opp_race), data = melted_data))
summary(glm(I(match_pct / 100) ~ skill_level + factor(opp_race), data = melted_data, family = binomial(link = "logit")))

# Without intercept terms, for clean presentation:
summary(lm(match_pct ~ skill_level + factor(opp_race) - 1, data = melted_data))
summary(lm(match_pct ~ skill_level + I(skill_level^2) + I(skill_level^3) + factor(opp_race) - 1, data = melted_data))
summary(glm(I(match_pct / 100) ~ skill_level + factor(opp_race) - 1, data = melted_data, family = binomial(link = "logit")))

melted_data$prediction <- predict(glm(I(match_pct / 100) ~ skill_level + factor(opp_race) - 1,
                                      data = melted_data, family = binomial(link = "logit")),
                                  melted_data, type = "response")
View(melted_data)
```

# Undermind Interview

I talk with a few experts in programming about CUNYBot and the Starcraft-BW programming scene below, and we also discuss causal inference in games (forthcoming paper in IEEE-CoG):

Currently CUNYbot has been undergoing a lot of revisions. The code for many sections has been scuttled and redesigned from the ground up. Much “technical debt” has been paid, and the code is approaching clarity. Speed is improved and many already established systems have been incorporated.

# Python and Zero-Inflated Models

While this is hardly a tutorial, I’ve been spending a good deal of time working with zero-inflated data for a forthcoming project, and have worked with it extensively in the past.

The point of zero-inflated models is that there are ultimately two sources of zeros: zeros can come from the primary model (usually Poisson), or they can be injected into the total model from some other source. For example, casualties from terror attacks may be zero-inflated. Either there is no attack on a particular day (0 casualties), or an attack occurs but no one is hurt (also 0 casualties). However, we may want to model daily casualties as a single process, so we need both parts in one model. (Seminal ZIP paper, my applied paper on ZIP)

Here's some code creating ZIP data, and then testing the ZIP model in Python to see if it properly recovers the coefficients I am looking for. Indeed, it does. Since the ZIP models in Python are not designed for the outputs economists typically look for (e.g., no single table of coefficient estimates and standard errors), this is a good exploration into ZIP models, and a way to look around Python, the language *du jour*.

**ZIP**

**Load Prerequisites**

```
from matplotlib import pyplot as plt
import numpy as np
import math as math
import pandas as pd
from scipy import stats
import statsmodels.discrete.count_model as reg_models
#help(reg_models)
```

**Generate Mock Data**

```
np.random.seed(123456789)
N = 100000
x_1 = np.random.normal(5, 1, size=N)
x_2 = np.random.normal(5, 1, size=N)
x = pd.DataFrame([x_1, x_2]).T
poisson_part = np.zeros(N)  # needed to initialize before filling in the loop
zi_part = np.zeros(N)
# Note the Poisson parameter is of the form e^(Bx), i.e. ln(lambda) = Bx
for i, item in enumerate(x_1):
    poisson_part[i] = np.random.poisson(math.exp(0.2 * x_1[i] - 0.1 * x_2[i]))
for i, item in enumerate(x_1):
    zi_part[i] = np.random.logistic(0.3 * x_1[i] - 0.2 * x_2[i]) > 0
y = zip_model_data = poisson_part * zi_part
print(x.iloc[0:10, :])
print(poisson_part[0:10])
print(zi_part[0:10])
print(y[0:10])
```

```
out=reg_models.ZeroInflatedPoisson(y,x,x, inflation='logit')
```

```
fit_regularized=out.fit_regularized(maxiter = 100) #essentially forces convergence by penalizing. Biases estimates.
```

```
fit_regularized.params # notice these are regularized values, not the true values; the penalty shrinks the estimates
```

```
fit=out.fit(method='bfgs', maxiter = 1000) # May need more than the default 35 iterations, very small number!
```

```
fit.params
```

Nailed it! Regularization had a noticeable effect on the estimates, but these are the true coefficients I was attempting to recover.

Note that ZIP in python does not automatically assume the existence of an intercept term. Nor does it automatically calculate the standard errors. I’ll be bootstrapping those soon enough.
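A minimal sketch of what that bootstrap could look like: resample rows with replacement, refit, and take the standard deviation of the coefficient estimates. (The data-generating step mirrors the mock data above, with a smaller N and only a handful of replications to keep the runtime down; in practice you would use a few hundred.)

```python
import numpy as np
import pandas as pd
import statsmodels.discrete.count_model as reg_models

# Generate a small ZIP data set, as above.
rng = np.random.default_rng(0)
N = 1500
x1 = rng.normal(5, 1, N)
x2 = rng.normal(5, 1, N)
x = pd.DataFrame({"x_1": x1, "x_2": x2})
pois = rng.poisson(np.exp(0.2 * x1 - 0.1 * x2))
zi = rng.logistic(0.3 * x1 - 0.2 * x2) > 0
y = pois * zi

# Bootstrap: refit on resampled rows and collect the parameter vectors.
reps = []
for _ in range(10):  # use a few hundred replications in practice
    idx = rng.integers(0, N, N)
    fit = reg_models.ZeroInflatedPoisson(
        y[idx], x.iloc[idx], x.iloc[idx], inflation="logit"
    ).fit(method="bfgs", maxiter=1000, disp=False)
    reps.append(fit.params)

boot_se = pd.DataFrame(reps).std()  # bootstrap standard errors
print(boot_se)
```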

# AI and Economics

I have just released my first version of an AI to play the classic 1998 Starcraft: Broodwar. My goal with this project is to combine economic fundamentals and behavioral tendencies with existing machine learning techniques to create a versatile and lifelike opponent.

This is a very active area of research, indeed, a publication was made on this topic yesterday. Further literature is available in the collections listed here, here, and here. Major competitors in this field include Facebook, dozens of PhD projects from major universities, as well as some of the best private sector programmers and developers from around the world. I am currently competing in the largest tournament of AI bots, SSCAIT, where you can see my bot play live 24/7 for your viewing pleasure and academic interest.

My AI was also selected for review in the weekly report, and this review gives an excellent demonstration of its macro-economic leanings. In this match, it makes aggressive and early investment in its long-term workforce, and minimal investment in the short-run returns of combat units. The effect is an overwhelming crush of units in the mid-to-late game.

In this next match, I play a similar macro style, but aggressively manage complex cloaking and air attacks at the same time.

Regardless, this AI learns from every game, and every opponent is different, so each match is a unique experience. Expect lots more fun game play from it soon!

Features of the AI:

- Uses macroeconomic modeling to evaluate the state of its own internal economy. Maximizes its total output.
- Learns over time to tune its economy to match each opponent. For this, I use a genetic algorithm with a long memory and a low mutation rate to slowly move in the right direction as the bot plays dozens and dozens of games.
- Uses no preexisting map libraries or open source bots, written entirely from scratch in C++.
- High performance even in full combat situations with many units on both sides. Few lag-outs or crashes (indeed, all on record are my fault for accidentally uploading the bot’s memory files incorrectly).

You can see my bot’s page directly on SSCAIT here.

Other assorted press of my bot are below:

# Behavioral Economics and In-class Exercises

I wanted to spend a moment to report on the incredible success I have been having using in-class exercises. Students have been engaging the content more directly, understanding of the immediate subject matter has improved, and much to my surprise, *interest continues on to subjects beyond the immediate exercise*. This continuation of interest for the entire 2-hour class period suggests more than mere amusement is involved; actual interest in the subject has been created. Perhaps the activities have moved their reference point to a more engaged and interested one.

I would like to share some of the excellent exercises that have been done with our student body. This is not an exhaustive set of these activities; there are many more available. Many thanks to the Virginia Econ Lab for its excellent work in digitizing some of these classic class exercises.

The first exercise, best as an opener, introduces the idea that individuals are well described by the rational model, and shows how the class naturally arrives at the results we expect from classical economics. Some of the class act as suppliers, some as buyers, and random values for the good are assigned. They enter a mock marketplace and trade goods among themselves. The program's visual output is very good (although a little too colorful for my taste), and learning is very visible.

Total time was maybe 15-20 minutes of execution and 5-10 minutes of setup and logging into the computers, costs that have fallen with repeated exercises.

I thought it was best to lead the exercise with:

- Review of PPFs
- Refresh basic supply and demand model.
- Review of the basic rational model for firms.

I thought the gains from the exercise included:

- More confidence in the validity of the rational-agent model.
- More interest in rational agents

- Students put the idea of rationality into contrast with behavioral models and procedurally rational models.
- Strong retention and application of supply and demand, rational model for firms.

We have, since then, conducted several simulations where students interacted directly with rational (and procedurally rational) models. I have programmed these simulations myself and they are available here: Simulating Optimization Problems and Production Possibility Frontiers and Simulations of Optimization over Time. I would recommend these to any undergraduate body with easy access to a PC lab; more detailed interaction with the models is suitable for early graduate students. I note that the programs do not generate exact correct answers, so one can still ask homework questions on the subject.

Most recently, we have conducted auctions to examine how individuals handle different auction schemes. Rational results suggest that individuals bid under their valuation in first-price sealed-bid auctions, and bid their valuation in second-price sealed-bid auctions. But people do not always perform to those expectations, and this should be illustrated to students.

Some sample output from this program might look like below:

We would hope it would clearly show variation between the two auction types, as in the plot above. The second-price auctions lead to significantly higher prices overall, and often lead to students bidding above their valuation.

I thought it was best to lead the exercise with:

- Complete review of behavioral ideas, particularly:
- Transactional Utility
- Risk Aversion
- Reference Points
- Mental Accounting

I thought the gains from the exercise included:

- Better identification among each auction type:
- Vickrey
- English
- Dutch

- Students correctly identified explanations for behavior.
- Students internalized the optimal bidding value for first-price sealed-bid auctions over uniform distributions.
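For reference, the rational benchmark can be simulated directly. Assuming n bidders with i.i.d. Uniform(0,1) valuations (a toy setup of mine, not the lab's actual parameters), the equilibrium first-price bid is (n-1)/n times valuation, and under rational bidding the two formats raise the same expected revenue; the higher second-price totals seen in class are the behavioral deviation:

```python
import numpy as np

# Rational-bidder benchmark: n bidders, i.i.d. Uniform(0,1) valuations.
rng = np.random.default_rng(42)
n, rounds = 4, 100_000
vals = rng.uniform(size=(rounds, n))

# First-price: everyone shades to b = (n-1)/n * v; the highest bid wins and is paid.
first_price_revenue = ((n - 1) / n * vals).max(axis=1).mean()
# Second-price: truthful bids; the winner pays the second-highest valuation.
second_price_revenue = np.sort(vals, axis=1)[:, -2].mean()

print(f"first-price revenue:  {first_price_revenue:.3f}")
print(f"second-price revenue: {second_price_revenue:.3f}")
# Both approach (n-1)/(n+1) = 0.6 for n = 4: revenue equivalence.
```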

# Simulations of Optimization over Time

Behavioral Economics class incorporates a discussion about *consumption smoothing*, the idea that people prefer gradual changes in consumption over time, so agents keep a smooth relationship between consumption in one period and the next. In fact, it can be shown (with simple log utility functions) that the ratio between consumption tomorrow (c2) and consumption today (c1) is:

c2 / c1 = δ

Where δ (delta) is the "discount rate", the value the agent places on tomorrow relative to today.

Below is a simulation I designed to help demonstrate this concept to my class. It shows an agent struggling to optimize a three-period consumption model. We always pause to note how the marginal value of consumption smoothly declines every period, and that the discounted marginal utility is nearly the same each period. (The model simulation residuals are surprisingly large, but nevertheless illustrative.) The simulation output indicates the consumption in each period with the red line, and a good estimate of the other possible consumption points in black.

```
# Define terms
delta <- .85
p1 <- 1
p2 <- 1
p3 <- 1
y <- 10

# Checking any options inside the constraint or on the constraint
x1 <- runif(100000, min = 0, max = y / p1)
x2 <- runif(100000, min = 0, max = (y - p1 * x1) / p2)

# Checking only on the constraint. Assumes no leftover resources.
x3 <- y - p1 * x1 - p2 * x2

# Typical utility function
U <- log(x1) + delta * log(x2) + delta^2 * log(x3)
U1 <- log(x1)
U2 <- delta * log(x2)   # discounted utility of period 2.
U3 <- delta^2 * log(x3) # discounted utility of period 3.

par(mfrow = c(1, 3))
plot(x1, U1, ylim = c(0, 2.5))
abline(v = x1[which(U == max(U))], col = "red")
plot(x2, U2, ylim = c(0, 2.5))
abline(v = x2[which(U == max(U))], col = "red")
plot(x3, U3, ylim = c(0, 2.5))
abline(v = x3[which(U == max(U))], col = "red")

x1_star <- x1[which(U == max(U))]
x2_star <- x2[which(U == max(U))]
x3_star <- x3[which(U == max(U))]
x1_star
x2_star
x3_star
delta

# Marginal utility (for log utility, MU = 1/x)
1 / x1_star; 1 / x2_star; 1 / x3_star

# Discounted marginal utility
1 / x1_star; delta * 1 / x2_star; delta^2 * 1 / x3_star # Discounted marginal utilities are nearly identical.
```
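The simulation can be checked against the closed-form answer. With all prices equal to 1, the first-order conditions give c2 = δ·c1 and c3 = δ²·c1, so c1 = y / (1 + δ + δ²). A quick check in Python:

```python
# Closed-form solution of the three-period log-utility problem (all prices 1):
# maximize log(c1) + d*log(c2) + d^2*log(c3) subject to c1 + c2 + c3 = y.
delta, y = 0.85, 10
c1 = y / (1 + delta + delta**2)
c2 = delta * c1
c3 = delta**2 * c1
print(round(c1, 3), round(c2, 3), round(c3, 3))

# Discounted marginal utilities 1/c1, d/c2, d^2/c3 are exactly equal at the optimum:
print(1 / c1, delta / c2, delta**2 / c3)
```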

# Simulating Optimization Problems and Production Possibility Frontiers

In teaching Behavioral Economics, optimization problems require some intuition. This intuition can be opaque without calculus literacy. Below is a simulation to demonstrate that the process for constrained optimization *works*. It has the added benefit of showing isoquants (by colored stripes in the image below), and the strict boundary condition of the efficiency frontier.

Basic constrained optimization problems are as follows:

max U(x1, x2) subject to p1·x1 + p2·x2 ≤ y

I have made code in R to simulate the results for a two-good optimization process. The example uses U = √x1 + √x2 as the functional form.

```
library("plot3D")

p1 <- 1
p2 <- 2
y <- 10

# Checking any options inside the constraint or on the constraint
x1 <- runif(25000, min = 0, max = y / p1)
x2 <- runif(25000, min = 0, max = (y - p1 * x1) / p2)
U <- sqrt(x1) + sqrt(x2)

out <- mesh(x1, x2, U)
points3D(x1, x2, U, xlab = "x1", ylab = "x2", zlab = "Utility", phi = -90, theta = 90)

plot(x1, U)
abline(v = x1[which(U == max(U))], col = "red")

x1_star <- x1[which(U == max(U))]
x2_star <- x2[which(U == max(U))]
y - x1_star * p1 - x2_star * p2
```

And it outputs the following plots (with minor variation). Note that the colored bands represent utility curves, *isoquants*. The end of the colored points represents the *efficiency frontier*.

The actual solution is found by:

max √x1 + √x2

subject to:

p1·x1 + p2·x2 = y

The Lagrangian is then:

L = √x1 + √x2 + λ(y − p1·x1 − p2·x2)

Leading to the first order conditions (derivatives of L):

1/(2√x1) = λ·p1,  1/(2√x2) = λ·p2,  p1·x1 + p2·x2 = y

Using these 3 conditions, we can find the equations:

x2 = (p1/p2)²·x1,  x1·(p1 + p1²/p2) = y

Where, if p1 = 1, p2 = 2, and y = 10, then we can solve for the theoretical solutions:

x1* = 20/3 ≈ 6.67,  x2* = 5/3 ≈ 1.67

These indeed match very closely with the real solutions.
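Those theoretical solutions are cheap to verify numerically. Using U = √x1 + √x2 and the prices from the R code (p1 = 1, p2 = 2, y = 10), the first-order conditions imply x2 = (p1/p2)²·x1 and x1* = y·p2 / (p1·(p1 + p2)):

```python
# Closed-form solution of max sqrt(x1) + sqrt(x2) s.t. p1*x1 + p2*x2 = y.
p1, p2, y = 1, 2, 10
x1_star = y * p2 / (p1 * (p1 + p2))   # from the first-order conditions
x2_star = (p1 / p2) ** 2 * x1_star    # demand for good 2 in terms of good 1
print(x1_star, x2_star)               # 20/3 and 5/3
print(p1 * x1_star + p2 * x2_star)    # spends the whole budget y
```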

# The Optimum Pokemon Portfolio and Principal Component Decomposition (PCD) using R

I have very recently completed the Stanford Lagunita online course on Statistical Learning, and Tibshirani & Hastie have taught me a great deal about principal components. No learning is complete without exercises, however, so I have found a wonderful data set that seems popular: the attacks and weaknesses of Pokemon. (I am, admittedly, not a Pokemon player, so I have had to ask others to help me understand some of the intricacies of the game.)

*Principal Component Decomposition:*

First and foremost, *principal component decomposition* finds the direction that maximizes variation in the data. This direction is the leading *eigenvector* of the data's covariance matrix, the direction that best describes the spread of the data.

For example, if there is a spill of dirt on a white tile floor, the direction of the spill (the leading eigenvector) would be the direction along which the dirt is most widely spread (the first principal component).
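The dirt-spill picture is easy to reproduce numerically: scatter correlated points along a known direction and recover that direction as the leading eigenvector of the covariance matrix (a toy sketch with NumPy, not the Pokemon data):

```python
import numpy as np

# Points spread mostly along the 45-degree line, with a little cross-wise noise.
rng = np.random.default_rng(0)
spread = rng.normal(0, 3, 500)    # variation along the "spill"
noise = rng.normal(0, 0.3, 500)   # variation across it
pts = np.column_stack([spread + noise, spread - noise])

# The leading eigenvector of the covariance matrix is the first principal component.
eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
pc1 = eigvecs[:, np.argmax(eigvals)]
print(pc1)  # close to (0.707, 0.707) up to sign: the 45-degree direction
```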

After looking at the beautiful charts used in the link above, I realized this would be very interesting to do a PCD on. What Pokemon are most similar and which are most different in terms of strengths and weaknesses? To find out we will break it into its principal components, and find out in which directions the data is spread out.

Pokemon can vary along 18 dimensions of strengths and weaknesses, since there are 18 types of Pokemon. This means there can be up to 18 principal components. We are not sure which principal components are useful without investigation, so we show below how much variation is explained by each component. There doesn't appear to be any clear point where the principal components drop off in usefulness; perhaps the first 3 or the first 5 capture the most variation. The amount of variation captured by each principal component is outlined below.

Let us now look at the principal components of the Pokemon attack/weakness chart directly. We can visualize them in a biplot, where the arrows show the general attacking direction of each Pokemon type and the black labels show the defending types. The distance from the center of the biplot shows the deviation of that Pokemon type along the principal components. Labels that are close together are more similar than those further apart.

So for example, Ghost attacks (arrows) are closely aligned with Ghost defence (black label) and Dark defence (black label). In general, the Pokemon that are most different in defence are Fighting and Ghost, and these are still again distinct from Flying and Ground defence. This suggests that if you wanted a Pokemon portfolio that would be very resilient to attack, you would want Fighting/Ghost types. If you want a variety of attacks, you might want to look into Ghost/Normal types or Grass/Electric.

Keep in mind that together these explain only about 35.5% of the variation of Pokemon types; there are other dimensions in which Pokemon vary. I expected Fire and Water to be more clearly different (and they are very distinct, going opposite directions for a long distance from the center!), but they are less distinct than Ghost/Normal.

*The Optimum Pokemon Portfolio:*

This led me to wonder what type of Pokemon portfolio would be best against the world, something outside the scope of the Statistical Learning course but well within my reach as an economist. Since I don't know what the Pokemon world looks like, I assumed the Pokemon that show up are of a randomly and evenly selected type. (This is a relatively strong assumption; Pokemon encounters are likely not evenly distributed among the types.) The question is then: what type of Pokemon should we collect to be the best against a random encounter, assuming we simply reach into our bag and grab the first Pokemon we see to fight with?

First, I converted the matrix of strengths and weaknesses above into one that describes the spread of the strength-weakness gap. That is to say, if Water attacks Fire at 200% effectiveness and defends at 50% effectiveness, a fight between Water and Fire is +150% more effective for Water than a regular Pokemon attack (say Normal vs. Normal or Ice vs. Ice). Any bonus a Pokemon may have against its own type was discarded, because it would be pointless. The chart for this, much like the wonderful link that got me the data in the first place, is here, where red is bad and blue is good:
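The conversion is just the attack multiplier minus the defence multiplier, each measured relative to 100%. A toy version with a made-up two-type chart (hypothetical numbers, not the real 18-type data) reproduces the +150% Water-vs-Fire example:

```python
import numpy as np

# Rows attack, columns defend; entries are effectiveness multipliers.
chart = np.array([[1.0, 2.0],    # row 0: "Water" attacking
                  [0.5, 1.0]])   # row 1: "Fire" attacking

# Attack bonus minus the opponent's bonus when the roles reverse:
gap = (chart - 1) - (chart.T - 1)
print(gap)  # gap[0, 1] == 1.5, i.e. Water is +150% effective against Fire
```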

Then I added the strength-weakness gap together for each type of Pokemon, which assumes the Pokemon are facing an opponent of a random type. According to this, the most effective types of Pokemon are, on average:

```
Type      Effectiveness
Steel      0.22222222
Fire       0.11111111
Ground     0.11111111
Fairy      0.11111111
Water      0.08333333
Ghost      0.08333333
Flying     0.05555556
Electric   0.00000000
Fighting   0.00000000
Poison    -0.02777778
Rock      -0.02777778
Dark      -0.02777778
Ice       -0.08333333
Dragon    -0.08333333
Normal    -0.11111111
Psychic   -0.11111111
Bug       -0.11111111
Grass     -0.19444444
```

That is to say, Steel pokemon, against a random opponent, will on average be 22% more effective. (This is the mean, not the median.) And against a random opponent a Grass pokemon will be expected to be 19% less effective than a Fighting pokemon, shockingly low. Amusingly, Normal pokemon are worse than normal (0) against the average pokemon.

This does not mean you ONLY want Steel Pokemon, because you could come up against an opponent that is strong against Steel. Nor do you want to entirely avoid Grass Pokemon, since they are very strong against many things Steel is weak against. It is merely that if you're willing to roll the dice, a Steel Pokemon will probably be your best bet. Trainers do not want to take strong risks; trainers are risk averse. You want to maximize your poke-payoff while minimizing how frequently you face negatively stacked fights. The equation for this is:

max over vars:  μ · vars − δ · varsᵀ · Cov · vars,  subject to Σ vars = 1, vars ≥ 0

Where μ is your vector of payoffs in the table above, δ is your risk aversion, Cov is the covariance matrix of the differenced Pokemon data set, and vars is your portfolio selection, which must add up to one hundred percent.

How risk averse are you? You could be very risk averse and want to never come across a bad pokemon to fight, or you could love rolling the dice and only want one type of pokemon. So I have plotted the optimal portfolio for many levels of risk-tolerance. It is a little cluttered, so I have labelled them directly as well as in the legend.

The visualization is indeed a little messy, but as you become more risk averse, you add more Electric, Normal, Fire, and Ice Pokemon (and more!) to help reduce the chance of a bad engagement. To do this, one reduces the weight put on Steel, Ground, and Fairy Pokemon, but doesn't eliminate them entirely. Almost nothing adds Dragon, Ghost, Rock, or Bug Pokemon; they are nearly completely dominated by other combinations of Pokemon types.

I’ve plotted two interesting portfolios along the spectrum of risk aversion below. They include one with nearly no risk aversion (0.001), and one with high risk aversion (10).

Of course, most importantly of all, regardless of your Pokemon and your interest in being “the very best”, you should still pick the coolest Pokemon and play for fun.

Code is included below:

```
# Data from: https://github.com/zonination/pokemon-chart/blob/master/chart.csv
# write.csv(chart, file="/home/bsweber/Documents/poke_chart.csv")
poke_chart <- read.csv(file="/home/bsweber/Documents/poke_chart.csv")
poke_chart <- poke_chart[,-1]

library(quadprog)
# library(devtools)
# install_github("vqv/ggbiplot", force=TRUE)
library(ggbiplot)
library(reshape2)
library(ggplot2)
library(ggrepel)

poke_chart <- as.matrix(poke_chart)
differences <- (poke_chart - 1) - (t(poke_chart) - 1)
diag(differences) <- 0
rownames(differences) <- colnames(differences)

core <- poke_chart
rownames(core) <- colnames(poke_chart)
poke_pcd <- prcomp(core, center=TRUE, scale=TRUE)
plot(poke_pcd, type="l", main="Pokemon PCD")
summary(poke_pcd)
biplot(poke_pcd)

poke_palette <- c("#A8A878", "#EE8130", "#6390F0", "#F7D02C", "#7AC74C", "#96D9D6",
                  "#C22E28", "#A33EA1", "#E2BF65", "#A98FF3", "#F95587", "#A6B91A",
                  "#B6A136", "#735797", "#6F35FC", "#705746", "#B7B7CE", "#D685AD")

# Score plot is for rows (attack data); loading plot is for columns (defense data).
# So Bug and Fairy have similar attacks (shown by rays) and similar defences (shown
# by points). Ghost and Normal have almost identical defences, but different attacks.
ggbiplot(poke_pcd, labels=rownames(core), ellipse=TRUE, circle=TRUE,
         obs.scale=1, var.scale=1) +
  scale_color_discrete(name='') +
  theme(legend.direction='horizontal', legend.position='top')
ggbiplot(poke_pcd, labels=colnames(core), ellipse=TRUE, circle=TRUE,
         obs.scale=1, var.scale=1, choice=c(2,3)) +
  scale_color_discrete(name='') +
  theme(legend.direction='horizontal', legend.position='top')
ggbiplot(poke_pcd, labels=colnames(core), ellipse=TRUE, circle=TRUE,
         obs.scale=1, var.scale=1, choice=c(5,6)) +
  scale_color_discrete(name='') +
  theme(legend.direction='horizontal', legend.position='top')
ggbiplot(poke_pcd, labels=colnames(core), ellipse=TRUE, circle=TRUE,
         obs.scale=1, var.scale=1, choice=c(7,8)) +
  scale_color_discrete(name='') +
  theme(legend.direction='horizontal', legend.position='top')

# Make the covariance matrix of differences.
cov_core <- t(differences - mean(differences)) %*% (differences - mean(differences))
cov_core[order(diag(cov_core), decreasing=TRUE), order(diag(cov_core), decreasing=TRUE)]

ones <- as.matrix(rep(1, 18))
vars <- as.matrix(rep(1/18, times=18))
mu <- t(as.matrix(apply(differences/18, 1, sum))) # Average rate of return over 18 pokemon types.
data.frame(mu[, order(t(mu), decreasing=TRUE)])   # Table of Pokemon types
colnames(mu) <- colnames(core)

delta <- 1 # risk aversion parameter
out <- matrix(0, nrow=0, ncol=18)
colnames(out) <- colnames(core)
for (j in 1:1000) {
  delta <- j/100
  Dmat <- cov_core * 2 * delta
  dvec <- mu
  Amat <- cbind(1, diag(18))
  bvec <- c(1, rep(0, 18))
  qp <- solve.QP(Dmat, dvec, Amat, bvec, meq=1)
  pos_answers <- qp$solution
  names(pos_answers) <- colnames(poke_chart)
  out <- rbind(out, round(pos_answers, digits=3))
}

df <- data.frame(x=1:nrow(out))
df.melted <- melt(out)
colnames(df.melted) <- c("Risk_Aversion", "Pokemon_Type", "Amount_Used")
df.melted$Risk_Aversion <- df.melted$Risk_Aversion/100
qplot(Risk_Aversion, Amount_Used, data=df.melted, color=Pokemon_Type, geom="path",
      main="Pokemon % By Risk Aversion") +
  # ylim(0, 0.175) +
  scale_color_manual(values=poke_palette) +
  # geom_smooth(se=FALSE) +
  geom_text_repel(data=df.melted[df.melted$Risk_Aversion==8.5,],
                  aes(label=Pokemon_Type, size=9, fontface='bold'),
                  nudge_y=0.005, show.legend=FALSE)

# Another plot that is less appealing:
# matplot(out, type="l", lty=1, lwd=2, col=poke_palette)
# legend('center', legend=colnames(core), cex=0.8, pch=19, col=poke_palette)

pie(head(out, 1), labels=colnames(out), col=poke_palette)
pie(tail(out, 1), labels=colnames(out), col=poke_palette)

df_1 <- data.frame(matrix(out[1,], ncol=1))
colnames(df_1) <- c("Percentage")
df_1$Pokemon_Type <- colnames(out)
ggplot(data=df_1, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type)) +
  geom_bar(stat="identity", position=position_dodge()) +
  scale_fill_manual(values=poke_palette) +
  ggtitle("Pokemon Portfolio With Almost No Risk Aversion")

df_2 <- data.frame(t(tail(out, 1)))
colnames(df_2) <- c("Percentage")
df_2$Pokemon_Type <- colnames(out)
ggplot(data=df_2, aes(x=Pokemon_Type, y=Percentage, fill=Pokemon_Type)) +
  geom_bar(stat="identity", position=position_dodge()) +
  scale_fill_manual(values=poke_palette) +
  ggtitle("Pokemon Portfolio With Very Strong Risk Aversion")

cov_core[order(diag(cov_core), decreasing=TRUE), order(diag(cov_core), decreasing=TRUE)]

melt_diff <- melt(t(differences))
melt_diff$value <- factor(melt_diff$value)
N <- nlevels(melt_diff$value)
simplepalette <- colorRampPalette(c("red", "grey", "darkgreen"))
ggplot(data=melt_diff, aes(x=Var1, y=Var2, fill=value)) +
  geom_tile() +
  scale_fill_manual(values=simplepalette(9),
                    breaks=levels(melt_diff$value)[seq(1, N, by=1)],
                    name="Net Advantage") +
  ggtitle("Net Pokemon Combat Advantage") +
  xlab("Opponent") + ylab("Pokemon of Choice")
```