Teaching Materials

Economics Jobs

As a department, we had a meeting about jobs in economics.

I collected some information on the subject. Out of 31,656 job postings, most summaries were duplicates: the positions may differ, but the first 30 words of the summary were identical. Here are the results of the 638 unique postings for “economics entry level” across several cities.
[Figure: NoRivals]
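
As a rough illustration (not the original analysis), the deduplication described above can be sketched in R, assuming a hypothetical data frame postings with a summary text column:

#Hypothetical sketch: keep one posting per unique first-30-word summary
first30<-sapply(strsplit(postings$summary, "\\s+"), function(w) paste(head(w, 30), collapse=" "))
unique_postings<-postings[!duplicated(first30), ] #drop postings whose first 30 words repeat an earlier one
nrow(unique_postings) #count of unique postings under this rule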

Some good takeaways:

  • Bachelor’s Degree: You actually have to finish!
  • Responsible, professional, experience: Employers expect to get the impression that you care about what you are doing, that you can do it, and (often unfairly) that you have already done it. Note that experience is considered important even for entry-level positions, a complaint commonly repeated by job-seekers in this era.
  • Data, analyst, research, quantitative, math, statistics: You’re going to be working with data and are expected to work with hard numbers. Take your Econometrics and Statistics classes seriously!
Programming, Statistics, Teaching Materials

Simulating Optimization Problems and Production Possibility Frontiers

In teaching Behavioral Economics, optimization problems require some intuition, and that intuition can be opaque to students without calculus literacy. Below is a simulation demonstrating that the process for constrained optimization works. It has the added benefit of showing isoquants (the colored stripes in the image below) and the strict boundary condition of the efficiency frontier.

A basic constrained optimization problem is as follows:

U=f(x_1,x_2)

subject to:

y=p_1 x_1 + p_2 x_2

I have written R code to simulate the results of this two-good optimization process. The example uses U=\sqrt{x_1}+\sqrt{x_2} as the functional form.

library("plot3D")

p1<-1 #price of good 1
p2<-2 #price of good 2
y<-10 #income
x1<-runif(25000, min=0, max=y/p1) #draws of good 1, up to the largest affordable amount

#Checking options inside the constraint or on the constraint
x2<-runif(25000, min=0, max=(y-p1*x1)/p2) #good 2 drawn from whatever income remains

U<-sqrt(x1)+sqrt(x2) #utility at each simulated bundle
points3D(x1, x2, U, xlab="x1", ylab="x2", zlab="Utility", phi=-90, theta=90) #viewed from above, the colored bands are the isoquants

plot(x1, U) #utility against x1; the peak occurs near the optimum
abline(v=x1[which(U==max(U))], col="red") #mark the best simulated bundle
x1_star<-x1[which(U==max(U))]
x2_star<-x2[which(U==max(U))]
y-x1_star*p1-x2_star*p2 #income left over at the best bundle (should be close to 0)

It outputs the following plots (with minor variation from run to run). Note that the colored bands represent utility curves, i.e. isoquants, and the edge of the colored point cloud traces the efficiency frontier.
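
One small addition (not in the original code): overlaying the budget line on the x1–x2 scatter makes the efficiency frontier explicit, using the same p1, p2, y, x1, and x2 defined above.

plot(x1, x2, col="grey", pch=".") #simulated bundles inside the budget set
abline(a=y/p2, b=-p1/p2, col="blue") #budget line x2=(y-p1*x1)/p2, i.e. the efficiency frontier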


The actual solution is found by:

U=\sqrt{x_1}+\sqrt{x_2}

subject to:

y=p_1 x_1+p_2 x_2

The Lagrangian is then:

L=\sqrt{x_1}+\sqrt{x_2}+\lambda (y - p_1 x_1 - p_2 x_2)

Leading to the first order conditions (derivatives of L):

L_1 : 0.5 x_1^{-0.5} - \lambda p_1=0

L_2: 0.5 x_2^{-0.5} - \lambda p_2=0

L_{\lambda} : y- p_1 x_1 -p_2 x_2 =0

Using these 3 conditions, we can find the equations:

\frac{0.5 x_1^{-0.5}}{0.5 x_2^{-0.5}} = \frac{p_1}{p_2}

y - p_1 x_1 - p_2 x_2 = 0

The first equation simplifies to \sqrt{x_2 / x_1} = p_1 / p_2, so x_2 = (p_1/p_2)^2 x_1. Substituting this into the budget constraint with y=10, p_1=1, p_2=2 gives the theoretical solutions x_1 = 20/3 \approx 6.67 and x_2 = 5/3 \approx 1.67.

These indeed match the simulated solutions very closely.
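
As a quick numerical check (not in the original post), one can substitute the budget constraint into the utility function and let R's optimize() find the maximum along the budget line, reusing p1, p2, and y from the simulation code above:

u_on_budget<-function(x1) sqrt(x1)+sqrt((y-p1*x1)/p2) #utility when all income is spent
opt<-optimize(u_on_budget, interval=c(0, y/p1), maximum=TRUE)
opt$maximum #approximately 6.67 (=20/3)
(y-p1*opt$maximum)/p2 #approximately 1.67 (=5/3)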

Programming, Statistics, Teaching Materials

Why do we need n-2? An example in R

Below is a simple example showing why we may want \hat{\sigma}^2 = (\Sigma \hat{u}^2_i)/(n-2) as our estimate of \sigma^2, when naive intuition may suggest we only want the simple average of squared residuals, (\Sigma \hat{u}^2_i)/n. The n-2 reflects the two parameters (\hat{\beta}_0 and \hat{\beta}_1) that must be estimated before the residuals can be computed.

To show this in no uncertain terms, I have coded a linear regression by hand in R. Also embedded in the work below are several rules I follow when writing code, numbered 0 through 6. There are many other rules, since writing code is an art.

#### Coding in R
#### Rule 1: Always comment every few lines of code. It is not unheard of to comment every single line, particularly for new coders or complex code.
#### You will need to reference your work at a later date, and after about 3 months the purpose is lost. Also, I need to read it.

#### Rule 2: Define your variables first. Luckily these names are shared for us.
#### For your projects, use names which are clear for your research: (y=crime in Williamsburg, VA, X= Number of jello puddings consumed)

set.seed(1223) #ensures replication. Sets seed of random number generators.
n<-25 #number of samples
x<-2*rnorm(n) #Our x's come from a random sampling of X's.
b_0<-10 #true intercept
b_1<-3 #true slope. Those cursed jello puddings are associated with increased crime. Linear regression supports association, not causation.
u<-rnorm(n) #errors drawn independently of x with mean zero
y<-b_0+b_1*x+u #This defines our true Y. The true relationship is linear.

plot(x,y) #Rule 0, really. Always check your data.

#### Rule 3: After definitions begin your second stage of work. Probably trimming existing data, etc. Do these in the order they were added.
hat_b_1<-sum( (x-mean(x)) * (y-mean(y)) ) / sum( (x-mean(x))^2 ) #Spaces between any parenthesized section of operations. We need to be able to see which parentheses are which.
hat_b_1 # Rule 4: Indent work which is conceptually subordinate. Indent more as needed. Four spaces=1 tab.
hat_b_0<-mean(y)-hat_b_1*mean(x)
hat_b_0 # Rule 5: Check your work as you go along. For our example, I got 9.89

abline(a=hat_b_0, b=hat_b_1, col="red") #let's add a red line of best fit, and see how the plot looks. Repeat Rule 0.

hat_y<-hat_b_0+hat_b_1*x #fitted values
hat_u<-y-hat_y #residuals: actual minus fitted

plot(x,hat_u) # Let’s see our residuals
hist(hat_u) # Let’s see our histogram

#### Rule 6: Keep your final analysis as punchy and short as possible without sacrificing clarity.
#### The mean of the squared true errors (usually unknown to us as researchers)
sigma_sq<-sum(u^2)/n #this is the value we're trying to estimate
sigma_sq_naive<-sum(hat_u^2)/n #this is a naive estimate of it
sigma_sq_hat<-sum(hat_u^2)/(n-2) #this turns out to be more accurate, particularly in small samples; the difference vanishes as n goes to infinity. Try it for yourself!

#R, is this assessment true? Is sigma_sq_hat a better estimator of sigma_sq than our naive estimator? Is it true we need the (-2)?
abs(sigma_sq-sigma_sq_naive) > abs(sigma_sq-sigma_sq_hat) #TRUE when the (n-2) estimator is closer to the target

Here is one of several plots made by this code, showing a nice linear regression over the data:

[Figure: Rplot01 – scatterplot of the data with the fitted regression line]

Please don’t forget the derivation of why this is true!  This is simply some supportive evidence that it might be true.
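
As further supportive evidence (not in the original post), the single draw above can be repeated many times; averaging each estimator across repetitions suggests the naive version is biased downward while the (n-2) version centers on the true error variance. This sketch reuses n, b_0, and b_1 from the code above.

R<-5000 #number of Monte Carlo repetitions
naive<-rep(NA, R)
unbiased<-rep(NA, R)
for (r in 1:R) {
    x<-2*rnorm(n) #fresh sample, same design as above
    u<-rnorm(n) #true errors with variance 1
    y<-b_0+b_1*x+u
    b1<-sum( (x-mean(x)) * (y-mean(y)) ) / sum( (x-mean(x))^2 )
    b0<-mean(y)-b1*mean(x)
    res<-y-(b0+b1*x) #residuals for this repetition
    naive[r]<-sum(res^2)/n
    unbiased[r]<-sum(res^2)/(n-2)
}
mean(naive) #roughly (n-2)/n = 0.92, biased downward
mean(unbiased) #roughly 1, the true error variance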