Bargaining Power in a Simulated Barter Economy
Clifford Clive
What about non-competitive equilibria?
In my last post, I went through the process used to derive the competitive equilibrium allocation that I plugged into the trade function in our economic simulator. It wasn't just some arbitrary trade rule; I derived it from the agents' utility functions, making sure to meet certain conditions to define a competitive equilibrium.
To reach a competitive equilibrium, we must assume that both agents are price takers. That is, they don't set the terms of their trade; the market does, and they just trade according to the market price. In a competitive market, the equilibrium price is the one that satisfies the demand functions of all the agents, even when there are only two of them.
But what if we relax those assumptions? Our model is still using a barter economy, after all, and some people are better negotiators than others. So let's consider trades whose outcome still lies on the contract curve (i.e., the outcome is still Pareto efficient: no further trade can make one agent better off without making the other worse off), but where one agent is able to negotiate a better price. The weaker negotiator will still benefit from trade, just not as much as they would at the competitive equilibrium.
How to Model Bargaining Power
To give the agents a measure of bargaining power, we'll introduce a new data member to our agent class. We'll call it charisma. To find the new trade outcome in our model, we'll use the following steps:
- Find the indifference curves for each agent's initial allocation.
- Find the contract curve for the model.
- Find the minimum amount of good 1 that each agent is willing to accept. (This is taken from the allocation where the contract curve intersects each agent's indifference curve.)
- Find agent 1's share of the sum of the two agents' charisma scores: c = c1 / (c1 + c2).
- Move that proportion of the distance along the contract curve from agent 1's indifference curve to agent 2's indifference curve. This is the final allocation.
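To make steps 4 and 5 concrete with some purely illustrative numbers: if agent 1 has a charisma score of 0.6 and agent 2 has 0.2, then c = 0.6 / (0.6 + 0.2) = 0.75, so the final allocation sits three quarters of the way along the contract curve from agent 1's indifference curve toward agent 2's.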
Now, to find this allocation, we need to be able to measure the arc length along a section of the contract curve. The mathematical formula for calculating the arc length of a curve is:
\[ \lambda = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2} \, dx \]
and the contract curve for two agents with different Cobb-Douglas utility functions (where a_i and b_i are agent i's preference parameters for goods 1 and 2, and X and Y are the total amounts of the two goods in the economy) is given by:
\[ y_1 = \frac{a_2 b_1 Y x_1}{a_1 b_2 X + x_1 (a_2 b_1 - a_1 b_2)} \]
In other words, to find the arc length we need to differentiate this equation, substitute the result for dy/dx in the arc length formula, square it, add one, take the square root, and integrate over x1. Hey, I never said this was easy! Although in some situations, it can be.
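Before we get to the easy case: if you'd rather not grind through that integral by hand for the general case, here's a minimal numerical sketch (not part of the simulator itself), assuming SciPy is available. The function name and parameters are just illustrative.

```python
# A minimal numerical sketch: approximate the arc length of the general
# contract curve with SciPy instead of integrating by hand.
import math
from scipy.integrate import quad

def contract_curve_arc_length(a1, b1, a2, b2, X, Y, x_start, x_end):
    # Contract curve: y1(x1) = a2*b1*Y*x1 / (a1*b2*X + x1*(a2*b1 - a1*b2))
    def dy_dx(x1):
        denom = a1 * b2 * X + x1 * (a2 * b1 - a1 * b2)
        # The quotient rule reduces to a constant numerator over denom**2.
        return (a2 * b1 * Y) * (a1 * b2 * X) / denom ** 2

    integrand = lambda x1: math.sqrt(1.0 + dy_dx(x1) ** 2)
    length, _ = quad(integrand, x_start, x_end)
    return length
```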
For Cobb-Douglas utility functions where each agent has the same preference parameters, we have a_1=a_2=a and b_1=b_2=b, and so they all cancel out very nicely. Even more importantly, the x_1 in the denominator also disappears. The contract curve in this case is just a straight line connecting the two origins of the Edgeworth box:
\[ y_1 = \frac{Y}{X} x_1 \]
So in this case we don't even need to measure the arc length: we can just move that same proportion of the way between the least amount of good 1 that agent 1 will accept (as described above) and the most amount of good 1 that agent 1 can get (which is the total amount of good 1 minus the least amount of good 1 that agent 2 will accept).
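Written out in the notation of the steps above, with x1_min the least amount of good 1 agent 1 will accept and x1_max the most it can get, the final allocation of good 1 to agent 1 is just a linear interpolation weighted by bargaining power:

\[ x_1^{*} = x_1^{\min} + c \left( x_1^{\max} - x_1^{\min} \right), \qquad c = \frac{c_1}{c_1 + c_2} \]

That is exactly what the negotiation function further down implements.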
Our bargaining model will be very easy to simulate for agents in this special case. I'll address the problem of running simulations on agents with randomized preferences at some point in the (hopefully) not-too-distant future.
On to the Simulation
Three things need to change in our old simulation code. First, we need a new Agent class, BargainingAgent, which will just be a subclass of the old Agent. We will add a charisma data member to the class, and change the comparison operators to compare agents based on their charisma scores rather than their utility. (The method of comparison is only relevant to what we are trying to observe in our simulation, and in this case we want to track the progress of agents based on their different levels of charisma).
```python
class BargainingAgent(Agent):
    def __init__(self, endowment1, endowment2, preference1, preference2, cha):
        Agent.__init__(self, endowment1, endowment2, preference1, preference2)
        self.charisma = cha

    # We need to define comparison operators in order to sort the
    # agents based on charisma. I always prefer to define all of them
    # if I need to define any.
    def __gt__(self, other):
        return self.charisma > other.charisma

    def __lt__(self, other):
        return self.charisma < other.charisma

    def __eq__(self, other):
        return self.charisma == other.charisma

    def __ge__(self, other):
        return self.charisma >= other.charisma

    def __le__(self, other):
        return self.charisma <= other.charisma

    def __ne__(self, other):
        return self.charisma != other.charisma
```
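As a quick sanity check (hypothetical values, and assuming the Agent constructor from the previous post takes two endowments and two preference parameters), the operators now compare charisma rather than utility or wealth:

```python
# Hypothetical check: comparisons are based on charisma, not wealth.
a = BargainingAgent(10, 10, 0.5, 0.5, 0.9)
b = BargainingAgent(50, 50, 0.5, 0.5, 0.2)
print(a > b)  # True: a is more charismatic, even though b is wealthier
```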
Second, we need a new random agent creator: it works just like the previous one, but also generates a random charisma score and passes it to the BargainingAgent constructor along with the other variables.
```python
from random import gauss, uniform

# mu is the mean endowment defined in the earlier simulation code.
def random_bargaining_agent(mu_e1=mu, mu_e2=mu, sigma_e1=mu/3, sigma_e2=mu/3,
                            mu_p1=0.5, mu_p2=0.5, width_p1=0.0, width_p2=0.0,
                            mu_ch=0.5, width_ch=0.5):
    # Endowments are drawn from a normal distribution, truncated at zero.
    e1 = max(0, gauss(mu_e1, sigma_e1))
    e2 = max(0, gauss(mu_e2, sigma_e2))
    # Preferences and charisma are drawn uniformly from [mu - width, mu + width],
    # so the default widths give fixed preferences of 0.5 and charisma in [0, 1].
    p1 = uniform(mu_p1 - width_p1, mu_p1 + width_p1)
    p2 = uniform(mu_p2 - width_p2, mu_p2 + width_p2)
    ch = uniform(mu_ch - width_ch, mu_ch + width_ch)
    return BargainingAgent(e1, e2, p1, p2, ch)
```
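For instance (purely illustrative), building and ranking a population for the simulation looks the same as before:

```python
# Hypothetical usage: 100 random bargaining agents, ranked from most to
# least charismatic using the comparison operators defined above.
population = [random_bargaining_agent() for _ in range(100)]
population.sort(reverse=True)
```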
Third, we need a new transaction function, one that accounts for the bargaining solution described above.
```python
import math

def cobb_douglas_negotiation(agentX, agentY):
    # Find the total amounts of each good:
    total_1 = agentX.good1 + agentY.good1
    total_2 = agentX.good2 + agentY.good2
    # Each agent will accept no less of good1 than the amount
    # where the contract curve intersects their starting
    # indifference curve.
    min_good1_x = math.sqrt(agentX.good1 * agentX.good2 * total_1 / total_2)
    min_good1_y = math.sqrt(agentY.good1 * agentY.good2 * total_1 / total_2)
    max_good1_x = total_1 - min_good1_y
    # Bargaining power of an agent is that agent's share of the
    # sum of the two agents' charisma scores.
    bargaining_power_x = agentX.charisma / (agentX.charisma + agentY.charisma)
    # The amount of good1 up for negotiation is max_good1_x - min_good1_x.
    # Each agent's bargaining power determines what share they will get.
    allocation_x1 = min_good1_x + bargaining_power_x * (max_good1_x - min_good1_x)
    allocation_y1 = total_1 - allocation_x1
    # The final allocation lies on the contract curve y1 = (Y/X) * x1.
    allocation_x2 = (total_2 / total_1) * allocation_x1
    allocation_y2 = total_2 - allocation_x2
    return (allocation_x1, allocation_x2), (allocation_y1, allocation_y2)
```
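To see the bargaining power at work, here's a hypothetical example (assuming, as the code above does, that the Agent base class exposes the endowments as good1 and good2): two agents with mirrored endowments but very different charisma scores.

```python
# Hypothetical example: mirrored endowments, very different charisma.
strong = BargainingAgent(10, 2, 0.5, 0.5, 0.9)
weak = BargainingAgent(2, 10, 0.5, 0.5, 0.1)
(sx1, sx2), (wx1, wx2) = cobb_douglas_negotiation(strong, weak)
print(sx1, sx2)  # roughly (7.22, 7.22)
print(wx1, wx2)  # roughly (4.78, 4.78)
```

Both agents end up better off than they started, but the charismatic agent captures 90% of the good 1 that was up for negotiation.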
The Results
So, now what happens when we run the simulation? The results are much less predictable than before. In these charts, I've tracked the progress of agents by ranking their charisma scores from highest (red) to lowest (blue).

[Figure: absolute trading gains (left) and relative trading gains (right) of selected agents; red = most charismatic, blue = least charismatic]

[Additional charts: more of the same]
Now it's not so easy to predict who the big winners will be. Previously, we saw that under the competitive equilibrium the wealthiest agents gained the most in absolute terms, while the poorest agents gained the most relative to what they started with. In the economy where agents exercise bargaining power, we still see everyone benefiting from trade, but it's less clear who will gain the most in either absolute or relative terms.
In fact, the agents' charisma scores themselves aren't even very strong indicators of who will gain the most. We see a few instances of agents with fairly low charisma making some very beneficial trades. So what exactly is going on here?
Much of it is simply due to luck. Perhaps your charisma score is fairly low, say at the 40th percentile of all agents in the economy. Now suppose you meet a very wealthy agent with a far worse charisma score. You will be able to capture a very large portion of the gains from that trade. In some cases, that one trade might provide you with nearly all of the gains you get in the entire simulation.
This underscores the importance of the role that the interaction functions play in our model. If these lucky trades can have such a big impact on agents' success, then it would be interesting to look at the ways agents go about finding the best people to trade with. It makes sense that being a good businessman would involve some degree of skill in doing deals, as well as being able to find the best trading partners. That's something I can take a look at next time.
Take this with a grain of salt
Keep in mind this is just a simulation, not an argument in support of any economic theory. I'm not suggesting that giving agents bargaining power and letting them deviate from the competitive equilibrium is either better or worse for the economy, and I don't think this simulation really proves anything other than how this particular model works when we give it these parameters. Just remember that my goal here is to use economics to develop more interesting simulations, not to use simulations to explore economic theory.
So, does this rule make our simulation more interesting? It definitely makes it less predictable, and I think that's more interesting. Of course, we can always add noise to a model to make it more unpredictable; what makes unpredictability interesting is when it emerges from some underlying behavior rather than from arbitrary randomness. But this is just one take on the problem; I'd love to hear your thoughts on my approach.
And of course there's always the possibility that I've overlooked something or made a mistake somewhere, so please, let me know if you find anything wrong!