Tuesday, September 4, 2012
The Euclidean Algorithm
The Euclidean algorithm is an efficient method for computing the greatest common divisor (GCD) of two integers. It is based on the principle that the GCD of a pair does not change if the smaller number is subtracted from the larger one.
Given integers \(a\) and \(b\) (assume \(a\geq b\)), the algorithm can be recursively defined as:
\[\gcd(a,0)=a\]
\[\gcd(a,b)=\gcd(b,\ a\bmod b)\]
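Here is a minimal Python sketch of the recursion above, written as a loop (the recursive call is a tail call, so the two forms are equivalent):

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm."""
    while b != 0:
        a, b = b, a % b   # gcd(a, b) = gcd(b, a mod b)
    return a              # gcd(a, 0) = a

print(gcd(252, 105))  # 21
```

Each step replaces \((a,b)\) with \((b,\ a\bmod b)\), and the second component strictly decreases, so termination is guaranteed.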
Reference: Wikipedia page on the Euclidean algorithm.
Friday, August 17, 2012
The Fink lone-chooser procedure
This proportional cake-cutting procedure was proposed by Fink in 1964. It has the very nice property that an agent does not need to know the total number of agents before he cuts. We now demonstrate the procedure for the three-agent case.
Say the agents are Alice, Bob and Carol:
1. Alice bisects the cake into two pieces of equal value based on her utility function.
2. Bob chooses the piece he values more for himself and leaves the other to Alice.
3. Now Alice and Bob each trisect their own pieces. Carol then selects the piece she values most from Alice's three pieces and the piece she values most from Bob's three, leaving the remainder to Alice and Bob.
It is not hard to show that everyone gets a proportional piece from this allocation: Alice and Bob each keep two thirds of a piece they valued at one half, and Carol's two picks are worth at least a third of each original half to her. If \(n=4\), Alice, Bob and Carol each divide their shares into four equal parts at the next stage, and the fourth agent, say David, picks the part he values most from each of the three shares. The procedure continues in the same way for any \(n\).
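Here is a toy Python sketch of the procedure. The modeling and all names below are illustrative assumptions, not part of Fink's original formulation: the cake is \([0,1]\), each agent's utility is an increasing CDF \(F\) with \(F(0)=0\) and \(F(1)=1\) (so a piece \((a,b)\) is worth \(F(b)-F(a)\)), and a share is a list of disjoint intervals. When the \(n\)-th agent arrives, every earlier agent splits his share into \(n\) parts of equal value to himself, and the newcomer takes his favorite part from each.

```python
def invert(F, target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with F(x) ~= target, by bisection (F is increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < target else (lo, mid)
    return (lo + hi) / 2

def split_equal(F, piece, k):
    """Split one interval into k subintervals of equal value under F."""
    a, b = piece
    va, vb = F(a), F(b)
    cuts = [a] + [invert(F, va + (vb - va) * j / k, a, b) for j in range(1, k)] + [b]
    return list(zip(cuts[:-1], cuts[1:]))

def split_share(F, share, k):
    """Split a whole share (interval list) into k parts of equal value to F:
    part j collects the j-th equal-value slice of every interval."""
    parts = [[] for _ in range(k)]
    for piece in share:
        for j, sub in enumerate(split_equal(F, piece, k)):
            parts[j].append(sub)
    return parts

def value(F, intervals):
    return sum(F(b) - F(a) for a, b in intervals)

def fink(cdfs):
    """Run the lone-chooser procedure; returns one interval list per agent."""
    shares = [[(0.0, 1.0)]]                    # the first agent holds the whole cake
    for n in range(2, len(cdfs) + 1):          # agents keep arriving one by one
        F_new, new_share = cdfs[n - 1], []
        for i in range(len(shares)):
            parts = split_share(cdfs[i], shares[i], n)
            best = max(parts, key=lambda p: value(F_new, p))
            new_share.extend(best)             # the newcomer takes his favorite part
            shares[i] = [iv for p in parts if p is not best for iv in p]
        shares.append(new_share)
    return shares

# three agents: two with uniform value, one who only values the right half
print(fink([lambda x: x, lambda x: x, lambda x: max(0.0, 2 * x - 1)]))
```

Each existing agent keeps \(\frac{n-1}{n}\) of his own share at every stage, and the newcomer's picks are worth at least \(\frac{1}{n}\) of the whole cake by his own measure, which is exactly the proportionality argument sketched above.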
Reference: S. J. Brams and A. D. Taylor, Fair Division: From Cake-Cutting to Dispute Resolution, 1996, p. 40, Section 2.6.
Thursday, August 16, 2012
Ford–Fulkerson Algorithm
The Ford-Fulkerson algorithm is a classic method for solving maximum-flow problems.
Let \(G(V,E)\) be a directed graph. For each edge \((u,v)\), its capacity is \(c(u,v)\) and its flow is \(f(u,v)\). We want to find the maximum flow from the source \(s\) to the sink \(t\).
We place the following constraints on the flow:
1. Capacity constraints: \(\forall (u,v)\in E\) \( f(u,v)\leq c(u,v)\).
On any edge, the flow cannot exceed its capacity.
2. Skew symmetry: \(\forall (u,v)\in E\) \(f(u,v) = -f(v,u)\).
The net flow over \((u,v)\) must be the opposite of the one over \((v,u)\).
3. Flow conservation: \(\forall u\in V: u\neq s\wedge u\neq t \Rightarrow\sum_{w\in V} f(u,w)=0\).
For any node other than \(s\) and \(t\), the net flow should be 0.
We also define the residual network over \(G\) as \(G_f(V,E_f)\), where the residual capacity of an edge is \(c_f(u,v)=c(u,v)-f(u,v)\).
The algorithm proceeds as follows:
1. \(f(u,v)\leftarrow 0\) for all edges \((u,v)\).
2. While there is a path \(p\) from \(s\) to \(t\) in \(G_f\) such that \(c_f(u,v)>0\) for all edges \((u,v)\in p\):
i. Find \(c_f(p)=\min\{c_f(u,v):(u,v)\in p\}\).
ii. For each edge \((u,v)\in p\):
a. \(f(u,v)\leftarrow f(u,v)+c_f(p)\) (Send the flow along the path)
b. \(f(v,u)\leftarrow f(v,u)-c_f(p)\) (The flow might be "returned" later)
In step 2, we can use BFS or DFS to find an augmenting path; using BFS gives the Edmonds-Karp variant.
This algorithm runs in \(O(Ef)\) time when capacities are integers, where \(E\) is the number of edges and \(f\) is the maximum flow value.
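Below is a compact Python sketch of the algorithm, using BFS to find the augmenting paths of step 2. The dict-of-dicts graph representation, the function name, and the small example network are illustrative assumptions rather than a canonical implementation.

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict u -> {v: c(u, v)}; returns the value of a maximum s-t flow."""
    # residual network: start with c_f = c, and add reverse edges of capacity 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # step 2: BFS for an s-t path with c_f(u, v) > 0 on every edge
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cf in residual.get(u, {}).items():
                if cf > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow          # no augmenting path left: the flow is maximum
        # step 2.i: bottleneck capacity c_f(p) along the path found
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # step 2.ii: push c_f(p) units along p; reverse edges let flow be "returned"
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# a small example network: the maximum s-t flow here is 3
caps = {'s': {'a': 2, 'b': 2}, 'a': {'b': 1, 't': 1}, 'b': {'t': 2}}
print(max_flow(caps, 's', 't'))  # 3
```

With BFS, the number of augmenting iterations is also bounded by \(O(VE)\) independently of the capacities, while the simple \(O(Ef)\) bound above holds for any path-finding rule on integer capacities.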
UPDATE 8/22/12: An interesting theorem related to flow networks is the Max-flow Min-cut Theorem. It states:
The maximum value of an s-t flow is equal to the minimum capacity over all s-t cuts.
References:
Wikipedia page on the Ford-Fulkerson algorithm
Wikipedia page on the Max-flow Min-cut Theorem
The Steinhaus-Kuhn lone-divider Procedure
The lone-divider procedure is another classic proportional cake-cutting procedure. It was proposed by Steinhaus for three players and later generalized by Kuhn in 1967. This procedure has an interesting property: throughout the process, only one agent needs to make multiple cuts.
To illustrate the procedure, we present an example with 3 agents, say Alice, Bob and Carol.
1. Alice cuts the cake into 3 pieces of equal value based on her utility function.
2. Bob and Carol each indicate which pieces are acceptable to them (worth at least 1/3 by their own utility functions).
3. We then have two mutually exclusive cases:
Case I: Bob or Carol finds two or more pieces acceptable. Say it is Bob. Then choosing in the order Carol, Bob, Alice results in a proportional allocation: Carol's favorite piece is always worth at least 1/3 to her, Bob still has an acceptable piece left after Carol picks, and every piece is worth exactly 1/3 to Alice.
Case II: Both Bob and Carol find at most one piece acceptable. We first give Alice a piece that both Bob and Carol find unacceptable (each rejects at least two of the three pieces, so such a piece exists), then run a Cut-and-Choose between Bob and Carol on the two remaining pieces merged together. Since each of them values that remainder at more than 2/3, we again end up with a proportional allocation.
Kuhn extended this procedure to any number of players in 1967 with the help of a combinatorial theorem; the basic logic remains the same.
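Here is a toy Python sketch of the three-agent case. The setup and all names are illustrative assumptions: vals[i][j] is agent i's value for piece j, each row sums to 1, and agent 0 (Alice) made the cuts, so her row is (1/3, 1/3, 1/3). The final Cut-and-Choose of Case II is not modeled; the two leftover pieces are simply returned as one merged lot.

```python
def lone_divider_3(vals):
    """Return assignments for (Alice, Bob, Carol); a tuple means 'merged lot,
    to be split by Cut-and-Choose between Bob and Carol'."""
    acceptable = [{j for j in range(3) if vals[i][j] >= 1 / 3} for i in range(3)]
    assign = [None, None, None]
    if len(acceptable[1]) >= 2 or len(acceptable[2]) >= 2:
        # Case I: say Bob accepts two or more pieces -> order Carol, Bob, Alice
        order = [2, 1, 0] if len(acceptable[1]) >= 2 else [1, 2, 0]
        left = {0, 1, 2}
        for agent in order:
            pick = max(left, key=lambda j: vals[agent][j])
            assign[agent] = pick
            left.remove(pick)
        return assign
    # Case II: Bob and Carol accept at most one piece each, so by pigeonhole
    # some piece is unacceptable to both; Alice takes it.
    both_reject = [j for j in range(3) if vals[1][j] < 1 / 3 and vals[2][j] < 1 / 3]
    assign[0] = both_reject[0]
    rest = tuple(sorted({0, 1, 2} - {both_reject[0]}))
    assign[1] = assign[2] = rest   # Bob and Carol Cut-and-Choose over the rest
    return assign

print(lone_divider_3([[1/3, 1/3, 1/3], [0.5, 0.4, 0.1], [0.2, 0.2, 0.6]]))
# Case I applies: [1, 0, 2] -- Carol takes piece 2, Bob piece 0, Alice piece 1
```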
Reference: S. J. Brams and A. D. Taylor, Fair Division: From Cake-Cutting to Dispute Resolution, 1996, p. 31, Section 2.2.
Monday, August 13, 2012
Dubins-Spanier Moving-knife Method
This is one of the classic proportional procedures. It was proposed by L. E. Dubins and E. H. Spanier back in 1961.
Given a group of \(n\) agents and a rectangular cake, a referee holds a knife parallel to the edge of the cake and moves it slowly across the whole cake from left to right. When an agent calls "cut", he gets the piece to the left of the blade and exits. The claim is that the best strategy for each agent is to call "cut" exactly at the \(\frac{1}{n}\) position based on his own utility function. For the last two agents, we do a Cut-and-Choose. This gives us a proportional allocation.
This procedure rests on one key assumption: the agents are maximally risk-averse. In other words, each agent cares only about his maximin value. Given this assumption, it is not hard to see that calling "cut" at \(\frac{1}{n}\) gives the highest maximin value.
Note that agents who call "cut" later may get a piece larger than \(\frac{1}{n}\). Therefore, all agents may wait for the others to call "cut", and the procedure could end with no one calling at all. To avoid this, we can simply make a random allocation if no one calls. Since agents are maximally risk-averse, this drives them to call "cut" in order to guarantee a maximin value of \(\frac{1}{n}\).
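Here is a minimal Python sketch under the maximin-play assumption above. The modeling is illustrative: each agent's utility is an increasing CDF on the \([0,1]\) cake, every remaining agent calls "cut" where the piece to the left is worth \(\frac{1}{k}\) of the remaining cake to him (with \(k\) agents left), and the last two agents finish with a Cut-and-Choose. The bisection helper is repeated from the Fink sketch above to keep this snippet self-contained.

```python
def invert(F, target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with F(x) ~= target, by bisection (F is increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < target else (lo, mid)
    return (lo + hi) / 2

def dubins_spanier(cdfs):
    """Return {agent index: (left, right)}, one interval per agent."""
    alloc, left = {}, 0.0
    remaining = list(range(len(cdfs)))
    while len(remaining) > 2:
        k = len(remaining)
        # where each remaining agent would call "cut": the point where
        # [left, x] is worth 1/k of the rest of the cake to him
        calls = {i: invert(cdfs[i], cdfs[i](left) + (1 - cdfs[i](left)) / k, left, 1.0)
                 for i in remaining}
        caller = min(calls, key=calls.get)        # the first to call gets the piece
        alloc[caller] = (left, calls[caller])
        left = calls[caller]
        remaining.remove(caller)
    # the last two agents do a Cut-and-Choose on the remainder
    a, b = remaining
    Fa, Fb = cdfs[a], cdfs[b]
    mid = invert(Fa, (Fa(left) + Fa(1.0)) / 2, left, 1.0)   # a halves the rest
    if Fb(mid) - Fb(left) >= Fb(1.0) - Fb(mid):             # b takes the better half
        alloc[b], alloc[a] = (left, mid), (mid, 1.0)
    else:
        alloc[b], alloc[a] = (mid, 1.0), (left, mid)
    return alloc

print(dubins_spanier([lambda x: x, lambda x: x * x, lambda x: x ** 0.5]))
```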
Reference: L. E. Dubins and E. H. Spanier, How to Cut a Cake Fairly, The American Mathematical Monthly, 1961.
Starcraft Quickmatch Trolling
This idea originates from a funny image posted on Reddit.
For some reason, the matchmaking system paired a bronze player (the lowest league, roughly the bottom 20% of the ladder) against a high master player (the highest league below grandmaster, roughly the top 2% of players). Of course, the master won the match. But the funny part is that the bronze player's strategy confused the hell out of the master, who had never seen such stupid builds at his rank.
If we dig deeper into this, we find an interesting phenomenon: people have great difficulty judging others who are either much better or much worse than they are. The reason is simple: we know too little about those who sit far away from us on the skill scale. To make a comparison, we usually need the assistance of a reference, and in most cases we ourselves serve as a convenient one. This explains why it is easier for us to judge players who are on a par with us, but much harder to judge the very good or very bad ones.
Another interesting phenomenon we can derive from this is that it is easier to tell who is the better player within a pair of similar skill level. To illustrate with an example: suppose a gold player (an average player on the ladder) is asked to watch a game for 5 minutes and then bet on the side he thinks will win. He will likely find it much easier to decide in a game between a master and a diamond player (one league below master) than in one between a master and a bronze player. The diamond player and the master serve as good references for each other, even though both may be far better than the observer. In the other case, all we have are three distant data points (a bronze, a gold and a master), which makes the comparison much harder.
Friday, August 10, 2012
Approximate Envy-Free Moving-Knife Procedure
This approximation procedure is based on the original Dubins-Spanier moving-knife procedure. Unlike the original, agents are now allowed to re-enter the game and call "cut" again and again. However, each time an agent calls "cut", he must claim a piece larger than his current piece by at least \(\epsilon\), and he must return his previous piece and merge it with the remainder. In this way, every agent ends up with a piece that is smaller than the largest piece by at most \(\epsilon\).
Please note that it is impossible to end up with an allocation where one agent gets nothing while all the other agents get at least \(\frac{1}{n}\). Since there are always \(n\) pieces of cake in play (agents have to return their previous piece when they claim another), the pieces cannot all be worth less than \(\frac{1}{n}\) to the agent who gets nothing.
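To spell out the counting step, write \(v_i\) for agent \(i\)'s utility measure (notation introduced here for illustration). If agent \(i\) ended up with nothing, the \(n\) outstanding pieces \(P_1,\dots,P_n\) would together cover the whole cake, so
\[\sum_{k=1}^{n} v_i(P_k)=1 \quad\Rightarrow\quad \max_{k}\, v_i(P_k)\geq\frac{1}{n},\]
and agent \(i\) could have called "cut" on that largest piece instead of staying empty-handed.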
Reference: S. J. Brams and A. D. Taylor, Fair Division: From Cake-Cutting to Dispute Resolution, 1996, p. 130, Section 7.2.