One of the most misunderstood areas of World of Warcraft is, without a doubt, the probability of items dropping. Within the game, probability is notoriously misunderstood, and this is no fault of the players: the problem is that probability is, for the most part, not intuitive. With this article, I hope to clear up some of the common errors and offer insight into correct calculations.
N.b. I am not a statistician; I am simply a mathematics undergraduate.
THE BASIC DEFINITION
Before we delve into the common misconceptions, we need to understand the basic definition of probability. In the most basic terms, probability is a measure of the likelihood of an event occurring. Assuming an item is on a loot table (a list of possible items that can drop from a mob), it has a predetermined drop chance between 0 and 1 inclusive. This applies to all items, be it the rare [Invincible’s Reins] with a drop chance of 0.01, or something as mundane but much more likely as collecting [Large Candle]s from a troublesome kobold.
Expectation ≠ Guarantee: “The drop chance for Mimiron’s Head is 1%; therefore if I run Ulduar 100 times, I will see the mount drop once.”
People often misuse mathematical expectation, treating it as complete certainty; the flaw is that it is only an expectation, and what is expected can easily differ from experimental results. Basic dice help illustrate the point. Let X be the score of a single die roll; then X follows a discrete uniform distribution, and the probability mass function of X is:

P(X = x) = 1/6,  x = 1, 2, 3, 4, 5, 6

And the tabulated probability distribution:

| x        | 1   | 2   | 3   | 4   | 5   | 6   |
|----------|-----|-----|-----|-----|-----|-----|
| P(X = x) | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 |
Expectation for this distribution can be calculated as follows:

E(X) = Σ x · P(X = x)

For this case,

E(X) = (1 + 2 + 3 + 4 + 5 + 6) × 1/6 = 21/6 = 3.5
The expected value, or theoretical mean, in this case is 3.5; however, it is important to understand that the probability of obtaining the expected value is exactly 0, since the score of a single die is a discrete variable, making a result of 3.5 impossible. This simple case illustrates expectation, so let’s expand it a little and add another layer.
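As a quick numerical check, here is a short Python sketch (my own, not part of the original article) that reproduces the calculation exactly using rational arithmetic:

```python
from fractions import Fraction

# E(X) = sum over all faces x of x * P(X = x), with P(X = x) = 1/6
p = Fraction(1, 6)
expected = sum(x * p for x in range(1, 7))
print(expected)  # 7/2, i.e. 3.5
```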
Let’s now use two dice: both are thrown, the individual scores are recorded, and the sum is calculated. Let Y be the sum of the dice; the tabulated probabilities follow:

| y        | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | 11   | 12   |
|----------|------|------|------|------|------|------|------|------|------|------|------|
| P(Y = y) | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |
Once again, to calculate the theoretical mean we use:

E(Y) = Σ y · P(Y = y) = 2 × 1/36 + 3 × 2/36 + … + 11 × 2/36 + 12 × 1/36 = 252/36 = 7
Now, if we apply the same logic used for mount drops to the dice in our experiment, we would expect to get a sum of seven, and with enough repetitions we would expect to see a nice symmetrical distribution around the mean. This is not guaranteed: all of this is purely theoretical, and when actually tested, the results of a small sample are unlikely to follow the distribution perfectly.
The figure below uses Excel to simulate an experiment where two dice were thrown 20 times and their scores recorded. From this data the scores are represented graphically, and the mean and sample variance are calculated.
It is clear that the distribution is slightly different from what is expected, and this is the point I am trying to drive home. When dealing with a small sample (in context, the number of runs completed), the results may not follow expectation the way some believe.
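For those without Excel to hand, a short Python simulation (my own sketch; the seed and throw count are arbitrary) makes the same point: a sample of just 20 throws can land noticeably away from the theoretical mean of 7:

```python
import random

random.seed(42)  # arbitrary fixed seed so the run is repeatable

# Throw two dice 20 times and record the sums
throws = [random.randint(1, 6) + random.randint(1, 6) for _ in range(20)]

mean = sum(throws) / len(throws)
# Sample variance (dividing by n - 1)
variance = sum((t - mean) ** 2 for t in throws) / (len(throws) - 1)

print(f"sample mean = {mean:.2f}, sample variance = {variance:.2f}")
```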
There does, however, exist a powerful theorem called the law of large numbers, which states that as the sample size tends to infinity (coincidentally, the average time Blizzard takes to release new tiers), the sample mean converges to the expected mean (a proof can be found online). With regard to the experiment above, given enough rolls, the average sum will converge to 7.
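The law of large numbers can also be seen numerically by scaling the simulation up (again my own sketch; seed and sample size are arbitrary):

```python
import random

random.seed(0)  # arbitrary fixed seed

# With a large sample, the average sum of two dice settles near 7
n = 100_000
total = sum(random.randint(1, 6) + random.randint(1, 6) for _ in range(n))
print(total / n)  # close to the theoretical mean of 7
```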
DOES RANDOM HAVE A MEMORY?
“I have run Icecrown Citadel 60 times now and still no mount, but at least my odds are getting better.”
The confusion arises from a lack of understanding of probability, or of the fact that each mount run is independent of any other. On any individual mount run, the chance of obtaining the mount is the same, i.e. the probability remains constant. Believing otherwise is known as the Gambler’s Fallacy, which is essentially a failure to understand and recognize statistical independence. The fallacy most commonly arises in betting situations in which players note previous outcomes and draw predictions from them. For example, after seeing three consecutive black numbers on a roulette wheel, there would be the belief that the next one is more likely to be red.
This occurs with players and their perception of mount drops: some feel they are due for a mount to drop because they have been running the instance (unsuccessfully) for some time, so it must drop soon. As mentioned above, on any individual run the chance of the mount dropping remains the same. If you have run the instance 99 times, on the 100th run you are no more or less likely to obtain the mount. The probability of seeing the mount at least once within x runs does, however, change.
Let X denote the number of independent Bernoulli trials needed for the first success to appear; then X follows a Geometric distribution with parameter p, the probability of the mount dropping, written:

X ~ Geo(p)

And the probability mass function:

P(X = k) = (1 − p)^(k − 1) × p,  k = 1, 2, 3, …
If the kth trial is the first success, this implies that every one of the preceding k − 1 trials was a failure.
There are two important conditions required for us to model this accurately.
- The probability of an event’s success must remain constant; this condition is met, as the probability of seeing the mount does not change.
- Events must occur independently – in context, the outcome of any one mount run should have no bearing on the outcome of any other.
If we want to calculate the probability of seeing a mount with a 1% drop chance for the first time on the 50th run, then k = 50 and p = 0.01:

P(X = 50) = (0.99)^49 × 0.01 ≈ 0.00611
Hence there is a 0.611% chance to see the mount for the first time on your 50th run.
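A one-line check in Python (my own sketch) confirms the figure:

```python
# Geometric pmf: P(X = k) = (1 - p)^(k - 1) * p
p, k = 0.01, 50
prob = (1 - p) ** (k - 1) * p
print(f"{prob:.3%}")  # 0.611%
```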
We now need to consider the probability of the mount dropping at least once in k runs, so we use the cumulative distribution function for the Geometric distribution:

P(X ≤ k) = 1 − (1 − p)^k

If we once again choose k = 50 and p = 0.01, this sums the probabilities of the mount dropping on each run up to and including the kth:

P(X ≤ 50) = 1 − (0.99)^50 ≈ 0.395
Hence we have a 39.5% chance to see the mount drop at least once in 50 runs. If we run the calculation again with k = 100, we obtain a 63.4% chance to see the mount drop at least once in all those runs.
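Both figures can be verified with the cumulative distribution function directly (my own sketch):

```python
# Geometric CDF: P(X <= k) = 1 - (1 - p)^k
p = 0.01
for k in (50, 100):
    prob = 1 - (1 - p) ** k
    print(f"k = {k}: {prob:.3f}")
# prints k = 50: 0.395 and k = 100: 0.634
```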
What if we wanted to examine how many runs it would take to be, for example, 99% sure of obtaining the mount? Note: it is impossible to be 100% sure of obtaining the mount, as there is always chance involved.
Going back to the cumulative distribution function, we want the smallest k such that:

1 − (1 − p)^k ≥ α

Rearranging gives (1 − p)^k ≤ 1 − α, and taking logarithms (remembering that ln(1 − p) is negative, which flips the inequality) yields:

k ≥ ln(1 − α) / ln(1 − p)

Therefore, if we want to be 99% sure of having the mount drop at least once, we have p = 0.01 and α = 0.99:

k ≥ ln(0.01) / ln(0.99) ≈ 458.2

Therefore it requires 459 runs of a specific instance with a drop chance of 1% to be 99% sure of seeing the mount drop at least once.
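The rearranged formula translates directly into a Python sketch (my own):

```python
import math

# Smallest k with 1 - (1 - p)^k >= alpha, i.e. k >= ln(1 - alpha) / ln(1 - p)
p, alpha = 0.01, 0.99
k = math.ceil(math.log(1 - alpha) / math.log(1 - p))
print(k)  # 459
```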
EXPECTATION OF A GEOMETRIC DISTRIBUTION
Expectation was discussed before, so what is the expectation of a Geometric distribution? For X ~ Geo(p):

E(X) = 1/p

Applying our standard value for p (assuming the drop chance is 1%), when p = 0.01,

E(X) = 1/0.01 = 100

Therefore the expected number of runs required until the first success is 100.
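The expectation can also be checked empirically by simulating many farming “careers” (my own sketch; the seed and trial count are arbitrary), which average out close to 1/p = 100:

```python
import random

random.seed(1)  # arbitrary fixed seed

def runs_until_drop(p: float) -> int:
    """Simulate runs until the mount drops; return how many runs it took."""
    runs = 1
    while random.random() >= p:  # random() < p counts as a drop
        runs += 1
    return runs

trials = [runs_until_drop(0.01) for _ in range(50_000)]
print(sum(trials) / len(trials))  # close to the theoretical 1/p = 100
```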
Probability is entirely based on random number generation and chance: you may see the mount on your 1st run, or you may see it on your 500th. Last year I started farming Ulduar for the highly coveted [Mimiron’s Head], and we made a deal to keep farming it until two mounts dropped, one for each of us. It turns out that the first mount dropped on the 3rd run, and the second mount dropped on the 7th run. The odds of this occurring are 0.0571% (calculated using the Negative Binomial distribution). The main thing to remember is that sometimes the odds go your way, and sometimes they don’t.
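For the curious, that figure can be reproduced in Python (my own sketch): under the Negative Binomial distribution, the probability that the second drop lands exactly on the 7th run is C(6, 1) × p² × (1 − p)⁵:

```python
import math

# Negative Binomial: P(r-th drop lands exactly on the k-th run)
#   = C(k - 1, r - 1) * p^r * (1 - p)^(k - r)
p, r, k = 0.01, 2, 7
prob = math.comb(k - 1, r - 1) * p ** r * (1 - p) ** (k - r)
print(f"{prob:.4%}")  # 0.0571%
```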
Percolator is a mathematics undergraduate and officer currently in charge of all recruitment and log analysis for Dragon. He is currently playing a Balance Druid. To learn more, feel free to visit his profile right here on Dragon’s website.