We added a small “hidden” feature in the last update that is handy for anyone analyzing custom simulation batches. It’s best explained by example:

Say you run a batch of 50 different trinkets for your character to see how they stack up against each other. Many of those trinkets will be pretty close together, e.g. maybe trinket A is only 2000 DPS higher than trinket B. How do you know whether trinket A is actually better than trinket B, or whether that gap is just “noise” due to the simulation’s margin of error?

There’s a formula for that! If you click anywhere on the row for trinket B, then click on the row for trinket A, a window will pop up showing you the range on the difference between those two rows with 95% confidence. (This is equivalent to our single sims showing you an average DPS value +/- margin of error, just expressed differently. e.g. 600k +/- 2000 DPS is the same as 598k-602k DPS.) If the range on the difference between two rows crosses zero, then the difference is not statistically significant.

Example: If the difference shown is 1000 to 3000, then there is definitely an “interesting” difference between the two rows. If the difference shown is -500 to 1500, that crosses zero, so it is not considered statistically significant.
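The zero-crossing check is simple enough to express in a few lines of code. A minimal sketch (the endpoint values are whatever the popup reports; `is_significant` is a hypothetical helper, not part of the tool):

```python
def is_significant(low, high):
    """A 95% interval on the difference is statistically significant
    only if it does not contain zero."""
    return low > 0 or high < 0

print(is_significant(1000, 3000))   # True: an "interesting" difference
print(is_significant(-500, 1500))   # False: crosses zero, just noise
```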

For the math nerds out there, this uses the “pooled” standard deviation on the two point estimates to calculate the 95% confidence interval for the difference. This works out pretty well in our case because the standard deviation on each point is almost always very similar.
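For the extra-nerdy, the calculation can be sketched as below. This is an illustration under assumptions, not our exact implementation: the iteration counts per row and the use of the normal critical value 1.96 (reasonable at typical iteration counts) are assumptions here.

```python
import math

def diff_ci_95(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """95% confidence interval for (mean_a - mean_b), using the
    pooled standard deviation of the two samples.

    n_a and n_b are the iteration counts behind each point estimate.
    """
    # Pooled variance: a weighted average of the two sample variances.
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    # Standard error of the difference between the two means.
    se = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    diff = mean_a - mean_b
    margin = 1.96 * se  # normal approximation for large iteration counts
    return diff - margin, diff + margin

# e.g. trinket A at 600k DPS, trinket B at 598k DPS, 10,000 iterations
# each, with a per-iteration standard deviation of 8000 DPS (made-up
# numbers for illustration):
low, high = diff_ci_95(600_000, 8_000, 10_000, 598_000, 8_000, 10_000)
print(f"{low:.0f} to {high:.0f}")  # an interval around the 2000 DPS gap
```

Pooling works well here because, as noted above, the standard deviation on each point is almost always very similar, which is exactly the condition under which the pooled estimate is appropriate.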

In our next update, the “DPS Error”, “HPS Error”, and “NPS Error” columns will also auto-adjust when you check “subtract reference”: they will show the margin of error on the *difference* from the “reference” row, not the margin of error on each row’s point estimate.