I understand your first two answers (my last two questions) well.
Regarding the third answer (first question), consider the following scenarios:
Scenario A: The simulation uses increments of 1000 stat points to conserve CPU power while still covering the entire space. It finds that >99% dps is achievable with various stat combinations having 1000 or more haste, 1000 or more crit, and 1000 or more versatility. All three stats have differing upper bounds, but they share the same lower bound.
Scenario B: The simulation uses increments of 100 stat points despite the greater CPU cost. It finds that >99% dps is achievable with various stat combinations having 800 or more haste, 1000 or more crit, and 1200 or more versatility. The upper bounds are unchanged (vs. Scenario A), but the lower bounds now differ slightly on the main chart.
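The two scenarios amount to the same grid search run at different increments. Here is a toy sketch of the idea, assuming a hypothetical diminishing-returns dps function (`toy_dps` is my invention, not AMR's actual simulator, and the fixed 6000-point budget is likewise just an assumption for illustration):

```python
import itertools
import math

def lower_bounds(dps, budget=6000, step=1000):
    """Grid-search all (haste, crit, vers) combos on a fixed stat budget
    at the given increment, then report the lowest value of each stat
    seen among combos reaching >99% of the best dps found."""
    grid = range(0, budget + 1, step)
    combos = [c for c in itertools.product(grid, repeat=3) if sum(c) == budget]
    scores = {c: dps(*c) for c in combos}
    threshold = 0.99 * max(scores.values())
    near_optimal = [c for c, s in scores.items() if s > threshold]
    # per-stat lower bound across all near-optimal combos
    return tuple(min(c[i] for c in near_optimal) for i in range(3))

def toy_dps(haste, crit, vers):
    # Toy model with diminishing returns on every stat, so balanced
    # combos score best and neighbors can land within 1% of the peak.
    return math.sqrt(haste) + math.sqrt(crit) + math.sqrt(vers)

coarse = lower_bounds(toy_dps, step=1000)  # Scenario A granularity
fine = lower_bounds(toy_dps, step=100)     # Scenario B granularity
```

With this toy function the coarse grid reports identical lower bounds for all three stats, while the finer grid finds near-optimal combos between the coarse points and reports lower floors, which mirrors how Scenario B's 100-point increments could reveal an 800-haste floor that the 1000-point grid rounds up to 1000.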
I’m curious whether Scenario B would have a noticeable impact on user perception. Obviously it doesn’t greatly change AMR’s recommendation, so from a rational-calculation standpoint the extra CPU effort isn’t worthwhile. But could it give users a more comfortable perception? Those extra clock cycles might make users feel that AMR is more correct, even though you and I both know that higher fidelity was already achieved by dropping stat weights in the first place.