I've been chewing this over for a while, and I've come to the conclusion that I need dynamic threshold values.
For example, I have a 7-node cluster, and I calculated the consumed-memory thresholds as 80% and 90% of the N-1 RAM total. With 192 GB per node, that's 6 * 192 * 0.8 = 921.6 GB and 6 * 192 * 0.9 = 1036.8 GB.
However, suppose we then have a hardware failure. It isn't clear when that node will be back in circulation, and if I don't go back in to amend the thresholds, they're wrong, which then skews my forecast saturation dates.
For %-based thresholds this is fine, but where you have absolute (non-%) thresholds and an environment that can flex, the thresholds need to flex with it, without the management overhead of remembering to update them by hand.
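To make the idea concrete, here's a minimal sketch of what I mean by a dynamic threshold: compute the warning/critical values from the node count the monitoring system sees at evaluation time, rather than hard-coding them. The function name, parameters, and default percentages are just my assumptions for illustration, not any particular tool's API.

```python
def memory_thresholds(active_nodes: int, node_ram_gb: float,
                      warn: float = 0.8, crit: float = 0.9) -> tuple[float, float]:
    """Consumed-memory warning/critical thresholds as fractions of the
    N-1 RAM total, for however many nodes are currently in service."""
    # N-1 capacity: reserve one node's worth of RAM as failover headroom
    capacity = max(active_nodes - 1, 0) * node_ram_gb
    return capacity * warn, capacity * crit

# All 7 nodes up: matches the hand-calculated 921.6 / 1036.8 GB
print(memory_thresholds(7, 192))
# One node out for repair: thresholds shrink automatically, no manual edit
print(memory_thresholds(6, 192))
```

If the node count is fed in from discovery or inventory data, the forecast saturation dates stay honest through hardware failures and node additions alike.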