Your first problem is that it is not cheap to add a member to a cluster. The data is distributed across the cluster, so when a member is added the data has to be re-distributed across all the members. So realistically you are almost certainly going to need to keep a consolidator and multiple scanners.
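To give a feel for that rebalance cost, here is a rough back-of-envelope sketch, assuming the data is spread evenly (consistent-hash style) — the exact figure depends on the product's partitioning scheme, and the function name is just illustrative:

```python
def data_moved_on_join(total_gb, old_members):
    """Estimate GB shipped over the network when one member joins.

    Assumes even distribution: each existing member hands roughly
    1/(n+1) of its share to the newcomer, so about total/(n+1) GB
    moves in total. Real systems may move more.
    """
    return total_gb / (old_members + 1)

# Joining a 5th member to a cluster holding 500 GB
print(data_moved_on_join(500, 4))  # 100.0 -> ~100 GB re-shipped
```

So even in the best case, a join on a 500GB data set is a triple-digit-GB network event, which is why it is done rarely.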
I don't mind if it's not cheap.
If I did want to build a beast of an environment, and my current DB size was, say, 500GB on a 1TB disk: are you saying that a clustered equivalent would require multiple members, each with a 1TB disk, or could I feasibly lower that requirement because the capacity is shared? Or is it the case that the DB is replicated as an exact copy to every other node?
Assuming you have a fault-tolerant cluster, how many copies of each piece of data exist will depend upon the size of the cluster. Although data is duplicated across nodes, the overall disk requirement on each individual machine would be lower than on a single server. However, adding a member redistributes this data, so it carries a significant cost.
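The arithmetic above can be sketched like this — a minimal illustration, assuming even distribution and a configurable replication factor (the parameter names are mine, not from any particular product, and real deployments need headroom on top):

```python
def per_member_gb(total_gb, members, replication_factor=2):
    """Rough per-member disk need for a replicated cluster.

    replication_factor = number of copies kept of each partition.
    Assumes data spreads evenly and ignores indexes/overhead.
    """
    return total_gb * replication_factor / members

# 500 GB of data, 2 copies of everything, spread over 5 members:
print(per_member_gb(500, 5, 2))  # 200.0 -> ~200 GB per member
```

So with 5 members and 2 copies of everything, each member needs roughly 200GB rather than the full 500GB — lower per machine, but the cluster as a whole stores 1TB.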
All the members are expected to be in the same location, as they need low latency and must handle significant traffic amongst themselves.