Some design caveats I've seen:
If you provision very large thin VMDKs (80%+ of a single host's total capacity-tier storage) and they get near full, you can hit situations where the cluster as a whole is not out of space, but one individual capacity drive hits 100%. That still counts as out of space for the VMDK's object, and the VM it's attached to gets paused. For this reason, the largest single VMDK should probably be only ~25% of a host's total storage (assuming FTT=1).
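To make that rule of thumb concrete, here's a rough back-of-the-napkin check in Python. The 25% ratio and the FTT=1 assumption come from the caveat above, not from any official VMware sizing formula, and the capacity numbers are made up for illustration:

```python
# Rough sanity check for the "largest VMDK vs. single-host capacity" caveat above.
# The 25% rule of thumb and FTT=1 assumption come from this post, not from any
# official VMware sizing guidance -- adjust the ratio to your own comfort level.

def max_recommended_vmdk_gb(host_capacity_gb: float, ratio: float = 0.25) -> float:
    """Largest single VMDK we'd feel comfortable provisioning per the rule of thumb."""
    return host_capacity_gb * ratio

def flag_oversized_vmdks(host_capacity_gb: float, vmdk_sizes_gb: list[float]) -> list[float]:
    """Return the VMDKs that exceed the rule of thumb for this host capacity."""
    limit = max_recommended_vmdk_gb(host_capacity_gb)
    return [size for size in vmdk_sizes_gb if size > limit]

if __name__ == "__main__":
    host_capacity_gb = 20_000            # example: ~20 TB of capacity tier per host
    vmdks = [2_000, 4_500, 17_000]       # 17 TB is ~85% of one host's capacity tier
    print(f"Rule-of-thumb max VMDK: {max_recommended_vmdk_gb(host_capacity_gb):,.0f} GB")
    print(f"Oversized VMDKs: {flag_oversized_vmdks(host_capacity_gb, vmdks)}")
```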
In our test environment we have also seen resync/space issues with large VMDKs (large being 80%+ of a host's storage capacity) once vSAN is around 90% utilized. In short, above roughly 80% utilization, resync can become a problem: massive amounts of data get moved during maintenance operations, e.g. 2-3 days just to enter maintenance mode. If the host enters maintenance mode in the middle of the night and VSAN.ClomRepairDelay is left at the default 60 minutes, then once those 60 minutes pass mid-maintenance, vSAN starts repairing the absent components and generates massive resync traffic. After maintenance concludes, it generates even more resync to rebalance the cluster, because the repair delay wasn't set high enough in the first place.

We had this happen on our dev cluster: the resync triggered by maintenance on a single host took 10 days to complete. The resync also consumed a large amount of temporary space, which vSAN gave back later, but at one point the datastore suddenly jumped from 80% full to 99.9%.
Short story: watch out for very large VMDKs in your design, and for space/resync issues once you're beyond 80% full on vSAN. Also plan for long maintenance windows: if you expect a window to run longer than an hour, set ClomRepairDelay to something higher first (a rough sketch of doing that over SSH is below).
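For the repair delay itself, here's a minimal sketch of raising it on each host over SSH before a long window. The hostnames and credentials are placeholders, and it assumes your vSAN build still uses the host-level /VSAN/ClomRepairDelay advanced setting with a clomd restart; newer builds also expose this as a cluster-wide object repair timer in the UI, so use whichever applies to your version:

```python
# Minimal sketch: raise the CLOM repair delay on every host before a long maintenance
# window (remember to set it back afterwards). Hostnames/credentials are placeholders.
# Assumes SSH is enabled on the hosts and that the host-level advanced setting
# /VSAN/ClomRepairDelay is how your vSAN version manages the repair delay.
import paramiko

HOSTS = ["esx01.example.local", "esx02.example.local", "esx03.example.local"]
USER, PASSWORD = "root", "changeme"

def set_clom_repair_delay(host: str, minutes: int) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=USER, password=PASSWORD)
    try:
        # Delay (in minutes) before vSAN starts rebuilding absent components.
        cmd = f"esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i {minutes}"
        _, stdout, stderr = client.exec_command(cmd)
        err = stderr.read().decode().strip()
        if err:
            raise RuntimeError(f"{host}: {err}")
        # The change takes effect after clomd is restarted on the host.
        _, out, _ = client.exec_command("/etc/init.d/clomd restart")
        out.channel.recv_exit_status()  # wait for the restart to finish
        print(f"{host}: ClomRepairDelay set to {minutes} minutes")
    finally:
        client.close()

if __name__ == "__main__":
    # Example: expecting an 8-hour window, so give yourself ~10 hours of headroom.
    for h in HOSTS:
        set_clom_repair_delay(h, minutes=600)
```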