Hi All
I am trying to understand if there are better and worse ways of splitting my sync rules into buckets.
I understand that syncing everything is less efficient than syncing only what I actually need for my offline workflows, but my questions are more about the structure of the buckets, the order of the buckets, how the buckets get “parsed”, and whether any of that makes a difference.
For example:
- Does the order of the models inside a bucket make a difference to the order in which data will be synced? (For example, should I put the high-volume models first or last?)
- Is it better to have two separate global buckets, one with only small-volume models and the other with only big-volume models, or does it not make any difference whether you split the buckets up or just have one big bucket? (Apart from the fact that multiple buckets chip away at the overall bucket limit.)
- Similar to above, assuming the same amount of data that needs to get synced, is 1 bucket more sync efficient than 2 buckets?
- Should I put data that changes often in a separate bucket from data that never changes? Will that make the sync process more efficient?
I guess a different way of looking at my question is this: assuming the same end sync result set (i.e. the same set of unique individual records), does the structure of the sync rules (apart from the total number of buckets to be synced) impact the performance of the sync process?
> Does the order of the models inside a bucket make a difference to the order in which data will be synced? (For example, should I put the high-volume models first or last?)
No. The order in which the changes were made determines the order in which data is synced, not the order of the models within the bucket definition.
> Is it better to have two separate global buckets, one with only small-volume models and the other with only big-volume models, or does it not make any difference whether you split the buckets up or just have one big bucket? (Apart from the fact that multiple buckets chip away at the overall bucket limit.)
Apart from the bucket count limit, it only has an effect on reprocessing (which only happens when the sync rules change). For example, when you add a new model to an existing bucket, the entire bucket may need to be re-downloaded on every client. If instead you add it as a new bucket, only the new bucket is downloaded by the client. (It’s a little more complicated in practice, but that’s the general idea.)
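To sketch that trade-off (a hedged illustration assuming PowerSync-style sync rules syntax; the `lists`, `todos`, and `audit_logs` table names are hypothetical):

```yaml
bucket_definitions:
  # Existing bucket: adding another SELECT here changes this bucket's
  # definition, so clients may need to re-download the whole bucket.
  global_data:
    data:
      - SELECT * FROM lists
      - SELECT * FROM todos

  # New model added as its own bucket instead: the existing bucket is
  # untouched, so clients only download the new bucket's data.
  audit_logs:
    data:
      - SELECT * FROM audit_logs
```

The second approach costs one slot against the bucket count limit, but avoids re-downloading data clients already have when the rules change.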
> Similar to above, assuming the same amount of data that needs to get synced, is 1 bucket more sync efficient than 2 buckets?
There is no significant difference between having 1 or 2 buckets. There may be a performance difference between 100 and 200 buckets, though.
> Should I put data that changes often in a separate bucket from data that never changes? Will that make the sync process more efficient?
This will not have any significant effect either. Overall, the things to optimize for are:
- The total number of objects being synced.
- The number of buckets being synced (once again, one or two more won’t make a difference).
- The number of changes being synced.