5.6 What determines the number of chunks in a JUMBO dataset? We have one dataset with 32 chunks. All of our other JUMBO datasets have a seemingly more reasonable 2-6 chunks. Is this documented anywhere? Does this unusually large quantity of chunks indicate some problem?

No, that's not a problem. The jumbo code allows up to 99 chunks (easily expanded to 999).

There's a chance that the 32 chunks were created by an early version of IMAGE and/or a third-party tool, perhaps while trying to allocate space on a checkerboarded (fragmented) disk system. It might be possible to maximize the size of each chunk file, and thus reduce the number of chunks, by performing a reorganization. Contact your third-party tool provider for guidance.

The minimum number of chunks for a dataset is essentially determined by dividing the total space needed to house the data volume by 4 GB (the maximum size of each chunk file). Using individual chunk files smaller than 4 GB therefore requires more chunk files to contain the same data volume. The current maximum number of chunks for a single dataset is 99; however, one would more likely exhaust the ability of IMAGE's current record pointer format to address all the entries in a set before reaching 99 files at 4 GB each.
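As a rough sketch of that arithmetic (the 4 GB per-chunk limit and 99-chunk cap come from the answer above; the function name and the 120 GB example figure are illustrative assumptions, not part of any IMAGE interface):

    # Estimate the minimum number of chunk files for a jumbo dataset.
    # Assumes a 4 GB limit per chunk file and a 99-chunk cap per dataset.
    import math

    MAX_CHUNK_BYTES = 4 * 1024**3   # 4 GB per chunk file
    MAX_CHUNKS = 99                 # current jumbo limit per dataset

    def min_chunks(total_dataset_bytes):
        """Return the fewest chunk files needed to hold the dataset."""
        chunks = math.ceil(total_dataset_bytes / MAX_CHUNK_BYTES)
        if chunks > MAX_CHUNKS:
            raise ValueError("dataset would exceed the %d-chunk limit" % MAX_CHUNKS)
        return chunks

    # Example: a dataset needing roughly 120 GB of space
    print(min_chunks(120 * 1024**3))   # -> 30

A dataset with 32 chunks built from smaller-than-maximum chunk files could thus hold far less data than 32 x 4 GB; a reorganization that fills each chunk to the 4 GB limit would shrink the chunk count accordingly.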

There may also be scenarios where keeping individual chunk files smaller than 4 GB benefits performance.