5.5 We have several datasets that will have to go "JUMBO" soon. Is there anything special to look out for?

There are a couple of things to watch out for. The biggest issue is that the extended "JUMBO" part of the file resides in HFS space. This means that in order to store or restore the whole dataset you must use HFS naming conventions, e.g. STORE /DATAACCT/JUMBOSET/
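For example, assuming the database lives in group JUMBOSET of account DATAACCT (the placeholder names used above) and that a tape device class named TAPE exists on your system, a store/restore sequence might look something like this:

     FILE T;DEV=TAPE
     STORE /DATAACCT/JUMBOSET/ ;*T ;SHOW

     RESTORE *T; /DATAACCT/JUMBOSET/ ;SHOW

An MPE-style fileset such as @.JUMBOSET.DATAACCT would pick up only the MPE-named portion of the dataset and leave the HFS chunk files behind, which is why the HFS-style fileset is needed.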

It also means that the Jumbo chunk files won't show up in a regular MPE LISTF; you must use LISTFILE with HFS fileset conventions. If the dataset's MPE name is XXXDB22, then the Jumbo file names will be XXXDB22.001, XXXDB22.002, etc.
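For example, again using the placeholder names above and assuming the CI's usual @ wildcard on HFS-syntax names, something like:

     LISTFILE /DATAACCT/JUMBOSET/XXXDB22@,2

should show the root file together with XXXDB22.001, XXXDB22.002, and so on, while a plain LISTF XXXDB22.JUMBOSET.DATAACCT shows only the MPE-named root file.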

Of course the larger the set, the longer serial reads will take.

Also, going "JUMBO" means that FSCHECK can't be used to purge corrupted JUMBO chunk files, or temporary versions of them left over from an aborted restore (when using certain unnamed restore utilities), because FSCHECK can't purge HFS files.
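If you do need to clean up a leftover chunk file, you would have to use HFS-capable tools instead; a hypothetical cleanup (placeholder names again, and assuming the file is still intact enough for a normal purge) might be:

     PURGE /DATAACCT/JUMBOSET/XXXDB22.001

or, from the POSIX shell:

     RUN SH.HPBIN.SYS;INFO="-L"
     rm /DATAACCT/JUMBOSET/XXXDB22.001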

From Ken Sletten:

As in many cases, there is a slight divergence between theory and reality.

Internal TurboIMAGE limits currently restrict the maximum size of one DETAIL dataset to 80GB or less (depending on Block Factor), using 4GB JUMBO HFS chunk file "extensions". With the pending IMAGE enhancement to move from EntryByName to 32-bit EntryByNumber (IMAGE will continue to support both formats), HP could also choose to increase the number of allowable JUMBO chunk files. BUT, HP has indicated that a future release will support using MPE "Large Files" as IMAGE DETAIL datasets. Since 128GB "Large Files" are in MPE 6.5, my unconfirmed guess is that HP will likely try to go directly to MPE Large Files in IMAGE instead of first increasing the number of JUMBO chunk files from the current maximum of 99 (the format easily accommodates a maximum of 999).