net.digest – November 2003

 

So where do we go from here? If HP is to be believed, and there is no reason not to believe it, the company has sold its very last HP 3000 system. Ironically, on the day HP took its last order for HP 3000s, I took delivery of an IBM iSeries (née AS/400) that I intend to use as a test bed for writing a series of comparative white papers. It is interesting to see how HP and IBM have treated these respective systems over the years. The most obvious difference is that even during the great hype over Open Systems, which was really just a code phrase for Unix, IBM continued to invest heavily in the AS/400 while HP chose to milk the HP 3000 customer base and put most of its investments elsewhere. Even as HP abandons the HP 3000, we’ll still be here helping you, whether you are transitioning away from the HP 3000 or homesteading.

Carly Fiorina, HP’s CEO, was the subject of several of the off-topic postings on HP3000-L. One was about how she had joined Governor-elect Arnold Schwarzenegger’s transition team. Another pointed to a CNET commentary by Charles Cooper: “There sat Carly Fiorina, Hewlett-Packard's great communicator, struggling to directly answer an interviewer's request for a crisp definition of the company's ‘Adaptive Enterprise’ strategy… Carly Fiorina's recent explanation of Adaptive Enterprise was enough to reduce even the most hardened McKinsey consultant to a state of dribbling catatonia.” Make your own connection. As usual, though, there was still a lot of good technical advice. Some of it follows.

I always like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I’m full of it, goofed, or am just a horse’s behind, let me know. If something from these columns helped you, let me know. If you’ve got an idea for something you think I missed, let me know. If you spot something on HP3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at john@burke-consulting.com.

 

Firmware: Homesteaders and Self-Maintainers Take Note

 

A poster to HP3000-L was trying to get a 9x8 up and running using a set of MPE/iX 6.5 tapes. However, “there was one problem in following the instructions: MAPPER complained that it needed PDC version 1.3 or later; but my machine has 1.2 installed. Can the firmware be updated without resorting to a technician?”

This could increasingly become a problem in the years after end-of-sales, as people pick up used machines from various sources. And it is not just PDC firmware: there is also the question of how to get firmware for other components, such as the code that turns a generic Seagate disk drive into an “HP” disk drive. In the case of disk drives, this was discussed at HPWorld 2003, with HP promising to look into the issue. The good news is that the technical side of HP does not see any particular problem with making the firmware available. The bad news is that the same legal group that has a death grip on MPE has not yet reviewed the issue. This is a topic the 3000 NewsWire will keep an eye on.

But, let’s get back to our original problem. An anonymous poster said “MAPPER could be run anyway, even with the complaint about version 1.2 of the PDC. Simply reply ‘RESUME’ at the MAPPER PAUSED> prompt, then ‘RESUME’ again at the warning about ‘if you do this, the system will crash!’ (I assume that's what HPMC means). I did, and it worked fine.”

About the firmware itself, Stan Sieler noted, “you can download the firmware for the 918 (and other 9x8 boxes) from: ftp://ftp.itrc.hp.com/firmware_patches/hp/cpu. You want: ftp://ftp.itrc.hp.com/firmware_patches/hp/cpu/PF_CWBR0013.txt and ftp://ftp.itrc.hp.com/firmware_patches/hp/cpu/PF_CWBR0013. The text file starts:

 

   Patch Name:  PF_CWBR0013

   Patch Description: HP9000 Model E25/E35/E45 PDC revision 1.3.

       This patch is a PDC firmware update for the HP9000
       Models E25/E35/E45, and the HP3000 918/928/968 systems.
       The latest versions of the offline diagnostic "MAPPER"
       will not execute properly if the systems described
       above do not contain PDC revision 1.3 or greater.

 

“The firmware requires creating a tape, but the instructions are in the patch. I've done it with no problems. Note that the instructions assume you've got access to an HP 9000. I've never tried them from a 3000, but there's a chance it will work from there.”
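If you would rather script the download than fetch the files by hand, here is a minimal sketch using Python’s ftplib against the ITRC addresses Stan listed. Whether that ftp site is still reachable, and whether these are still the current patch names, is something you will have to verify yourself; the host, path and filenames below simply echo his post.

   # Sketch only: pull the PF_CWBR0013 patch files from the ITRC ftp site.
   # Adjust host, path and filenames if HP has moved them.
   from ftplib import FTP

   HOST = "ftp.itrc.hp.com"
   PATH = "firmware_patches/hp/cpu"
   FILES = ["PF_CWBR0013.txt", "PF_CWBR0013"]

   ftp = FTP(HOST)          # connect; raises an error if the site is unreachable
   ftp.login()              # anonymous login
   ftp.cwd(PATH)
   for name in FILES:
       out = open(name, "wb")
       ftp.retrbinary("RETR " + name, out.write)   # binary transfer, so the patch is not mangled
       out.close()
       print("retrieved", name)
   ftp.quit()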

 

In Case You Missed It, Because

 

It was buried in a thread titled “Graphical Depiction of IMAGE Database,” where Wirt Atmar and Jerry Fochtman expounded on the following series of questions: “For serial read performance, can anybody comment on the expected gain or difference between just deleting records vs. deleting, setting a lower capacity and repacking? As an example, let’s say we delete 25% of the records in a dataset with 20 million records. Fewer records clearly mean less time, right? Even if you don't resize and repack, right?”

First, from Wirt Atmar, “No. If you delete 25% of your 20 million records but fail to repack the dataset, a serial search will take just as long as it did before. A serial search begins at the first record of the dataset and proceeds until it hits the high-water mark. It doesn't matter if the records in between those two points are either active or have been marked deleted. A repacked dataset however will be 25% faster to search serially. All of the deleted records will have been squeezed out of the dataset, so that every record that's now present is active, and the high-water mark will have been moved down to the top of those records.”

Jerry Fochtman elaborated, “I'd like to expand a bit on what Wirt provided by explaining the two primary options that are available in the various third-party tools to address this situation. The most common approach is to repack along a specific search path. This will improve data retrieval performance when your application performs a key lookup and then retrieves multiple detail entries following the look-up. Another repack method involves simply compressing the empty space out of the file and lowering the high-water mark; it just moves adjacent records next to one another until they are all located consecutively. Both methods will improve the performance of a serial scan by lowering the high-water mark and removing the space occupied by the deleted records. However, while the second, compress-only method can be performed on a dataset faster than the reorganizing/repacking method, there is no guarantee that it will improve retrieval performance along a particular search path. Some folks with very large datasets but only 1-2 entries per key value periodically use this compress method on their sets, as the added downtime to conduct a full reorganization does not provide noticeable lookup performance improvement. Other sites, with much longer search chains within a key value, find that periodically performing the full detail set reorganization does indeed improve their application performance. So, as with most things, it depends on your situation as to which approach may work best, especially when it comes to very large detail sets.”
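To put a number on Wirt’s point, here is a toy model (plain Python arithmetic, not anything that actually talks to IMAGE) of the poster’s 20-million-record example: a serial read walks every slot up to the high-water mark, so deleting a quarter of the entries buys nothing until a repack squeezes out the holes and lowers the mark.

   # Toy model: serial-scan cost is set by the high-water mark,
   # not by how many entries are still active.
   RECORDS = 20_000_000          # entries originally in the detail set
   DELETED = RECORDS // 4        # 25% of them marked deleted

   def scan_cost(high_water_mark):
       # a serial read examines every slot from record 1 to the high-water mark
       return high_water_mark

   after_delete = scan_cost(RECORDS)            # 20,000,000 slots -- mark unchanged
   after_repack = scan_cost(RECORDS - DELETED)  # 15,000,000 slots -- mark lowered

   print(after_delete, after_repack)
   print("repack removes", 1 - after_repack / after_delete, "of the work")   # 0.25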

As for the question that started the thread, suggestions ranged from Stan Sieler’s DBHTML (at www.allegro.com), to Fantasia (from JetForm Corporation), to Dataarchitect (from www.thekompany.com) if you have ODBC set up, to DBIMAG4 (from the Interex CSL), to Visio (from Microsoft), again if you are set up to use ODBC.

 

Stop Doing That

 

From HP3000-L, "We have a 989/650 system. Every weekend we identify about 70 thousand files to delete off the system. I build a jobstream that basically executes a file that has about 70 thousand lines. Each line says 'PURGE file.group.account'. This job has become a real hog. It launches at 6 AM on Sunday morning, but by 7 PM on Sunday night it has only purged about 20,000 files. While this job is running, logons take upwards of 30 seconds. What can I do?"

This reminds me of the old joke where the guy goes to the doctor and complains, “Gee, doc, my arm hurts like hell when I move it like this. What can I do?” The doctor looks at him and says, “Stop moving it like that.” But seriously, the user above is lucky the files are not all in the same group, or he would be experiencing system failures like the poor user two years ago who was only trying to purge 40,000 files. In either case, the advice is the same: purge the files in reverse alphabetical order. This will avoid a system failure if you already have too many files in a group or HFS directory, and it will dramatically improve system performance in all cases. However, several people pointed out that if you find you need to purge 70,000 files per week, you should consider altering your procedures to use temporary files or, if that will not work, to purge the files as soon as you no longer need them rather than waiting until it becomes a huge task. Some excerpts from the original net.digest column (published November 2001 – when many of us had other issues on our minds) follow:

If all the files are in one group and you want to purge only a subset of the files in the group, you have to purge the files in reverse alphabetical order to avoid the System Abort (probably SA2200). PURGEGROUP and PURGEACCT will succeed, but at the expense of having to recreate the accounting structure and restore the files you want to keep. Note that if you log onto the group and then do PURGEGROUP, you will not have to recreate the group.

Craig Fairchild, MPE/iX File System Architect, explained what is going on: "Your system abort [or performance issues] stems from the fact that the system is trying desperately to make sure that all the changes to your directory are permanently recorded. To do this, MPE uses its Transaction Management (XM) facility on all directory operations. To make sure that the directories are not corrupted, XM takes a beginning image of the area of the directory being changed, and after the directory operation is complete, it takes an after image. In this way, should the system ever crash in the middle of a directory operation, XM can always recover the directory to a consistent state - either before or after the operation, but not in a corrupted in-between state.

"On MPE, directories are actually just special files with records for each other file or directory that is contained in them. They are stored in sorted alphabetical order, with the disk address of the file label for that file. Because we must keep this list of files in alphabetical order, if you add or delete a file, the remaining contents of the file need to be "shifted" to make room, or to compact the directory. So, if you purge the first file alphabetically, XM must record the entire contents of the directory file as the before image, and the entire remaining file as the after image. So purging from the top of the directory causes us to log data equal to twice the size of the directory. Purging from the bottom of the directory causes XM to log much less data, since most of the records stay in the same place and their contents don't change.

"The system abort comes from the fact that more data is being logged to XM than it can reliably record. When its logs fill completely and it can no longer provide protection for the transactions that have been initiated, XM will crash the system to ensure data integrity."
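Craig’s description is easy to turn into a back-of-the-envelope model. The sketch below is ordinary Python, not anything MPE actually runs, and the 64-byte entry size is just an assumption for illustration: it charges XM a before-image and an after-image of everything from the deleted entry to the end of the directory. Purge the poster’s 70,000 files alphabetically from the top and the model logs hundreds of gigabytes; purge them from the bottom and it logs a handful of megabytes.

   # Back-of-the-envelope model of XM logging while purging one group's files.
   # A directory is a sorted list of fixed-size entries; deleting entry i shifts
   # the tail up, so XM logs before- and after-images of everything from i onward.
   ENTRY_BYTES = 64        # assumed size of one directory entry (illustration only)
   FILES = 70_000          # the poster's weekly purge list, all in one group

   def xm_log_bytes(purge_from_top):
       logged = 0
       for remaining in range(FILES, 0, -1):
           # tail = entries from the deleted one down to the end of the directory
           tail = remaining if purge_from_top else 1
           logged += 2 * tail * ENTRY_BYTES    # before-image + after-image
       return logged

   print("alphabetical purge:         ~%.0f GB logged" % (xm_log_bytes(True) / 1e9))
   print("reverse alphabetical purge: ~%.0f MB logged" % (xm_log_bytes(False) / 1e6))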

Goetz Neumann added, "PURGEGROUP (and PURGEACCT) do not cause an SA2200 risk, since they actually traverse the directory in reverse alphabetical order internally. This is useful to know for performance reasons. Since these commands cause much smaller XM transactions, it is faster to empty a group by logging into it and then doing a PURGEGROUP, instead of using PURGE @. A little-known fact is that there is a tool to help prevent you from running into these situations in the first place: DIRLIMIT.MPEXL.TELESUP. It should be documented in the MPE/iX 5.0 Communicator. A suggested (soft) limit for directory files would be 2 MB. This would limit MPE to not have more than 50,000 files in one group and (very much depending on the filenames) much less than 50,000 files per HFS directory. (These are XM protected just as well, and tens of thousands of files in an HFS directory is not a good idea from a performance standpoint either.) Another way to reduce the risk of an SA2200 in these situations would be to increase the size of the XM system log file (on the volume set that holds the group with the large number of files), which can be done with a VOLUTIL command since MPE/iX 6.0."
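Finally, for the poster who started the thread, the cheapest fix is simply to sort the purge list in reverse before building the job. Here is a minimal sketch that turns a plain list of fully qualified file names (purgelist.txt, a hypothetical input file) into a jobstream that purges from the bottom of the directory up; the job card user and account are placeholders you would replace with your own.

   # Sketch: build a purge jobstream in reverse alphabetical order.
   # purgelist.txt holds one fully qualified file name per line (hypothetical).
   names = [line.strip().upper() for line in open("purgelist.txt") if line.strip()]
   names.sort(reverse=True)        # purge from the bottom of the directory up

   job = open("purgejob", "w")
   job.write("!JOB WEEKPURG,MANAGER.SYS\n")   # placeholder job card -- use your own
   for name in names:
       job.write("!CONTINUE\n")               # don't abort the job if one purge fails
       job.write("!PURGE %s\n" % name)
   job.write("!EOJ\n")
   job.close()

The same reverse sort applies just as well to the 70,000-line command file the poster was already executing; the point is simply that the last file alphabetically gets purged first.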