net.digest – January 2004

 

This month we again had a number of lengthy politics and religion threads that threatened to hijack HP3000-L, obscuring the many interesting technical and non-technical threads. A casual observer might think nothing of technical merit or interest was discussed this month, but he would be wrong. I enjoyed the report and associated commentary about HP planning to enter the online music store market, about the recent defections from the HP executive ranks, and about what the hell HP's "Virtual Adaptive Enterprise" actually means. Then there was the thread about HP's new Webcast extolling the virtues of migrating to .NET, and Alfredo Rego's observation: "Does this mean that HP has now decided that there is NO value (and that there are NO benefits) associated with migrating your HP e3000 to HP-UX?" Finally, there was the verbal nuclear bomb dropped by OpenMPE Board member Ken Sletten (covered in detail elsewhere in this issue). What about technical content, you ask? It is well represented below and in Hidden Value.

I always like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I'm full of it, goofed, or am a horse's behind, let me know. If something from these columns helped you, let me know. If you've got an idea for something you think I missed, let me know. If you spot something on HP3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at john@burke-consulting.com.

 

Automating ftp logons

 

Many of us have used ftp for years without really understanding some of its features. In response to a question about automating an ftp process, Donna Garverick gave a short tutorial on using netrc files: netrc files are simple ASCII files that make both ftp and users happy. The format of a netrc record is:

 

                        machine 'nodename' login 'mgr.foobar' password 'never,tell'

 

If this netrc file is named 'netrc' and lives in the ftp initiator's home group, then:

 

                        ftp nodename

 

will automatically log you onto 'nodename'.  Caveat: since so many users' home group is PUB, it makes a tremendous amount of sense not to keep the netrc file in the PUB group. And since this file contains passwords, it makes sense to put some kind of security on it. My recommendation is to 'altsec' it in some fashion (either restricting read/write access or using ACDs). If you put the netrc file into a non-home group, then the following file equation is needed:

 

                        :file netrc.[my_home_group] = [filename.group][.account]

 

For example, ':file netrc.pub = nodename.netrc'. A single netrc file can hold logons for multiple nodes but not multiple logons for a single server (you need multiple files for that). With this kind of system in place, users never need to know passwords. It's all 'hidden' in netrc files. For more information, read ftpdoc.arpa.sys.
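To pull the pieces together, here is a minimal sketch of the whole setup. The node name 'hydra', the logon 'mgr.foobar' and the file name NETRC.SECURE are all hypothetical, and the ALTSEC step follows Donna's recommendation to lock the file down with an ACD:

                        :comment Build the netrc file in a non-PUB group
                        :echo machine 'hydra' login 'mgr.foobar' password 'never,tell' > netrc.secure
                        :comment Let only the owner read or write the file
                        :altsec netrc.secure;newacd=(r,w:mgr.foobar)
                        :comment ftp looks for netrc in the home group (PUB here)
                        :file netrc.pub = netrc.secure
                        :ftp hydra

If everything is in place, the final ftp command logs onto 'hydra' with no prompting for user or password.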

 

More on FTP - EXITONERROR not working correctly

 

A user writes, "We have several batch jobs that get files from other machines. They worked fine under plain MPE/iX 7.0, but after PowerPatch 2 was applied, EXITONERROR appears to exit and set the variables back to a successful state, instead of indicating why EXITONERROR activated."

James Hofmeister replied, "I duplicated this problem, and EXITONERROR is working properly. However, the 'quit' that is called internally on EXITONERROR operates the same as if the QUIT command were entered at the user command prompt. We need to make a code repair so that the 'quit' on EXITONERROR avoids updating the FTPLASTREPLY variable with the results of the 'quit'."

Tim Cummings suggested, "The only way I have found to reliably determine if FTP has completed the task is to set my own VARs. Before you enter FTP, set a VAR to indicate that your ftp has failed. Then, inside FTP, after you issue the PUT, GET, etc., follow it with a :setvar to indicate that the FTP completed successfully."

Joshua Johnson added, "I took this a step further and used setvar _ftp_lastcmd and ':if' to check the FTP variables, then set my own variables before I exit FTP."
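Put together, the technique looks something like the following minimal job sketch. The job, node, file and variable names are all hypothetical. The key point: if the get fails while EXITONERROR is in effect, FTP exits before the :setvar line runs, so FTP_OK stays false and the job can react accordingly:

                        !JOB FTPJOB,MGR.FOOBAR
                        !SETVAR FTP_OK FALSE
                        !FTP.ARPA.SYS
                        exitonerror
                        open hydra
                        get remotefile localfile
                        :setvar FTP_OK TRUE
                        quit
                        !IF NOT FTP_OK THEN
                        !   TELLOP FTPJOB: file transfer failed
                        !ENDIF
                        !EOJ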

 

Another example of why failure to prepare is preparing to fail

 

The following sad story was posted to HP3000-L: "I was doing an archive of an IMAGE dataset this weekend, and I was disappointed at the performance of the PUTs. I had previously created a flat file of the records to add back; the DB was set with AUTODEFER enabled, TPI turned off, and no IMAGE paths on the set (just OMNIDEX keys). The set had 101 million entries when I started. I unloaded the 40 million we wanted to keep via SUPRTOOL into an SD file (this took only 58 minutes), erased and resized the set to 60 million via ADAGER, and then began the PUTs via SUPRTOOL with the set locked up front. My performance was 3.5 million PUTs per hour, and I was really disappointed. I truly thought that with no IMAGE keys, AUTODEFER on, and TPI off, the set should load very quickly. It turns out that even though I set TPI off, I still needed to de-install OMNIDEX. Sigh."
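For readers who have not done this kind of surgery, the unload and reload halves look roughly like the Suprtool sketch below. The database, set, field, password and file names are all hypothetical; the Adager erase/resize, and the OMNIDEX de-install the poster wished he had done, belong between the two halves:

                        !RUN SUPRTOOL.PUB.ROBELLE
                        BASE ORDERS.DATA,1,WRITER
                        GET BIGSET
                        IF STATUS = "KEEP"
                        OUTPUT KEEPROWS,LINK
                        XEQ
                        EXIT

Then, after the set has been erased and resized and OMNIDEX has been de-installed:

                        !RUN SUPRTOOL.PUB.ROBELLE
                        BASE ORDERS.DATA,1,WRITER
                        INPUT KEEPROWS
                        PUT BIGSET
                        XEQ
                        EXIT

The ,LINK option on OUTPUT is what produces the self-describing (SD) file the poster mentioned.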

 

CI functions SIB request update

 

This from Jeff Vance: "As you know, we are implementing the 'CI Functions' SIB request. The engineer responsible for the coding and design details (Hariprasad) discovered that the CI's evaluator treats '.' and '/' as token separators. The '/' isn't surprising, since '/' is the division operator and expressions such as a/b are perfectly valid. The '.' is more surprising, but since there are no predefined CI functions with a '.' in their name, no real (floating point) values, no CI methods, and no CI structures, maybe it just turned out that way. Anyway, it is better, from the standpoint of eliminating regression failures, if we do NOT change the evaluator parsing rules in the implementation of CI functions. However, that would preclude a CI function from being qualified. For example, Myfunc.grp(), ./MyFunct(), /bin/functions/MyFunc(), and MyDir/MyFunc() all would NOT be legal function names. This is inconsistent with the CI in that it allows qualified script names. However, the CI also supports unqualified POSIX names as script names. For example, myScript (case sensitive), my_Script, my-Script, etc. are all legal script names and can be found in the POSIX namespace. So my question is: would the restriction of disallowing qualified user function names be a problem for you? If so, please give me some examples."

Basically, this means that scripts operating as functions could not be qualified, and would have to reside somewhere on the path specified by the HPPATH variable. A lively discussion ensued with some good suggestions offered, but it appears that if we are ever to get this enhancement, we will have to live with this restriction.
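This is the same rule the CI already applies to unqualified script names, which it locates by searching the groups and directories listed in HPPATH. A minimal sketch, with hypothetical names, of what invoking such a function might look like under the proposal:

                        :setvar HPPATH "!HPGROUP,PUB,PUB.SYS,SCRIPTS.MYACCT"
                        :setvar x MyFunc(42)

Here the CI would search HPPATH for a script named MYFUNC, rather than accepting a qualified reference such as myfunc.scripts.myacct().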

 

Nike Arrays 101

 

Many Homesteaders and Fence Sitters are picking up used Nike Model 20 arrays because there is a glut of them on the market, meaning they are inexpensive, and they work with older model HP 3000s. However, there is a lot of misinformation floating around about how and when to use them. For example, one company posted the following to HP3000-L:

"We're upgrading from a model 10 to a model 20 Nike array. I'm in the middle of deciding whether to keep it in hardware RAID configuration or to switch to MPE/iX mirroring, since one can now do it on the System volume set. (It wasn't when the system was first bought, so we stayed with the Nike hardware RAID. We're considering the performance issue of keeping it Nike hardware RAID versus the safety of MPE Mirroring. Has anyone switched from one to the other? A side issue is that one can use the 2nd Fast and Wide card on the array when using MPE mirroring, but you can't when using Model 20 hardware RAID.

"So, with hardware RAID, you have to consider the single point of failure of the controller card. If we 'split the bus' on the array mechanism into two separate groups of drives, and then connect a separate controller to the other half of the bus, you can't have the hardware mirrored drive on the other controller (I'm told you can do this on UX). It must be on the same path as the 'master' drive because MPE sees them as a single device. Using software mirroring you can do this because both drives are independently configured in MPE. Software mirroring adds overhead to the CPU, but it's a tradeoff you have to decide to make. We are evaluating the options, looking for the best (in our situation) combination of efficiency, performance, fault tolerance and cost."

First of all, as a number of people pointed out, Mirrored Disk/iX does not support mirroring of the System Volume Set; it never did and never will. Secondly, you most certainly can use a second FWSCSI card with a Model 20 attached to an HP 3000. Bob J. elaborated on the second controller: "All of the drives are accessible from either controller, but of course via different addresses. Your installer should set the DEFAULT ownership of drives to each controller. To improve throughput, each controller should share the load. Only one controller is necessary to address all of the drives, but where MPE falls short is in not having a mechanism for automatic failover from a failing controller. In other words, SYSGEN reconfiguration would be necessary to run on a single controller after an SP failure in a dual-SP configuration. You could have alternate configurations stored on your system to cover both cases of a single failing controller, but the best solution is to get it fixed when it breaks. The best news is that SP failures are not very common."
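Bob's "alternate configurations" idea amounts to keeping a second SYSGEN configuration group in which every array ldev is addressed through the surviving SP. A rough sketch only, with the actual device re-pathing elided because it depends entirely on how your drives are addressed, and with CONFSP1 as a hypothetical configuration group name:

                        :SYSGEN
                        sysgen> IO
                        io> LDEV
                        io> [delete and re-add each array ldev using the surviving SP's path]
                        io> HOLD
                        io> EXIT
                        sysgen> KEEP CONFSP1
                        sysgen> EXIT

Booting from the CONFSP1 group would then bring the system up on the single surviving controller until the failed SP is replaced.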

There is a mechanism in MPE for controller failover called HAFO (High Availability FailOver). Unfortunately for the original poster, it is supported only with XP and VA arrays; it does not work with Nike or AutoRAID arrays.

Andrew Popay provided some personal experience: "We have seven Nike SP20 arrays, totaling 140 discs spread across all the arrays, using a combination of RAID 1 (for performance) and RAID 5 (for capacity). We use both SPs on all arrays, with six of the arrays spread over three systems (two per system). One of our systems has two arrays daisy-chained. The only failures we have suffered on any of the arrays have been due to a disc mechanism failing. We never find any issues with the hardware RAID; in fact, as a lot of people have mentioned, hardware RAID is much preferred to software RAID. Software RAID has several issues: the system volume set, performance, ease of use, etc. Hardware RAID is far more resilient. As for anyone concerned about single points of failure, I would not worry too much about the Nike arrays; I would say they are almost bulletproof. Those who require a 24x7 system and can't afford any downtime whatsoever should perhaps consider upgrading to an N-Class with a VA or XP. Bottom line: SP20s are sound arrays for HP 3000s, easy to configure, set up and maintain."

 

[Also contributing were Michael Berkowitz, John Clogg, Gilles Schipper and Goetz Neumann.]