MONITOR=COMPONENTS is a Drag, and Code Your Processors Properly

This is actually an amalgamation of two related topics.

Another common implementation that I have seen is the over-utilization of the MONITOR=COMPONENTS ability in processors that create Load modules.

In a previous blog on “Composite Load Module Creation”, I detailed at length the effect over-utilization of this facility can have on Endevor performance. This section has merely been inserted to draw attention to the issue once again: do not place MONITOR=COMPONENTS on every library being included in a linkedit. It is wisest to monitor only those libraries that are relevant to your actual applications. Unless it is relevant to have a detailed inventory of all the COBOL support routines that are included in your IBM compiles, I highly recommend NOT monitoring those libraries!

And believe it or not, Endevor can be very forgiving of coding errors in its processors. Consider the following processor…

//DLOAD01S PROC SYSA=DUMMY,
//              UNIT=VIO
//S10      EXEC PGM=IEBGENER
//SYSUT1     DD DSN=MY.LIB.LIB1,
//              DISP=SHR
//SYSUT2     DD DSN=&&TEMP1,
//              DISP=(NEW,PASS)
//SYSPRINT   DD &SYSA
//SYSIN      DD DUMMY
//*
:
:

In this example, my definition of the temporary dataset &&TEMP1 is incomplete; I have not specified any attributes of the file other than that it is new and should be passed. Endevor will execute this processor correctly; in other words, it will not abend. Instead, Endevor will make assumptions regarding the dataset and attempt to resolve the problems, usually quite correctly.

However, the problem you may run into is mysterious page-outs occurring on your Endevor jobs. Endevor batch jobs will page-out during execution and likely never page back in again. This typically results in your Operations area canceling your job (which is reflected in your JES log as an S522 abend).

This occurs because of the manner in which Endevor enqueues and dequeues datasets for read and write. Since the information supplied in the processor is incomplete, Endevor must make some assumptions based on a finite set of rules. If, for some reason, it must make those assumptions again for a related, or even unrelated circumstance, then it is possible it will put itself into a “deadly embrace”.

The fix to this problem is simple. Just remember to code your processors correctly with all values present.

//DLOAD01S PROC SYSA=DUMMY,
//              UNIT=VIO
//S10      EXEC PGM=IEBGENER
//SYSUT1     DD DSN=MY.LIB.LIB1,
//              DISP=SHR
//SYSUT2     DD DSN=&&TEMP1,
//              DISP=(NEW,PASS),
//              UNIT=&UNIT,
//              SPACE=(TRK,(1,1),RLSE),
//              DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//SYSPRINT   DD &SYSA
//SYSIN      DD DUMMY
//*
:
:

Full and Incremental Unloads

Many shops rely on volume backups in order to provide their emergency and disaster recovery backups. However, there is an inherent problem with this approach.

A volume backup is a point-in-time backup that is, by definition, a physical solution. However, Endevor maintains its information and inventory logically, maintaining independence from the actual physical location of the actual elements. Therefore, unless all the information for an application happens to be stored on the same physical DASD device when the volume backup is occurring, there is a distinct possibility of serious synchronization problems.

To illustrate, my base libraries may be on SYSVL1, the delta libraries on SYSVL2, and my MCF libraries on VSMVL1. The volume backups for these particular devices occurred at 17:23, 18:34, and 19:02 respectively.

During that same time, I also have a regularly scheduled Endevor batch job that runs every hour on the hour to check for approved packages that meet the time windows for execution. The jobs ran, therefore, at 17:00, 18:00 and 19:00.

At 18:00, it found a package to execute and moved 200 elements from one stage to the next. At 19:00, it found another package that executed GENERATEs against 100 elements.

Based on this information, what do my volume backups contain for my system? If I restored these datasets from the backups, my base library would not match my delta library. And neither of them would match my MCF. In other words, I would have a serious problem to address in the middle of what is already a disaster (i.e. the event that forced the restore in the first place).

Using full and incremental UNLOADs solves this problem by addressing entire applications at the time of the unload, independent of physical location. Although time-consuming, the solution provided in the event of a disaster is comprehensive, complete, and correct. If my UNLOAD takes place at 17:23, then no changes can occur to my application, regardless of file, until the UNLOAD is complete. This ensures that, in the event I need the data to RELOAD, I have complete information for my base, delta, and MCF files.

Am I advocating no longer doing volume backups? No, not at all. I recommend continuing to perform volume backups but keep the UNLOAD files as backup for your backup! During a disaster recovery exercise, the steps I recommend are to restore the volume backups and then run an Endevor VALIDATE job. If the job comes back clean, you’re good to go! But if there’s a problem, you have the necessary files to restore things to a stable state.

A final note: the format of the full and incremental UNLOAD files is the same as that created by the ARCHIVE action. Therefore, a secondary use for the UNLOAD file can be to restore “accidentally” deleted elements. In other words, even though UNLOAD files were not designed for restoring specific elements, elements can in fact be restored using the same RESTORE SCL you would run against an ARCHIVE file. Just use the UNLOAD file instead!
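To sketch the idea, the RESTORE SCL run against an UNLOAD file might look something like the following. The element name, inventory location, and DDNAME here are purely illustrative, and the exact statement syntax should be verified against your Endevor SCL reference.

RESTORE ELEMENT 'PAYPGM01'
 FROM DDNAME UNLDFILE
 TO ENVIRONMENT 'DEV' SYSTEM 'FINANCE' SUBSYSTEM 'GENLEDGR'
    TYPE 'COBOL' STAGE NUMBER 1
 OPTIONS CCID 'RECOVER1' COMMENT 'RESTORE FROM UNLOAD FILE' .

Simply point the FROM clause at the UNLOAD file instead of an ARCHIVE file and the action proceeds as usual.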

Use IF-THEN-ELSE


If your Endevor Processors are like most shops, you sometimes find yourself in the position of trying to remember which symbolics to override for which functions in which stage of which environment for which system… you know what I mean.

Many Administrators have taken advantage of symbolics in their CA-ENDEVOR processors so that they can control what is happening where. Unfortunately, we sometimes forget to customize exactly the same processor exactly the same way for every system in a stage. We have been able to ensure some consistency through the creative use of EXECIF and COND CODE checking (plus the later introduction of Batch Administration), but those tools are relatively limited, leaving human intervention to complete the necessary customization.

Fortunately, Endevor has introduced a new tool to the Administrator’s arsenal: IF-THEN-ELSE processing. With a little imagination, some re-analysis of your present processors, and a little structure thrown in for good measure, you now have the opportunity to simplify your life as an Administrator; or at least reduce the number of calls due to processors operating differently across systems.

Consider the following processor…

//COPYSRC PROC INLIB=NDVR.&C1SY..&C1ST..SRCELIB1,
// OTLIB=NDVR.&C1SY..&C1ST..SRCELIB2
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4,
// EXECIF=((&C1ST,NE,STG1),
// (&C1ST,NE,STG2),
// (&C1ST,NE,STG3))
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=&INLIB,
// DISP=SHR
//OUTDD DD DSN=&OTLIB,
// DISP=SHR
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
//* END OF PROCESSOR

This is a very simple processor that could be used to copy source from one library to another library. The conditions on the EXEC statement (i.e. the EXECIF statements) ensure the processor will never execute when the StageID is STG1, STG2, or STG3; otherwise, it will execute. While extremely simple, it will serve to illustrate the potential of IF-THEN-ELSE.

In our make-believe shop, there is no dataset that actually is suffixed “SRCELIB1” or “SRCELIB2” as in the example. Instead, the Administrator must remember to specify the real library name for the source-type the processor is defined to. Unfortunately, the Administrator did not embed or directly relate the name of the libraries to the types he/she wants to manipulate. So, consider the following table…

[Table: the real library suffixes the Administrator must substitute for SRCELIB1/SRCELIB2, by source type]

The symbolics INLIB and OTLIB must be overridden manually by the Administrator to specifically supply the correct suffix depending on type every time the processor is used. If they forget, the processor abends.

So… how can we improve this processor? In essence, start making it a “smart” processor?

Let’s start with the EXEC statement.

Unlike regular JCL, IF-THEN-ELSE processing within ENDEVOR has full access to all the ENDEVOR symbolics. So why not use the opportunity to clarify (and eliminate) the EXECIF statement?

//COPYSRC PROC INLIB=NDVR.&C1SY..&C1ST..SRCELIB1,
// OTLIB=NDVR.&C1SY..&C1ST..SRCELIB2
// IF (&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=&INLIB,
// DISP=SHR
//OUTDD DD DSN=&OTLIB,
// DISP=SHR
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

Controlling the execution step in this way allows you to conditionally execute programs within your processor based on (for example) Processor Group Name, Type, Element… any processor symbolic supplied by ENDEVOR or, for that matter, any symbolic you define to the processor.

Now, let’s eliminate the need for library symbolics entirely within the processor by adding some intelligent checking based on our known nomenclature for libraries.

//COPYSRC PROC
// IF ((&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3)) AND
// ((&C1TY = JCL) OR
// (&C1TY = PROC) OR
// (&C1TY = COBOL) OR
// (&C1TY = PLI)) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
// IF (&C1TY = PROC) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..PRCLIB,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..PRCLIB2,
// DISP=SHR
// ELSE
// IF (&C1TY = JCL) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..DRVJC,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..DRVJC2,
// DISP=SHR
// ELSE
// IF (&C1TY = COBOL) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..COBSRCE,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..COBSRCE2,
// DISP=SHR
// ELSE
//INDD DD DSN=NDVR.&C1SY..&C1ST..PLSRCE,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..PLSRCE2,
// DISP=SHR
// ENDIF
// ENDIF
// ENDIF
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

In this example, I have allowed a fall-through to the PLI library if none of the types match the specified ones because I changed the conditional execution of the program to also check for source type. Combinations of IF-THEN-ELSE in EXEC statements and in DD statements provide a powerful new tool for controlling the execution of steps within processors as well as libraries used.

Experiment! In a recent upgrade to a linkage-editor processor, I was able to remove as many as 30 symbolics that were prone to just-plain human error.

If you want to structure or simplify the “look” of your processors, try using INCLUDEs. You could, for example, put all the different library sets into separate INCLUDE members. Now your processor could look like the following…

//COPYSRC PROC
// IF ((&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3)) AND
// ((&C1TY = JCL) OR
// (&C1TY = PROC) OR
// (&C1TY = COBOL) OR
// (&C1TY = PLI)) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
// IF (&C1TY = PROC) THEN
++INCLUDE PROCLIBS
// ELSE
// IF (&C1TY = JCL) THEN
++INCLUDE JCLLIBS
// ELSE
// IF (&C1TY = COBOL) THEN
++INCLUDE COBLIBS
// ELSE
++INCLUDE PLILIBS
// ENDIF
// ENDIF
// ENDIF
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

There is a lot of potential in IF-THEN-ELSE processing. Use your imagination and automate yourself out of trouble!

Composite Load Module Creation

It is not unusual for shops to make use of composite statically linked load modules that consist of many subroutines (sometimes hundreds). The purpose of this section is to examine one commonly used method of creating those modules, and to show how the process, and the time it takes in Endevor, can be significantly reduced.

First of all, let’s examine the steps typically followed in creating a composite load module:

  • Programs are compiled.
  • The output from the compile step is run through either BINDER or IEWL and the output from this step is stored in an “executable” or “object” library with a RECFM definition of U.

At this point, many shops begin to deviate. Depending on what I will call “architectural application normalization”, this “link-edit” step may or may not be executed with an input parameter of “NCAL”. If it is run with “NCAL”, then I will assume you are normalized; if not, then not.

What’s the difference?

The “NCAL” input parameter instructs the link-edit program on whether or not to try to resolve the external calls or addresses referred to in the compiled program. In essence, it controls whether or not the output from the step is an “executable” object (i.e. one with external addresses resolved) or will require further linking at a future point in order to resolve the external calls.

Why would you use “NCAL”?

The NCAL parameter used in conjunction with a second separate link-edit job can be used to create “efficient” composite load modules. In essence, shops that make extensive use of the NCAL parameter build up a library of submodule object libraries that are then linked into composite load modules on a “mix-and-match” basis.
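As a sketch, the NCAL link-edit step in such a compile job might look like the following; the dataset and member names are illustrative. Note that because NCAL suppresses the automatic library call, no SYSLIB DD is needed in this step.

//LKED     EXEC PGM=IEWL,PARM='NCAL,LIST,MAP,XREF'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD UNIT=VIO,SPACE=(CYL,(1,1))
//SYSLIN   DD DSN=&&OBJSET,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=MY.NCAL.OBJLIB(MYPGM),DISP=SHR

The output member in MY.NCAL.OBJLIB is a stand-alone piece with its external addresses left unresolved, ready for the separate composite link-edit job.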

To illustrate, consider the following diagram that maps the calls of composite load module “A”.

[Diagram: call map of composite load module “A”, showing mainline “A” calling subroutines “B” through “J”]

If we assume that this shop is not using the “NCAL” link-edit parameter, executable load module “A” is created by compiling (and linking) program “A”; the output from the compile job for “A” is the “A” load module. Therefore, if a change is done to any submodule that is part of the composite load module (for instance, “F”), it is “linked” into the “A” load module by, again, recompiling program “A”.

But what have we really created if every submodule has resolved its own set of external addresses? If every program is compiled and the “link-edit” step does not contain the NCAL parameter, the final result when “A” is compiled and linked is to have a load module that looks more like the following illustration.

[Diagram: composite load module “A” in which each submodule carries its own fully resolved copy of every subroutine beneath it]

This figure is trying to illustrate the fact that every submodule contains the full address resolution of every subroutine underneath it; each routine is executable by itself in its own right (although you would likely never execute any one of them as a stand-alone program).

Therefore, when “A” is compiled and linked to create composite load module “A”, each address is resolved (once again) for each subroutine that has already resolved each external address. In other words, a large degree of redundancy occurs in the final load module and the size of the actual executable can be much larger than would result if each external address were resolved only once. If you consider that each programming language also results in calls to other external support routines, the degree of redundancy increases many times.

However, if we compiled and supplied “NCAL” as the input parameter to the link-edit step in our compiles, each subroutine becomes a stand-alone piece with unresolved external addresses. When all programs have been compiled, a separate link-edit job can be created that resolves all addresses for all routines in the composite load module only once. Input to a link-edit job contains “INCLUDE” statements, an ENTRY statement, and a NAME statement. The input to create composite load module “A” may look like the following.

INCLUDE SYSLIB(A)
INCLUDE SYSLIB(B)
INCLUDE SYSLIB(C)
INCLUDE SYSLIB(D)
INCLUDE SYSLIB(E)
INCLUDE SYSLIB(F)
INCLUDE SYSLIB(G)
INCLUDE SYSLIB(H)
INCLUDE SYSLIB(I)
INCLUDE SYSLIB(J)
ENTRY A
NAME A(R)

Since this is source like any other source, it is an excellent candidate for Endevor control. Simply define a type such as “LKED” and keep these statements as the “source” for your composite load modules.

Now when you change a subroutine in the composite load module, it is no longer necessary to recompile the mainline. Instead, you merely compile the subroutine and then re-link the composite load module source that uses the subroutine. One of the advantages of this approach is that you mitigate risk to your environment by not unnecessarily recompiling mainlines that have not had any changes done to them.

Another hint to keep your “LKED” source type simple is to construct your link-edit job with all of the composite load module input (i.e. object) libraries defined to DDNAME SYSLIB. This is the default DDNAME that the linkage-editor scans to resolve external addresses, making an INCLUDE unnecessary for every subroutine; an INCLUDE is only necessary when the member name of the subroutine in the input library is different from the “called” name in the program that calls it. This may happen if you have multiple entry points in a program and are using those entry points instead of the actual program name.

INCLUDE SYSLIB(A)
ENTRY A
NAME A(R)

You may want to remove INCLUDE statements and let the linkage-editor determine for itself the routines to pull into the composite load module. If you leave an INCLUDE statement in your LKED type but no longer call that subroutine, the module will still be part of your composite load. This, again, can result in a “larger than necessary” load module and superfluous information in Endevor if you are tracking with the Automated Configuration Manager (ACM).

So, let’s assume you have bought into this “normalization” scheme and have begun to change your approach. The next hurdle you encounter is that the link-edit step seems to take an inordinate amount of time inside Endevor. Why is this happening?

The short answer is ACM and use of the “MONITOR=COMPONENTS” clause in your link-edit processor. For proof, try executing the link-edit inside Endevor without the MONITOR=COMPONENTS turned on. You should find that the link-edits are as quick inside Endevor as they are outside.

But this isn’t a good solution. ACM captures extremely valuable information that is vital for effective impact analysis. If I don’t know which composite load modules use which subroutines, how do I know what to re-link and what to re-test?

So the first step is to determine why ACM adds so much time to the link-edit step. When ACM is activated, it tracks the “opens” that occur when your element is being generated. In the case of a link-edit, it captures the library name(s) that are being accessed in order to construct your composite load module.

Ideally, ACM can get the information it requires from the directory of the library being accessed. Unfortunately, this does not work with “executable”/object libraries with a RECFM of U. When a library is accessed with this type of definition, ACM determines that this is a “load”-type library and knows it cannot get the information it needs from the directory. Therefore, ACM issues its own OPEN and READ against the member being linked in and reads the information it requires from the member’s CSECT. The net result is that ACM causes double the opens/reads/closes for every member linked into a composite load module. Is it any wonder, then, that link-edits can take an inordinate amount of time?

So how can we improve this process? Is there another approach that gives us the best of both worlds, i.e. composite load modules without redundant addresses and ACM data with all the integrity we rely on?

The answer is yes, but only if you have migrated to MVS 4.x or higher and are using BINDER instead of the old IEWL program to perform your link-edits (check with your internal technical support).

Let’s go back to the beginning and review the steps we can now take…

  • Programs are compiled.

STOP! You have all that you need now! The subsequent link-edit step is no longer necessary.

But what do you have?

  • Compile listing (keep this. You never know when Audit might ask for it!)
  • The “true” program object that was written to DDNAME SYSLIN during the compile

This “true” program object is quite different from the ones we created out of the link-edit step. For one thing, the file is sequential, FB, LRECL 80. For another, the content bears no resemblance to the output from the link-edit step, regardless of whether you used NCAL or not.

So if we have enough at this point, why have we always included the link-edit step after the compile? The answer is because we had no choice. The old IEWL program that existed in previous MVS operating systems required its input to be either all “true” object libraries (i.e. FB LRECL 80) or all “executable” libraries (RECFM U). They could not be mixed and matched.

Since language support libraries and DBMS support routines were all supplied as “executable” libraries and since these libraries came from a variety of vendors, we were forced to supply our own in-house written routines in the same format for the link-edit to work.

The new BINDER routine has changed all that. BINDER allows you to mix, in the same concatenation list, libraries of fixed and “executable” format. In other words, the datasets in the concatenation of SYSLIB DDNAME in the BINDER step can consist of “true” object libraries (FB LRECL 80) AND “executable” libraries (RECFM U).
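A sketch of what that mixed SYSLIB concatenation might look like under BINDER follows; all dataset names are illustrative, and CEE.SCEELKED stands in for whatever language support library your site actually uses.

//LKED     EXEC PGM=IEWBLINK,PARM='LIST,MAP,XREF'
//SYSPRINT DD SYSOUT=*
//SYSLMOD  DD DSN=MY.PROD.LOADLIB,DISP=SHR
//SYSLIB   DD DSN=MY.TRUE.OBJLIB,DISP=SHR        TRUE OBJECTS (FB LRECL 80)
//         DD DSN=MY.NCAL.OBJLIB,DISP=SHR        EXECUTABLE FORMAT (RECFM U)
//         DD DSN=CEE.SCEELKED,DISP=SHR          LANGUAGE SUPPORT
//SYSLIN   DD DSN=MY.LKED.SOURCE(MODA),DISP=SHR

Placing the “true” object library first in the concatenation means the binder (and ACM) satisfies as many references as possible from the FB library before falling back to the RECFM U libraries.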

So how does this help Endevor and ACM? ACM can now get the information it needs to track from the directory of the “true” object library (FB LRECL 80). That means an extra open/read to the library itself is not required and that saves significant time.

This is an excellent “going-forward” strategy. In order to exploit this capability, it will be necessary to modify your processors by adding a step that copies the output from the compiler’s SYSLIN library to a PDS that you save. This step is necessary because most compilers insist that SYSLIN be a sequential file, and the easiest thing to administer and concatenate into your link-edit processor is a PDS of the saved objects. A simple insertion of code like the following should do the trick for you.

//*
//********************************************************************
//* GENER TO PDS TO SAVE SYSLIN MEMBER *
//********************************************************************
//*
//GENNO1 EXEC PGM=IEBGENER,
// COND=(5,LT),
// MAXRC=8
//*
//SYSPRINT DD DUMMY
//SYSUT1 DD DSN=&&SYSLIN,
// DISP=(SHR,PASS)
//SYSUT2 DD DSN=YOURHILI.&C1SY..&C1ST..OBJECT(&C1ELEMENT),
// MONITOR=&MONITOR,
// FOOTPRNT=CREATE,
// DISP=SHR
//SYSIN DD DUMMY
//*

Start saving the “true” objects and including them in your link-edits. Over time, as your inventory of “true” objects builds, the linkage-editor and Endevor/ACM will find what they need in the new libraries, and the extra reads to the old libraries will become a thing of the past.

VSAM File Maintenance

One of the more critical and yet overlooked aspects of Endevor administration is to ensure regular maintenance is being performed on the critical VSAM files; specifically the Master Control Files (MCF) and the Package Dataset.

Significant degradation in performance will be encountered when the number of control area (CA) splits on these VSAM files exceeds 100. This is especially true for the Package Dataset.

As part of a regularly scheduled cycle, run a job that performs an IDCAMS REPRO to take a copy of the VSAM file, a DELETE/DEFINE to re-allocate fresh space, and a REPRO to restore the copied data back to the VSAM file. The frequency of this maintenance is best determined by monitoring space utilization over a period of time to determine the volatility of the files at each site. Typically, however, if the site is a heavy Package user, the Package Dataset will need to be refreshed more frequently than the MCF files.
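A maintenance job along those lines might be sketched as follows. The dataset names, space figures, and DEFINE parameters are placeholders only; substitute your site’s actual Package Dataset name and its full cluster definition.

//REORG    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=NDVR.PACKAGE.BACKUP,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE)
//SYSIN    DD *
  REPRO INDATASET(NDVR.PACKAGE.FILE) OUTFILE(BACKUP)
  DELETE NDVR.PACKAGE.FILE CLUSTER
  /* SUBSTITUTE YOUR SITE'S FULL DEFINE PARAMETERS BELOW */
  DEFINE CLUSTER (NAME(NDVR.PACKAGE.FILE) -
         INDEXED -
         REUSE -
         KEYS(255 0) -
         RECORDSIZE(2048 9500) -
         CYLINDERS(50 10))
  REPRO INFILE(BACKUP) OUTDATASET(NDVR.PACKAGE.FILE)
/*

The sequential copy taken in the first REPRO doubles as your fallback should the DELETE/DEFINE or reload fail.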

In the event your site has implemented CA-L-SERV, you should be aware that it ships with a program named LDMAMS. This program functions much like IDCAMS, but it can manipulate the VSAM datasets while CA-L-SERV is still running and holding onto them.

One of the features of LDMAMS is the ability to perform a “compress in place”; in other words, in one step, LDMAMS can perform the REPRO, DELETE/DEFINE, REPRO that compresses the VSAM data. In order to use this feature, however, you must ensure the REUSE attribute is set in the file’s VSAM definition.

Also, although the compress-in-place is a useful feature, it is extremely prudent (and strongly recommended) that the compress step be preceded by a step that performs a REPRO to create a backup sequential dataset. LDMAMS can perform this REPRO exactly as IDCAMS does, but with CA-L-SERV still running. This recommendation is made because a problem may occur during the one-step compress: an unexpected memory problem, a space problem, or some other unforeseen abend may stop the compress at an inopportune point. If the abend occurs during the compress-in-place step, it is entirely possible that the VSAM file LDMAMS was working on will be corrupted. If you have not taken a sequential backup beforehand, your recovery options are limited or non-existent.

Instructions for Installing JCL/PROC Checking Routines

Prerequisites for this solution: ACMQ activated on the Endevor installation, and a JCL validation routine. This outline is specific to ASG’s JCLPREP, but can be modified for any JCL validation application.

The first thing to be aware of is that the supplied processors were developed to help address what I typically refer to as the “chicken-and-egg” problem that occasionally crops up in Endevor processing.

In the case of JCL and PROCs, the situation arises by trying to determine the source type to associate the processing to. To illustrate, consider the following JCL and PROC examples.

//XXX99D JOB (123,12),’MY JOB’,
// TIME=10
/*JOBPARM Q=H,I
//STEPA EXEC PROC=XXX10,
// SYSOUT=X,
// IMS=IMSA,
// DB2=DB2D

Figure 1 – JCL Example

//XXX10 PROC SYSOUT=A,
// IMS=,
// DB2=
//*
//S10 EXEC PGM=XXXXXXX
//PSBLIB DD DSN=&IMS..PSBLIB,
// DISP=SHR
//STEPLIB DD DSN=SYS2.LOADLIB,
// DISP=SHR
// DD DSN=&DB2..LOADLIB,
// DISP=SHR

Figure 2 – PROC Example

The JCL and the PROC in these examples are dependent upon each other. The JCL cannot execute without the PROC that contains the rest of the statements, and the PROC cannot execute without the JCL to resolve the symbolic information.

So if I place these into Endevor, one with type JCL and the other with type PROC, which do I want CA-JCLCHECK (or some other JCL validation routine) to run against? As it turns out, I cannot do one without the other! Placing the validation on both will result in one or the other failing for “false” reasons.

Again, to illustrate, let’s suppose I use CA-JCLCHECK in Endevor against the JCL. When Endevor generates the element, it may fail because the developer has not yet placed the PROC in. But if the developer HAD placed the PROC into Endevor first, the PROC would have failed because the JCL wasn’t in Endevor! So who goes first (hence my naming the scenario “chicken-and-egg”!)?

Ultimately, the “dominant” type must be chosen; that is to say, the element most likely to be changed and put through life-cycle management most often. I would suggest that, in the case of JCL and PROCs, the dominant type is PROC. It is far more likely you will be changing PROC statements and re-promoting those changes than JCL statements. And it is likewise rare that you would change a JCL type without also changing its related PROC.

Assuming you agree with that assessment, you next must determine a process that will give you the validations you are looking for. That is the purpose of these processors. In short, the JCL processors should only check for SYNTAX errors i.e. is JOB spelled correctly? Is the accounting information correct?

The PROC processor, however, has an opportunity to go much further. With the supplied processors, the PROC will take a copy of every JCL that invokes it and do a thorough validation; in other words, even though it is the PROC that is being changed, the JCL validation routine will validate every JCL that uses that PROC. In the event of shared PROCs (i.e. PROCs that are used by more than one “driver” JCL), this ensures that changes to the PROC will not have an adverse effect on JCL that the developer perhaps forgot to include in their impact analysis. ACMQ will be used to drive this process.

Let’s start with the JCL Processors.

JCL Routines

Generate Processor = GJCL01J

The first JCL processor is named GJCL01J.

The first step (DUMYJCL) executes CONWRITE to take a copy of the JCL statements into a temporary dataset.

The second step (CONSCAN) executes the CA-supplied standard scanning utility for processors that checks and maintains the ACM information that provides the PROC-to-JCL information.

The third step (CONTYPE) executes a small custom program, ZCK942. Since CONSCAN does not create the TYPE definition in the scan, this program will insert the value “PROC”, since that is what we are scanning the JCL for; it will be used as a qualifier when we search ACMQ later.

The fourth step (CONRELE) relates the elements in ACM and ACMQ.

The fifth and final step invokes the JCL validation routine. The program supplied in this processor is a product named JCLPREP from Allen Systems Group (ASG); however, this could be any JCL validation routine (e.g. CA-JCLCHECK). Note that the invocation in this step should be for validation only; do not attempt to do anything beyond validation or you will encounter the “chicken-and-egg” syndrome!

Delete Processor = *NOPROC

No delete processor is needed.

Move Processor = MJCL01J

The MOVE processor needs to move the ACM information from one stage to the next when the JCL moves from one stage to the next. Therefore, this needs to be a one-step processor invoking program BC1PMVCL.
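A sketch of such a MOVE processor follows. To my recollection BC1PMVCL requires no DD statements of its own, but verify this against the sample processors shipped with your release.

//MJCL01J  PROC
//*
//* MOVE THE ACM INFORMATION ALONG WITH THE JCL ELEMENT
//MOVECL   EXEC PGM=BC1PMVCL
//*        END OF PROCESSOR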

PROC Routines

Generate Processor = GPROC01J

As I indicated earlier, the entire JCL validation is based on the premise that PROCs are your “dominant” source type. In other words, it is far more likely you will be making changes to the PROC type than to the JCL type. Therefore, it makes sense, in this scenario, for PROC types to drive a more comprehensive JCL validation routine. The GPROC01J processor is based on this principle.

The first step (ACMQ) invokes program BC1PACMQ. This program reads through the ACMQ to find the JCL affected by the PROC invoking this processor.

The second step (CHKXREF) invokes program ZCK940. This program reads through the results from the ACMQ utility in the first step and creates CONWRITE statements to retrieve the JCL that invokes the PROC being modified.

The third step (CHKEMPT) invokes a proprietary program for which I do not have the source to give you. You may have another program at your site that performs this function; some of the IBM utilities may perform it as well. The program merely checks whether the dataset provided is empty or contains data. In this case, an empty dataset returns a COND CODE of 0002 and a valid one returns 0000.

This COND CODE is necessary in subsequent steps. It may be that the developer is adding a new PROC for which they have not yet given Endevor the submission JCL. In this scenario, the JCL processor has not yet had an opportunity to capture and build the cross-reference entry in ACMQ. Therefore, when ZCK940 executes, it will not find any matching rows. (An idea just occurred to me: it would make sense to modify program ZCK940 to return COND CODE 0000 when everything is fine, i.e. rows are found, and 0002 when no rows are found. That way, this utility is unnecessary! Your choice, but I think this is a good idea!)

At any rate, when no submission JCL is found, GPROC01J needs to check only the PROC and not worry about submission JCL.

The next step (ENDRJCL) retrieves a copy of all the JCL that uses the PROC to be validated.

Step DUMYJCL executes if no JCL exists to be retrieved. In that case, we need to create a dummy dataset to keep our JCL validation program happy!

Finally, step JPRPJCL executes JCLPREP to validate all the retrieved JCL together with the PROC being promoted.

In the event no JCL was found, the Condition Code checking executes the subsequent step instead: JPRPPRO. This step executes JCLPREP to check the PROC stand-alone.

Last but not least is the GENER step. The results of the validation are of no value unless reported back to the user. Therefore, if the condition code is greater than the threshold we want, this step executes to display the results. Note that SYSUT2 has a TERM=TS statement. This allows the entire processor to execute in foreground and, if the JCL prep step fails, displays the results on the user's TSO terminal (if foreground) or on their print queue (if background).

Delete and Move Processors = *NOPROC

The other processors defined in the processor group for type JCL need only be *NOPROC; in other words, no processing is required to MOVE or DELETE the JCL statements other than the normal processing Endevor performs on the Base libraries. Since no output is created during the GENERATE process, there is no need for any output libraries to be MOVED or DELETED.

GJCL01J Processor

//*****************************************************************
//* *
//* PROCESSOR NAME: GJCL01J *
//* PURPOSE: JCLPREP ROUTINE FOR DRIVER JCL *
//* *
//*****************************************************************
//GJCL01J PROC ADMNLIB='CAIDEMO.NDVUT.LOADLIB',
// PREPCTL='NDVLIB.ADMIN.STG6.CTLLIB',
// PREPLIB='ISJCLPRP.PROD.LOADLIB',
// PREPOPT=ZCKJ005,
// PREPRC=4,
// PRULE=P$RDJCL,
// RULELIB='NDVLIB.ADMIN.STG6.RULELIB',
// SYSOUT=Z,
// WRKUNIT=VIO
//*
//*
//******************************************************************
//* *
//* USE CONWRITE TO COPY IMAGE OF JCL INTO TEMPORARY AREA *
//* *
//******************************************************************
//DUMYJCL EXEC PGM=CONWRITE,
// MAXRC=8,
// PARM='EXPINCL(N)'
//ELMOUT DD DSN=&&JCL,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(6160,(100,100),RLSE),
// DCB=(RECFM=FB,BLKSIZE=6160,LRECL=80)
//*
//******************************************************************
//* *
//* INVOKE CONSCAN TO CHECK THE JCL FOR PROCS EXECUTED. *
//* *
//******************************************************************
//CONSCAN EXEC PGM=CONSCAN
//SRCIN DD DSN=&&JCL,
// DISP=(OLD,PASS)
//PARMSCAN DD *
*
* SCAN FOR PROC= STATEMENT
* START AT FIRST CHARACTER AFTER PROC= AND DELIMIT BY SPACE OR COMMA
*
SCANTYPE ELEMENT
FIND1 STRING='PROC=',POS=ANY
START TYPE=DFLT
END1 TYPE=CHAR,PARM=','
END2 TYPE=SPAC
*
* SCAN FOR EXEC STATEMENT AND IGNORE PGM
* DELIMIT BY SPACE OR BY COMMA
*
SCANTYPE ELEMENT
FIND1 STRING='EXEC',POS=ANY
FIND2 REJECT,STRING='PGM',POS=ANY
START TYPE=DFLT
END1 TYPE=CHAR,PARM=','
END2 TYPE=SPAC
/*
//ACMRELE DD DSN=&&ACMREL,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(TRK,(10,50),RLSE),
// DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//SCANPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//*
//*****************************************************************
//* *
//* INVOKE PGM TO ADD TYPE DECLARATION *
//* *
//*****************************************************************
//*
//CONTYPE EXEC PGM=ZCK942
//STEPLIB DD DSN=&ADMNLIB,
// DISP=SHR
//SCLIN DD DSN=&&ACMREL,
// DISP=SHR
//SCLOUT DD DSN=&&ACMRELS,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(TRK,(10,50),RLSE),
// DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//*
//*****************************************************************
//* *
//* INVOKE CONRELE TO DOCUMENT THE PROC/JCL RELATIONSHIP *
//* *
//*****************************************************************
//*
//CONRELE EXEC PGM=CONRELE
//NDVRIPT DD DSN=&&ACMRELS,
// DISP=SHR
//*
//*JPRPJCL EXEC PGM=JCLPREP,
//* MAXRC=&PREPRC
//******************************************************************
//* *
//* INVOKE JCLPREP TO DO THE JCL SCAN FOR FOUND DRIVER JCL *
//* *
//******************************************************************
//*STEPLIB DD DSN=&PREPLIB,
//* DISP=(SHR,KEEP)
//*SYSUDUMP DD SYSOUT=*,
//* FREE=CLOSE
//*DDIN DD DSN=&&JCL,
//* DISP=(OLD,PASS)
//*DDOUT DD DUMMY,
//* DCB=(RECFM=FB,BLKSIZE=3120,LRECL=80)
//*DDXEFI DD DSN=&RULELIB(&PRULE),
//* DISP=(SHR,KEEP)
//*DDXEFW DD SYSOUT=*,
//* FREE=CLOSE
//*DDRPT DD DSN=&&DDRPT,
//* DISP=(NEW,PASS,DELETE),
//* UNIT=&WRKUNIT,
//* SPACE=(TRK,(10,10),RLSE),
//* DCB=(RECFM=FBA,BLKSIZE=6118,LRECL=133)
//*DDWORK1 DD DSN=&&DDWORK,
//* DISP=(NEW,PASS),
//* UNIT=&WRKUNIT,
//* SPACE=(8456,(100,500)),
//* DCB=DSORG=DA
//*DDWORK2 DD DSN=&&DDWORK,
//* DISP=SHR
//*DDRUN DD *
//*NOPDS
//*
//*DDOPT DD DSN=&PREPCTL(&PREPOPT),
//* DISP=SHR
//*
//*GENER EXEC PGM=IEBGENER,
//* COND=(&PREPRC,GE,JPRPJCL)
//******************************************************************
//* *
//* GENER THE REPORT TO OUTPUT IF THE THRESHOLD RC WAS EXCEEDED *
//* *
//******************************************************************
//*SYSPRINT DD SYSOUT=&SYSOUT,
//* FREE=CLOSE
//*SYSUT1 DD DSN=&&DDRPT,
//* DISP=(OLD,DELETE)
//*SYSUT2 DD SYSOUT=A,
//* FREE=CLOSE,
//* TERM=TS,
//* DEST=SARSAP,
//* DCB=(RECFM=FB,BLKSIZE=79,LRECL=79)
//*SYSIN DD *
//* GENERATE MAXFLDS=1
//* RECORD FIELD=(79,1)
//*

GPROC01J Processor

//*****************************************************************
//* *
//* PROCESSOR NAME: GPROC01J *
//* PURPOSE: JCLPREP ROUTINE FOR PROCEDURE JCL *
//* *
//*****************************************************************
//GPROC01J PROC ADMNLIB='CAIDEMO.NDVUT.LOADLIB',
// PREPCTL='NDVLIB.ADMIN.STG6.CTLLIB',
// PREPLIB='ISJCLPRP.PROD.LOADLIB',
// PREPOPT=ZCKJ003,
// PREPRC=4,
// PRULE=P$RDPRC,
// RULELIB='NDVLIB.ADMIN.STG6.RULELIB',
// SBASELIB='NDVLIB.&C1SSYSTEM..&C1SSTAGE..PROCLIB',
// WRKUNIT=VIO
//*
//*********************************************************************
//** EXECUTE THE ACM QUERY EXPLOSION REPORT **
//*********************************************************************
//ACMQ EXEC PGM=BC1PACMQ
//ACMOUT DD DSN=&&ACMREL,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(6160,(10,10),RLSE),
// DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//ACMIN DD *
RECTYPE 3
ENVIRONMENT *
SYSTEM *
SUBSYSTEM *
TYPE PROC
ELEMENT &C1ELEMENT
STAGE *
SEARCH UP
/*
//*
//CHKXREF EXEC PGM=ZCK940,
// MAXRC=0
//******************************************************************
//* *
//* CHECK THE PROCXREF TABLE FOR *
//* ALL DRIVER JCL FOR THIS ELEMENT. *
//* *
//******************************************************************
//STEPLIB DD DSN=&ADMNLIB,
// DISP=SHR
//ZCK94001 DD DSN=&&ACMREL,
// DISP=SHR
//ZCK94002 DD DSN=&&ACMRELS,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(6160,(10,10),RLSE),
// DCB=(RECFM=FB,BLKSIZE=6160,LRECL=80)
//SYSPRINT DD SYSOUT=*,
// FREE=CLOSE
//SYSOUT DD SYSOUT=*,
// FREE=CLOSE
//SYSUDUMP DD SYSOUT=*,
// FREE=CLOSE
//*
//CHKEMPT EXEC PGM=CEE010,
// MAXRC=2,
// PARM='RET,0002;RET,0000'
//******************************************************************
//* *
//* CHECK THE CREATED DATASET FOR CONTENTS. IF NONE EXIST, THEN *
//* NO DRIVER JCL WAS FOUND AND WE WILL PREP THE MEMBER AS IS. *
//* NO CONTENTS WILL RETURN A COND CODE OF 0002. *
//* *
//******************************************************************
//STEPLIB DD DSN=SYS2.LOADLIB,
// DISP=SHR
//CEE01001 DD DSN=&&SCL,
// DISP=(OLD,PASS)
//*
//ENDRJCL EXEC PGM=CONWRITE,
// COND=(0,NE,CHKEMPT),
// MAXRC=0
//******************************************************************
//* *
//* INVOKE CONWRITE TO RETRIEVE COPIES OF THE DRIVER JCL AS PER *
//* THE GENERATED SCL STATEMENTS. *
//* *
//******************************************************************
//CONWIN DD DSN=&&SCL,
// DISP=(OLD,PASS)
//JCLPRP DD DSN=&&JCL,
// DISP=(NEW,PASS,KEEP),
// UNIT=&WRKUNIT,
// SPACE=(6160,(100,100,100),RLSE),
// DCB=(RECFM=FB,BLKSIZE=6160,LRECL=80,DSORG=PO)
//*
//DUMYJCL EXEC PGM=CONWRITE,
// COND=(0,EQ,CHKEMPT),
// MAXRC=8,
// PARM='EXPINCL(N)'
//******************************************************************
//* *
//* IF THE CEE010 PROGRAM DID NOT FIND CONTENTS, THEN JUST USE *
//* THE PROC JCL FOR THE PREP STEP. *
//* *
//******************************************************************
//ELMOUT DD DSN=&&JCL,
// DISP=(NEW,PASS,KEEP),
// UNIT=&WRKUNIT,
// SPACE=(6160,(100,100),RLSE),
// DCB=(RECFM=FB,BLKSIZE=6160,LRECL=80)
//*
//JPRPJCL EXEC PGM=JCLPREP,
// COND=(0,NE,CHKEMPT),
// MAXRC=&PREPRC
//******************************************************************
//* *
//* INVOKE JCLPREP TO DO THE JCL SCAN FOR FOUND ENDEVOR DRIVER *
//* JCL *
//* *
//******************************************************************
//STEPLIB DD DSN=&PREPLIB,
// DISP=(SHR,KEEP)
//SYSUDUMP DD SYSOUT=*,
// FREE=CLOSE
//DDIN DD DSN=&&JCL,
// DISP=(OLD,PASS)
//DDOUT DD DUMMY,
// DCB=(RECFM=FB,BLKSIZE=3120,LRECL=80)
//DDXEFI DD DSN=&RULELIB(&PRULE),
// DISP=(SHR,KEEP)
//DDXEFW DD SYSOUT=*,
// FREE=CLOSE
//DDRPT DD DSN=&&DDRPT,
// DISP=(NEW,PASS,DELETE),
// UNIT=&WRKUNIT,
// SPACE=(TRK,(10,10),RLSE),
// DCB=(RECFM=FBA,BLKSIZE=6118,LRECL=133)
//DDWORK1 DD DSN=&&DDWORK,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(8456,(100,500)),
// DCB=DSORG=DA
//DDWORK2 DD DSN=&&DDWORK,
// DISP=SHR
//DDRUN DD *
PDS INPUT
/*
//DDOPT DD DSN=&PREPCTL(&PREPOPT),
// DISP=SHR
// DD *
XEFOPT PROCLIB &C1BASELIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG1.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..ST21.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG2.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..ST22.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG3.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG4.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG5.PROCLIB
XEFOPT PROCLIB NDVLIB.EMER.EMER1.PROCLIB
XEFOPT PROCLIB NDVLIB.EMER.EMER2.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG6.PROCLIB
XEFOPT PROCLIB SYS2.PROCLIB
XEFOPT PROCLIB SYS1.PROCLIB
XEFOPT PROCLIB SYS3.PROCLIB
XEFOPT PROCLIB SYS1.BASE.PROCLIB
/*
//*
//JPRPPRO EXEC PGM=JCLPREP,
// COND=(0,EQ,CHKEMPT),
// MAXRC=&PREPRC
//******************************************************************
//* *
//* INVOKE JCLPREP TO DO THE JCL SCAN FOR UNFOUND PROC JCL *
//* *
//******************************************************************
//STEPLIB DD DSN=&PREPLIB,
// DISP=(SHR,KEEP)
//SYSUDUMP DD SYSOUT=*,
// FREE=CLOSE
//DDIN DD DSN=&&JCL,
// DISP=(OLD,PASS)
//DDOUT DD DUMMY,
// DCB=(RECFM=FB,BLKSIZE=3120,LRECL=80)
//DDXEFI DD DSN=&RULELIB(&PRULE),
// DISP=(SHR,KEEP)
//DDXEFW DD SYSOUT=*,
// FREE=CLOSE
//DDRPT DD DSN=&&DDRPT,
// DISP=(NEW,PASS,DELETE),
// UNIT=&WRKUNIT,
// SPACE=(TRK,(10,10),RLSE),
// DCB=(RECFM=FBA,BLKSIZE=6118,LRECL=133)
//DDWORK1 DD DSN=&&DDWORK,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(8456,(100,500)),
// DCB=DSORG=DA
//DDWORK2 DD DSN=&&DDWORK,
// DISP=SHR
//DDRUN DD *
NOPDS
/*
//DDOPT DD DSN=&PREPCTL(&PREPOPT),
// DISP=SHR
// DD *
XEFOPT PROCLIB &C1BASELIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG1.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..ST21.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG2.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..ST22.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG3.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG4.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG5.PROCLIB
XEFOPT PROCLIB NDVLIB.EMER.EMER1.PROCLIB
XEFOPT PROCLIB NDVLIB.EMER.EMER2.PROCLIB
XEFOPT PROCLIB NDVLIB.&C1SY..STG6.PROCLIB
XEFOPT PROCLIB SYS2.PROCLIB
XEFOPT PROCLIB SYS1.PROCLIB
XEFOPT PROCLIB SYS3.PROCLIB
XEFOPT PROCLIB SYS1.BASE.PROCLIB
/*
//*
//GENER EXEC PGM=IEBGENER,
// COND=((&PREPRC,GE,JPRPJCL),(&PREPRC,GE,JPRPPRO))
//******************************************************************
//* *
//* GENER THE REPORT TO OUTPUT IF THE THRESHOLD RC WAS EXCEEDED *
//* *
//******************************************************************
//SYSPRINT DD SYSOUT=*,
// FREE=CLOSE
//SYSUT1 DD DSN=&&DDRPT,
// DISP=(OLD,DELETE)
//SYSUT2 DD SYSOUT=A,
// FREE=CLOSE,
// TERM=TS,
// DEST=SARSAP,
// DCB=(RECFM=FB,BLKSIZE=79,LRECL=79)
//SYSIN DD *
GENERATE MAXFLDS=1
RECORD FIELD=(79,1)
/*

ZCK940 COBOL Program

000100*************************
000200 IDENTIFICATION DIVISION.
000300*************************
000400 SKIP1
000500 PROGRAM-ID. ZCK940.
000600*AUTHOR. JOHN DUECKMAN
000700*DATE-WRITTEN. FEB 05,2001.
000800*DATE-COMPILED.
000900 SKIP1
001000*REMARKS.
001300 EJECT
001400**************************
001500 ENVIRONMENT DIVISION.
001600**************************
001700 SKIP1
001800 CONFIGURATION SECTION.
001900 SOURCE-COMPUTER. IBM-370.
002000 OBJECT-COMPUTER. IBM-370.
002100 INPUT-OUTPUT SECTION.
002200**************************
002300 FILE-CONTROL.
002400 SELECT PROC-IN ASSIGN TO UT-S-ZCK94001.
002500 SELECT JOB-OUT ASSIGN TO UT-S-ZCK94002.
002600 EJECT
002700**************************
002800 DATA DIVISION.
002900**************************
003000 SKIP1
003100**************************
003200 FILE SECTION.
003300**************************
003400 FD PROC-IN
003500 LABEL RECORDS ARE STANDARD
003600 RECORDING MODE IS F
003700 BLOCK CONTAINS 0 RECORDS
003800 DATA RECORD IS PROC-IN-AREA.
003900 01 PROC-IN-AREA PIC X(80).
004000 FD JOB-OUT
004100 LABEL RECORDS ARE STANDARD
004200 RECORDING MODE IS F
004300 BLOCK CONTAINS 0 RECORDS
004400 DATA RECORD IS JOB-OUT-AREA.
004500 01 JOB-OUT-AREA PIC X(80).
004600 SKIP1
004700**************************
004800 WORKING-STORAGE SECTION.
004900**************************
005600 01 WS1000-STORAGE-AREA.
005700 05 WS1000 PIC X(12) VALUE 'WS1000'.
005800 05 WS1000-PROC-IN PIC X(80).
005810 05 WS1000-PROC-IN-BKDWN REDEFINES WS1000-PROC-IN.
005820 10 WS1000-FILLER01 PIC X(05).
005830 10 WS1000-LEVEL-CODE PIC X(01).
005831 10 WS1000-FILLER02 PIC X(03).
005850 10 WS1000-JOBNAME PIC X(08).
005860 10 WS1000-FILLER03 PIC X(05).
005870 10 WS1000-TYPE PIC X(08).
005880 10 WS1000-FILLER04 PIC X(03).
005890 10 WS1000-ENV-NAME PIC X(08).
005891 10 WS1000-FILLER05 PIC X(03).
005892 10 WS1000-SYSNAME PIC X(08).
005893 10 WS1000-FILLER06 PIC X(03).
005894 10 WS1000-SUBSYSNAME PIC X(08).
005895 10 WS1000-FILLER07 PIC X(03).
005896 10 WS1000-STGID PIC X(01).
006300 05 WS1000-EOF-FLAG PIC X(03).
006400 88 WS1000-EOF VALUE 'EOF'.
006410 05 WS1000-FOUND-FLAG PIC X(05).
006420 88 WS1000-FOUND VALUE 'FOUND'.
006500 01 WS2000-STORAGE-AREA.
006600 05 WS2000 PIC X(12) VALUE 'WS2000'.
007300 05 WS2000-JOB-OUT PIC X(80).
007400 05 WS2000-JOB-OUT1.
007500 10 WS2000-JOB-OUT1-TEXT1 PIC X(18)
007600 VALUE 'WRITE ELEMENT "'.
007700 10 WS2000-JOB-OUT1-JOBNAME PIC X(08).
007800 10 WS2000-JOB-OUT1-TEXT2 PIC X(54) VALUE '"'.
007900 05 WS2000-JOB-OUT2.
008000 10 WS2000-JOB-OUT2-TEXT1 PIC X(13)
008100 VALUE 'FROM SYSTEM "'.
008200 10 WS2000-JOB-OUT2-SYSNAME PIC X(08).
008300 10 WS2000-JOB-OUT2-TEXT2 PIC X(13)
008400 VALUE '" SUBSYSTEM "'.
008500 10 WS2000-JOB-OUT2-SUBSYSNAME PIC X(08).
008600 10 WS2000-JOB-OUT2-TEXT3 PIC X(07) VALUE '" ENV "'.
008610 10 WS2000-JOB-OUT2-ENV PIC X(08).
008620 10 WS2000-JOB-OUT2-TEXT4 PIC X(23) VALUE '"'.
008700 05 WS2000-JOB-OUT3.
008800 10 WS2000-JOB-OUT3-TEXT1 PIC X(06)
008900 VALUE 'TYPE "'.
009000 10 WS2000-JOB-OUT3-TYPE PIC X(08).
009010 10 WS2000-JOB-OUT3-TEXT2 PIC X(08)
009020 VALUE '" STAGE '.
009030 10 WS2000-JOB-OUT3-STGID PIC X(01).
009040 10 WS2000-JOB-OUT3-TEXT3 PIC X(23)
009050 VALUE ' TO FILE "JCLPRP" MEM "'.
009210 10 WS2000-JOB-OUT3-JOBNAME PIC X(08).
009220 10 WS2000-JOB-OUT3-TEXT4 PIC X(26)
009230 VALUE '" OPTIONS SEARCH.'.
010500 EJECT
010600 PROCEDURE DIVISION.
010700***********************************************************
010800* M A I N L O G I C *
010900***********************************************************
011000 OPEN INPUT PROC-IN
011100 OUTPUT JOB-OUT.
011200 MOVE SPACES TO WS1000-EOF-FLAG.
011210 MOVE SPACES TO WS1000-FOUND-FLAG.
011300 PERFORM 1000-GET-RECORD
011400 THRU 1000-GET-RECORD-EXIT
011410 UNTIL WS1000-EOF
011420 OR WS1000-FOUND.
013000 MOVE 'JCL' TO WS2000-JOB-OUT3-TYPE.
013100 PERFORM 0000-LOOKUP-JOB
013200 THRU 0000-LOOKUP-JOB-EXIT
013300 UNTIL WS1000-EOF.
013700 CLOSE PROC-IN
013800 JOB-OUT.
013900 STOP RUN.
014000 EJECT
014100*******************************************************************
014200* LOOKUP-JOB - INTERNAL SUBROUTINE THAT READS THROUGH THE *
014300* FILE MATCHING ON PROCNAME. *
014400*******************************************************************
014500 0000-LOOKUP-JOB.
016600 MOVE WS1000-JOBNAME TO WS2000-JOB-OUT1-JOBNAME.
016700 MOVE WS2000-JOB-OUT1 TO WS2000-JOB-OUT.
016800 PERFORM 2000-WRITE-JOB
016900 THRU 2000-WRITE-JOB-EXIT.
016910 MOVE WS1000-ENV-NAME TO WS2000-JOB-OUT2-ENV.
017000 MOVE WS1000-SYSNAME TO WS2000-JOB-OUT2-SYSNAME.
017100 MOVE WS1000-SUBSYSNAME TO WS2000-JOB-OUT2-SUBSYSNAME.
017200 MOVE WS2000-JOB-OUT2 TO WS2000-JOB-OUT.
017300 PERFORM 2000-WRITE-JOB
017400 THRU 2000-WRITE-JOB-EXIT.
017401 MOVE WS1000-STGID TO WS2000-JOB-OUT3-STGID.
017410 MOVE WS1000-JOBNAME TO WS2000-JOB-OUT3-JOBNAME.
017500 MOVE WS2000-JOB-OUT3 TO WS2000-JOB-OUT.
017600 PERFORM 2000-WRITE-JOB
017700 THRU 2000-WRITE-JOB-EXIT.
017710 MOVE SPACES TO WS1000-FOUND-FLAG.
017720 PERFORM 1000-GET-RECORD
017730 THRU 1000-GET-RECORD-EXIT
017740 UNTIL WS1000-EOF
017750 OR WS1000-FOUND.
017800 0000-LOOKUP-JOB-EXIT.
017900 EXIT.
018000 EJECT
018100*******************************************************************
018200* GET-RECORD - INTERNAL SUBROUTINE THAT READS THE INPUT RECORD. *
018300*******************************************************************
018400 1000-GET-RECORD.
018500 READ PROC-IN
018600 AT END
018700 MOVE 'EOF' TO WS1000-EOF-FLAG
018800 GO TO 1000-GET-RECORD-EXIT.
018900 MOVE PROC-IN-AREA TO WS1000-PROC-IN.
018902 IF WS1000-LEVEL-CODE IS EQUAL TO '2'
018903 THEN
018904 MOVE 'FOUND' TO WS1000-FOUND-FLAG.
019000 1000-GET-RECORD-EXIT.
019100 EXIT.
019200 EJECT
019300*******************************************************************
019400* WRITE-JOB - INTERAL SUBROUTINE THAT WRITES THE FOUND JOB OUT. *
019500*******************************************************************
019600 2000-WRITE-JOB.
019700 MOVE WS2000-JOB-OUT TO JOB-OUT-AREA.
019800 WRITE JOB-OUT-AREA.
019900 2000-WRITE-JOB-EXIT.
020000 EXIT.

ZCK942 COBOL Program

000100*************************
000200 IDENTIFICATION DIVISION.
000300*************************
000400 SKIP1
000500 PROGRAM-ID. ZCK942.
000600*AUTHOR. JOHN DUECKMAN
000700*DATE-WRITTEN. FEB 05,2001.
000800*DATE-COMPILED.
001500 EJECT
001600**************************
001700 ENVIRONMENT DIVISION.
001800**************************
001900 SKIP1
002000 CONFIGURATION SECTION.
002100 SOURCE-COMPUTER. IBM-370.
002200 OBJECT-COMPUTER. IBM-370.
002300 INPUT-OUTPUT SECTION.
002400**************************
002500 FILE-CONTROL.
002700 SELECT SCLIN ASSIGN TO UT-S-SCLIN .
002800 SELECT SCLOUT ASSIGN TO UT-S-SCLOUT.
003000 EJECT
003100**************************
003200 DATA DIVISION.
003300**************************
003400 SKIP1
003500**************************
003600 FILE SECTION.
003700**************************
004310 FD SCLIN
004320 LABEL RECORDS ARE STANDARD
004330 RECORDING MODE IS F
004340 BLOCK CONTAINS 0 RECORDS
004350 DATA RECORD IS SCLIN-AREA.
004360 01 SCLIN-AREA PIC X(80).
004400 FD SCLOUT
004500 LABEL RECORDS ARE STANDARD
004600 RECORDING MODE IS F
004700 BLOCK CONTAINS 0 RECORDS
004800 DATA RECORD IS SCLOUT-AREA.
004900 01 SCLOUT-AREA PIC X(80).
006200 SKIP1
006300**************************
006400 WORKING-STORAGE SECTION.
006500**************************
006600 01 WS0000-STORAGE-AREA.
006700 05 WS0000 PIC X(12) VALUE 'WS0000'.
006710 05 WS0000-COUNTR PIC S9(09) COMP.
006720 05 WS0000-SCLIN-AREA PIC X(80).
006730 05 WS0000-CHECKTYPE PIC X(05) VALUE 'TYPE'.
006750 05 WS0000-NEWTYPE PIC X(80)
006760 VALUE ' TYPE = "PROC" '.
006800 01 WS1000-STORAGE-AREA.
006900 05 WS1000 PIC X(12) VALUE 'WS1000'.
007000 05 WS1000-EOF-FLAG PIC X(03) VALUE SPACES.
007100 88 WS1000-EOF VALUE 'EOF'.
050200 EJECT
050300 PROCEDURE DIVISION.
050400***********************************************************
050500* M A I N L O G I C *
050600***********************************************************
050710 OPEN INPUT SCLIN
050800 OUTPUT SCLOUT.
051100 PERFORM 1000-GET-RECORD
051200 THRU 1000-GET-RECORD-EXIT.
051300 MOVE SCLIN-AREA TO WS0000-SCLIN-AREA.
051400 PERFORM 0000-CHECK-TYPE
051500 THRU 0000-CHECK-TYPE-EXIT
051600 UNTIL WS1000-EOF.
054100 CLOSE SCLIN
054200 SCLOUT.
054500 STOP RUN.
054600 EJECT
054700*******************************************************************
054800* CHECK-TYPE - INTERNAL SUBROUTINE THAT CHECKS THE INPUT *
054900* RECORD FOR 'TYPE' STATEMENT. *
055000*******************************************************************
055100 0000-CHECK-TYPE.
055110 MOVE ZEROES TO WS0000-COUNTR.
055200 INSPECT SCLIN-AREA
055210 TALLYING WS0000-COUNTR FOR ALL WS0000-CHECKTYPE.
055220 IF WS0000-COUNTR IS EQUAL TO ZEROES
055230 THEN
055231 MOVE SCLIN-AREA TO SCLOUT-AREA
055250 ELSE
055260 MOVE WS0000-NEWTYPE TO SCLOUT-AREA.
055270 WRITE SCLOUT-AREA.
055280 PERFORM 1000-GET-RECORD
055290 THRU 1000-GET-RECORD-EXIT.
055291 0000-CHECK-TYPE-EXIT.
055292 EXIT.
055293 EJECT
064600*******************************************************************
064700* GET-RECORD - INTERNAL SUBROUTINE THAT READS THE INPUT RECORD. *
064800*******************************************************************
064900 1000-GET-RECORD.
065000 READ SCLIN
065100 AT END
065200 MOVE 'EOF' TO WS1000-EOF-FLAG
065300 GO TO 1000-GET-RECORD-EXIT.
065400 1000-GET-RECORD-EXIT.
065500 EXIT.

SMF Data

Endevor can create SMF records for almost every action performed by users in the Endevor facility. These records contain a gold mine of information for both the Endevor Administrator and Internal Audit.

The standard SMF reports provided with Endevor are superior to the “Last Action” reports also supplied with Endevor. One of the problems with the “Last Action” reports is that they do not show the execution of DELETE actions; after all, the element is deleted, so what is there to report on?

Conversely, the SMF records show all actions, including DELETEs. In this way, the Administrator can confirm exactly what happened to an element when “Endevor lost it”!

Another advantage of the SMF records is their usefulness as a measure of Endevor usage at a site. By accumulating the records, the Administrator can monitor how much work Endevor is actually doing. For instance, you can report how many MOVEs to production were performed over a range of dates, thereby measuring the amount of change, or volatility, a shop has encountered during that time.

As the SMF Log file tends to be very large, I recommend a simple job (written in SAS, for example) be run on a nightly basis to extract the Endevor records into their own file. This file should then be accumulated so that the Administrator has month-to-date, year-to-date, and yearly SMF files available for reporting.
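If you do not have SAS available, the IBM-supplied SMF dump utility IFASMFDP is one option for the nightly extraction. The following is only a sketch under assumptions: the dataset names are placeholders, and record type 230 stands in for whatever SMF record number your site assigned to Endevor at installation time.

//SMFEXTR  EXEC PGM=IFASMFDP
//INDD1    DD DSN=YOURHILI.DAILY.SMFDUMP,DISP=SHR
//OUTDD1   DD DSN=YOURHILI.ENDEVOR.SMFRECS,
//            DISP=(NEW,CATLG,DELETE),etc.
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INDD(INDD1,OPTIONS(DUMP))
  OUTDD(OUTDD1,TYPE(230))
/*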

One of the easiest techniques for accumulating the data is a rolling GDG that appends information. For instance, the initial SMF log file could be scanned on a regular nightly basis to dump the Endevor SMF record IDs. This file would then be input to a daily IEBGENER job as follows…

//SYSUT1 DD DSN=YOURHILI.ENDEVOR.SMFRECS,DISP=SHR
// DD DSN=YOURHILI.ENDEVOR.SMFMTD(0),DISP=SHR
//SYSUT2 DD DSN=YOURHILI.ENDEVOR.SMFMTD(+1),
// DISP=(NEW,CATLG,DELETE),etc.

The file created (the latest generation of YOURHILI.ENDEVOR.SMFMTD) will now contain the month-to-date records as extracted from SMF. At the end of the month, run a 2-step IEBGENER job as follows…

//SYSUT1 DD DSN=YOURHILI.ENDEVOR.SMFMTD(0),DISP=SHR
// DD DSN=YOURHILI.ENDEVOR.SMFYTD(0),DISP=SHR
//SYSUT2 DD DSN=YOURHILI.ENDEVOR.SMFYTD(+1),
// DISP=(NEW,CATLG,DELETE),etc.
:
//SYSUT1 DD DUMMY
//SYSUT2 DD DSN=YOURHILI.ENDEVOR.SMFMTD(+1),
// DISP=(NEW,CATLG,DELETE),etc.

The first step appends the month-to-date records into a year-to-date file. The second step creates an empty new generation of the month-to-date file so that when the evening daily SMF extraction job runs, it appends correctly into an empty dataset, starting the entire process again. At the end of the year, then, run a similar 2-step IEBGENER job as follows…

//SYSUT1 DD DSN=YOURHILI.ENDEVOR.SMFYTD(0),DISP=SHR
//SYSUT2 DD DSN=YOURHILI.ENDEVOR.SMFXXXX,
// DISP=(NEW,CATLG,DELETE),etc.
:
//SYSUT1 DD DUMMY
//SYSUT2 DD DSN=YOURHILI.ENDEVOR.SMFYTD(+1),
// DISP=(NEW,CATLG,DELETE),etc.

The first step copies the year-to-date file into an “archive” file. I suggest a naming convention that embeds the year into the file name to make for easy retrieval later on. The second step empties the year-to-date generation dataset, allowing the entire process to start again!

Ask, Permit, and Execute – The APE Philosophy


One of the largest challenges facing any Endevor installation is not technical, but rather the changes that must occur in the daily execution of the change and configuration discipline as it pertains to a site’s real versus perceived needs.

The easiest way that I have found to effectively implement a working change and configuration philosophy and discipline is to critically examine what I refer to as the APE questions. When addressed in a manner where the three areas interact with each other, an approach to effective change control can be identified with the appropriate controls being exercised at appropriate times by appropriate people.

First, a definition of each term:

  • ASK – This task identifies or examines who has the right to ASK Endevor to perform any identified action. In a batch sense of the word, this merely identifies who has the right to create the source control language (SCL) that Endevor may (or may not) execute to perform the specified action against an element. There is no harm in ASKing for an action to be performed; the harm only occurs when the action is permitted and executed!
  • PERMIT – This task identifies or examines whether or not the person who performed the preceding ASK task needs permission before the request can be granted. If permission is necessary, then it also identifies who must grant that permission.
  • EXECUTE – This task identifies the actual execution or occurrence of the action that has been ASKed and PERMITted. It may also identify who has the responsibility of performing this action.

Taken together, these three tasks provide the appropriate level of control required by most sites without having to resort to complex External Security Interface (ESI) rules and, at the same time, provide the effective “separation of duties” that Audit departments look for in the control of elements.

In a typical life-cycle, the definition of who performs each task and to what level changes. Let’s walk through a simple life-cycle. First, our example map:

ill1

The promotion rules that this site has are:

  • Stage 1 – this is the development stage. Everyone is expected to do their work in this environment.
  • Stage 2 – this is the quality assurance stage. When the QA group is ready, developers must move their elements into this stage for QA testing.
  • Stage 3 – this is the emergency fix stage. Developers may use this stage for midnight fixes. For the purposes of this document, I will not be outlining the steps involved in emergency fixes.
  • Stage 4 – this is the production stage. After passing QA testing, elements are moved into this stage for production status.

Let’s examine how the APE philosophy works at this site.

Stage 1 – Development

In development, the need for tight control must be weighed against development realities. In other words, there should be little to no reason to exercise tight controls! Development is a constant cycle of edit-compile-test and needs rapid turn-around. Therefore, each of the APE tasks needs to be examined in light of this reality.

ASK – There should be no rules controlling the ability of the developer to ASK for an action to be performed. The developer should have full access to all the Endevor actions in foreground and batch, since those are the quickest and most convenient modes for them to operate in.

The exceptions to this rule for this stage would be:

  • COBOL compiles. While allowing the developer to ASK for a GENERATE to occur in foreground, it would be ill-advised to allow the action to execute there, since the resource consumption would likely be excessive. But rather than write a complex ESI rule to control all foreground GENERATE requests, we can more effectively disallow foreground requests by disallowing the processor from executing in foreground via the processor group definition.
  • SIGNOUT OVERRIDE. Signout override ability should only be granted to project leads or other “senior” resources specific to a system and/or subsystem.
  • MOVE to Stage 2. Since this request would require an execution into the next stage, the rules governing APE in Stage 2 need to be applied. See the rules for Stage 2.

Responsibility: Developer

PERMIT – Since our objective for this stage is ease of use and developer friendliness, little permission is required for a developer to execute actions at this stage. If they attempt to execute one of the exceptions outlined under ASK, permission is denied. Other than that, permission is automatic.

Responsibility: Developer

EXECUTE – Again, simplicity dictates that the same person who ASKed for an action can, in fact, cause the execution to occur at this stage.

Responsibility: Developer

In summary, APE at this stage is likely performed in its entirety by one person: the developer.

Stage 2 – Quality Assurance

The Quality Assurance group wants to ensure they are ready before any developer moves or changes things in the QA environment. Therefore, tighter process controls are needed before Endevor should perform actions.

ASK – There should be no rules controlling the ability of the developer to ASK for an action to be performed. This includes all actions: GENERATE, DELETE, and so on. However, permission for any action requested against this stage will only be granted if the request is placed in an Endevor package. This includes MOVE requests from Stage 1, since they will be moving from Stage 1 to Stage 2. The developer will have to make their request in a package and cast the package, ensuring they are not forgetting anything relevant to the testing they will want to perform in Stage 2.

Responsibility: Developer

PERMIT – Since Quality Assurance wants to control what flows into their stage, they need to be set up to grant permission on what is or is not accepted. Therefore, after the developer has ASKed for an action to be performed, someone from QA needs to grant permission for the action, exercising appropriate control both over what flows through the stage and over when elements in the stage are regenerated.

In a related vein, many projects may also require that the project lead provide approval for the action to be performed. In this way, they can exercise appropriate management control over the events occurring on behalf of their project as well as coordinate efforts as they flow into QA.

Responsibility: Quality Assurance, possibly Project Lead

EXECUTE – Once permission has been granted, the person who actually causes the execution to occur may be the developer, the person in Quality Assurance who wants control over the stage, or even an automated process that checks whether permission has been granted and then automatically executes the actions.

Responsibility: Possibly developer, possibly Quality Assurance, possibly automated, etc.

The important thing to note at this stage is how the different tasks are now being performed by different individuals, ensuring no one individual is a bottleneck. This approach allows the developer to maintain control over the elements they want promoted by ensuring they are always the ones doing the ASKing; the PERMIT and EXECUTE tasks can be performed by others in the organization as reflected by job responsibilities. The PERMIT step, for example, is the “responsibility” task; the person granting permission for the action to take place is the appropriate person to assume responsibility for it.

Stage 4 – Production

The final stage to examine with our APE process is the production stage. In order to move into this stage, another package will need to be created.

ASK – Since the developer is the one with the biggest stake in what does or does not ultimately get promoted to production, it is most appropriate for the developer to continue assuming responsibility for ASKing the elements be moved into the production stage.

In the event an element needs to be deleted or archived from production, again, it is appropriate that the developer be allowed to ASK for the action to be performed. There is no harm in asking; the harm only occurs if inappropriate approval is given to perform the action being requested, and that should be the job function of an appropriate project lead or manager.

Responsibility: Developer

PERMIT – The production stage is the most sensitive of all and needs to be effectively controlled. Therefore, it is appropriate that oversight of what is being promoted into production be shared across groups such as Operations, Production Control, Project Leads and possibly even Quality Assurance. One or more of these groups should grant permission before the action being ASKed for is allowed to execute.

Responsibility: Any one or combination of Operations, Production Control, Project Leadership, Quality Assurance, etc.

EXECUTE – Once permission is granted, the actual execution of the requests may be contingent on a number of things such as job rescheduling, initiator setup, available promotion windows, etc. Therefore, the execution of the requests is commonly a clerical job function within either Operations or Production Control. This step may also be automated, much in the same way movement to Stage 2 could have been automated.

Responsibility: Typically Operations or Production Control

The point of this philosophy is to outline an appropriate separation of duties and to ensure responsibility lies where it best belongs. It is inappropriate for Endevor Administrators to try to “do everything for everybody”; in so doing, they typically wind up failing everyone in everything!

It is much more effective and efficient to “share the load”; that developers assume responsibility for the elements they have changed, from signout through production promotion, should not be in question. It is appropriate, as part of their professional role, that they accept this responsibility as part of the complete impact of the change they are making to the site’s production environment.

By using this APE philosophy, a site’s ESI and overall change control rules can be very simple. All developers have access to all actions in all stages in both foreground and background as far as the ability to ASK goes.

Inappropriate foreground execution is controlled through the processor group definition that will disallow the execution of generate processors in foreground[1]. SIGNOUT OVERRIDE authority is handled through the necessary ESI rules.

Inappropriate requests (ASKs) are simply denied during the PERMIT phase. If a stage needs controls, then actions in that stage must be done in a package and that package must be approved.

Finally, EXECUTION of the actions into a stage only occurs once the PERMIT phase has been passed.

[1] DELETE and MOVE processors can typically be executed in foreground since they usually only execute IEBCOPY or IEBGENER programs, as well as various Endevor utilities.

Automated Package Handling

Creating the Package

A common requirement in many sites is the need to automate the execution of various package actions. For instance, a package may contain certain elements that need to be in a specific region at a specific time. However, there is no guarantee that an actual person will be around or available to submit the package to ensure it “happens” at the right time.

Automated package execution was introduced to help shops with this issue. By specifying a date and time for the package to execute, a developer can help drive and define the promotion aspects of the elements he or she is moving up the lifecycle. Conversely, a developer can leave the specific dates and times in the EXECUTION WINDOW alone; the automated submission job will then treat the package as “open-windowed”, meaning it can run at any time it meets the criteria of the package submission job.

MODIFY -------------------- CREATE/MODIFY PACKAGE -----------------------------
OPTION ===>

B - Build Package Actions I - Import SCL
E - Edit Package C - Copy Package
N - Add Notes to Package

PACKAGE ID: ILLUSTRATE STATUS: IN-EDIT
DESCRIPTION ===>
PACKAGE TYPE ===> STANDARD
SHARABLE PACKAGE ===> N (Y/N) APPEND TO PACKAGE ===> N (Y/N)
ENABLE BACKOUT ===> Y (Y/N)
EXECUTION WINDOW FROM ===> 03APR00 00:00 TO ===> 31DEC79 00:00

INPUT PACKAGE ID ===>

FROM ISPF LIBRARY:
PROJECT ===> DUEJO01
GROUP ===> TEST
TYPE ===> SCLPDS
MEMBER ===>

OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>

As can be seen in this illustration, the Package Creation Panel supplies a specific “spot” for entering execution information; EXECUTION WINDOW FROM and TO. Most shops leave these values “as they are” since not every shop automates the execution of packages. However, if I wanted to create a package today (April 3, 2000) and wanted it to execute next week, I would modify the entry to look as follows:

MODIFY -------------------- CREATE/MODIFY PACKAGE -----------------------------
OPTION ===>

B - Build Package Actions I - Import SCL
E - Edit Package C - Copy Package
N - Add Notes to Package

PACKAGE ID: ILLUSTRATE STATUS: IN-EDIT
DESCRIPTION ===>
PACKAGE TYPE ===> STANDARD
SHARABLE PACKAGE ===> N (Y/N) APPEND TO PACKAGE ===> N (Y/N)
ENABLE BACKOUT ===> Y (Y/N)
EXECUTION WINDOW FROM ===> 10APR00 00:00 TO ===> 31DEC79 00:00

INPUT PACKAGE ID ===>

FROM ISPF LIBRARY:
PROJECT ===> DUEJO01
GROUP ===> TEST
TYPE ===> SCLPDS
MEMBER ===>

OTHER PARTITIONED OR SEQUENTIAL DATA SET:
DATA SET NAME ===>

Note that I have only changed the “FROM” entry; I want it to execute any time after midnight, April 10. In this case, I have an open-window after the FROM entry is met. I can tighten or close the window by specifying a time in the FROM time field and by specifying when to “close” the window in the TO fields.
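
For instance, to restrict the package so it may only execute during a specific overnight window (the date and times below are purely illustrative), the fields could be set as follows:

```
EXECUTION WINDOW FROM ===> 10APR00 01:00 TO ===> 10APR00 06:00
```

With both the FROM and TO fields filled in, the package is only eligible for submission between 1:00 AM and 6:00 AM on April 10.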

Aside from supplying the ability to open and close execution windows for packages, the creation of a package that will be automatically submitted is no different than creating any other package.

Documentation on automatic package submission is supplied in the Endevor Change Manager SCL Reference Manual. Much of the following is an extract from various subsections of that manual, with (hopefully) some clarification where it might be required.

Before setting up the automated job for package execution, a “minor” decision has to be made in terms of how Endevor is to submit the package jobs. Specifically: (a) do you want the process to submit one Endevor job with a separate execution step for each eligible package, or (b) do you want the process to submit a separate one-step Endevor job for each package? If you choose (b), a second decision follows: do you want each submitted job to carry the same jobname (thus ensuring the jobs execute serially), or do you want each job to have a unique name (thus allowing multiple packages to run in parallel, depending on the availability of initiators)? Obviously, the implications and ramifications of each of these decisions are significant.

If you choose (a) (one Endevor job with separate steps), then Endevor will create a job containing as many as 200 steps (assuming there are 200 packages that meet the execution criteria). If it were to encounter 300 packages ready to execute, it would submit two jobs: one with 200 steps and another with 100 steps.

Submission JCL

The following JCL is what is “scheduled” to run at the appointed times/windows in order to examine the Endevor packages. This is NOT the JCL that executes the package; rather, it is the JCL that submits subsequent JCL to the internal reader to execute the packages.

//ENBP1000 EXEC PGM=NDVRC1,PARM='ENBP1000'
//STEPLIB DD DSN=iprfx.iqual.AUTHLIB,
// DISP=SHR
//CONLIB DD DSN=iprfx.iqual.CONLIB,
// DISP=SHR
//C1MSGS1 DD SYSOUT=*
//**************************************************
//* Uncomment the C1MSGS2 DD Statement if you want *
//* the Summary Report written to this location. By*
//* default the summary is written to C1MSGS1. *
//**************************************************
//*C1MSGS2 DD SYSOUT=*
//SYSTERM DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//**************************************************
//* The following 2 DD statements are used only by *
//* SUBMIT PACKAGE action. *
//**************************************************
//JCLIN DD DSN=iprfx.iqual.JCLLIB(JOBCARD),
// DISP=SHR
//JCLOUT DD SYSOUT=(A,INTRDR),
// DCB=(LRECL=80,RECFM=F,BLKSIZE=80)
//ENPSCLIN DD *

This is the same submission JCL that is used for all batch package requests. Therefore, in the subsequent sections of this write-up (i.e. DELETE and ARCHIVE packages), this same JCL is used to cause the actions to execute against packages. However, I would define a PDS dataset to the DDNAME ENPSCLIN, with unique member names, so that I could control what the JCL is examining at each execution; for instance, one member named SUBMIT, another ARCHIVE, another DELETE, and another COMMIT. I would then schedule separate execution times for each of the different combinations: perhaps three times a day with the SUBMIT member, once a month with the ARCHIVE member, and every 90 days with the COMMIT member.

Since the examination JCL is submitting jobs to the internal reader, it will require a jobcard. As documented above, the jobcard is supplied in the JCLIN DDName statement. The location of your internal reader queue is specified in the JCLOUT DDName.
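
To illustrate the PDS approach described above, the ENPSCLIN DDNAME could be pointed at a controlled PDS member instead of instream data, and the JCLIN member would hold nothing but the jobcard. The dataset, member and job names below are hypothetical and would follow your site’s standards:

```
//* ENPSCLIN pointed at a PDS member (one member per action
//* type: SUBMIT, ARCHIVE, DELETE, COMMIT)
//ENPSCLIN DD DSN=iprfx.iqual.SCLPDS(SUBMIT),
//            DISP=SHR
//*
//* Sample contents of iprfx.iqual.JCLLIB(JOBCARD) - the
//* jobcard used for every job written to the internal reader
//NDVRPKG0 JOB (ACCT),'PACKAGE EXEC',CLASS=A,MSGCLASS=X
```

Scheduling a separate copy of the examination job per member then gives each action type its own execution cycle.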

SUBMIT PACKAGE Statement

The SUBMIT PACKAGE SCL statement is supplied as input to the ENPSCLIN DDNAME. The syntax rules may be found in the Endevor Change Manager SCL Reference Manual. In its shortest and simplest form, you could specify a SUBMIT PACKAGE SCL statement in the following form:

SUBMIT PACKAGE * .

In this form, the package examination JCL will check Endevor for any packages that are approved and for which the execution window falls within the date and time the examination JCL is executing. Packages found will be submitted as per choice (a) as documented above: one Endevor job with a separate execution step for each eligible package.

If we supposed we had an internal policy that stated only packages beginning with A2 are to be submitted automatically, the same SCL would have the following form:

SUBMIT PACKAGE A2* .

Only A2* packages would be examined; all others would be bypassed.

Causing unique jobs per package or unique jobnames per package is controlled through various options added to the SUBMIT PACKAGE SCL statement. For example, consider SCL submitted that read as follows:

SUBMIT PACKAGE *

OPTIONS MULTIPLE JOBSTREAMS .

This would cause the examination JCL to submit a unique job to the internal reader for each package that meets the submission criteria. However, in this example, each of the jobs would contain the same jobname and would thus execute serially. If we changed the SCL as follows, a different situation arises:

SUBMIT PACKAGE *

OPTIONS MULTIPLE JOBSTREAMS INCREMENT JOBNAME .

Using this SCL, the examination JCL would submit a uniquely named job to the internal reader for each package found. It creates the unique names by incrementing the last character of the jobname in the jobcard specified in the JCLIN DDNAME.
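
As a sketch of the effect, suppose the jobcard supplied in JCLIN carried the (hypothetical) jobname NDVRPKG0; with INCREMENT JOBNAME, three eligible packages would result in three jobs whose names differ only in the last character, along the lines of:

```
//NDVRPKG1 JOB ...   submitted for the first eligible package
//NDVRPKG2 JOB ...   submitted for the second eligible package
//NDVRPKG3 JOB ...   submitted for the third eligible package
```

Because the jobnames are unique, JES is free to run the jobs in parallel, subject to available initiators.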

COMMIT PACKAGE Statement

In much the same way that packages are submitted for execution, there is SCL syntax that can be supplied to cause automated “COMMITs”. Documentation on the syntax and other information is found in the Endevor Change Manager SCL Reference Manual. Again, in its simplest form, the SCL that could be supplied may look as follows:

COMMIT PACKAGE ABC .

“The COMMIT PACKAGE clause identifies the package you are committing. You can use a fully specified, partially wildcarded or fully wildcarded package ID. If you wildcard the package ID, you must specify the WHERE OLDER THAN clause. If you fully specify the package ID, the WHERE OLDER THAN clause is ignored.

“You can include imbedded spaces in the package ID. If the package ID contains an imbedded space or comprises only numeric digits (for example, 12345), enclose the package ID in either single or double quotation marks.

“OPTIONS

“OPTION clauses allow you to further specify package actions.

“WHERE OLDER THAN number DAYS – This clause allows you to specify the minimum age of the package you are committing. A package must be older than the number of days you specify in order to commit it. For example, if you specify WHERE OLDER THAN 30 DAYS and the current date is January 31, only packages executed successfully on or before January 1 are committed. There is no default value for the WHERE OLDER THAN clause. If you wildcard the package ID you must specify the WHERE OLDER THAN clause. The WHERE OLDER THAN value must be between 0 and 999, inclusive. You receive an error message if you specify a value outside this range.”

Therefore, to COMMIT all packages older than 30 days, your SCL syntax would look like:

COMMIT PACKAGE * OPTIONS WHERE OLDER THAN 30 DAYS .

Note that only packages in EXECUTED or EXEC-FAILED status can be COMMITed.
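
Following the scheduling suggestion earlier (a COMMIT member run every 90 days), the hypothetical COMMIT member supplied to ENPSCLIN might simply contain:

```
COMMIT PACKAGE *
       OPTIONS WHERE OLDER THAN 90 DAYS .
```

Each scheduled run would then commit any executed package that has sat untouched for more than 90 days.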

ARCHIVE PACKAGE Statement

The ARCHIVE PACKAGE SCL information is documented in the Endevor Change Manager SCL Reference Manual.

Many sites find this action extremely useful because it

  • allows them to reduce the size of their package dataset by archiving information and then deleting old packages
  • allows them to still run all package reports against an off-line dataset (i.e. tape) rather than against the Endevor package dataset.

Again, in its simplest form, the ARCHIVE PACKAGE statement would look as follows:

ARCHIVE PACKAGE ABC TO DSNAME 'XXX.ARCHIVE' .

This statement would then record all the information for package ABC to the DSNAME specified. This is an unlikely implementation of the facility since it does not make use of any of the truly powerful options available with the SCL statement. However, consider the following:

ARCHIVE PACKAGE *

TO DSNAME 'XXX.ARCHIVE'

OPTIONS WHERE OLDER THAN 30 DAYS

DELETE AFTER ARCHIVE .

In this example, all packages in an EXECUTED state that are older than 30 days would be copied to the archive dataset and then deleted from the Package dataset.

DELETE PACKAGE Statement

The final important batch package SCL action to utilize for package automation is the DELETE statement. It is documented in the Endevor Change Manager SCL Reference Manual.

Rather than give various examples of the different combinations, the format of the action that I would tend to use the most would be as follows:

DELETE PACKAGE *

OPTIONS WHERE OLDER THAN 90 DAYS

WHERE PACKAGE STATUS IS ALLSTATES .

Used in conjunction with the ARCHIVE PACKAGE statement above, this job would clean up packages that have not had any action performed against them for a period of 90 days. In other words, someone likely created a package and then forgot about it. As soon as someone uses a package, the date is reset; therefore, 90 days of complete inactivity equals deletion.

Since I have specified a PACKAGE STATUS of ALLSTATES, this statement will automatically delete packages that are IN-EDIT, APPROVED, EXECUTED (although those would already have been ARCHIVEd and DELETEd, so they wouldn’t be there!), EXEC-FAILED, etc. In other words, this statement becomes my “ultimate” cleanup of garbage on my package dataset.
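
Putting the ARCHIVE and DELETE pieces together, a single hypothetical housekeeping member for ENPSCLIN could read as follows (the archive dataset name is carried over from the earlier example):

```
ARCHIVE PACKAGE *
        TO DSNAME 'XXX.ARCHIVE'
        OPTIONS WHERE OLDER THAN 30 DAYS
                DELETE AFTER ARCHIVE .
DELETE  PACKAGE *
        OPTIONS WHERE OLDER THAN 90 DAYS
                WHERE PACKAGE STATUS IS ALLSTATES .
```

Run on a monthly cycle, the first statement preserves and trims the executed packages, while the second sweeps away anything else that has been abandoned for 90 days.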

Season’s Greetings!

Rather than the usual article about various things Endevor, I thought I would take this post as an opportunity to wish all the readers of this blog a Merry Christmas and the very best in the New Year!

I have known many of you readers personally for many years and met many of you during my tenure at CA (and some even before that in the Legent days!). This past year has been a challenging one as I have entered the field of Endevor consulting and made myself available to any and all needing help trying to get the most out of their Endevor installation. But it also has been extremely rewarding as I have re-established connection with many friends and acquaintances from days gone by!

I look forward to continuing my relationship and interaction with a group of people I have always considered to be the most dynamic and united in focus of any in the greater IT community.

Once again, please accept a heartfelt

Merry Christmas and Happy New Year!