Thoughts on Integrating EGL from IBM with Endevor

The following article is specific to a tool from IBM known as Enterprise Generation Language. I provide the information not so much as a solution specific to EGL but rather as a model of the tenets I believe are critical to effective source and configuration management for z/OS systems, the main one being “ultimately, the TRUE source (not just the generated or derived source) needs to be captured for auditing and management purposes”. It is not good enough for “models” on distributed platforms to be “the source” while we simply import whatever they produce as the complete application; I believe that to truly safeguard an application on z/OS, I must be able to recreate that application from the “stuff” I’ve stored… and my place of storage for applications is Endevor.

 

In the past, I was asked to investigate the options for integrating Enterprise Generation Language (EGL) for z/OS from IBM into Endevor. What choices does an Endevor site have for securing these applications, so that the same integrity Endevor gives to “native” code can also be achieved for “generated” code?

Based on my research, I have been able to determine the following:

Findings:

  • Unlike some other CASE tools that generate code for execution on z/OS, the EGL ecosystem requires the target language to be generated on the workstation. Other CASE tools (such as CA GEN) provide the option of generating the code on z/OS itself.

[Figure: EGL code generation flow, with the target language generated on the workstation]

  • One of the “choices” during COBOL code generation is to have the code automatically delivered, compiled, and otherwise made ready on z/OS directly from the Enterprise Developer workbench.

[Figure: COBOL generation flow from the Enterprise Developer workbench, including the PREP option for delivery to z/OS]

Note in this flow that at one point you can specify PREP=Y. This instruction on the workstation causes the generated COBOL, JCL, and, if necessary, BIND statements to be transferred to the mainframe for execution. Otherwise, all built routines remain on the workstation for delivery to the z/OS platform by whatever means you choose.

  • All sites contacted, or from whom I have been able to get information, have indicated that they are storing their EGL source in a distributed solution (either ClearCase or Harvest) and are storing the z/OS source in Endevor. The mechanism for storing the generated source in Endevor (i.e. manual or automatic) has not been determined.
  • Given that sites ARE saving something referred to as EGL source and storing it in their distributed solution, this, along with references in the EGL manuals, is evidence that there IS EGL source that needs to be stored.

Unknowns:

  • Is there a name, label, title, or other identifier in the EGL source that correlates to the generated z/OS elements? This is key to providing a quasi-automatic solution.

Design Options:

  • EGL in Distributed Solution/Manual delivery of z/OS components.

This option appears to be the most prevalent amongst those sites that are using EGL. Note that one of the other indicators from my research is the lack of sites using or implementing EGL at this time. While this may change in the future, there is limited experience or “current designs” to draw upon. This solution would, as the title implies, store the EGL in a distributed SCM solution, do the generation on the workstation, FTP or otherwise transmit the generated source to the mainframe, and then ADD/UPDATE the source into Endevor for the compilation.

Note that the transmission of the source generated on the workstation and the ADD/UPDATE of the source into Endevor can be accomplished today without signing onto the mainframe by accomplishing this step through Change Manager Enterprise Workbench (CMEW).

  • EGL in Distributed Solution/Automatic Delivery of z/OS components

In this scenario, the EGL would still be stored in the distributed SCM solution. However, if you specified PREP=Y, the source would automatically be delivered to, and compiled by, Endevor.

This scenario would require research into, and modification of, the IBM-provided z/OS Build Server. Based on the research conducted to date, the z/OS Build Server is a started task that invokes the site-specific compile, link, and bind processes. This process could, theoretically, be modified to instead execute an Endevor ADD/UPDATE action, so that the source is automatically stored and compiled/linked/bound by Endevor instead of by the “default” process provided by IBM.
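To make the idea concrete, the modified build server could submit a batch Endevor step that executes SCL along the following lines. This is a rough sketch only: the element, inventory, dataset, and CCID names are hypothetical, and the exact SCL clauses should be verified against your Endevor SCL Reference Guide.

ADD ELEMENT 'PAYROLL1'
     FROM DSNAME 'HLQ.EGL.TRANSFER.COBOL'
     TO ENVIRONMENT 'DEV' SYSTEM 'FINANCE' SUBSYSTEM 'PAY' TYPE 'COBOL'
     OPTIONS CCID 'EGLGEN' COMMENT 'GENERATED BY EGL WORKBENCH'
             UPDATE IF PRESENT .

Because the generate processor for the COBOL type performs the compile, link, and bind, the build server no longer needs its own compile procedures; it simply hands the generated source to Endevor.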

  • EGL in Endevor Complete

In this scenario, the z/OS generated components remain on the workstation. All components, including the EGL source, COBOL source, link statements, and anything else created by EGL, are then merged into a single “element”, with each different type of source perhaps identified by a separator line of some sort (maybe a string of “***********”). The ADD/UPDATE process of Endevor would then run the different source components through their appropriate compile/link/bind programs; i.e., the first step in the processor would create temporary files that unbundle the different source types, and these temporary files would then become the source that is generated.
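To illustrate the idea (the separator text and member names here are purely hypothetical), such a bundled element might look something like this:

*********** EGL SOURCE *************
 ...EGL part definitions exported from the workbench...
*********** COBOL SOURCE ***********
 ...COBOL generated on the workstation...
*********** LINK STATEMENTS ********
 INCLUDE SYSLIB(PAYROLL1)
 ENTRY PAYROLL1
 NAME PAYROLL1(R)
*********** BIND STATEMENTS ********
 ...DB2 BIND statements, if required...

The unbundling step of the processor would read the element, split it at the separator lines into temporary files, and pass each temporary file to the appropriate compile, link, or bind step.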

Note: In order for any of the following designs to work, the previously documented “unknown” must be resolved. These designs will only work if there is “something” in the EGL source that can be directly tied to the generated z/OS components.

  • EGL in Endevor / EGL Delivery to z/OS

In this scenario, code generation would take place on the workstation and PREP=Y would execute as provided by IBM, with no modifications (other than site-specific ones) to IBM’s z/OS Build Server. This results in the COBOL, link, and BIND source being delivered to a PDS on the mainframe and compiled there.

Assuming the delivery of the components to z/OS can be done to “protected” libraries, the EGL source could then be ADD/UPDATEd into Endevor using CMEW. The ADD/UPDATE process would then query the EGL source and automatically copy or otherwise bring in the COBOL, link, and bind source created and delivered earlier. The load modules created would be ignored; they would be recompiled under Endevor’s control.

There are a variety of other options, designs, and hybrids/combinations of the above ideas that I can think of. However, this paper should serve as the beginning of a discussion concerning which model or architecture best suits the needs of the site.

A Few More Simple Tips

Get Yourself A Dumb ID

A very helpful hint for the Endevor Administrator is to have a second userid on the system with access limited along the same lines as the most restricted developer.

This ID is very useful for ensuring the changes you make in Endevor work for everyone. Your normal userid tends to have “god-like” access to everything. Therefore, everything always works for you!

The “dumb ID” reflects your users and is a good verification check to ensure changes you are making truly are “transparent”.

Use supportconnect

It is surprising how many sites are unaware of the excellent support provided by CA for all of its products through the Internet. Everyone using CA products should be registered and using the facility found at supportconnect.ca.com.

Through effective use of supportconnect, the Administrator has an opportunity to be more than just “reactive” to Endevor problems; they can actually be “pro-active”.

For instance, supportconnect provides the ability to list all current problems and fixes for the Endevor product. The Administrator can scan down that list, recognize circumstances that fit his/her site, and apply a fix before the problem is actually encountered.

Alternatively, they can sign up to be alerted when new solutions are posted, thereby being made aware of problems and fixes without even having to request a list.

Other features of the tool are the ability to create and track issues, as well as your site-specific product improvement suggestions.

Positioning the Endevor Administrator in an Organization

One of the struggles companies often face with Endevor is trying to define where the administration of the system best resides. While not offering a definitive answer to this question, I provide the following excerpt from a report written for an Australian firm, in which I have endeavored (no pun intended) to provide at least some opinion.

Correctly positioning Endevor Administration in large sites can be as strategic as the decision to implement the product itself. Endevor represents a logical extension of change management, and as such should be autonomous. The dilemma presented is to be accountable for implementing company standards and procedures, but at the same time be responsive to the requirements of the true client – the developer. Initially the answer to this appears to be straightforward until you consider what the actual line management should be for the Endevor Administrators themselves. General practice is to locate Endevor Administrators in one of these areas:

  • Application Development
  • Change Management / Operations
  • Systems Administration

Development – this is one of the best areas for Endevor Administrators to be physically located, because they are working with their principal stakeholder. The major drawback with belonging to this reporting line is that they are responsible for maintaining company standards and procedures, which can potentially be compromised by a management request. Care must also be taken here to take the role seriously, because it usually decays into a caretaker or part-time role that no one is really interested in. Even though Endevor Administration is within the application development structure, developers should not be the ones managing Endevor.

Change Management / Operations – usually set up as a discrete business unit within Operations, Change Management walks the balance between maintaining corporate standards and being attentive to developers’ requirements, but with reduced risk of compromising their responsibility through a management request. Sites that select this option will usually have full time resources committed to the role, and consequently enjoy the benefits of that decision.

Systems Administration – although a realistic choice through technical requirements, positioning Endevor Administrators within this area is the least advantageous. The risk here is that they will see their role as enforcers of process first, before they take developers’ requirements into account. Traditionally they will not commit full time resources to the role, so users will miss out on features and functionality as new ‘best practices’ emerge.

In summary, the optimum could be to physically locate the Endevor Administrators with application developers, but their reporting line could be to Change Management / Operations or even Audit. No matter where Endevor Administration is located and what the reporting line, it is most important that the role is full time and taken just as seriously as that of the System Security team.

Catch Alls – Some Short-and-Sweet Tips

Optional Features Table

A good practice to get into is to regularly review the values and “switches” that have been placed in the Optional Features table supplied with Endevor. The source is found on the installation TABLES file and is member ENCOPTBL.

Each optional feature is documented, as are the various parameters that may be provided when activating the option.

Avoid Using Endevor within Endevor

Endevor processors allow you to recursively invoke Endevor from within Endevor. This is a common practice when you need to retrieve something related to the element being worked on in a processor. For instance, you may want to check the submission JCL against the PROC being processed, or perhaps component information is needed to ensure complete processing occurs.

Before deciding to include “Endevor within Endevor” (C1BM3000) in your processor, make sure what you want to do can’t be done with one of the myriad utilities specifically designed to execute in processors. In particular, CONWRITE has had many extensions added to it that allow retrieval of elements and of component information.

A utility will always execute neater, cleaner, sweeter, and faster than Endevor-within-Endevor.

Move Commonly Used JCL Statements to the Submission JCL

A very simple technique to improve Endevor performance and reduce the number of service units required by the facility is to move (or copy) commonly used JCL statements from the processor to the submission JCL.

Specifically, either move or copy the declarations for SYSUT3, SYSUT4, SYSUT5, etc. into the skeleton member XXXXXXX so that they are allocated automatically by the OS/390 operating system at submission time. Then, when Endevor requires those DDnames and allocations, they are already in place, saving the overhead of Endevor performing the dynamic allocation.
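As a minimal sketch (the unit and space values are illustrative), the statements added to the submission JCL skeleton might look like this:

//SYSUT3   DD UNIT=VIO,SPACE=(CYL,(5,5))
//SYSUT4   DD UNIT=VIO,SPACE=(CYL,(5,5))
//SYSUT5   DD UNIT=VIO,SPACE=(CYL,(5,5))

Because these DDnames are allocated once when the job starts, Endevor finds them already in place and does not have to dynamically allocate them for every element processed.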

While this does not sound like a “big deal”, in actuality it can make a significant difference. A typical COBOL processor, for example, will need to allocate each of the SYSUTx DDnames twice; once for the compile step and again for the link-edit step. If you are compiling 50 programs in an Endevor job, the allocations and de-allocations can occur over 300 times!

Based on personal experience, I would put the SYSUTX statements in BOTH the processor and the submission JCL. This is based on experimentation done that established a baseline CPU usage with the statements just in the processor. The first alteration removed the statements from the processor and placed them only in the submission JCL. This resulted in a drop in CPU usage. I then placed the statements in both the processor AND the submission JCL. This resulted in a further drop in CPU usage (lower than the first!). Therefore, no harm is done (in fact, good may be the result!) by having the statements in both locations, so I would leave it in both!

Some Available Traces

One of the problems with the Endevor documentation is that the different traces (and “hidden abilities”) that are available are scattered throughout different sections. Therefore, this list has been (and is being) constructed to try to capture all the different traces in one location.

  • EN$SMESI
    • ESI SMF record trace
  • EN$TRALC
    • Allocation service trace
  • EN$TRAUI
    • Alternate ID trace
  • EN$TRESI
    • ESI trace
  • EN$TRXIT
    • Exit trace
  • EN$TRITE
    • If-Then-Else trace
  • EN$TRAPI
    • API Trace
  • EN$TRLOG
    • Logon and logoff information
  • EN$TRSMF
    • Writes SMF records (needs to write to a dataset)
  • EN$TRFPV
    • Component Validation Trace
  • EN$AUSIM
    • AUTOGEN in simulation mode
  • EN$TROPT
    • Site Options Report (imho this should be a regular report not a trace)
  • EN$TRSYM
    • Symbolic resolution trace
  • EN$DYNxx
    • NOT A TRACE. A method whereby dynamically allocated datasets (e.g. those allocated by REXX in a processor) can be monitored by Endevor

Take the time to search through the manuals looking for “EN$”. You might be surprised at the things you discover that you never knew you had!

The Endevor ESI Look-Aside Table

Many sites are unaware of Endevor’s ESI Look-Aside Table (LAT), or have inadvertently disabled it. As the manual states:

“The security look aside table (LAT) feature allows you to reduce the number of calls to the SAF interface and thereby improve performance. The result of each resource access request to SAF is stored in the LAT. ESI checks the LAT first for authorization and if the resource access request is listed ESI does not make a call to SAF.

“Note: Do not use the LAT feature in a CA-Endevor/ROSCOE environment.”

Always ensure you have allocated a value to the LAT variable in the C1DEFLTS table, as this is a simple (and supplied) way of improving Endevor performance. Leaving the value blank or assigning zero to the field turns the function off, resulting in superfluous calls to your site security software during foreground processing.

The values that can be assigned to the LAT field range from 2 to 10, with each number representing one 4K page of storage. A good starting value is 4.

Unlimited Allocations

Another vexing problem that large shops run into is the fixed number of dynamic allocations that MVS allows for a single job. As of the writing of this paper, that limit was set to 1,600. In the event your job requested more than 1,600, the system would abend the job with an S822 abend code.

On the surface, it appears to be very easy to exceed this number in the normal course of processing within Endevor. Since Endevor jobs execute as a single step with program NDVRC1, and since a package or batch job could easily hold 5,000 programs, the mathematics alone would seem to indicate the job will abend early in the process.

Consider a simple load module MOVE processor; a processor that moves the DBRM, Object, and Listings from one stage to the next. Each program being moved will require 2 allocations each for the DBRM, Object, and Listing libraries, 3 each of the SYSUT3 and SYSUT4 working libraries, 3 SYSIN allocations, and 3 SYSPRINT allocations. This works out to a total of 18 allocations per program. Therefore, theoretically, in our package of 5,000 programs, the system should fail us at program number 89, since during the processing of that program we will exceed the 1,600 allocation limit (program 89 x 18 allocations = 1602 allocations).

However, in reality, that doesn’t happen. In fact, Endevor will merrily continue on its way until program number 534. Although further along than program 89, the package is still not complete… and why here? Why not program 89?

The answer lies in the manner in which Endevor allocates (and de-allocates) datasets during execution. After the execution of every program in the package/batch request, Endevor de-allocates all the datasets that were used by the processor for that element. In this way, the 1,600 limit is not reached early in the processing. In essence, each program gets to start with a clean slate.

However, this is not true for any datasets destined for SYSOUT (e.g. SYSPRINT). Endevor does NOT release these dynamic allocations and, instead, accumulates them as the job executes. Therefore, the 3 allocations done for each program for SYSPRINT in my example are cumulative, so that when I reach program 534, I have once again hit the 1,600 allocation ceiling (program 534 x 3 SYSOUT allocations = 1,602 allocations).

There are a couple of ways to resolve this problem, but I believe the best way is as follows.

For every processor, insert a new symbolic called something like DEBUG. Do conditional checking to see if you really need to see the output; after all, the majority of the time, the output contained in SYSOUT is not germane to the task at hand. You only need to see it if you are debugging a problem. Consider the following sample processor.

//DLOAD01S PROC DEBUG=NO,
:
//S10 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=MY.LIB.LIB1,
// DISP=SHR
//SYSUT2 DD DSN=&&TEMP1,
// DISP=(NEW,PASS),
// UNIT=&UNIT,
// SPACE=(TRK,(1,1),RLSE),
// DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
// IF &DEBUG = 'NO' THEN
//SYSPRINT DD DUMMY
// ELSE
//SYSPRINT DD SYSOUT=*
// ENDIF
//SYSIN DD DUMMY
//*
:
:

The default value for the symbolic &DEBUG is NO. Since SYSPRINT will resolve to DUMMY, the dynamic allocation will not occur and you will never incur the incremental count towards 1,600.

Again, I recommend this approach because it is seldom that you need the output from the different SYSOUTs in your processors unless you are debugging a problem. This approach allows you to “turn on” the output when you need it, but otherwise suppress it when you don’t. To turn on the output, just change the value of &DEBUG symbolic to something other than ‘NO’.

The second half of this solution is the FREE=CLOSE clause. This statement tells the system to release the allocation of the device or dataset as soon as it is closed, rather than holding it until the end of the job step. Endevor does this automatically for every dataset it uses except SYSOUT; you can code the release for the SYSOUTs yourself.
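For example, the SYSPRINT DD in the debugging branch of the earlier sample could be coded as:

//SYSPRINT DD SYSOUT=*,FREE=CLOSE

With FREE=CLOSE, the SYSOUT allocation is released as soon as the dataset is closed, so it no longer accumulates toward the 1,600 allocation ceiling for the rest of the job.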

However, be careful if you decide to place the clause on every SYSOUT without also analyzing which of the SYSOUTs you really need. It is entirely likely that you will flood your system’s JES job output elements (JOEs) with SYSOUT data if you do not exercise discretion.

MONITOR = COMPONENTS is a Drag and Code Your Processors Properly

This is actually an amalgamation of two related topics.

Another common implementation that I have seen is the over-use of the MONITOR=COMPONENTS facility in processors that create load modules.

In a previous blog post on “Composite Load Module Creation”, I detailed at length the effect that over-use of the facility can have on Endevor performance. This section has been inserted merely to draw attention to the issue once again: do not place MONITOR=COMPONENTS on every library included in a link-edit. It is wisest to monitor only those libraries that are relevant to your actual applications. Unless it is relevant to have a detailed inventory of all the COBOL support routines that are included in your IBM compiles, I highly recommend NOT monitoring those libraries!
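As a sketch (the dataset names are illustrative), the SYSLIB concatenation in a link-edit processor might monitor only the application object library:

//SYSLIB   DD DSN=YOURHILI.&C1SY..&C1ST..OBJLIB,
//            DISP=SHR,
//            MONITOR=COMPONENTS
//         DD DSN=CEE.SCEELKED,
//            DISP=SHR
//         DD DSN=DB2.SDSNLOAD,
//            DISP=SHR

Only the first library contributes ACM component records; the language and DBMS support libraries are still searched by the linkage-editor but are not tracked.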

And believe it or not, Endevor can be very forgiving of coding errors in its processors. Consider the following processor…

//DLOAD01S PROC SYSA=DUMMY,
//              UNIT=VIO
//S10      EXEC PGM=IEBGENER
//SYSUT1     DD DSN=MY.LIB.LIB1,
//              DISP=SHR
//SYSUT2     DD DSN=&&TEMP1,
//              DISP=(NEW,PASS)
//SYSPRINT   DD &SYSA
//SYSIN      DD DUMMY
//*
:
:

In this example, my definition for the temporary dataset name &&TEMP1 is incomplete; I have not specified any other attributes of the file other than it is new and it should be passed. Endevor will execute this processor correctly; in other words, it will not abend. Instead, Endevor will make assumptions regarding the dataset and attempt to resolve the problems, usually quite correctly.

However, the problem you may run into is mysterious page-outs occurring on your Endevor jobs. Endevor batch jobs will page out during execution and likely never page back in again. This typically results in the job eventually being terminated, either canceled by your Operations area or timed out by the system with an S522 (maximum wait time exceeded) abend in your JES log.

This occurs because of the manner in which Endevor enqueues and dequeues datasets for read and write. Since the information supplied in the processor is incomplete, Endevor must make some assumptions based on a finite set of rules. If, for some reason, it must make those assumptions again for a related, or even unrelated circumstance, then it is possible it will put itself into a “deadly embrace”.

The fix to this problem is simple. Just remember to code your processors correctly with all values present.

//DLOAD01S PROC SYSA=DUMMY,
//              UNIT=VIO
//S10      EXEC PGM=IEBGENER
//SYSUT1     DD DSN=MY.LIB.LIB1,
//              DISP=SHR
//SYSUT2     DD DSN=&&TEMP1,
//              DISP=(NEW,PASS),
//              UNIT=&UNIT,
//              SPACE=(TRK,(1,1),RLSE),
//              DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//SYSPRINT   DD &SYSA
//SYSIN      DD DUMMY
//*
:
:

Full and Incremental Unloads

Many shops rely on volume backups in order to provide their emergency and disaster recovery backups. However, there is an inherent problem with this approach.

A volume backup is a point-in-time backup that is, by definition, a physical solution. Endevor, however, maintains its information and inventory logically, independent of the physical location of the elements themselves. Therefore, unless all the information for an application happens to be stored on the same physical DASD device when the volume backup occurs, there is a distinct possibility of serious synchronization problems.

To illustrate, my base libraries may be on SYSVL1, the delta libraries on SYSVL2, and my MCF libraries on VSMVL1. The volume backups for these particular devices occurred at 17:23, 18:34, and 19:02 respectively.

During that same time, I also have a regularly scheduled Endevor batch job that runs every hour on the hour to check for approved packages that meet the time windows for execution. The jobs ran, therefore, at 17:00, 18:00 and 19:00.

During that window, the 18:00 run found a package to execute and moved 200 elements from one stage to the next. The 19:00 run found another package that executed GENERATEs against 100 elements.

Based on this information, what do my volume backups contain for my system? If I restored these datasets from the backups, my base library would not match my delta library. And neither of them would match my MCF. In other words, I would have a serious problem to address in the middle of what is already a disaster (i.e. the event that forced the restore in the first place).

Using full and incremental UNLOADs solves this problem by addressing entire applications at the time of the unload, independent of physical location. Although time-consuming, the solution provided in the event of a disaster is comprehensive, complete, and correct. If my UNLOAD starts at 17:23, then no changes can occur to my application, regardless of file, until the UNLOAD is complete. This ensures that, in the event I need the data to RELOAD, I have complete information for my base, delta, and MCF files.

Am I advocating no longer doing volume backups? No, not at all. I recommend continuing to perform volume backups but keep the UNLOAD files as backup for your backup! During a disaster recovery exercise, the steps I recommend are to restore the volume backups and then run an Endevor VALIDATE job. If the job comes back clean, you’re good to go! But if there’s a problem, you have the necessary files to restore things to a stable state.

A final note: the format of the full and incremental UNLOAD files is the same as that created by the ARCHIVE action. Therefore, a secondary use for the UNLOAD file can be to restore “accidentally” deleted elements; in other words, even though UNLOAD files are not designed for restoring specific elements, elements can in fact be restored from them using the same RESTORE SCL you would use against an ARCHIVE file. Just point it at the UNLOAD file instead!
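As a very rough sketch only (the inventory names are hypothetical, and the exact clause order should be confirmed against the SCL Reference Guide), restoring a single element from an UNLOAD file looks essentially the same as restoring from an ARCHIVE file:

RESTORE ELEMENT 'PAYROLL1'
     FROM DDNAME UNLDFILE
          ENVIRONMENT 'PROD' SYSTEM 'FINANCE' SUBSYSTEM 'PAY'
          TYPE 'COBOL' STAGE NUMBER 2
     TO ENVIRONMENT 'PROD' SYSTEM 'FINANCE' SUBSYSTEM 'PAY'
          TYPE 'COBOL' STAGE NUMBER 2
     OPTIONS CCID 'RECOVERY' COMMENT 'RESTORED FROM UNLOAD' .

The only difference from a normal restore is that the DD (or DSNAME) points at the UNLOAD file rather than at an ARCHIVE file.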

Use IF-THEN-ELSE


If your Endevor Processors are like most shops, you sometimes find yourself in the position of trying to remember which symbolics to override for which functions in which stage of which environment for which system… you know what I mean.

Many Administrators have taken advantage of symbolics in their CA-Endevor processors so that they can control what is happening where. Unfortunately, we sometimes forget to customize exactly the same processor in exactly the same way for every system in a stage. We have been able to ensure some consistency through the creative use of EXECIF and condition-code checking (plus the later introduction of Batch Administration), but those tools are relatively limited, leaving human intervention to complete the necessary customization.

Fortunately, Endevor introduced a new tool to the Administrator’s arsenal: IF-THEN-ELSE processing. With a little imagination, some re-analysis of your present processors, and a little structure thrown in for good measure, you now have the opportunity to simplify your life as an Administrator, or at least reduce the number of calls due to processors operating differently across systems.

Consider the following processor…

//COPYSRC PROC INLIB=NDVR.&C1SY..&C1ST..SRCELIB1,
// OTLIB=NDVR.&C1SY..&C1ST..SRCELIB2
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4,
// EXECIF=((&C1ST,NE,STG1),
// (&C1ST,NE,STG2),
// (&C1ST,NE,STG3))
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=&INLIB,
// DISP=SHR
//OUTDD DD DSN=&OTLIB,
// DISP=SHR
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
//* END OF PROCESSOR

This is a very simple processor that could be used to copy source from one library to another library. The conditions on the EXEC statement (i.e. the EXECIF statements) ensure the processor will never execute when the StageID is STG1, STG2, or STG3; otherwise, it will execute. While extremely simple, it will serve to illustrate the potential of IF-THEN-ELSE.

In our make-believe shop, there is no dataset that actually is suffixed “SRCELIB1” or “SRCELIB2” as in the example. Instead, the Administrator must remember to specify the real library name for the source-type the processor is defined to. Unfortunately, the Administrator did not embed or directly relate the name of the libraries to the types he/she wants to manipulate. So, consider the following table…

[Table: element type to library-name suffix mapping for the INLIB and OTLIB symbolics]

The symbolics INLIB and OTLIB must be overridden manually by the Administrator to supply the correct suffix for each type every time the processor is used. If they forget, the processor abends.

So… how can we improve this processor? In essence, how do we start making it a “smart” processor?

Let’s start with the EXEC statement.

Unlike regular JCL, IF-THEN-ELSE processing within ENDEVOR has full access to all the ENDEVOR symbolics. So why not use the opportunity to clarify (and eliminate) the EXECIF statement?

//COPYSRC PROC INLIB=NDVR.&C1SY..&C1ST..SRCELIB1,
// OTLIB=NDVR.&C1SY..&C1ST..SRCELIB2
// IF (&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
//INDD DD DSN=&INLIB,
// DISP=SHR
//OUTDD DD DSN=&OTLIB,
// DISP=SHR
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

Controlling the execution step in this way allows you to conditionally execute programs within your processor based on (for example) Processor Group Name, Type, Element… any processor symbolic supplied by ENDEVOR or, for that matter, any symbolic you define to the processor.

Now, let’s eliminate the need for library symbolics entirely within the processor by adding some intelligent checking based on our known nomenclature for libraries.

//COPYSRC PROC
// IF ((&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3)) AND
// ((&C1TY = JCL) OR
// (&C1TY = PROC) OR
// (&C1TY = COBOL) OR
// (&C1TY = PLI)) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
// IF (&C1TY = PROC) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..PRCLIB,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..PRCLIB2,
// DISP=SHR
// ELSE
// IF (&C1TY = JCL) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..DRVJC,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..DRVJC2,
// DISP=SHR
// ELSE
// IF (&C1TY = COBOL) THEN
//INDD DD DSN=NDVR.&C1SY..&C1ST..COBSRCE,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..COBSRCE2,
// DISP=SHR
// ELSE
//INDD DD DSN=NDVR.&C1SY..&C1ST..PLSRCE,
// DISP=SHR
//OUTDD DD DSN=NDVR.&C1SY..&C1ST..PLSRCE2,
// DISP=SHR
// ENDIF
// ENDIF
// ENDIF
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

In this example, I have allowed types other than PROC, JCL, and COBOL to fall through to the PLI libraries; this is safe because the conditional execution of the step was changed to also check for source type, so only the four listed types ever reach this code. Combinations of IF-THEN-ELSE on EXEC statements and on DD statements provide a powerful new tool for controlling both the execution of steps within processors and the libraries they use.

Experiment! In a recent upgrade to a linkage-editor processor, I was able to remove as many as 30 symbolics that were prone to just-plain human error.

If you want to structure or simplify the “look” of your processors, try using INCLUDEs. You could, for example, put all the different library sets into separate INCLUDE members. Now your processor could look like the following…

//COPYSRC PROC
// IF ((&C1ST NE STG1) AND
// (&C1ST NE STG2) AND
// (&C1ST NE STG3)) AND
// ((&C1TY = JCL) OR
// (&C1TY = PROC) OR
// (&C1TY = COBOL) OR
// (&C1TY = PLI)) THEN
//COPY EXEC PGM=IEBCOPY,
// MAXRC=4
//SYSPRINT DD SYSOUT=*
// IF (&C1TY = PROC) THEN
++INCLUDE PROCLIBS
// ELSE
// IF (&C1TY = JCL) THEN
++INCLUDE JCLLIBS
// ELSE
// IF (&C1TY = COBOL) THEN
++INCLUDE COBLIBS
// ELSE
++INCLUDE PLILIBS
// ENDIF
// ENDIF
// ENDIF
//SYSIN DD *
COPY INDD=INDD,OUTDD=OUTDD
MEMBER=(&C1ELEMENT,,R)
/*
// ENDIF
//* END OF PROCESSOR

There is a lot of potential in IF-THEN-ELSE processing. Use your imagination and automate yourself out of trouble!

INUGE: A Call to Action - Or is it?

First of all, I’ve come to understand that not everyone is aware there has been an International User Group for Endevor (INUGE) in existence for more than 20 years. It has been instrumental in ensuring the user community speaks with a unified voice, communicating the global Endevor community’s needs rather than individual enterprise wants. It has been a key component in driving appropriate strategic changes to the software many enterprises use to ensure the integrity of their production environment.

The past couple of weeks have seen a little activity that should concern every Endevor administrator.

It began March 25 2016 with a note from Stuart Ashby on his CA Community blog:

“I just wanted to let everyone know that the http://www.inuge.com domain will not be renewed when the current contract with the ISP expires.

“I assume that this means everything on the site will be disposed of by the ISP so if you want to salvage anything, you should waste no time.”

Phon Shuffitt provided the following eloquent response a few days later:

“This is a sad day for me to see the end of the INUGE era, not just the website but the idea of what this User Community was founded on in its day. The purpose of this CA User Group was to share common ideas in an open and unbiased forum. The concept of the International User Group for Endevor was formed independently to unite the Endevor user community with one voice communicating into CA for recognition of product evolvement into its current form. CA was so impressed with this process that it is now the basis of how all of CA’s product lines are enhanced, with ideas coming from user involvement and requests. This has allowed even more users to present ideas to CA, but somewhere in the translation the discussion of the enhancement requests has been lost.

“The local, regional and INUGE communities were a way to share knowledge without reinventing the wheel, because so many of us have similar needs for customization to reinforce standards in our own shops. After all, we have various product integrations into Endevor processes that may or may not be a CA product line, which may cause a conflict of interest with CA managing the sole Community without the independence of INUGE. This is how the Shareware concept was brought about, where a user was able to solve an issue with a customization and then was willing to share it with the community. Many of these obstacles are no longer a problem in today’s world, therefore you may not require that customization; however, it has always been a resource to “go-to” if needed, and once you downloaded it, you owned it. Since it was user developed and donated to INUGE, I personally do not believe it should be shared on this site.

“I have been an active member of my regional user group and INUGE since 1991; I have served in many leadership roles over these past 25 years and have seen the communities evolve. However, over the last couple of years we have relied more heavily on CA’s input, direction and guidance, which is great in many aspects, but in doing so we have lost our independence. I have also seen in past years the dissolution of so many user communities, possibly due to legal restrictions or limited company funds for travel or many other unknown factors, and the CA Endevor Community portal has helped fill in the gap. But we as users and community members are missing out on so much more in those face-to-face meetings, round table discussions and sharing experiences across the board. This is very disheartening as new users/Admins now have only limited means to gain that valuable knowledge. After all, a community is only as great as its leaders, and we have not had enough of our members step up to lead in order to keep all of these communities healthy.”

In order to reach a greater audience, Stuart’s note was recreated as a question posted by Philip Gineo on the CA Community board for Endevor:

Posting this here so Stuart’s post shows up in the “CA Endevor” community feed. Is I-NUGE finished as a concept?

The following is the reply that I posted:

“All I can do is weigh in with a personal opinion, Phil. And since most of you know me, you also know I inevitably have one…

“Is INUGE finished as a concept? I would argue a vehement “no”. The concept is sound. An international community of users centered around the effective usage of a software product and its inherent processes that are absolutely essential for the well-being of companies and governments around the world is a noble undertaking.

“Is INUGE in its current form sustainable or arguably healthy? I would also say “no”…. and its current state gives evidence of the consequences of years of dependence on CA as opposed to operating independently.

“I think Phon was quite eloquent in her response to Stuart’s post in articulating the dismay many of us feel at the current state of affairs. From the erosion of influence to the sunset of the website, I think the siren’s song of CA’s sponsorship has inadvertently steered us onto the shoals of irrelevance.

“Since coming back into an active Endevor administrative role and engagement with the user community, I have been dismayed at the “state of the nation”. Some time ago, I posted a note in Linkedin (in the “Endevor Professionals” group) regarding this observation and asked whether it was time for a reboot. The crickets that responded were deafening… Stuart was (and is) accurate in his response to the post in that the support for INUGE appears to be disappearing and becoming non-existent.

“But I don’t think it needs to die.

“I think it needs to be revitalized around a more aggressive concept that goes beyond CA’s sponsorship, with a more defined role, goals, and objectives. Perhaps it’s time for a new constitution… maybe it’s time it took on the role of certifying Endevor professionals, or of certifying what are REALLY best practices based on what REAL administrators and users encounter, instead of what the last service contractor or previous Endevor administrator left on-site to be supported.

“To steal a phrase from one of the American candidates… We need to make INUGE great again!

“And the potential is there… arguably more so than it ever has been, with the electronic tethers we all have access to these days. Why not have a quarterly meeting with objectives for the organization to achieve? Why not start voting on things WE bring up instead of items fed to us from CA? Why not form a voting bloc for (and sometimes against) “ideas” that are weirdly upvoted because an eclectic few want something in Endevor and the rest of us “just don’t care”? Why not be the “go-to” organization for the “correct” way to leverage Endevor instead of having newcomers, or our replacements as we retire, wallow in the dark and make Endevor behave weirdly because they didn’t know better?

“This doesn’t HAVE to be the end… but it can be a call to a new beginning.”

I bring this to this blog because I know the people that take the time to check and read it are generally actual Endevor administrators with a vested interest in the activities around the Endevor product.

And I invite anyone and everyone to post their opinion either here or on the CA Community website. Is there a need for a group like INUGE? Should efforts be made to save it? Reconstitute it? Evolve it?

Where should we go from here?

Composite Load Module Creation

It is not unusual for shops to make use of composite, statically linked load modules that consist of many (sometimes hundreds of) subroutines. The purpose of this section is to examine one commonly used method of creating those modules, and to show how the process, and the time, of creating them in Endevor can be significantly reduced.

First of all, let’s examine the steps typically followed in creating a composite load module:

  • Programs are compiled.
  • The output from the compile step is run through either BINDER or IEWL, and the result is stored in an “executable” or “object” library with a RECFM of U.

At this point, many shops begin to deviate. Depending on what I will call the “architectural application normalization”, this “link-edit” step may or may not be executed with an input parameter of “NCAL”. If it is run with “NCAL”, then I will assume you are normalized; if not, then not.

What’s the difference?

The “NCAL” input parameter instructs the link-edit program on whether or not to try to resolve the external calls or addresses referred to in the compiled program. In essence, it controls whether or not the output from the step is an “executable” object (i.e. one with external addresses resolved) or will require further linking at a future point in order to resolve the external calls.

Why would you use “NCAL”?

The NCAL parameter used in conjunction with a second separate link-edit job can be used to create “efficient” composite load modules. In essence, shops that make extensive use of the NCAL parameter build up a library of submodule object libraries that are then linked into composite load modules on a “mix-and-match” basis.
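For reference, here is a sketch of what the compile-time link-edit step looks like in an NCAL shop, as it might appear in an Endevor generate processor (the dataset names and parameters are illustrative):

//LKED     EXEC PGM=IEWBLINK,PARM='NCAL,LIST,MAP'
//SYSLIN   DD DSN=&&LOADSET,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=YOURHILI.&C1SY..&C1ST..NCALLIB(&C1ELEMENT),
//            DISP=SHR
//SYSPRINT DD SYSOUT=*

Because of the NCAL parameter, the member written to the NCAL library still contains unresolved external references; those references are resolved only once, in the separate composite link-edit described below.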

To illustrate, consider the following diagram that maps the calls of composite load module “A”.

[Figure: call map of composite load module “A” and its subroutines]

If we assume that this shop is not using the “NCAL” link-edit parameter, executable load module “A” is created by compiling (and linking) program “A”; the output from the compile job for “A” is the “A” load module. Therefore, if a change is done to any submodule that is part of the composite load module (for instance, “F”), it is “linked” into the “A” load module by, again, recompiling program “A”.

But what have we really created if every submodule has resolved its own set of external addresses? If every program is compiled and the “link-edit” step does not contain the NCAL parameter, the final result when “A” is compiled and linked is to have a load module that looks more like the following illustration.

[Figure: composite load module “A” in which each submodule has already resolved its own external addresses]

This figure illustrates that every submodule contains the full address resolution of every subroutine underneath it; each routine is executable in its own right (although you would likely never execute any one of them as a stand-alone program).

Therefore, when “A” is compiled and linked to create composite load module “A”, each address is resolved (once again) for each subroutine that has already resolved each external address. In other words, a large degree of redundancy occurs in the final load module and the size of the actual executable can be much larger than would result if each external address were resolved only once. If you consider that each programming language also results in calls to other external support routines, the degree of redundancy increases many times.

However, if we compiled and supplied “NCAL” as the input parameter to the link-edit step in our compiles, each subroutine becomes a stand-alone piece with unresolved external addresses. When all programs have been compiled, a separate link-edit job can be created that resolves all addresses for all routines in the composite load module only once. Input to a link-edit job contains “INCLUDE” statements, an ENTRY statement, and a NAME statement. The input to create composite load module “A” may look like the following.

INCLUDE SYSLIB(A)
INCLUDE SYSLIB(B)
INCLUDE SYSLIB(C)
INCLUDE SYSLIB(D)
INCLUDE SYSLIB(E)
INCLUDE SYSLIB(F)
INCLUDE SYSLIB(G)
INCLUDE SYSLIB(H)
INCLUDE SYSLIB(I)
INCLUDE SYSLIB(J)
ENTRY A
NAME A(R)

Since this is source like any other source, it is an excellent candidate for Endevor control. Simply define a type such as “LKED” and keep these statements as the “source” for your composite load modules.
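A minimal sketch of the stand-alone link-edit that consumes these statements (the dataset names are illustrative) might be:

//LINK     EXEC PGM=IEWBLINK,PARM='LIST,MAP,XREF'
//SYSLIB   DD DSN=YOURHILI.PROD.NCALLIB,DISP=SHR
//SYSLMOD  DD DSN=YOURHILI.PROD.LOADLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD DSN=YOURHILI.PROD.LKED(A),DISP=SHR

Under Endevor, the same step would sit in the generate processor for the “LKED” type, with SYSLIN pointing at wherever the processor has written the element being generated (for example, a temporary file produced by an earlier step).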

Now when you change a subroutine in the composite load module, it is no longer necessary to recompile the mainline. Instead, you merely compile the subroutine and then re-link the composite load module source that uses the subroutine. One of the advantages of this approach is that you mitigate risk to your environment by not unnecessarily recompiling mainlines that have not had any changes done to them.

Another hint to keep your “LKED” source simple is to construct your link-edit job with all of the composite load module input (i.e. object) libraries defined to DDNAME SYSLIB. This is the default DDNAME that the linkage-editor scans to resolve external addresses, making an INCLUDE unnecessary for every subroutine; an INCLUDE is only necessary if the member name of the subroutine in the input library is different from the “called” name in the program that calls it. This can happen if you have multiple entry points in a program and are using those entry points instead of the actual program name. With this approach, the “LKED” source can shrink to something like the following.

INCLUDE SYSLIB(A)
ENTRY A
NAME A(R)

You may want to remove INCLUDE statements and let the linkage-editor determine for itself the routines to pull into the composite load module. If you leave an INCLUDE statement in your LKED type but no longer call that subroutine, the module will still be part of your composite load. This, again, can result in a “larger than necessary” load module and superfluous information in Endevor if you are tracking with the Automated Configuration Manager (ACM).

So, let’s assume you have bought into this “normalization” scheme and have begun to change your approach. The next hurdle you encounter is that the link-edit step seems to take an inordinate amount of time inside Endevor. Why is this happening?

The short answer is ACM and use of the “MONITOR=COMPONENTS” clause in your link-edit processor. For proof, try executing the link-edit inside Endevor without the MONITOR=COMPONENTS turned on. You should find that the link-edits are as quick inside Endevor as they are outside.

But this isn’t a good solution. ACM captures extremely valuable information that is vital for effective impact analysis. If I don’t know which composite load modules use which subroutines, how do I know what to re-link and what to re-test?

So the first step is to determine why ACM adds so much time to the link-edit step. When ACM is activated, it tracks the “opens” that occur when your element is being generated. In the case of a link-edit, it captures the library name(s) that are being accessed in order to construct your composite load module.

Ideally, ACM can get the information it requires from the directory of the library being accessed. Unfortunately, this does not work with “executable”/object libraries with a RECFM of U. When a library is accessed with this type of definition, ACM determines that this is a “load”-type library and knows it cannot get the information it needs from the directory. Therefore, ACM issues its own OPEN and READ against the member being linked in and reads the information it requires from the member’s CSECTs. The net result is that ACM causes double the opens/reads/closes for every member linked into a composite load module. Is it any wonder, then, that link-edits can take an inordinate amount of time?

So how can we improve this process? Is there another approach that gives us the best of both worlds i.e. composite load modules without redundant addresses and ACM data with all the integrity we rely on?

The answer is yes, but only if you have migrated to MVS 4.x or higher and are using the binder instead of the old IEWL program to perform your link-edits (check with your internal technical support).

Let’s go back to the beginning and review the steps we can now take…

  • Programs are compiled.

STOP! You have all that you need now! The subsequent link-edit step is no longer necessary.

But what do you have?

  • Compile listing (keep this. You never know when Audit might ask for it!)
  • The “true” program object that was written to DDNAME SYSLIN during the compile

This “true” program object is quite different from the ones we created out of the link-edit step. For one thing, the file is a sequential FB LRECL 80 dataset. For another, the content bears no resemblance to the output from the link-edit step, regardless of whether you used NCAL or not.

So if we have enough at this point, why have we always included the link-edit step after the compile? The answer is that we had no choice. The old IEWL program that existed in previous MVS operating systems required its input to be either all “true” object libraries (i.e. FB LRECL 80) or all “executable” libraries (RECFM U). They could not be mixed and matched.

Since language support libraries and DBMS support routines were all supplied as “executable” libraries and since these libraries came from a variety of vendors, we were forced to supply our own in-house written routines in the same format for the link-edit to work.

The new BINDER routine has changed all that. BINDER allows you to mix, in the same concatenation list, libraries of fixed and “executable” format. In other words, the datasets in the concatenation of SYSLIB DDNAME in the BINDER step can consist of “true” object libraries (FB LRECL 80) AND “executable” libraries (RECFM U).
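A minimal sketch of such a concatenation (the dataset names are illustrative):

//SYSLIB   DD DSN=YOURHILI.&C1SY..&C1ST..OBJECT,DISP=SHR    TRUE OBJECTS (FB 80)
//         DD DSN=YOURHILI.&C1SY..&C1ST..NCALLIB,DISP=SHR   OLD NCAL OUTPUT (RECFM=U)
//         DD DSN=CEE.SCEELKED,DISP=SHR                     VENDOR ROUTINES (RECFM=U)

The binder resolves a given external reference from the first library in the concatenation that contains the member, so the “true” object library should be placed ahead of the older RECFM=U libraries.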

So how does this help Endevor and ACM? ACM can now get the information it needs to track from the directory of the “true” object library (FB LRECL 80). That means an extra open/read to the library itself is not required and that saves significant time.

This is an excellent going-forward strategy. In order to exploit this capability, it will be necessary to modify your processors by adding a step that copies the output from the compiler’s SYSLIN file to a PDS that you save. This step is necessary because most compilers insist that SYSLIN be a sequential file, and the easiest thing to administer and concatenate into your link-edit processor is a PDS of the saved objects. A simple insertion of code like the following should do the trick for you.

//*
//********************************************************************
//* GENER TO PDS TO SAVE SYSLIN MEMBER *
//********************************************************************
//*
//GENNO1 EXEC PGM=IEBGENER,
// COND=(5,LT),
// MAXRC=8
//*
//SYSPRINT DD DUMMY
//SYSUT1 DD DSN=&&SYSLIN,
// DISP=(SHR,PASS)
//SYSUT2 DD DSN=YOURHILI.&C1SY..&C1ST..OBJECT(&C1ELEMENT),
// MONITOR=&MONITOR,
// FOOTPRNT=CREATE,
// DISP=SHR
//SYSIN DD DUMMY
//*

Start saving the “true” objects and including them in your link-edits. Over time, as your inventory of “true” objects builds, the linkage-editor and Endevor/ACM will find what they need in the new libraries, and the extra reads to the old libraries will become a thing of the past.