Defining “Best Practices”

The term “best practices” is often bandied about as a catch phrase for what some people hope is a panacea of methods that will solve all their problems. Other people use it as a replacement for saying “do it my way”. Still others correctly use the term to identify commonly proven practices that have stood the test of both time and review.


It is important, then, to define what “best practices” means in the context of vendor-provided software. This is particularly true of a product like Endevor. Best practices with any 3rd party software can typically be identified by the following characteristics:

1 – It makes use of the software’s implicit designs and methods. All software comes with an implicit intention for which the vendor designed it and, arguably, how the vendor intended it to be used. Any software can be bent to “technically” do other things (such as holding data in Endevor rather than source code), but if you start doing things it was not really intended or designed to do, you can find yourself hitting a “wall”. The very fact that there is a “wall” to hit is a clear sign that what you are doing is not a best practice.

In the case of Endevor, I am a firm believer in exploiting its innate capabilities and using fields/definitions in the tool for what they are labeled to be used for.

2 – It exploits the software’s native ability without undue customization. Some customization is inevitable, although arguably the term “customization” is a tad overused and confused with “configuration”. Telon provided various “custom code” invocations, CoolGEN provides for “external action blocks”, and Endevor provides for processors, interfaces, and exits, all of which are appropriate when used appropriately.

The danger point comes when the customization begins to try to replace functionality already being performed by the software, OR when the customization is a reflection of “how” rather than “what”. “How” problems can generally be addressed by altering the process/procedure to match the implicit design already likely considered in the software. It is the rare software solution that hasn’t already encountered most, if not all, of the situations that are likely to arise in the discipline or field it is designed to address. Ways and methods of resolving the core problem, then, need to be adapted accordingly, not by “changing the software”. Again, by virtue of “changing the software”, you cannot possibly be following “best practices”; the implication would be that every site that has installed the software has had to change it in the same way.

Appropriate use of exits, processors, and other interfaces, then, is where they enhance the basic functionality already being performed by the software. Adding additional data to a logged event, for instance, or picking up additional data from external files for processing are generally appropriate examples.

3 – It makes things obvious rather than obscure. In other words, a best practice is never a euphemism for “black box”. Everything from data modeling techniques that preach data normalization to experience with effective human interaction devices (VCRs, ATMs, Windows) tells us that the more obvious you are and the more control you put in the hands of the person, the greater the acceptance, compared with making things a “mystery”.

Hiding a vendor’s software behind “front ends” is usually done to avoid the need for education. In other words, an approach has been taken whereby the implementers feel they know what the end-user wants and needs, so they will automate everything for them. Unfortunately, this leads to heavy customization again as they try to anticipate every need of the end-user and force the back-end software to comply. It is rather like the old “give a man a fish / teach a man to fish” syndrome. Custom-built front ends require constant care and attention, as well as retrofitting to ensure upward compatibility. Again, by virtue of this added labour, it cannot possibly be considered a “best practice”.

4 – It is supportable by the vendor’s technical support organization. When help is required, the vendor has no choice but to support what they know. What they know, by extension, is the product as shipped from the vendor’s site. Since a best practice, by definition, implies rapid resolution of problems and quick support, any practice or implementation that deviates from the vendor’s implicit design cannot, by definition, be considered a “best practice”.


In contrast, the characteristics of practices that are not “best practices” can typically be identified as follows:


  • Complicated invocations, implementations, or installations. If processes are defined that require the developer to visit a number of places, or an installation requires extensive external files or procedures, it cannot be considered a “best practice”. Such an approach has, in essence, broken all 4 definitions of what “is” a best practice.


  • Long and involved customizations. While the customizations might fit “how” a site wants to perform a certain function with the software, they are definitely customizations specific to that site and cannot be considered an industry “best practice”. Again, all 4 definitions or criteria for a best practice have been missed.


  • Requires extensive training beyond (or instead of) the vendor’s standard training. This is the clearest sign that best practices could not possibly have been followed. If a vendor cannot come on-site and immediately relate to both the installation and the methods by which the site is using their software, it cannot be considered a best practice. Again, it may be site-specific, but it is not something the entire industry would support; otherwise every education engagement would be “custom” and no material could possibly ever be re-used!


  • Hides the product from end-users. This is typically in direct violation of characteristic (3). Hiding the software behind front-ends, if it were a ‘best practice’, would have to be done by every site. If this were the case, no site would buy the software in the first place OR the vendor would revamp the software to fit the needs of its clients.


  • Changes the product’s implicit design into something it was not designed to do. As iterated in characteristic (1), all software comes with an implicit method and design for its use. Arbitrarily changing the usage of fields to something they were not intended for, or changing the meaning of fields to something else, cannot possibly be considered a “best practice”.

Top 10 Endevor Implementation Pitfalls

Over the years, I have reviewed almost a hundred different installations and implementations of Endevor around the world. Some are examples of simple elegance, while others are testaments of good ideas taken too far.

My overall philosophy has always been and continues to be one of “simplicity”; I’d much rather see implementations of the elegantly simple than the convoluted complex. The only way to truly achieve simplicity is to use Endevor “as it’s intended”, not in ways that result in heavy customizations or extensive use of exits. I am a big believer in “teaching a person how to fish rather than giving a person fish”. I’d much rather any problem or issue I have with an Endevor installation or implementation be “CA’s” problem rather than mine!

So, recognizing this is one person’s opinion, what are the top 10 pitfalls I see sites make in their implementations of Endevor? In no particular order:

10) Lack of Normalized Processes

In an earlier blog, I wrote an article about something I call “process normalization”. As I like to say, in my mind “a dog is a dog is a dog”. You don’t say furrydog, you don’t say browndog…. You say “there is a dog and it is brown and it is furry”. In other words, it is a DOG and its attributes are covering (fur) and colour (brown).

The same principle needs to apply to definitions within a good Endevor implementation. When I see definitions of TYPES such as COBSUB or COBDB2, I am encountering non-normalized implementations. Both TYPES are really just COBOL…. Their attributes are role (subroutine) and dbms (DB2).

Attributes are more correctly addressed in the definition of PROCESSOR GROUPS, not TYPE names. By calling COBOL what it is, I can then easily change the attribute by merely selecting a different PROCESSOR GROUP. For instance, suppose I have 2 types named COBSUB and COBDB2S (for DB2 subroutine)…. and the program defined in COBSUB is altered to now contain DB2 calls. It needs to be moved to a totally new TYPE definition. However, if the site were normalized, no change to the TYPE need take place (and thus no risk to the history of changes that have ever taken place with the element). Instead, one merely needs to associate a new processor group with the element, one that includes the DB2 steps.
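To make that concrete, here is a hedged sketch of the batch action involved. The element, system, subsystem, CCID, and processor group names are all hypothetical, and the STEPLIB is site-specific; the point is that the element stays put and only its processor group association changes:

//* HEDGED SKETCH - ALL NAMES BELOW ARE HYPOTHETICAL
//ENDEVOR EXEC PGM=NDVRC1,PARM='C1BM3000'
//STEPLIB  DD DSN=NDVLIB.ADMIN.AUTHLIB,DISP=SHR
//C1MSGS1  DD SYSOUT=*
//BSTIPT01 DD *
  GENERATE ELEMENT 'PAYPGM1'
     FROM ENVIRONMENT 'DEV' SYSTEM 'FINANCE' SUBSYSTEM 'GL'
          TYPE 'COBOL' STAGE NUMBER 1
     OPTIONS CCID 'CR12345' COMMENT 'NOW CONTAINS DB2 CALLS'
             PROCESSOR GROUP 'CIIDB2' .
/*

The element’s history is untouched; the next generate simply runs the DB2 steps defined in the new processor group.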

The same principle applies to various type definitions and is often either misunderstood or purposely ignored in the interest of “giving people fish”.

9) Need for VSAM-RLS or CA-L-Serv

While the preferred method today is VSAM-RLS, either VSAM-RLS or CA-L-Serv can be easily implemented to ease performance issues around Endevor VSAM libraries. It often surprises me how many sites have not implemented either of these available and free solutions, forgoing an easy and simple way of reducing their throughput times.
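As a hedged illustration only (the dataset name is invented, and the Endevor documentation should drive the actual settings you use), RLS eligibility is largely a catalog attribute; an existing VSAM file can be given the LOG attribute that RLS access requires with a simple IDCAMS step:

//* HEDGED SKETCH - DATASET NAME IS ILLUSTRATIVE; CONSULT THE
//* ENDEVOR DOCUMENTATION FOR THE SUPPORTED RLS SETTINGS
//ALTRLS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER NDVLIB.PROD.PACKAGE.FILE LOG(NONE)
/*

CA-L-Serv, by contrast, is configured in its own started task; either way, the change is infrastructure configuration rather than a change to Endevor itself.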

8) Forward-Base-Delta (FBD) and/or ELibs used exclusively for base libraries

As someone whose career in IT grew up in the application area rather than the systems programming area, it often astounds me how frequently I encounter setups in Endevor that are so overtly hostile to the application area. Selecting FBD and/or Elibs as your base libraries always tends to signal to me that the person who originally set up the installation likely never worked in applications!

If I don’t see a “Source Output Library” declared, I get really concerned. At that point, I’m already guessing the application area (whether the Endevor administrator is aware of it or not) is likely keeping an entire parallel universe of code available in their bottom-drawer for the work they really need to do… and likely really really dislike Endevor!

It was always my experience that the application area needs to have clear and unfettered access to the libraries that Endevor is maintaining. It serves no “application” purpose to scramble up the names or compress the source; they NEED that source to do scans, impact analysis, business scope change analysis… in other words, do their job. If the Endevor installation is not providing easy access and views of that source (and by easy, I also mean ability that is allowed OUTSIDE Endevor control), then the implementation cannot be considered a good one.

For this reason among many, I am a huge advocate of always defining every application source type within Endevor as Reverse-Base-Delta, unencrypted/uncompressed… and PDS or PDS/E as the base library. This implementation is the friendliest you can be to the application area while at the same time maintaining the integrity of the Endevor inventory.

While I accept that, currently, Package Shipment requires a Source Output Library, this need not be any kind of constraint. It’s unlikely most sites are shipping from every environment and every stage; arguably you need only define a Source Output Library at the locations from which you do shipments. Therefore, using RBD and PDS as your base library, you eliminate the need for a Source Output Library everywhere else, since the application can now use the REAL base library for their REAL work…. with the exception of editing (unless you are using Quickedit). All their scans can now make use of whatever tools your site has available.

PDS/Es have come a long way since Endevor first began using them and are rapidly becoming the de facto standard for PDS definition. However, if you are still using original-format PDSs, I tend to also recommend looking into a product named “CA-PDSMAN”. It automates compression, thus relieving that as a maintenance issue, and actually provides a series of Endevor-compatible utilities that the Endevor administrator can exploit.

7) Need for Quickedit

A universal truth is that “familiarity breeds contempt”. Depending on your definition of the word “contempt”, Endevor is no exception.

As Endevor administrators, it’s important to remember that we live and breathe the screens and processes around Endevor. Most of us know the panels and how things operate like the back of our hand.

However, the application area often is completely intimidated by the myriad of screens, options, choices, and executions that happen under the umbrella known as Endevor.

A simple solution to this issue can be the introduction of Quickedit at your site. Basically, you can move people from navigating a complex set of panels and processes to “one-stop shopping”. Many application areas that see demonstrations of Quickedit often completely change their opinion of the facility.

Part of the reason for this is the change of view that comes with the Quickedit option. Endevor “base” is oriented along “action-object” execution. In other words, you have to tell Endevor what “action” (MOVE, ADD, GENERATE, etc) you want to do before it shows you the element list the action will be executed against.

Quickedit is oriented against a more natural flow of “object-action”. With Quickedit, you are first displayed a list of the elements you asked for. Once the list is displayed, you can then choose the action you want to do. This is much more intuitive to the manner in which we generally operate when we are doing the application development tasks.

6) Generation Listings

It surprises me how often I encounter sites that are not keeping their generation listings… or keeping them in very strange places!

When I find they’re not keeping them at all, I generally discover the attitude is “well, if we need it, we’ll just regenerate the program”. What this ignores is the fact that the newly generated program may very well have completely different offsets or addresses than the program whose failure made the listing necessary in the first place! The regenerated listing has every potential of being completely useless.

Generation listings are, for all intents and purposes, audit reports. They record the offsets, addresses, and linked status of the module as it was generated by Endevor. They should be kept, not deleted.

The issue of “where” to keep generation listings, however, can be tricky. Using PDSs often results in what I refer to as “the rat in the snake”. A project at the lower levels will require a large amount of space (more than normally might be required) as it is developing and testing its changes. Then, once it moves to QA, that space in test is released, but now must be accounted for in QA! And then, once QA is satisfied, it must be moved into production, where a reorganization of files might be required in order to accommodate the arriving listings.

Personally, I’m an advocate of a mix of Elibs and CA-View. Elibs take care of themselves space-wise and can easily accommodate the “rat in the snake” scenario. The downside is that the information is encrypted and compressed, making it necessary to view/print the listing information in Endevor.

CA-View, however, makes a great “final resting” place for the listings. It is an appropriate use of your enterprise’s production report repository AND it can keep a “history” of listings; deltas, if you prefer. This can be very handy if someone needs to compare “before” and “after” listings!

One final note if you decide to use Elibs for your listings: do NOT place those Elibs under CA-L-Serv control! Due to the manner in which Endevor writes to listing Elibs, placing them under CA-L-Serv control will actually harm your performance rather than improve it!

5) Backups

I’m surprised how many sites are solely reliant on their volume backups.

Volume backups are a good thing to have and use in the event of the need to invoke a disaster recovery plan (DRP). But they very arguably are not enough when it comes to Endevor and the manner in which it is architected.

Endevor spans a variety of volumes and stores different “pieces” on different volumes often at different times. For instance, the package dataset may be on VOLA, the base libraries on VOLB, and the delta libraries on VOLC. A site may do a backup of those volumes over the space of an hour… but during that hour, an Endevor job ran 3 packages moving 15 elements with a variety of changes. Assuming the volumes are restored to the image taken, exactly what is the state of those Endevor libraries in terms of synchronization? Was VOLA restored to the last package execution? The first? What about the element base library? Is it in sync with the delta?

Fortunately, Endevor has a VALIDATE job that can be run to see if there is a problem. And I’m sure the vast majority of times, there will not be…..

But what if there is? What are you going to do if it turns out there is a verification problem and your libraries are out of sync?

For this reason I strongly advocate regularly scheduled FULL and INCREMENTAL UNLOADs as a critical part of any site’s DRP. A FULL UNLOAD takes considerable time and should be used with discretion and planning, but INCREMENTAL UNLOADs tend to be relatively quick. I recommend doing both and consolidating them into files that are accessible during a DRP exercise.

During the DRP exercise, do the volume restores first. Then run the Endevor VALIDATE job. If the job returns and says things are fine, you’re done! But if not, you have the necessary files to do a RELOAD job and put Endevor back into the state it needs to be.

4) Security Overkill or Underkill

Unfortunately, the usage of the External Security Interface continues to be a mysterious black box to many sites. This is sad, as there are a variety of ways to exploit the security abilities to your advantage!

Read through the articles I have posted on “Security Optimization” and “The APE Principle”. And if I have time, I will try to write a future article on demystifying the ESI to help the layman understand exactly how the facility really works!

3) SMF Records

Another ability that is often overlooked at installations is the fact that Endevor can cut SMF records to record each and every action taking place at the site. It’s been my experience that these records are literally a gold mine of information for the Endevor administrators and, frankly, should be mandatory from any auditor worth their salt!

The reporting available from the SMF records is far superior to the “Element Activity” reports provided by Endevor itself. While the “Element Activity” reports are better than nothing, I would argue not by much.

To illustrate, suppose an element in Endevor is promoted 125 times in a month. Those 125 actions will be recorded and reported as such by the Endevor SMF reports… but the “Element Activity” report would show only the last action done to the element (MOVE), with a count of 1.

To illustrate further, suppose an element is DELETEd from Endevor. The SMF reports will show who, when, and where the element was deleted. “Element Activity” is blind; the element is no longer in existence and thus simply drops from the report!

If one of the Endevor administrator’s objectives is to measure the “load” under which Endevor is operating, SMF records provide the detail to monitor how much is flowing through in a given time period.

SMF records truly provide the definitive log of what’s going on with the Endevor inventory.

2) DONTUSE Processor

I’d like to see CA properly address this issue with a change to Endevor, and I’ve submitted the idea to the community website. To quote the idea as recorded on the website:

“As an Endevor user/developer/person-that-actually-has-to-use-Endevor-and-is-not-an-administrator, I want Endevor to KNOW what I am adding to it is a new element and requires me to select a processor group rather than ME knowing I need to put an “*” in the PROCESSOR GROUP (because I will NOT remember I need to do that and will let it default… and inevitably the default processor group is NOT the one I want making ME do MORE work) so that I can add my new elements intelligently and proactively rather than reactively.

“As an Endevor administrator, I want to define a default processor group that automatically triggers the “Select PROCESSOR GROUP” display if my user does not enter “*” or has not entered an override value so that they can correctly choose the right processing without having to go back and forth because they have inevitably forgotten they need to choose something and the default is either wrong for their particular element.”

In essence, what I advocate is that the Endevor administrator should not presume to know what the default processor group is when there is a choice to be made. Take the example of the COBOL program I used earlier in this article. If I were to assume every new program coming in as a COBOL type is to be a subroutine with DB2, then the day someone adds a program that does not use DB2 is the day “Endevor is broken and you, Mr/Mrs Endevor Administrator, are WRONG!”. And that will happen as surely as the sun rises in the morning!

A workaround is to declare your default processor along the lines of the DONTUSE processor I have documented in an earlier article. In essence, if someone puts in a new program and doesn’t specify the processor group, the default DONTUSE processor will send them a message with instructions on how to choose a processor group and fail the element. It’s clumsy and awkward, but it works for now, until CA provides a product enhancement.
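The full version is in that earlier article; below is a minimal, hypothetical sketch of the idea. The wording of the message, and the choice of IDCAMS to force the failing return code, are just one way to do it:

//********************************************************************
//* 'DONTUSE' PROCESSOR SKETCH (HYPOTHETICAL). TELLS THE USER HOW    *
//* TO PICK A REAL PROCESSOR GROUP, THEN FAILS THE ACTION.           *
//********************************************************************
//DONTUSE PROC
//MSG     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD *
 YOU HAVE LET THE PROCESSOR GROUP DEFAULT. PLEASE RE-EXECUTE THE
 ACTION AND ENTER '*' IN THE PROCESSOR GROUP FIELD TO SELECT THE
 CORRECT GROUP FOR THIS ELEMENT.
/*
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
//* FORCE A NON-ZERO RETURN CODE SO THE ACTION DOES NOT COMPLETE
//FAIL    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  SET MAXCC = 12
/*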

1) Need for X-Process

It’s surprising how often I encounter sites that still have not built or captured ACM information because “we don’t want to generate and lose our production loads”.

What’s needed is a tool I used to call the XPROCESS. In essence, the process causes Endevor to generate your element (make, build, compile, whatever) and thus create the ACM data, throws out the output, and then copies the current production version to the stage the generate is in, refootprinting the output accordingly. A splash title page in the listing can clearly identify that this is a conversion or clean-up listing only… and the problem is solved.

This is a valuable tool to have in the Endevor administrator’s arsenal. For your reference, modification, and usage, here is a copy of a simple example:

//********************************************************************
//*                                                                  *
//* PROCESSOR NAME: GCOB02X                                          *
//* PURPOSE: SPECIAL PURPOSE COBOL PROCESSOR TO REGENERATE COBOL     *
//*          ELEMENTS AND THEN CREATE 'POINT-IN-TIME' COPIES OF THE  *
//*          'REAL' OBJECT MEMBER FROM THE CONVERTING SYSTEMS OBJECT *
//*          LIBRARY.                                                *
//*                                                                  *
//********************************************************************
//GCOB02X PROC ADMNLIB='NDVLIB.ADMIN.STG6.LOADLIB',
// COMCOP1='NDVLIB.COMMON.STG1.COPYLIB',
:
:
:
// LIB1I=NO/WHATEVER,
// LIB1O=NO/WHATEVER,
// LIB2I=NO/WHATEVER,
// LIB2O=NO/WHATEVER,
// LIB3I=NO/WHATEVER,
// LIB3O=NO/WHATEVER,
// LIB4I=NO/WHATEVER,
// LIB4O=NO/WHATEVER,
:
:
:
//*
//********************************************************************
//* DELETE 'JUST CREATED' OBJECT!                                    *
//********************************************************************
//DELOBJ EXEC PGM=CONDELE
// IF (&C1EN = DVLP)
// OR (&C1EN = DVL2)
// OR (&C1EN = ACPT)
// OR (&C1EN = PROD) THEN
//C1LIB DD DSN=NDVLIB.&C1SY..&C1ST..OBJLIB,
// DISP=SHR
// ELSE
//C1LIB DD DSN=NDVLIB.&C1EN..&C1ST..OBJLIB,
// DISP=SHR
// ENDIF
//*
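//********************************************************************
//* EACH LIBRARY IS HANDLED BY AN 'A'/'B' PAIR OF IEBCOPY STEPS:     *
//* STEP nA COPIES THE CURRENT PRODUCTION MEMBER TO A TEMP PDS, AND  *
//* STEP nB COPIES IT BACK OUT WITH FOOTPRNT=CREATE SO THE MEMBER    *
//* IS RE-FOOTPRINTED AT THIS LOCATION. EXECIF SKIPS THE PAIR WHEN   *
//* THE CORRESPONDING LIBnI SYMBOLIC IS SET TO 'NO'.                 *
//********************************************************************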
//COPY1A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB1I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP1,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB1I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY1B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB1O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP1,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB2I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP2,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB2I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB2O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP2,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB3I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP3,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB3I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB3O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP3,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB4I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP4,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB4I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB4O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP4,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
:
:
:

An Opinion about Endevor “core” Pieces

External Security Interface (ESI)


Originally, using the External Security Interface was optional, and in my opinion it was always folly not to take advantage of this software. Without going into a long lecture about security, suffice it to say that it is a key component of effective configuration management.

Security in configuration management has 2 components: physical security and functional security.

Endevor does not supply physical security. This is the security that specifies who can read/write the different high-level indexes at a site, and it is handled at every site by whatever proprietary security software they have (e.g. RACF, ACF2, TOP-SECRET).

Functional security is the component that determines, once in Endevor, who is allowed to do what to which systems. Your choices are to either set up Endevor Native Security tables or interface with your current on-site security software. It makes sense for most shops to continue leveraging their current on-site security software; it provides a single point of administration and continues to leverage the investment they have already made in security at their site. If you use the Endevor Native Security tables, you must remember to reflect any general changes in system security there as well as in your “standard” shop software. It also means a component of your site’s security requirement is NOT being managed by your site’s security software, which can be a favourite target for security auditors to hit.
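As a purely illustrative sketch (the entity-name format below is hypothetical; the real format is whatever your BC1TNEQU name-equates table builds, and the group name is invented), the point is that ESI functional rules become ordinary profiles in your existing security software. Under RACF, for example, a batch TSO step might define a rule restricting MOVEs into production:

//* HYPOTHETICAL SKETCH - THE PROFILE NAME FORMAT DEPENDS ENTIRELY
//* ON YOUR BC1TNEQU NAME-EQUATES TABLE; THE GROUP NAME IS INVENTED
//DEFINE  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  ADDSD  'C1.PROD.FINANCE.MOVE' UACC(NONE)
  PERMIT 'C1.PROD.FINANCE.MOVE' ID(CMTEAM) ACCESS(UPDATE)
/*

The same rules then show up in the same administration and audit channels as the rest of the shop’s security, which is exactly the “single point of administration” argument.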

Extended Processors

This is the heart and soul of Endevor. Without Extended Processors, you can’t compile, generate, check, cross-reference, or do any of the other cool, neat stuff Endevor can do for you. In essence, without Extended Processors, Endevor becomes nothing more than a source repository; a toothless tiger; a fancy version of Panvalet.

Automated Configuration Manager (ACM)

If Extended Processors are the heart-and-soul, then ACM is the brains. ACM is the piece that allows you to automatically monitor the input and output components of elements as they are being processed by an Extended Processor. ACM, then, allows effective impact analysis and ensures the integrity of your applications. The information ACM captures is what package processing uses to verify that a developer is not missing pieces when they create a promotion package for production.
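To make both pieces concrete, here is a minimal, hypothetical generate processor skeleton. The library names, compiler choice, and options are illustrative, the work DDs are elided, and any real processor needs site-specific settings; the MONITOR=COMPONENTS parameter is the hook that has ACM record what went into, and came out of, the generate:

//GCOB01  PROC
//*------------------------------------------------------------------*
//* 1. WRITE THE ELEMENT SOURCE TO A TEMPORARY FILE                  *
//*------------------------------------------------------------------*
//CONWRITE EXEC PGM=CONWRITE,PARM='EXPINCL(N)'
//ELMOUT   DD DSN=&&SRC,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(CYL,(1,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//*------------------------------------------------------------------*
//* 2. COMPILE; ACM RECORDS THE COPYBOOKS PULLED FROM SYSLIB         *
//*------------------------------------------------------------------*
//COMPILE EXEC PGM=IGYCRCTL
//SYSLIB   DD DSN=NDVLIB.&C1SY..&C1ST..COPYLIB,DISP=SHR,
//            MONITOR=COMPONENTS
//SYSIN    DD DSN=&&SRC,DISP=(OLD,DELETE)
//SYSLIN   DD DSN=&&OBJ,DISP=(NEW,PASS),UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSPRINT DD SYSOUT=*
:
//*------------------------------------------------------------------*
//* 3. LINK; THE OUTPUT MODULE IS FOOTPRINTED AND MONITORED BY ACM   *
//*------------------------------------------------------------------*
//LINK    EXEC PGM=IEWBLINK,PARM='LIST,XREF'
//SYSLIN   DD DSN=&&OBJ,DISP=(OLD,DELETE)
//SYSLMOD  DD DSN=NDVLIB.&C1SY..&C1ST..LOADLIB,DISP=SHR,
//            MONITOR=COMPONENTS,FOOTPRNT=CREATE
//SYSPRINT DD SYSOUT=*
: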

Commenting on COMMENTS

The following case study is an investigation I conducted on the manner in which COMMENTS are reflected in the MCF of Endevor. It serves to illustrate that there’s much more to Endevor than meets the eye!

Problem:

The customer is making use of EXIT02 in order to cause special processing to take place when an element is being promoted or otherwise worked on and the COMMENT contains the word “EMERGENCY”.

When there is no source change to the element, the customer has determined that the COMMENT field does not contain the comment they had entered into Endevor. Instead, the previous comment (or “level” comment) is the only one seen by the exit program. This is resulting in the customer having to make a “dummy” change to the source for the sole purpose of having the “EMERGENCY” word be contained in the COMMENT field for the exit.

Investigation:

Endevor Behaviour

One of the first things to understand about Endevor is that there is MORE than just one comment associated to an element. In fact, there are as many as 5, depending on the reason for the comment and what it is being associated with. Consider the following COPYBOOK named JRDCOPY4. This copybook is being created for the very first time and has never existed in Endevor before. Endevor is creating v1.0 of the copybook. The screen that adds the element to Endevor might look like the following:

[Figure 1: comment01 – the Add Element screen]

Note that the comment I have recorded says “V1.0 BASE COMMENT”. After a successful execution of the action, the Endevor Master Control File (MCF) for the element contains the following:

[Figure 2: comment02 – Element Master display after the ADD]

[Figure 3: comment03 – Element Master display after the ADD, continued]

Note the highlighting in the Element Master displays; Endevor has replicated the comment across 5 different areas. These are 5 distinct and unique comment fields within Endevor, not one and the same field, and each is updated at a different time and for a different reason. In this instance, because we have only done one (1) thing, there is only one comment to display.

The next event I do is to MOVE the element to the next stage. I would then build the screen as follows:

[Figure 4: comment04 – the MOVE screen]

When Endevor does the move, the MCF for the element now contains the following:

[Figure 5: comment05 – MCF after the MOVE]

[Figure 6: comment06 – MCF after the MOVE, continued]

Note that the comment that changed is NOT the comment that was associated to the element when I created it; rather, the comment is associated with a unique comment field in the MCF that contains comments associated to the last action done.

The next event that may occur is to work on this element. To do so, I would execute a RETRIEVE (or a QuickEdit session). The retrieve I execute may look as follows:

[Figure 7: comment07 – the RETRIEVE screen]

The MCF for the element would now contain the following information:

[Figure 8: comment08 – MCF after the RETRIEVE]

[Figure 9: comment09 – MCF after the RETRIEVE, continued]

For the RETRIEVE action, there is a specific comment field area in the MCF that contains the information and it has been updated with the RETRIEVE COMMENT accordingly.

I will now make a few changes to the element and add it back into Endevor with the following screen:

[Figure 10: comment10 – the ADD/UPDATE screen with changes]

The MCF associated to the element now contains the following in THIS stage (note that the MCF information in the next (target) stage still contains the original comments as indicated in figures 8 and 9).

[Figure 11: comment11 – MCF at the source stage after the update]

[Figure 12: comment12 – MCF at the source stage after the update, continued]

Note that these are the comments associated to the element at this location where the changes have been made. The RETRIEVE comment is blank because this is NOT where I did my RETRIEVE! This is Stage “T” and, if you will review figures 7, 8 and 9, you will see that the RETRIEVE that I did was at Stage “Q”.

The next event I want to do is to MOVE the element to Stage “Q”. My MOVE screen would look as follows:

[Figure 13: comment13 – the MOVE screen]

The changes that took place to the MCF comment fields are in the following screens:

[Figure 14: comment14 – MCF after the MOVE]

[Figure 15: comment15 – MCF after the MOVE, continued]

Several things are important to note at this stage.

  • The BASE comments never change. They will always reflect the original comment that was placed into Endevor when the element was first created.
  • The RETRIEVE comment has now been dropped from the stage “Q” MCF. This is because we have now moved back to the original place where I did my RETRIEVE.
  • The CURRENT SOURCE comment reflects the comments associated with the change. This is the field that is updated when a change is detected in the actual source lines of the program.
  • The LAST ELEMENT ACTION comment reflects the comment associated to the last action executed, in this situation “MOVE”.
  • The GENERATE comment reflects the same as the CURRENT SOURCE comment because I have not done any additional “generate” aside from the one that is done automatically when you “add/update” an element.

In order to ensure all the comment fields show their purpose, I will now cause a specific GENERATE action to take place against the element in stage “Q” to see which comments change. I would expect the comment I make to be reflected in the “LAST ACTION” comment and the “GENERATE” comment. The screen I use looks as follows:

[Figure 16: comment16 – the GENERATE screen]

The results in Endevor now are exactly as I had hoped:

[Figure 17: comment17 – MCF after the GENERATE]

[Figure 18: comment18 – MCF after the GENERATE, continued]

To re-iterate, the comment associated to a change is the “CURRENT SOURCE” comment. The comment associated to activity or actions taking place in Endevor is the “LAST ELEMENT ACTION” comment.

In the customer’s scenario, they have an element for which no changes to the element are detected. To recreate the scenario, I begin by retrieving the element again.

[Figure 19: comment19 – the RETRIEVE screen]

The results in the MCF are as follows:

[Figure 20: comment20 – MCF after the RETRIEVE]

[Figure 21: comment21 – MCF after the RETRIEVE, continued]

As I would expect, only the RETRIEVE comment has been changed.

Now I will add the element back into Endevor with NO CHANGES. This exactly replicates the condition at the customer where they are adding elements in with the EMERGENCY comment. In my case, I won’t use “emergency” but a comment that continues to identify what I am doing as follows:

[Figure 22: comment22 – the ADD screen with no source changes]

[Figure 23: comment23 – result showing “NO CHANGES DETECTED”]

Note the message in the top-right corner “NO CHANGES DETECTED”. If I query the MCF, the following information shows where the comment was contained.

[Figure 24: comment24 – MCF after the no-change ADD]

[Figure 25: comment25 – MCF after the no-change ADD, continued]

This is the exact result I would hope Endevor would contain as the comments are in the correct place and Endevor is ensuring the wrong comments are not associated to the wrong things.

  • The BASE comment remains as originally coded.
  • The LAST ELEMENT ACTION and GENERATE comments indicate that the action was executed and carry the comment associated to the action.
  • The CURRENT SOURCE comment has not changed and should not change because the source did not change. This comment field, based on what Endevor does, should only change if the source itself changes.

The next thing I want to do is MOVE the element with no changes back to the next stage. I would use a screen as follows:

[Figure 26: comment26 – the MOVE screen with no source changes]

[Figure 27: comment27 – result showing no changes detected]

Note again the message in the top-right corner that shows no changes were detected. If I query the MCF, the comment fields that have been affected are shown as follows:

[Figure 28: comment28 – MCF after the no-change MOVE]

[Figure 29: comment29 – MCF after the no-change MOVE, continued]

The results are exactly what I would expect. Each comment is contained in its appropriate area. Endevor is maintaining the integrity of the right comment with the right action.

Exit Behaviour

Since we have established that Endevor is maintaining comments for the right things in the right places, the next thing to investigate is what is available to each of the exits during processing. In the case of the customer having this problem, the exit being invoked is EXIT02.

EXIT02 is invoked by Endevor before executing any actions. In other words, in Endevor, this exit is passed information before Endevor has actually done anything. All source is still where it is and no movement (for example) has begun.

During investigation of the issue, Technical Support asked the customer to provide an EXIT trace so that the information could be confirmed. The following is an extract of that trace that was provided:

[Figure 30: comment30 – extract of the EXIT02 trace]

Based on an understanding of how, when, and where Endevor stores comments, this trace makes complete sense. The source comments (as reflected in the ELM* fields) do not change because the source has not changed. This is correct.

The REQCOMM comment, which reflects the comment associated to the action being done, correctly shows the comment associated to the action that is being requested.

Solution:

The solution to the customer’s problem is actually very simple, although it does require a change to their exit program.

The problem is that the exit program is looking at the wrong comment field for the wrong reason. The comment field being looked at by the program is likely the “CURRENT SOURCE” comment.

The comment field the program SHOULD be looking at is the one that reflects activity taking place against the element. That field always contains the triggering comment (such as EMERGENCY) the client is looking for, regardless of whether there are source changes or not.

Simply put, the program must be modified to look at field REQCOMM (if written in Assembler) or REQ-COMMENT (if written in COBOL) and not at any of the ELM* fields for the “EMERGENCY” word. This is the only change required by the customer to ensure their solution keeps working as designed.

No change is required in Endevor.

Systems Programming under Endevor Control

Some time ago, I polled the Endevor community to discover who might be using Endevor to manage and control the changes that the systems programming area does.

This document contains the original question and responses (without editing aside from removal of name and company). I thought you might find the content interesting and thought provoking….!

Question posed:

“Who might have their systems programming people also under Endevor control? Also, what components of Systems Programming do they have under control – i.e. all aspects, just parmlibs, just “their” programs, etc. I am in the process of designing and putting together a presentation on putting virtually all aspects of systems programming under Endevor control and I am curious as to the “state of the nation” today. “

Responses:

  • “It is the same old story, but now they have SMPE so the argument has be very solid in order for us lowly Endevor admins to convince the big Systems Programmers.”
  • “You must be kidding! I consider myself lucky that I get to assemble my own routines (C1DEFLTS, BC1TNEQU, etc…) in Endevor and have them copied to the systems libraries so I have footprints to check when things go south.

Besides don’t you know that the SMP installer does its own configuration management? (at least that’s the excuse the systems programmers give me).

I have tried to get some of the Endevor install into Endevor as a foot in the door, but have failed. If nothing else after the install creates the system libraries I would like Endevor to do the copies from LPAR library to LPAR library so when I need one thing changed they don’t copy the whole library and along with it those LPAR specific modules that then break the ‘TO’ instance of Endevor. I will try again when (and if) 7.0 SP1 ever goes GA. We have just outsourced most of our systems programming so who knows. Any ammunition I can get would be a great help.”

  • “Hi John, I am one of the systems programmers here at xxxxxxxxxxxx and the Endevor Administrator and there is no way that I would put our systems under Endevor. I can’t say that we would enjoy bringing Endevor into the mix if we had a problem with a parmlib member during an IPL.

So, that’s a big no for Endevor control for systems as long as I’m at this site. Of course, we are breaking one of the number one rules of Endevor (never let the programming staff administer Endevor), so we may just be the exception. Good luck with the presentation.”

  • “Although we have some older systems programmers still using Librarian to maintain their personal JCL files, none use Endevor for this purpose (including myself) and none of our z/OS datasets are maintained by either product. SMP/E is a requirement for all app’s that can be installed that way here, so that trumps Endevor. Encouraging systems programmers to use Endevor has been a tough sell. We plan to migrate all our scheduler JCL off Librarian to Endevor probably next year and even then, I doubt many system programmers will show any interest in using Endevor. It makes sense, but doesn’t happen..”
  • “Most in-house written exits, batch jobs etc. used by systems programmers are under the control of Endevor. We also store alot of the parmlibs, syms, includes etc. under Endevor.

In addition, we have a couple of pieces of software managed by Endevor as well.

For example, we use Endevor to manage the Endevor software. A new release gets installed in a development environment. Then, we load all modules associated with the Endevor software into a Product Environment and use Endevor to migrate the new release through the testing state and onto production. This same philosophy is used whenever a PTF is applied to Endevor. We apply the PTF in development, migrate any changed load modules, source etc. through Endevor into our test states, test the ptf, then move it on to Production. This also helps use to track any changes we have made to panels, defaults table etc.

The majority of the software installed by us is not managed by Endevor but we have been trying to recommend it as the route to go. We just put QuickStart under Endevor’s control last month.”

  • “It would have to be without processors, I think, because you would want it to be as simple as possible. I should say that it really wouldn’t be much of a problem, except for the first one that popped into my head, namely trying to fix a problem during an ipl. If we can find a way to work with our data during ipl’s it would be fine. But, obviously, SOX is going to make us audit the system in far different ways than we do right now, but I don’t think Endevor (in it’s current form) is a good solution for systems.   I shouldn’t have said “never”, but definitely the current way of using Endevor for application source is not going to be viable for our systems. Thanks!”
  • “It is nice to see someone else exploring this question.  My position is Endevor has a place for in-house written “things.”  Let SMP/E do the work is was designed for.  In-house written mods for system elements belong with SMP/E. For purposes of this discussion sys-progs work with items/elements that need a separate LPAR to upgrade/test.  Totally in-house programs and other things might fit within the Endevor umbrella. The question always comes back to testing.  How does one relate a stage1 SOL to a TEST lpar?  In a pure application arena I oppose using Endevor as a “junk drawer.”  By this I mean when one does not know where to store something just “put it in Endevor.” “
  • “I’m not sure what all you include in ‘system’ programs.  Because most state agencies use xxxxxxxx, I think any true systems programmers (that work for the state) would be there.

All JCL used to run our scheduled Production jobs are in Endevor.  I had our procs and parms in at one time, but our database group that is ‘in charge’ of those balked, so I had to take them out, although the boss over all of us had wanted EVERYTHING in Endevor.  I had intended on doing exactly that, including C-lists, database segment definitions, and PSB’s.  Alas, they are not (yet).”

  • “Hi John – We have our in-house written infrastructure code managed in Endevor.  Our primary goal was to get all the “language code” converted, (Assembler, COBOL, etc), this goal has been met.  Over the years we have been chipping away at getting other types of code converted, we’re in good shape here too.  I am happy to say that we are getting requests from the systems programmers, asking…how can Endevor handle this type of code, and of course we always come up with a nice solution.  Please let me know if you have any other questions.”
  • “I have joined the wonderful world of consulting, so bear in mind that the information I am providing is from past employers, but I thought it might be helpful or useful if you get a low percentage of responses.

At xxxxxx, the z/OS team leader wanted all items under Endevor control.  We had entries for just about all aspects (including SYS2 and SYS3 libraries – all CICS regions’ JCL, control cards etc.) except SYS1 libraries.  We were working towards converting all components of both in-house and purchased software tools (i.e. programs, JCL, control cards etc.) to Endevor.  Unfortunately, the bank was bought by xxxxx before we were able to complete that transition.  😦  Keep in mind that the Endevor administrators (myself included) were systems programmers and reported directly to the z/OS team leader who also served as our backup – in the event we were unavailable.  My manager’s exposure and high level of comfort with the product played a major role in driving the team to get systems components under Endevor control.  Everyone had to learn how to use the tool – no excuses.

My position at a subsequent job as Endevor administrator was in the operations area for an insurance company.  They had/have as “little as possible” under Endevor control and if the Systems people had their way, they would take it all out of Endevor and perform their mundane, space hogging, risk laden process of back up member A, rename previous backup of member A, rename member A, copy in member B etc. etc. etc….  It is next to impossible to go back more than one level of change or to determine the exact nature of the change and the approval process is tied in with the change (change record) tool, but there is no fool proof means to reconcile the items that were actually changed with the items referenced in the change record.  Most of the systems programmers have no desire to learn how to use the product and they are not obligated to do so – unless the element currently exists in an Endevor library.  There didn’t seem to be any rhyme or reason as to what was put under Endevor.  I think in total there were a couple of applications – programs, JCL etc., and a few unrelated jobs and control cards.  My guess is that there was a programmer that was comfortable with the product (he had administrator authority) and so he setup his applications and then just left them there.”

  • “When I was the Endevor person in charge at xxxxx (seems like it was many, many years ago), we had some of the parmlib members under Endevor’s control (mainly in the network area) and set up the processors to generate some of the network executables (we had multiple sets depending on what the target system volume was). We also had all of the system programmers JCL in Endevor (including IDMS startup) and most of the IDMS homegrown utilities source, but that was about it. Have a nice weekend.”
  • “John, the only things from the Systems side of the house that is under Endevor control are items where we might need a history. Otherwise the systems programmers are controlled by a separate change control system.”
  • “The issue we’re facing, as I see it, is around resistance of change to existing work practices by the Host Systems group and what they see as an ‘intrusive’ solution that requires effort to configure.

Our ‘competitor’, xxxxxxxx, purportedly does not require them to change the way they work.  You define the libraries/datasets to be monitored and audited and it just sits there tracking activity.  Then when you want to report on access and change you run the report and ‘”hey presto”.  Also, if you wish to rollback to an earlier version/backup it provides this capability.  The real clincher selling point (it seems) is that it was written by a System Programmer for Systems Programmers (this has been mentioned to me a couple of times).

Anyway – I’ve told them that I’m not going to give up – that I’m going to get the Product Manager to evangelise why they should use the incumbent product and save spending $’s (well – at least on a competitor’s product).   “

Thoughts on Integrating EGL from IBM with Endevor

The following article is specific to a tool from IBM known as Enterprise Generation Language. I provide the information not so much as a solution specific to EGL, but rather as a model of the tenets I believe are critical to effective source and configuration management for z/OS systems, the main one being that, ultimately, the TRUE source (not just the generated or derived source) needs to be captured for auditing and management purposes. It is not good enough for “models” on distributed platforms to be “the source” and then to import whatever they create, in completeness, as an application; I believe that to truly safeguard an application on z/OS, I must be able to recreate that application from the “stuff” I’ve stored… and my place of storage for applications is Endevor.


In the past, I was asked to investigate the options for integrating Enterprise Generation Language (EGL) for z/OS from IBM into Endevor. What choices does an Endevor site have in securing applications so that the same integrity Endevor gives to “native” code can be extended to “generated” code?

Based on my research, I have been able to determine the following:

Findings:

  • Unlike other CASE tools that generate code for execution on the z/OS system, the EGL ecosystem requires the target language to be generated on the workstation. Other CASE tools (such as CA GEN) provide the option of generating the code on z/OS.

[Figure: egl1 – EGL generation flow]

  • One of the “choices” during COBOL code generation is to have the code automatically delivered, compiled, and otherwise made ready on z/OS from the Enterprise Developer on the workbench.

[Figure: egl2 – PREP=Y generation and delivery flow]

Note in this flow that at one point you can specify PREP=Y. This instruction on the workstation causes the generated COBOL, JCL, and if necessary, BIND statements to be transferred to the mainframe for execution. Otherwise, all built routines remain on the workstation for delivery to the z/OS platform based on how you want to send it there.

  • All sites contacted or from whom I have been able to get information have indicated that they are storing their EGL source in a distributed solution (either Clearcase or Harvest) and are storing the z/OS source in Endevor. The mechanism for storing the generated source in Endevor (i.e. manual or automatic) has not been determined.
  • Given the fact that sites ARE saving something that is referenced as EGL source and storing it in their distributed solution, this gives evidence (as well as reference in the EGL manuals) that there IS EGL source that needs to be stored.

Unknowns:

  • Is there a name or label or title or something in the EGL source that correlates to the generated z/OS elements? This is key to providing a quasi-automatic solution.

Design Options:

  • EGL in Distributed Solution/Manual delivery of z/OS components.

This option appears to be the most prevalent amongst those sites that are using EGL. Note that one of the other indicators from my research is the lack of sites using or implementing EGL at this time. While this may change in the future, there is limited experience or “current designs” to draw upon. This solution would, as the title implies, store the EGL in a distributed SCM solution, do the generation on the workstation, FTP or otherwise transmit the generated source to the mainframe, and then ADD/UPDATE the source into Endevor for the compilation.

Note that the transmission of the source generated on the workstation and the ADD/UPDATE of the source into Endevor can be accomplished today without signing onto the mainframe by accomplishing this step through Change Manager Enterprise Workbench (CMEW).

  • EGL in Distributed Solution/Automatic Delivery of z/OS components

In this scenario, the EGL would still be stored in the distributed SCM solution. However, if you specified PREP=Y, then the source would automatically be delivered and compiled by and in Endevor.

This scenario would require research and modification of the IBM provided z/OS Build Server. Based on the research conducted to-date, the z/OS Build Server is a started task that invokes the site-specific compile, link and bind processes. This process could, theoretically, be modified to instead execute an Endevor ADD/UPDATE action that would result in the source automatically being stored and compiled/linked/bound by Endevor instead of the “default” process provided by IBM.

  • EGL in Endevor Complete

In this scenario, the z/OS generated components remain on the workstation. All components, including the EGL source, COBOL source, link statements and anything else created by EGL, are then merged into a single “element” with each different type of source perhaps being identified with a separator line of some sort (maybe a string of “***********”). The ADD/UPDATE process of Endevor would then execute the different source components through their appropriate compile/link/bind programs as appropriate i.e. the first step in the processor would create temporary files that unbundled the different source types. These temporary files would then be the source that is generated.

Note: In order for any of the following designs to work, the previously documented “unknown” must first be resolved. These designs will only work if there is “something” in the EGL source that can be directly tied to generated z/OS components.

  • EGL in Endevor / EGL Delivery to z/OS

In this scenario, code generation would take place on the workstation and PREP=Y would execute as provided by IBM with no modifications (other than site-specific ones) to IBM’s z/OS Build Server. This will result in the COBOL, link, and BIND source being delivered to PDS on the mainframe and compiled there.

Assuming the delivery of the components to z/OS can be done to “protected” libraries, the EGL source could then be ADD/UPDATEd into Endevor using CMEW. The ADD/UPDATE process would then query the EGL source and automatically copy or otherwise bring in the COBOL, Link, and Bind source created and delivered earlier. The load modules created would be ignored; they would be recompiled again under Endevor’s control.

There are a variety of other options and designs and hybrids/combinations on the above ideas that I can think of. However, this paper should serve as the beginning of a discussion concerning which model or architecture best suits the needs of the site.

A Few More Simple Tips

Get Yourself A Dumb ID

A very helpful hint for the Endevor Administrator is to have a second userid for the system that has limited access to facilities along the same lines as the most restricted developer.

This ID is very useful for ensuring the changes you make in Endevor work for everyone. Your normal userid tends to have “god-like” access to everything. Therefore, everything always works for you!

The “dumb ID” reflects your users and is a good verification check to ensure changes you are making truly are “transparent”.

Use supportconnect

It is surprising how many sites are unaware of the excellent support provided by CA for all of its products through the Internet. Everyone using CA products should be registered and using the facility found at supportconnect.ca.com.

Through effective use of supportconnect, the Administrator has an opportunity to be more than just “reactive” to Endevor problems; they can actually be “pro-active”.

For instance, supportconnect provides the ability to list all current problems and fixes for the Endevor product. The Administrator can scan down that list, recognize circumstances that fit his/her site, and apply a fix before the problem is actually encountered.

Conversely, they can sign up to be alerted when new solutions are posted, thereby being made aware of problems and fixes without even having to request a list.

Other features of the tool are the ability to create and track issues, as well as your site-specific product improvement suggestions.

Positioning the Endevor Administrator in an Organization

One of the struggles companies often face with Endevor is trying to define where the administration for the system best resides. While not offering a definitive answer to this question, I provide the following excerpt from a report written for an Australian firm, in which I have endeavored (no pun intended) to provide at least some opinion.

Correctly positioning Endevor Administration in large sites can be as strategic as the decision to implement the product itself. Endevor represents a logical extension of change management, and as such should be autonomous. The dilemma is to be accountable for implementing company standards and procedures while at the same time being responsive to the requirements of the true client – the developer. Initially the answer appears straightforward, until you consider what the actual line management should be for the Endevor Administrators themselves. General practice is to locate Endevor Administrators in one of these areas:

  • Application Development
  • Change Management / Operations
  • Systems Administration

Development – this is one of the best areas for Endevor Administrators to be physically located, because they are working alongside their principal stakeholder. The major drawback of this reporting line is that the Administrators are responsible for maintaining company standards and procedures, which can potentially be compromised by a management request. Care must also be taken to treat the role seriously, because it otherwise tends to decay into a caretaker or part-time role that no one is really interested in. Even though Endevor Administration sits within the application development structure, developers should not be the ones managing Endevor.

Change Management / Operations – usually set up as a discrete business unit within Operations, Change Management walks the balance between maintaining corporate standards and being attentive to developers’ requirements, but with reduced risk of compromising their responsibility through a management request. Sites that select this option will usually have full time resources committed to the role, and consequently enjoy the benefits of that decision.

Systems Administration – although a realistic choice through technical requirements, positioning Endevor Administrators within this area is the least advantageous. The risk here is that they will see their role as enforcers of process first, before they take developers’ requirements into account. Traditionally they will not commit full time resources to the role, so users will miss out on features and functionality as new ‘best practices’ emerge.

In summary, the optimum may be to physically locate the Endevor Administrators with the application developers, while having them report to Change Management / Operations or even Audit. No matter where Endevor Administration is located and what the reporting line, it is most important that the role is full time and taken just as seriously as that of the System Security team.

Catch Alls – Some Short-and-Sweet Tips

Optional Features Table

A good practice to get into is to regularly review the values and “switches” that have been placed in the Optional Features table supplied with Endevor. The source is found on the installation TABLES file and is member ENCOPTBL.

Each optional feature is documented there, as are the various parameters that may be provided when activating the option.

Avoid Using Endevor within Endevor

Endevor processors allow you to invoke Endevor from within Endevor. This is a common practice when a processor needs to retrieve something related to the element being worked on. For instance, you may want to check the submission JCL against the PROC being processed, or perhaps component information is needed to ensure complete processing occurs.

Before deciding to include “Endevor within Endevor” (C1BM3000) in your processor, make sure that what you want to do cannot be done with one of the myriad utilities specifically designed to execute in processors. In particular, CONWRITE has had many extensions added to it that allow retrieval of elements and of component information.
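
For example, the traditional first step of many processors uses CONWRITE to write the current element’s source to a temporary file, as in this minimal sketch (the unit and space values are illustrative; consult the current documentation for the full set of CONWRITE control statements):

//* Write the element being processed to a temporary file;
//* EXPINCL(N) leaves any INCLUDEs unexpanded.
//GETSRC   EXEC PGM=CONWRITE,PARM='EXPINCL(N)'
//ELMOUT   DD DSN=&&ELMSRC,DISP=(NEW,PASS),UNIT=&UNIT,
//            SPACE=(TRK,(5,5),RLSE)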

A utility will always execute neater, cleaner, sweeter, and faster than Endevor-within-Endevor.

Move Commonly Used JCL Statements to the Submission JCL

A very simple technique to improve Endevor performance and reduce the number of service units required by the facility is to move (or copy) commonly used JCL statements from the processor to the submission JCL.

Specifically, either move or copy the declarations for SYSUT3, SYSUT4, SYSUT5, etc. into the skeleton member XXXXXXX so that they are allocated automatically by the OS/390 operating system at submission time. Then, when Endevor requires the DDnames and allocations, they are already in place, saving the overhead of Endevor performing the dynamic allocation.

While this does not sound like a “big deal”, in actuality it can make a significant difference. A typical COBOL processor, for example, will need to allocate each of the SYSUTx DDnames twice: once for the compile step and again for the link-edit step. If you are compiling 50 programs in an Endevor job, the allocations and de-allocations can occur over 300 times!

Based on personal experience, I would put the SYSUTx statements in BOTH the processor and the submission JCL. This is based on experimentation that established a baseline CPU usage with the statements only in the processor. The first alteration removed the statements from the processor and placed them only in the submission JCL; this resulted in a drop in CPU usage. I then placed the statements in both the processor AND the submission JCL; this resulted in a further drop in CPU usage (lower than the first!). No harm is done (in fact, good may result!) by having the statements in both locations, so I would leave them in both.
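
For illustration, the statements added under the NDVRC1 step of the submission JCL skeleton might look like the following sketch (the unit and space values are assumptions; match whatever your processors actually require):

//* Pre-allocated work files, available to every processor in the job
//SYSUT3   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSUT4   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSUT5   DD UNIT=SYSDA,SPACE=(CYL,(5,5))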

Some Available Traces

One of the problems with the Endevor documentation is that the different traces (and “hidden abilities”) that are available are scattered throughout different sections. Therefore, this list has been (and is being) constructed to try to capture all the different traces in one location.

  • EN$SMESI
    • ESI SMF record trace
  • EN$TRALC
    • Allocation service trace
  • EN$TRAUI
    • Alternate ID trace
  • EN$TRESI
    • ESI trace
  • EN$TRXIT
    • Exit trace
  • EN$TRITE
    • If-Then-Else trace
  • EN$TRAPI
    • API Trace
  • EN$TRLOG
    • Logon and logoff information
  • EN$TRSMF
    • Writes SMF records (needs to write to a dataset)
  • EN$TRFPV
    • Component Validation Trace
  • EN$AUSIM
    • AUTOGEN in simulation mode
  • EN$TROPT
    • Site Options Report (in my humble opinion, this should be a regular report, not a trace)
  • EN$TRSYM
    • Symbolic resolution trace
  • EN$DYNxx
    • NOT A TRACE. A method by which dynamically allocated datasets (e.g. those allocated by REXX in a processor) can be monitored by Endevor

Take the time to search through the manuals looking for “EN$”. You might be surprised at the things you discover that you never knew you had!
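
For most of these, activation is as simple as allocating the corresponding DDname in the Endevor JCL, typically to SYSOUT. A minimal sketch follows (the EN$TRSMF dataset name is an assumption; per the list above, it is the one entry that needs a real dataset to write to):

//EN$TRXIT DD SYSOUT=*
//EN$TRITE DD SYSOUT=*
//EN$TRSMF DD DSN=MY.TRACE.SMFDATA,DISP=SHR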

The Endevor ESI Look-Aside Table

Many sites are unaware of, or have inadvertently disabled, Endevor’s ESI Look-Aside Table (LAT). As the manual states:

“The security look aside table (LAT) feature allows you to reduce the number of calls to the SAF interface and thereby improve performance. The result of each resource access request to SAF is stored in the LAT. ESI checks the LAT first for authorization and if the resource access request is listed ESI does not make a call to SAF.

“Note: Do not use the LAT feature in a CA-Endevor/ROSCOE environment.”

Always ensure you have allocated a value to the LAT variable in the C1DEFLTS table as this is a simple (and supplied) manner of improving Endevor performance. Leaving the value blank or assigning a zero to the field will turn the function off, resulting in superfluous calls to your site security software during foreground processing.

The values that can be assigned to the LAT field range from 2 to 10, each number representing the number of 4K pages of storage to use. A good starting value is 4.
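
In the C1DEFLTS source, that means coding a non-zero value on the LAT parameter of the TYPE=MAIN macro. A sketch follows; I am quoting the keyword as LATSIZE from memory, so verify it against your release before assembling (the ellipsis stands for your site’s other parameters):

         C1DEFLTS TYPE=MAIN,                                           X
               LATSIZE=4,         ESI look-aside table: 4 x 4K pages   X
               ...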

Unlimited Allocations

Another vexing problem that large shops run into is the fixed number of dynamic allocations that MVS allows for a single job. As of the writing of this paper, that limit was 1,600. In the event your job requested more than 1,600, the system would abend the job with an S822 abend code.

On the surface, it appears to be very easy to exceed this number in the normal course of processing within Endevor. Since Endevor jobs execute as a single step with program NDVRC1, and since a package or batch job could easily hold 5,000 programs, the mathematics alone would seem to indicate the job will abend early in the process.

Consider a simple load module MOVE processor: one that moves the DBRM, object, and listings from one stage to the next. Each program being moved will require 2 allocations each for the DBRM, object, and listing libraries, 3 allocations each for the SYSUT3 and SYSUT4 working files, 3 SYSIN allocations, and 3 SYSPRINT allocations. This works out to a total of 18 allocations per program. Therefore, theoretically, in our package of 5,000 programs, the system should fail us at program number 89, since during the processing of that program we will exceed the 1,600 allocation limit (89 programs x 18 allocations = 1,602 allocations).

However, in reality, that doesn’t happen. In fact, Endevor will merrily continue on its way until program number 534. Although that is further along than program 89, the package is still not complete… and why here? Why not at program 89?

The answer lies in the manner in which Endevor allocates (and de-allocates) datasets during execution. After the execution of every program in the package/batch request, Endevor de-allocates all the datasets that were used by the processor for that element. In this way, the 1,600 limit is not reached early in the processing. In essence, each program gets to start with a clean slate.

However, this is not true for any datasets destined for SYSOUT (e.g. SYSPRINT). Endevor does NOT release these dynamic allocations; instead, they accumulate as the job executes. Therefore, the 3 SYSPRINT allocations done for each program in my example are cumulative, so that by program 534 I have once again hit the 1,600 allocation ceiling (534 programs x 3 SYSOUT allocations = 1,602 allocations).

There are a couple of ways to resolve this problem, but I believe the best way is as follows.

For every processor, insert a new symbolic called something like DEBUG. Do conditional checking to see if you really need to see the output; after all, the majority of the time the output contained in SYSOUT is not germane to the task at hand. You only need to see it when you are debugging a problem. Consider the following sample processor.

//DLOAD01S PROC DEBUG=NO,
:
//S10      EXEC PGM=IEBGENER
//SYSUT1   DD DSN=MY.LIB.LIB1,
//            DISP=SHR
//SYSUT2   DD DSN=&&TEMP1,
//            DISP=(NEW,PASS),
//            UNIT=&UNIT,
//            SPACE=(TRK,(1,1),RLSE),
//            DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
//         IF &DEBUG = 'NO' THEN
//SYSPRINT DD DUMMY
//         ELSE
//SYSPRINT DD SYSOUT=*
//         ENDIF
//SYSIN    DD DUMMY
//*
:
:

The default value for the &DEBUG symbolic is NO. Since SYSPRINT will resolve to DD DUMMY, the dynamic allocation will not occur and you will never incur the incremental count towards the 1,600 limit.

Again, I recommend this approach because you seldom need the output from the different SYSOUTs in your processors unless you are debugging a problem. This approach allows you to “turn on” the output when you need it and suppress it when you don’t. To turn on the output, just change the value of the &DEBUG symbolic to something other than 'NO'.

The second half of this solution is the FREE=CLOSE parameter. This parameter tells the system to release the allocation of the dataset as soon as it is closed, rather than holding it for the life of the job. Endevor does this automatically for every dataset it uses except those going to SYSOUT; you can code the release for the SYSOUT datasets yourself.
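
For any SYSOUT you decide you genuinely need, the coding is a one-line change:

//* Release the SYSOUT allocation as soon as the dataset is closed
//SYSPRINT DD SYSOUT=*,FREE=CLOSE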

However, be careful if you decide to place the parameter on every SYSOUT without also analyzing which of the SYSOUTs you really need. It is entirely likely that you will flood your system’s JES job output queue (its JOEs) with SYSOUT data if you do not exercise discretion.