Top 10 Endevor Implementation Pitfalls

Over the years, I have reviewed almost a hundred different installations and implementations of Endevor around the world. Some are examples of simple elegance, while others are testaments to good ideas taken too far.

My overall philosophy has always been and continues to be one of “simplicity”; I’d much rather see implementations of the elegantly simple than the convoluted complex. The only way to truly achieve simplicity is to use Endevor “as it’s intended”, not in ways that result in heavy customizations or extensive use of exits. I am a big believer in “teaching a person how to fish rather than giving a person fish”. I’d much rather any problem or issue I have with an Endevor installation or implementation be “CA’s” problem than mine!

So, recognizing this is one person’s opinion, what are the top 10 pitfalls I see sites make in their implementations of Endevor? In no particular order:

10) Lack of Normalized Processes

In an earlier blog, I wrote an article about something I call “process normalization”. As I like to say, in my mind “a dog is a dog is a dog”. You don’t say furrydog, you don’t say browndog…. You say “there is a dog and it is brown and it is furry”. In other words, it is a DOG and its attributes are covering (fur) and colour (brown).

The same principle needs to apply to definitions within a good Endevor implementation. When I see definitions of TYPES such as COBSUB or COBDB2, I am encountering non-normalized implementations. Both TYPES are really just COBOL…. Their attributes are role (subroutine) and dbms (DB2).

Attributes are more correctly addressed in the definition of PROCESSOR GROUPS, not TYPE names. By calling COBOL what it is, I can then easily change an attribute by merely selecting a different PROCESSOR GROUP. For instance, suppose I have two types named COBSUB and COBDB2S (for DB2 subroutine), and a program defined under COBSUB is altered to now contain DB2 calls: it has to be moved to a totally different TYPE definition. However, if the site were normalized, no change to the TYPE need take place (and thus no risk to the history of changes that have ever taken place against the element). Instead, one need merely associate a new processor group with the element, one that now includes the DB2 steps.
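To make the switch concrete, here is a minimal SCL sketch of re-generating an existing element under a different processor group instead of re-classifying it to a new TYPE. The element, environment, system, subsystem, CCID, and processor group names are all hypothetical; substitute your own:

GENERATE ELEMENT 'PAYCALC1'
  FROM ENVIRONMENT 'DEV' SYSTEM 'FINANCE' SUBSYSTEM 'PAYROLL'
       TYPE 'COBOL' STAGE NUMBER 1
  OPTIONS CCID 'CHG00123' COMMENT 'NOW CALLS DB2 - NEW PROC GROUP'
          PROCESSOR GROUP 'CIICLD2' .

The element keeps its name, TYPE, and full change history; only its processor group association (and therefore its generate steps) changes.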

The same principle applies to various type definitions and is often either misunderstood or purposely ignored in the interest of “giving people fish”.

9) Need for VSAM-RLS or CA-L-Serv

While the preferred method today is VSAM-RLS, either VSAM-RLS or CA-L-Serv can be easily implemented to ease performance issues around Endevor VSAM libraries. It often surprises me how few sites exploit this simple way of reducing their throughput times, given that both solutions are readily available and free.
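As one hedged illustration of the VSAM-RLS route (assuming your systems programmers already have the SMSVSAM/RLS infrastructure in place, and using a hypothetical package data set name), each Endevor VSAM cluster you intend to share simply needs to be marked RLS-eligible with IDCAMS:

//ALTERRLS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* MARK THE CLUSTER AS RLS-ELIGIBLE (NO FORWARD RECOVERY LOGGING) */
  ALTER NDVLIB.PACKAGE.FILE LOG(NONE)
/*

The Endevor side of the configuration, and the CA-L-Serv alternative, are both covered in the product documentation; either way the goal is the same, which is to reduce contention on the Endevor VSAM files.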

8) Forward-Base-Delta (FBD) and/or ELibs used exclusively for base libraries

As someone whose IT career grew up in the application area rather than the systems programming area, it astounds me how often I encounter Endevor setups that are so overtly application-area-hostile. Selecting FBD and/or Elibs as your base libraries always tends to signal to me that the person who originally set up the installation likely never worked in applications!

If I don’t see a “Source Output Library” declared, I get really concerned. At that point, I’m already guessing the application area (whether the Endevor administrator is aware of it or not) is likely keeping an entire parallel universe of code in their bottom drawer for the work they really need to do… and likely really, really dislikes Endevor!

It was always my experience that the application area needs clear and unfettered access to the libraries that Endevor is maintaining. It serves no “application” purpose to scramble up the names or compress the source; they NEED that source to do scans, impact analysis, business scope change analysis… in other words, to do their job. If the Endevor installation is not providing easy access to and views of that source (and by easy, I also mean access that is allowed OUTSIDE Endevor control), then the implementation cannot be considered a good one.

For this reason among many, I am a huge advocate of always defining every application source type within Endevor as Reverse-Base-Delta, unencrypted/uncompressed… and PDS or PDS/E as the base library. This implementation is the friendliest you can be to the application area while at the same time maintaining the integrity of the Endevor inventory.

While I accept that, currently, Package Shipment requires a Source Output Library, this need not be any kind of constraint. It’s unlikely most sites are shipping from every environment and every stage; arguably you need only define a Source Output Library at the location you ship from. Therefore, using RBD and PDS as your base library, you replace the need for a Source Output Library everywhere else, since the application can now use the REAL base library for their REAL work… with the exception of editing (unless you are using Quickedit). All their scans can now make use of whatever tools your site has available.

PDS/Es have come a long way since Endevor first began using them and are rapidly becoming the de facto standard for PDS definitions. However, if you are still using the original PDS format, I also tend to recommend looking into a product named “CA-PDSMAN”. It automates compression, thus relieving that as a maintenance issue, and it provides a series of Endevor-compatible utilities that can be exploited by the Endevor administrator.
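For reference, a minimal JCL sketch of allocating a PDS/E to serve as one of those application-friendly RBD base libraries; the data set name, unit, and space figures are purely illustrative:

//ALLOCPDS JOB (ACCT),'ALLOC BASE LIB',CLASS=A,MSGCLASS=X
//*
//* ALLOCATE A PDS/E (DSNTYPE=LIBRARY) TO ACT AS AN UNCOMPRESSED,
//* UNENCRYPTED RBD BASE LIBRARY THE APPLICATION AREA CAN BROWSE
//* AND SCAN DIRECTLY.
//*
//ALLOC    EXEC PGM=IEFBR14
//BASELIB  DD DSN=NDVLIB.FINANCE.DEV1.COBOL,
//            DISP=(NEW,CATLG,DELETE),
//            DSNTYPE=LIBRARY,
//            RECFM=FB,LRECL=80,
//            SPACE=(CYL,(50,25)),
//            UNIT=SYSALLDA

Because it is a PDS/E, there are no directory blocks to size and no compressions to schedule; the library simply needs to be named as the base library in the TYPE definition.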

7) Need for Quickedit

A universal truth is that “familiarity breeds contempt”. Depending on your definition of the word “contempt”, Endevor is no exception.

As Endevor administrators, it’s important to remember that we live and breathe the screens and processes around Endevor. Most of us know the panels and how things operate like the back of our hand.

However, the application area is often completely intimidated by the myriad of screens, options, choices, and executions that happen under the umbrella known as Endevor.

A simple solution to this issue can be the introduction of Quickedit at your site. Basically, you move people from navigating a complex set of panels and processes to “one-stop shopping”. Application areas that see a demonstration of Quickedit often completely change their opinion of the product.

Part of the reason for this is the change of view that comes with the Quickedit option. Endevor “base” is oriented along “action-object” execution. In other words, you have to tell Endevor what “action” (MOVE, ADD, GENERATE, etc) you want to do before it shows you the element list the action will be executed against.

Quickedit is oriented against a more natural flow of “object-action”. With Quickedit, you are first displayed a list of the elements you asked for. Once the list is displayed, you can then choose the action you want to do. This is much more intuitive to the manner in which we generally operate when we are doing the application development tasks.

6) Generation Listings

It surprises me how often I encounter sites that are not keeping their generation listings… or are keeping them in very strange places!

When I find they’re not keeping them at all, I generally discover the attitude is “well, if we need it, we’ll just regenerate the program”. What this ignores is that the newly generated program may very well have completely different offsets or addresses than the module that created the need for the listing in the first place! The regenerated listing has every potential of being completely useless.

Generation listings are, for all intents and purposes, audit reports. They record the offsets, addresses, and linked status of the module as it was generated by Endevor. They should be kept, NOT deleted.

The issue of “where” to keep generation listings, however, can be tricky. Using PDSs often results in what I refer to as “the rat in the snake”. A project at the lower levels will require a large amount of space (more than might normally be required) as it develops and tests its changes. Then, once it moves to QA, that space in test is released but must now be accounted for in QA! And then, once QA is satisfied, it must be moved into production, where a reorganization of files might be required in order to accommodate the arriving listings.

Personally, I’m an advocate of a mix of Elibs and CA-View. Elibs take care of themselves space-wise and can easily accommodate the “rat in the snake” scenario. The downside is that the information is encrypted and compressed, making it necessary to view or print the listings from within Endevor.

CA-View, however, makes a great “final resting” place for the listings. It is an appropriate use of your enterprise’s production report repository AND it can keep a “history” of listings; deltas, if you prefer. This can be very handy if someone needs to compare “before” and “after” listings!
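As one hedged illustration of the “final resting place” idea: if CA-View at your site archives a particular SYSOUT class (class V here is purely an assumption, as is the &&COMPLIST temporary data set holding the compile listing from an earlier step), the processor can route a copy of the listing to that class with an ordinary IEBGENER step at the end of the generate:

//*
//* OPTIONAL: SEND A COPY OF THE COMPILE LISTING TO A SYSOUT CLASS
//* THAT CA-VIEW ARCHIVES.
//*
//TOVIEW   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=&&COMPLIST,DISP=(OLD,PASS)
//SYSUT2   DD SYSOUT=V
//SYSIN    DD DUMMY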

One final note if you decide to use Elibs for your listings: do NOT place those Elibs under CA-L-Serv control! Due to the manner in which Endevor writes to listing Elibs, placing them under CA-L-Serv control will actually harm your performance rather than improve it!

5) Backups

I’m surprised how many sites are solely reliant on their volume backups.

Volume backups are a good thing to have and use in the event you need to invoke a disaster recovery plan (DRP). But, arguably, they are not enough when it comes to Endevor and the manner in which it is architected.

Endevor spans a variety of volumes and stores different “pieces” on different volumes often at different times. For instance, the package dataset may be on VOLA, the base libraries on VOLB, and the delta libraries on VOLC. A site may do a backup of those volumes over the space of an hour… but during that hour, an Endevor job ran 3 packages moving 15 elements with a variety of changes. Assuming the volumes are restored to the image taken, exactly what is the state of those Endevor libraries in terms of synchronization? Was VOLA restored to the last package execution? The first? What about the element base library? Is it in sync with the delta?

Fortunately, Endevor has a VALIDATE job that can be run to see if there is a problem. And I’m sure the vast majority of times, there will not be…..

But what if there is? What are you going to do if it turns out there is a validation problem and your libraries are out of sync?

For this reason I strongly advocate regularly scheduled FULL and INCREMENTAL UNLOADs as a critical part of any site’s DRP. A FULL UNLOAD takes considerable time and should be used with discretion and planning, but INCREMENTAL UNLOADs tend to be relatively quick. I recommend doing both and consolidating the output into files that are accessible during a DRP exercise.
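How you make the unload output reachable at the recovery site is up to you; as a hedged sketch (the data set names and the GDG base are hypothetical, and the GDG is assumed to live on DR-replicated volumes), a follow-on step can simply copy each sequential unload file into its own generation:

//*
//* COPY THE LATEST INCREMENTAL UNLOAD FILE TO A GDG KEPT ON
//* DR-ACCESSIBLE VOLUMES.
//*
//KEEPDRP  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=NDVLIB.UNLOAD.INCR,DISP=SHR
//SYSUT2   DD DSN=NDVLIB.DRP.UNLOAD.INCR(+1),
//            DISP=(NEW,CATLG,DELETE),
//            LIKE=NDVLIB.UNLOAD.INCR
//SYSIN    DD DUMMY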

During the DRP exercise, do the volume restores first. Then run the Endevor VALIDATE job. If the job comes back and says things are fine, you’re done! If not, you have the necessary files to run a RELOAD job and put Endevor back into the state it needs to be in.

4) Security Overkill or Underkill

Unfortunately, the usage of the External Security Interface (ESI) continues to be a mysterious black box at many sites. This is a shame, as there are a variety of ways to exploit its security capabilities to your advantage!

Read through the articles I have posted on “Security Optimization” and “The APE Principle”. And if I have time, I will try to write a future article on demystifying the ESI to help the layman understand exactly how the facility really works!

3) SMF Records

Another capability that is often overlooked at installations is that Endevor can cut SMF records to record each and every action taking place at the site. It’s been my experience that these records are literally a gold mine of information for the Endevor administrator and, frankly, should be demanded by any auditor worth their salt!

The reporting available from the SMF records is far superior to the “Element Activity” reports provided by Endevor itself. While the “Element Activity” reports are better than nothing, I would argue not by much.

To illustrate: suppose an element in Endevor was promoted 125 times in the last month. Those 125 actions will be recorded and reported as such in the Endevor SMF reports… but the “Element Activity” report would simply show the last action performed against the element (a MOVE) with a count of 1.

To illustrate further, suppose an element is DELETED from Endevor. The SMF reports will show who deleted the element, and when and where. “Element Activity” is blind; the element no longer exists and thus simply drops from the report!

If one of the Endevor administrator’s objectives is to measure the “load” under which Endevor is operating, SMF records provide the detail needed to monitor how much is flowing through in a given time period.

SMF records truly provide the definitive log of what’s going on with the Endevor inventory.
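Getting at the records is standard SMF processing. As a hedged sketch, IFASMFDP can carve the Endevor records out into their own file for reporting; the record type 230 is purely an assumption (use whatever record number your installation assigned to Endevor), as are the data set names:

//GETNDVR  EXEC PGM=IFASMFDP
//SYSPRINT DD SYSOUT=*
//DUMPIN   DD DSN=SYS1.SMF.DAILY,DISP=SHR
//DUMPOUT  DD DSN=NDVLIB.SMF.RECORDS,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSALLDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(230))
/*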

2) DONTUSE Processor

I’d like to see CA properly address this issue with a change to Endevor, and I’ve submitted the idea to the community website. To quote the idea as recorded there:

“As an Endevor user/developer/person-that-actually-has-to-use-Endevor-and-is-not-an-administrator, I want Endevor to KNOW what I am adding to it is a new element and requires me to select a processor group rather than ME knowing I need to put an “*” in the PROCESSOR GROUP (because I will NOT remember I need to do that and will let it default… and inevitably the default processor group is NOT the one I want making ME do MORE work) so that I can add my new elements intelligently and proactively rather than reactively.

“As an Endevor administrator, I want to define a default processor group that automatically triggers the “Select PROCESSOR GROUP” display if my user does not enter “*” or has not entered an override value, so that they can correctly choose the right processing without having to go back and forth because they have inevitably forgotten they need to choose something and the default is wrong for their particular element.”

In essence, what I advocate is that the Endevor administrator should not presume to know what the default processor group is when there is a choice to be made. Take the example of the COBOL program I used earlier in this article. If I were to assume every new program coming in under the COBOL type is a subroutine with DB2, then the day someone adds a program that does not use DB2 is the day “Endevor is broken and you, Mr/Mrs Endevor Administrator, are WRONG!”. And that will happen as surely as the sun rising in the morning!

A workaround is to declare your default processor along the lines of the DONTUSE processor I have documented in an earlier article. In essence, if someone adds a new program and doesn’t specify a processor group, the default DONTUSE processor will send them a message with instructions on how to choose a processor group and fail the element. It’s clumsy and awkward, but it works for now until CA provides a product enhancement.
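For reference, a bare-bones sketch of the idea (the wording and step names here are mine, not a copy of that article’s processor): a generate processor that prints instructions and then deliberately fails, forcing the user back to choose a proper processor group.

//DONTUSE PROC
//*
//* DEFAULT 'DO NOT USE' GENERATE PROCESSOR: TELL THE USER TO PICK
//* A REAL PROCESSOR GROUP, THEN FAIL THE ACTION ON PURPOSE.
//*
//NOTIFY   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
  NO PROCESSOR GROUP WAS CHOSEN FOR THIS ELEMENT.
  RETURN TO THE ADD/UPDATE/GENERATE PANEL, ENTER '*' IN THE
  PROCESSOR GROUP FIELD, AND SELECT THE GROUP THAT MATCHES
  THIS ELEMENT.
//*
//FAIL     EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  SET MAXCC=12  /* FORCE RC=12 SO THE GENERATE FAILS */
/*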

1) Need for X-Process

It’s surprising how often I encounter sites that still have not built or captured ACM information because “we don’t want to generate and lose our production loads”.

What’s needed is a tool I used to call the XPROCESS. In essence, the process causes Endevor to generate your element (make, build, compile, whatever) and thus create the ACM data, throws away the generated output, and then copies the current production version to the stage the generate is running in, refootprinting that output accordingly. A splash title page in the listing can clearly identify that this is a conversion or clean-up listing only… and the problem is solved.

This is a valuable tool to have in the Endevor administrator’s arsenal. For your reference, modification, and usage, here is a copy of a simple example:

//********************************************************************
//*                                                                  *
//* PROCESSOR NAME: GCOB02X                                          *
//* PURPOSE: SPECIAL PURPOSE COBOL PROCESSOR TO REGENERATE COBOL     *
//*          ELEMENTS AND THEN CREATE 'POINT-IN-TIME' COPIES OF THE  *
//*          'REAL' OBJECT MEMBER FROM THE CONVERTING SYSTEMS OBJECT *
//*          LIBRARY.                                                *
//*                                                                  *
//********************************************************************
//GCOB02X PROC ADMNLIB='NDVLIB.ADMIN.STG6.LOADLIB',
// COMCOP1='NDVLIB.COMMON.STG1.COPYLIB',
:
:
:
// LIB1I=NO/WHATEVER,
// LIB1O=NO/WHATEVER,
// LIB2I=NO/WHATEVER,
// LIB2O=NO/WHATEVER,
// LIB3I=NO/WHATEVER,
// LIB3O=NO/WHATEVER,
// LIB4I=NO/WHATEVER,
// LIB4O=NO/WHATEVER,
:
:
:
//*
//********************************************************************
//* DELETE 'JUST CREATED' OBJECT!                                    *
//********************************************************************
//DELOBJ EXEC PGM=CONDELE
// IF (&C1EN = DVLP)
// OR (&C1EN = DVL2)
// OR (&C1EN = ACPT)
// OR (&C1EN = PROD) THEN
//C1LIB DD DSN=NDVLIB.&C1SY..&C1ST..OBJLIB,
// DISP=SHR
// ELSE
//C1LIB DD DSN=NDVLIB.&C1EN..&C1ST..OBJLIB,
// DISP=SHR
// ENDIF
//*
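//********************************************************************
//* COPY1A/1B THROUGH COPY4A/4B: FOR EACH DECLARED LIBRARY PAIR      *
//* (LIBnI = INPUT, LIBnO = OUTPUT), COPY THE 'REAL' PRODUCTION      *
//* MEMBER TO A TEMPORARY DATASET, THEN COPY IT INTO THE OUTPUT      *
//* LIBRARY WITH A NEW ENDEVOR FOOTPRINT (FOOTPRNT=CREATE).          *
//* A STEP PAIR IS SKIPPED WHEN ITS LIBnI SYMBOLIC IS SET TO 'NO'.   *
//********************************************************************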
//COPY1A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB1I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP1,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB1I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY1B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB1O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP1,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB2I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP2,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB2I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB2O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP2,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB3I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP3,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB3I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB3O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP3,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB4I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP4,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB4I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB4O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP4,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
:
:
:

3 thoughts on “Top 10 Endevor Implementation Pitfalls”

  1. Another pitfall is circular type dependencies. In particular, when there are multiple types creating object modules that are intended to be converted to load modules, and the same types creating the object modules also create load modules that input from the object module libraries, there is no type sequence for those types that is always correct. This becomes a bigger problem when AUTOGEN is utilized or when concurrent processing is turned on, blocking user control over the generate sequence. Some people avoid this problem by avoiding object modules altogether, but object modules are better because they avoid bypassing testing of main programs when there are changes to subprograms. So a better solution is to reserve one type for creating load modules using object module libraries as inputs and to sequence that type after all of the types that create the object modules.


  2. Your scenario is why I tend to advocate having a separate type called something like “LKED”. THAT TYPE creates the LOAD module; language types (COBOL, ASM, PLI, etc) always just create OBJECT modules! Normalization delivers all sorts of clearer paths….

