Something “New”

Have you ever gotten frustrated with CA’s Community website? Are you concerned about the Endevor community chain being rolled up into a “DevOps” community chain?

Would you like to maintain independence and focus strictly on Endevor issues with “side channels” as may be appropriate?

I’d like to suggest adding yourself to a Slack Channel that I have created where we can focus on Endevor and keep side issues as side issues. You can find the channel and sign up here: https://join.slack.com/t/endevorgroup/shared_invite/enQtMjc3NjUwNzMzNzUwLTAxZjk3OTM3Nzc4YjVhN2JmZTRjNmY3MDIzNTA2MDU3ZjVhMDU2MTk1ZjNmMDliZGRlMTg2ZmU2YjdjMzdiMjM

Slack is freeware that I think we could all exploit in terms of communication either directly or through the board. Pass this information along to the other Endevor community people you know and let’s get refocussed!


Some Endevor “ideation” observations…

So today I asked myself a question: “Of the things that are contributed to the CA Endevor Community Ideation, what’s CA’s track record of delivering on the ideas from its customer base?”

Some interesting stats came out of that question and I thought I would share them.

The categories of ideas I looked at were “New”, “Wishlisted”, “Currently Planned”, “Delivered”, “Under Review”, and “Not Planned”.

The total number of ideas in all these categories was 286. Of those 286, 102 were authored by CA, not by customers (about 36%).

Breaking down the stats even further….

  • New = 42; CA authored = 6 (about 14%)
  • Wishlisted = 8; CA authored = 1 (about 13%)
  • Currently Planned = 12; CA authored = 10 (about 83%)
  • Delivered = 21; CA authored = 14 (about 67%)
  • Under Review = 138; CA authored = 48 (about 35%)
  • Not Planned = 65; CA authored = 23 (about 35%)

Is it just me, or does CA seem to be a little too concentrated on itself for planning and delivery of ideas? I accept that CA sometimes writes ideas on behalf of its customers, but… I don’t know…. the “stats” look a little skewed….

Thoughts and feedback more than welcome! 🙂

CONPARMX – Dissecting a New Approach

One of the newer utilities introduced with CA Endevor SCM is “CONPARMX”. With this utility, CA provided an opportunity to take a new approach to an old “vexing” problem: a clean way of supplying things like a parameter list for compiles and link-edits without the need for extensive symbolic overrides or “symbolic substring value substitutions” (aka ZCONCAT; aka if you don’t know, don’t worry about it!)

I have found that, when adhering to the principles of “Process Normalization”, CONPARMX fits quite nicely into a structured framework that is easier to administer. So while I have written an entire article on “Process Normalization”, a quick recap is worth consideration.

To summarize, normalization seeks to identify objects in terms of “what they are” versus “how they are managed”. In other words, TYPE definitions such as COBOL, PLI, ASM, and FORTRAN would be considered normalized. Each TYPE definition is exactly that; a definition of the TYPE of language to be processed.

TYPE definitions such as COBMAIN, COBDB2, or COBSUB would NOT be considered normalized. In these examples, the TYPE of language (implicitly COBOL but who really knows?) is mixed up with part of its processing or use (mainline, DB2, subroutine…. But are they really?).

In a non-normalized environment, one finds many definitions for the same language. In the example cited above, there are at least 3 definitions for COBOL! Yet, COBOL is COBOL is COBOL.

In a normalized environment, there is generally ONE definition for a language (COBOL) and Endevor PROCESSOR GROUPS differentiate the handling of specific elements defined to the language TYPE. In other words, if I have a COBOL program that uses DB2 and is a subroutine, I associate it with the processor group that is defined to handle DB2 and subroutines.
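As a quick, hypothetical illustration of the contrast (the processor group names below are examples only, not a recommendation for any particular site):

Non-normalized:  TYPE COBMAIN, TYPE COBDB2, TYPE COBSUB   <- three definitions of COBOL

Normalized:      TYPE COBOL
                   Processor group CBATCH  - batch mainline, no DB2
                   Processor group CDB2SUB - DB2 subroutine
                   Processor group CSUB    - non-DB2 subroutine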

Clearly, this approach eases the definition of TYPEs, but it results in the need to reflect different combinations of parameters in the processor group definitions. This can be an onerous task, but it is now simplified with CONPARMX.

In the manual, CA describes the CONPARMX utility as follows:

“The CONPARMX utility lets administrators reduce processor complexity and the number of processor groups. CONPARMX dynamically builds execution parameters for processor programs such as compilers or the linkage editor. The build process uses the symbols defined in the processor and one or more of the following groups of options, which may or may not be concatenated:

Default options

Processor group options

Element options

“Using this utility avoids the need to create and update large numbers of processors or processor groups with different input parameters.”

Figure 1 – CA Endevor SCM – 18.0

Put another way, many generate processes require parameters to be specified in order to achieve the desired results. DB2, for instance, requires a pre-compile. Compilers themselves have many parameters, as does the linkage editor.

The traditional way of providing unique combinations of parameters has been as indicated earlier: different processor groups invoking different combinations. A less traditional way has been to use symbolic substring substitutions. A “sneaky” way has been for developers to discover they can override the Endevor administrator and provide compile parameters in-stream with their code (something every Endevor administrator should be aware of and should have a quick program to check for; if found, I would fail the generate!).
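As a rough sketch of such a check (assuming SuperC/ISRSUPC is available at your site, that the element source has already been written to a temporary dataset earlier in the processor, and that the search strings are tuned to your shop’s coding standards; &&ELMSRC is a placeholder name), a search-for step placed ahead of the compile can fail the generate when it finds an in-stream compiler directive:

//*-------------------------------------------------------------------
//* FAIL THE GENERATE IF THE SOURCE CONTAINS IN-STREAM COMPILER
//* DIRECTIVES (CBL/PROCESS). SUPERC SEARCH-FOR RETURNS RC=1 WHEN
//* A MATCH IS FOUND, SO MAXRC=0 CAUSES THE PROCESSOR TO FAIL.
//*-------------------------------------------------------------------
//CHKPARM EXEC PGM=ISRSUPC,
//             MAXRC=0,
//             PARM=(SRCHCMP,'ANYC')
//NEWDD    DD DSN=&&ELMSRC,
//            DISP=(OLD,PASS)
//OUTDD    DD SYSOUT=*
//SYSIN    DD *
  SRCHFOR  'CBL '
  SRCHFOR  'PROCESS '
/*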

To illustrate the traditional way, consider the piece of processor below:

//COB1    EXEC PGM=IGYCRCTL,
//             MAXRC=4,
//             PARM=('LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF',
//             'NUMPROC(MIG),&COMPPARM')
//*

Figure 2 – Compile Step in Processor

In this example, certain compile parameters are considered “technical standards” for the site and are thus “hard-coded” in the compile step (e.g. LIB, OPT, RES, X, MAP, etc.). Other values are “developer’s choice” and are controlled by the value(s) specified in symbolic overrides defined for the processor group in &COMPPARM.

Now consider the following “grid” of processor groups:


Figure 3 – Grid of Processor Groups

Using this key, we can determine the “extra” compile parameters to be provided for each processor group. The highlighted options are the “technical standards” and are covered automatically by being hard-coded in the processor. Note that processor group “CB2AA” only uses the highlighted options and thus does not require any values to be placed in symbolic &COMPPARM.

However, processor group “CB2AB” requires the Endevor administrator to provide the value APOST and thus must override every occurrence of CB2AB at their installation with &COMPPARM=’APOST’. Processor group “CB2AC” requires &COMPPARM=’APOST,RENT’… and so forth through the entire table.
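To give a feel for the grid without the figure, here is a hypothetical excerpt of what it conveys for the three groups just discussed (actual option combinations will vary by site):

Processor Group   Extra compile options (beyond the hard-coded standards)
CB2AA             (none)
CB2AB             APOST
CB2AC             APOST,RENT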

This process of providing, verifying, modifying, and setting up overrides can be labour-intensive although also typically a “one time effort”. Batch administration allows for the values to be placed quickly.

Among the challenges, however, is the fact that mistakes can easily be made and inconsistency accidentally propagated if administrators model future processor groups on one that was defined incorrectly in another system.

CONPARMX provides the opportunity to define multiple processor groups more cleanly and clearly, without the need for extensive or complex symbolic overrides. The syntax for the utility is as follows:

//stepname EXEC PGM=CONPARMX,
//                 MAXRC=n,
// PARM=(parm1,'(parm2)',parm3,parm4,parm5,'(parm6)','parm7','parm8')
//PARMSDEF DD DSN=library.PRGRP,
//            MONITOR=COMPONENTS,
//            ALLOC=PMAP

Figure 4 – CONPARMX Syntax

As documented by CA, each parameter (parm) translates to a different purpose:

  • Parm1 = program name
  • Parm2 = processor symbolic
  • Parm3 = default options member name
  • Parm4 = processor group options member name
  • Parm5 = element options member name
  • Parm6 = another processor symbolic entry
  • Parm7 = concatenation instruction
  • Parm8 = write to file instruction

Walking through a conversion of a processor from the “traditional” code to the new code is the easiest way to understand how CONPARMX works, so let’s start with our traditional processor compile step:

//COB1    EXEC PGM=IGYCRCTL,
//             MAXRC=4,
//             PARM=('LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF',
//             'NUMPROC(MIG),&COMPPARM')
//*

Figure 5 – Compile step in processor

Step 1: Isolate the program to be executed by CONPARMX

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,…)
//*

Figure 6 – Isolate program to be executed

Step 2: Define the mandatory parameters.

Generally speaking, PARM2 holds the “first” parameters you want CONPARMX to use with the program defined in PARM1. Based on our example, those mandatory options would be “LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG)”. We COULD define those in a symbolic and then code them into the processor…. And that’s very tempting…..

But then we notice PARM3…. And decide against using PARM2 at all! So for now, accept that after Step 2 the step in the processor looks as follows:

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,,…)
//*

Figure 7 – Ignoring PARM2

Step 3: Define the default options member name.

PARM3 is a good place to specify the technical default options that we WERE going to define in PARM2. PARM3 is the first member name that CONPARMX will use to search the library specified on the DDNAME PARMSDEF. If we name the member something like “$$$$DFLT”, then that member in the library (controlled by Endevor!) will ultimately contain the default values for all programs being invoked by CONPARMX.

As our first entry in member $$$$DFLT, the entry looks as follows:

IGYCRCTL='LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG),'

One of the benefits of this approach is that if the technical defaults for the site should change, the Endevor administrator need only reflect the change in ONE location that is tracked and automatically picked up by all processors throughout the installation.

After making the necessary change, our processor step now looks as follows:

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,,$$$$DFLT,…)
//PARMSDEF  DD DSN=library.of.process.grps,
//             MONITOR=COMPONENTS,
//             ALLOC=PMAP
//*

Figure 8 – Specifying $$$$DFLT

Step 4: Continue with PARM4 for Processor Groups

Following the example of what was done for PARM3, we want to drive additional parameters based on the name of the processor group. So, we add the symbolic for processor group as PARM4.

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,…)
//PARMSDEF  DD DSN=library.of.process.grps,
//             MONITOR=COMPONENTS,
//             ALLOC=PMAP
//*

Figure 9 – Adding Processor Group name

Now CONPARMX will look into the library specified for PARMSDEF to find the member name that matches the processor group name. Using the grid defined earlier and looking just at processor groups CB2AA, CB2AB, and CB2AC, we see that

  • CB2AA has no additional parameters
  • CB2AB needs APOST
  • CB2AC needs APOST and RENT

Since CB2AA has no additional parameters, we don’t need to take any action. CONPARMX will look for member CB2AA in the PARMSDEF library and, when it is not found, will simply ignore the parameter. This is precisely what we want it to do.

Member CB2AB, however, will have an entry as follows:

IGYCRCTL='APOST,'

Member CB2AC will have an entry as follows:

IGYCRCTL='APOST,RENT,'
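To make the intent concrete: the trailing commas in the member entries are there so the pieces join cleanly, and for an element in processor group CB2AC the options handed to IGYCRCTL should end up equivalent to what the original step produced with &COMPPARM=’APOST,RENT’, i.e. roughly:

LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG),APOST,RENT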

Step 5: Do we want PARM5?

PARM5 allows for “element specific” parameters to be specified. Personally, I’m not an advocate of using element-specific parameters. My philosophy is that if it’s good enough for an element, it’s good enough for a processor group! But we may still want to provide for it with the knowledge that CONPARMX will ignore member names “not found”.

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,&C1ELEMENT,…)
//PARMSDEF  DD DSN=library.of.process.grps,
//             MONITOR=COMPONENTS,
//             ALLOC=PMAP
//*

Figure 10 – Providing for element-specific

Step 6: Making CONPARMX act in reverse and other odd actions

PARM6, PARM7, and PARM8 are not covered by this article. They affect the order in which the parameters are invoked and “stopping points” in terms of what to do. If the reader would like more details or has a need for invoking parameters outside of what has been documented here, I recommend referring to the CA Endevor manual for more information.

The final product of our conversion will now look as follows:

//COB1    EXEC PGM=CONPARMX,
//             MAXRC=4,
//             PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,&C1ELEMENT,,'N','N')
//PARMSDEF  DD DSN=library.of.process.grps,
//             MONITOR=COMPONENTS,
//             ALLOC=PMAP
//*

Figure 11 – Final State of converted compile step

What Have We Accomplished?

  • Elimination of parameter overrides as part of defining new processor group names
    • Vastly simplifies introduction of new groups identifying unique combinations of program parameters
    • No longer necessary to visit every override in every system
      • Assured that processor group parameters are the same Endevor-wide
  • Eased definition of processor groups
    • Names drive included parameters
    • Included parameters are now part of tracked Endevor elements
      • Endevor for Endevor administrators!
  • One stop definition, update, and override
    • The dataset library.PRGRP contains all your processor group names and what they override, independent of environment, system, etc.


Figure 12 – Type definition and library usage for PARMSDEF


Figure 13 – PARMSDEF Library Member Contents
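Since Figure 13 is an image, here is roughly what the PARMSDEF library members from the walkthrough above would contain (one member per processor group, plus the default member):

Member $$$$DFLT:  IGYCRCTL='LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG),'
Member CB2AB:     IGYCRCTL='APOST,'
Member CB2AC:     IGYCRCTL='APOST,RENT,'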

“in-approval” is now “executed”

Due to work commitments and “normal life” (whatever that means to an Endevor administrator!), “in-approval” has run its course in terms of off-the-shelf topics, tips, and techniques that I had documented over the years. From this point forward, “in-approval” will contain articles that occur to me over time as I actively work at helping sites attain better implementations.

In this final blog in the regularly-scheduled series, I want to reflect on the state of Endevor and life-cycle management as a discipline and as it seems to have evolved based on my experience over the past 30+ years.

One of the “hot” trends in application development is the rise of “Agile” methodologies. As a project management professional, it has been interesting to see the “snobbish” attitude that strict adherents to the principles of agile have toward traditional waterfall methods. As an IT professional who cut his teeth doing application development for many years before engaging as an Endevor administrator/SCLM manager, I intimately understand the frustrations developers had with waterfall, and the theoretical “freedom” and ability to react to perceived end-user needs that is more inherent in agile.

There have been those, however, who criticize Endevor as being “more for waterfall than for agile”. I would counter that this argument reflects a poor understanding of both methodologies and is actually an argument for another look at “what” a site is doing with Endevor versus “how” they are doing it; a classic need to re-examine real objectives rather than perceived subjectives.

At the end of the day, neither method changes the reality of a developer’s day-to-day activity: the need to rapidly code-compile-test-repeat. Neither method changes the reality that, once the developer has created something successful, the work moves to a state where code-compile-test-repeat begins again as other coders bring forward their contributions. And then it moves forward again until a sprint (or phase) is ready for release.

In the distributed world, this is often achieved by moving to different machines that represent different states. In the Endevor world, welcome to environments and stages.

Agile or Waterfall; they both still need life-cycle-management.

I think the difference is that, with Endevor, many of us have gotten too tied up in the naming and “strict” usage of environments and stages. Too often, we have gotten tied up with project or testing managers in trying to “automate” the population of the CICS region or IMS region or DB2 catalog at a specific stage because “that’s the name of that stage”.

When an environment or stage in Endevor begins to deviate from a one-to-one relationship, especially for a specific system (today it wants to go here, tomorrow it wants to go there, and the next day it wants to go somewhere else), I would advocate that it’s time to rethink what your SCLM process is trying to do and to approach the populating of operational libraries using something other than processors or processor groups.

Years ago, I began advocating having your stages represent states of being. In other words, if there is a one-to-one stage-to-testing-environment ratio, or even if the ratio is one-to-some, then let Endevor “automatically” populate those environments.

But if your testing, QA, or development centres are creating a vast and/or complex array of catalogs and regions that they want Endevor to populate, I believe it’s time to step back and call a reality check. I advocate falling back to a position of representation; the stages in Endevor provide libraries that Endevor can ensure represent a “state of being”. In other words, as Endevor administrator, you can assure people that elements at Stage X are created with the complete technical integrity that Endevor brings to bear. WHERE they want to test those elements has now become the responsibility of testing, QA, or the development centre.

And at this point, I think Package Shipment has a FAR bigger role to play than it has universally been given so far. Package Shipment (remote AND local) has much to deliver in replicating the distributed world’s current model of installing development software for testing on different machines. In essence, Package Shipment has always been there to deliver the same functionality…. so let’s use it!

Imaginative leverage of Package Shipment would allow the developer to ship/deliver/install the same elements to as many different regions as may be defined. All the Endevor administrator needs to do is define the libraries and define some post-install scripts. This keeps Endevor “cleaner” and easier to administer while at the same time delivering the ability to provide technical excellence across the enterprise.

This approach also keeps the promise of being development-method agnostic; it doesn’t matter whether you got to the Endevor stage using “agile” or “waterfall”. What matters is that the element created with integrity in Endevor is made available to whatever targets the developer needs in order to exercise their test or QA cycle.

So with these final thoughts, “in-approval” has now been “executed”, and a new “in-approval” package of comments and articles is on the horizon!

I want to thank you for reading these musings over the past months; it’s been my privilege to correspond with some of you and I consider it an honour to work with some of the finest minds in the SCLM discipline.

John Dueckman

http://www.johndconsulting.com

Defining “Best Practices”

The term “best practices” is often bandied about as a “catch phrase” to indicate what some people hope is a panacea of methods that will solve all their problems. Other people use it as a replacement for saying “do it my way”. Still others correctly use the term to identify commonly proven practices that have both stood the test of time as well as review.

 

It is important, then, to define what “best practices” means in the context of vendor-provided software. This is particularly true of a product like Endevor, and best practices with any 3rd-party software can typically be identified by the following characteristics:

1 – It makes use of the software’s implicit designs and methods. All software comes with an implicit intention for which the vendor designed it and, arguably, how the vendor intended it to be used. Any software can be bent to “technically” do other things (such as holding data in Endevor versus source code), but if you start doing things it was not really intended or designed to do, you can find yourself hitting a “wall”. The very fact that there is a “wall” to hit is a clear sign that what you are doing is not a best practice.

In the case of Endevor, I am a firm believer in exploiting its innate capabilities and using fields/definitions in the tool for what they are labeled to be used for.

2 – It exploits the software’s native ability without undue customizations. Some customization is inevitable, although arguably the term “customization” may be a tad overused and confused with “configuration”. Telon provided various “custom code” invocations, CoolGEN provides for “external action blocks”, and Endevor provides for processors, interfaces, and exits when used appropriately.

The danger point comes when the customization begins to try to replace functionality already being performed by the software OR when the customization is a reflection of “how” rather than “what”. “How” problems can generally be addressed through alteration of process/procedure to match the implicit design already likely considered in the software. It is the rare software solution that hasn’t already encountered most, if not all, of the situations that are likely to arise in the discipline or field it is designed to address. Ways and methods of resolving the core problem being encountered, then, need to be adapted accordingly, not addressed by “changing the software”. Again, by virtue of “changing the software”, you cannot possibly be following “best practices”, as the implication would be that every site that installed the software had to change it the same way.

Appropriate use of exits, processors, and other interfaces, then, is where they enhance the basic functionality already being performed by the software. Adding additional data to a logged event, for instance, or picking up additional data from external files for processing are generally appropriate examples.

3 – It makes things obvious rather than obscure. In other words, a best practice is never a euphemism for “black box”. Everything from data modeling techniques that preach data normalization to experience with effective human interaction devices (VCRs, ATMs, Windows) tells us that being obvious and putting control in the hands of the person is met with greater acceptance than making things a “mystery”.

Hiding a vendor’s software behind “front ends” is usually done to prevent the need for education. In other words, an approach has been taken whereby the implementers feel they know what the end-user wants and needs, so they will automate everything for them. Unfortunately, this leads to heavy customizations again as they try to anticipate every need of the end-user and force the back-end software to comply. It is rather like the old “give a man a fish/teach a man to fish” syndrome. Custom-built front-ends require constant care and attention as well as retro-fitting to ensure upward compatibility. Again, by virtue of this added labour, it cannot possibly be considered a “best practice”.

4 – It is supportable by the vendor’s technical support organization. When help is required, the vendor has no choice but to support what they know. What they know, by extension, is the product as shipped from the vendor’s site. Since a best practice, by definition, implies rapid resolution of problems and quick support, any practice or implementation that deviates from the vendor’s implicit design cannot, by definition, be considered a “best practice”.

 

In contrast, non-“best practices” can typically be identified by the following characteristics:

 

  • Complicated invocations, implementations, or installations. If processes are defined that require the developer to visit a number of places or an installation is conducted that requires extensive external files or procedures, it cannot be considered a “best practice”. This approach has, in essence, broken all 4 definitions of what “is” a best practice.

 

  • Long and involved customizations. While the customizations might fit “how” a site wants to perform a certain function with the software, they are definitely customizations specific to that site and cannot be considered an industry “best practice”. Again, the 4 definitions or criteria for a best practice have not been met.

 

  • Requires extensive training beyond (or instead of) the vendor’s standard training. This is the clearest sign that best practices could not possibly have been followed. If a vendor cannot come on-site and immediately relate to both the installation and the methods by which the site is using their software, it cannot be considered a best practice. Again, it may be site-specific, but it is not something the entire industry would support; otherwise every education engagement would be “custom” and no material could possibly ever be re-used!

 

  • Hides the product from end-users. This is typically in direct violation of characteristic (3). Hiding the software behind front-ends, if it were a ‘best practice’, would have to be done by every site. If this were the case, no site would buy the software in the first place OR the vendor would revamp the software to fit the needs of its clients.

 

  • Changes the product’s implicit design into something it was not designed to do. As stated in characteristic (1), all software comes with an implicit method and design for its use. Arbitrarily changing the usage of fields for something they were not intended to be used for, or changing the meaning of fields to something else, cannot possibly be considered a “best practice”.

Top 10 Endevor Implementation Pitfalls

Over the years, I have reviewed almost a hundred different installations and implementations of Endevor around the world. Some are examples of simple elegance, while others are testaments of good ideas taken too far.

My overall philosophy has always been and continues to be one of “simplicity”; I’d much rather see implementations of the elegantly simple than the convoluted complex. The only way to truly achieve simplicity is to use Endevor “as it’s intended”, not in ways that result in heavy customizations or extensive use of exits. I am a big believer in “teaching a person how to fish rather than giving a person fish”. I’d much rather any problem or issue I have with an Endevor installation or implementation be “CA’s” problem rather than mine!

So, recognizing this is one person’s opinion, what are the top 10 pitfalls I see sites make in their implementations of Endevor? In no particular order:

10) Lack of Normalized Processes

In an earlier blog, I wrote an article about something I call “process normalization”. As I like to say, in my mind “a dog is a dog is a dog”. You don’t say furrydog, you don’t say browndog…. You say “there is a dog and it is brown and it is furry”. In other words, it is a DOG and its attributes are covering (fur) and colour (brown).

The same principle needs to apply to definitions within a good Endevor implementation. When I see definitions of TYPES such as COBSUB or COBDB2, I am encountering non-normalized implementations. Both TYPES are really just COBOL…. Their attributes are role (subroutine) and dbms (DB2).

Attributes are more correctly addressed in the definition of PROCESSOR GROUPS, not TYPE names. By calling COBOL what it is, I can then easily change the attribute by merely selecting a different PROCESSOR GROUP. For instance, suppose I have 2 types named COBSUB and COBDB2S (for DB2 subroutine)…. and the program defined in COBSUB is altered to now contain DB2 calls. It needs to be moved to a totally new TYPE definition. However, if the site were normalized, no change to the TYPE need take place (and thus no risk to the history of changes that have ever taken place with the element). Instead, one need merely associate the element with a new processor group that includes the DB2 steps.

The same principle applies to various type definitions and is often either misunderstood or purposely ignored in the interest of “giving people fish”.

9) Need for VSAM-RLS or CA-L-Serv

While the preferred method today is VSAM-RLS, either VSAM-RLS or CA-L-Serv can be easily implemented to ease performance issues around Endevor VSAM libraries. It often surprises me how few sites are exploiting this easy and simple method of reducing their throughput times because they have not implemented either of these available and free solutions.

8) Forward-Base-Delta (FBD) and/or ELibs used exclusively for base libraries

As someone whose career in IT grew up in the application area versus the systems programming area, it astounds me how often I encounter setups in Endevor that are so overtly application-area-hostile. Selecting FBD and/or Elibs as your base libraries always tends to signal to me that the person who originally set up the installation likely never worked in applications!

If I don’t see a “Source Output Library” declared, I get really concerned. At that point, I’m already guessing the application area (whether the Endevor administrator is aware of it or not) is likely keeping an entire parallel universe of code available in their bottom-drawer for the work they really need to do… and likely really really dislike Endevor!

It was always my experience that the application area needs to have clear and unfettered access to the libraries that Endevor is maintaining. It serves no “application” purpose to scramble up the names or compress the source; they NEED that source to do scans, impact analysis, business scope change analysis… in other words, do their job. If the Endevor installation is not providing easy access and views of that source (and by easy, I also mean ability that is allowed OUTSIDE Endevor control), then the implementation cannot be considered a good one.

For this reason among many, I am a huge advocate of always defining every application source type within Endevor as Reverse-Base-Delta, unencrypted/uncompressed… and PDS or PDS/E as the base library. This implementation is the friendliest you can be to the application area while at the same time maintaining the integrity of the Endevor inventory.

While I accept that, currently, Package Shipment requires a Source Output Library, this need not be any kind of constraint. It’s unlikely most sites are shipping from every environment and every stage; arguably you need only define a Source Output Library at the location you ship from. Therefore, using RBD and PDS as your base library, you replace the need for a Source Output Library everywhere else, since the application area can now use the REAL base library for their REAL work…. with the exception of editing (unless you are using Quickedit). All their scans can now make use of whatever tools your site has available.

PDS/Es have come a long way since Endevor first began using them and are rapidly becoming the de facto standard for PDS definition. However, if you are still using original-format PDSs, I also tend to recommend looking into a product named “CA-PDSMAN”. It automates compression, thus relieving that as a maintenance issue, and actually provides a series of Endevor-compatible utilities that can be exploited by the Endevor administrator.

7) Need for Quickedit

A universal truth is that “familiarity breeds contempt”. Depending on your definition of the word “contempt”, Endevor is no exception.

As Endevor administrators, it’s important to remember that we live and breathe the screens and processes around Endevor. Most of us know the panels and how things operate like the back of our hand.

However, the application area often is completely intimidated by the myriad of screens, options, choices, and executions that happen under the umbrella known as Endevor.

A simple solution to this issue can be the introduction of Quickedit at your site. Basically, you can move people from navigating a complex set of panels and processes to “one-stop shopping”. Many application areas that see demonstrations of Quickedit often completely change their opinion of the facility.

Part of the reason for this is the change of view that comes with the Quickedit option. Endevor “base” is oriented along “action-object” execution. In other words, you have to tell Endevor what “action” (MOVE, ADD, GENERATE, etc) you want to do before it shows you the element list the action will be executed against.

Quickedit is oriented toward a more natural flow of “object-action”. With Quickedit, you are first shown a list of the elements you asked for. Once the list is displayed, you can then choose the action you want to perform. This is much closer to the manner in which we generally operate when doing application development tasks.

6) Generation Listings

It surprises me how often I encounter sites that are not keeping their generation listings… or are keeping them in very strange places!

When I find they’re not keeping them at all, I generally discover the attitude is “well, if we need it, we’ll just regenerate the program”. What this ignores is the fact that the newly generated program may very well have completely different offsets or addresses than the one that caused the generation to have to take place! The listing has all the potential of being completely useless.

Generation listings are, for all intents and purposes, audit reports. They record the offsets, addresses, and linked status of the module as it was generated by Endevor. They should be kept, not deleted.

The issue of “where” to keep generation listings, however, can be tricky. Using PDSs often results in what I refer to as “the rat in the snake”. A project at the lower levels will require a large amount of space (more than might normally be required) as it develops and tests its changes. Then, once it moves to QA, that space in test is released but now must be accounted for in QA! And then, once QA is satisfied, it must be moved into production, where a reorganization of files might be required in order to accommodate the arriving listings.

Personally, I’m an advocate of a mix of Elibs and CA-View. Elibs take care of themselves space-wise and can easily accommodate the “rat in the snake” scenario. The downside is that the information is encrypted and compressed, making it necessary to view/print the listing information in Endevor.

CA-View, however, makes a great “final resting” place for the listings. It is an appropriate use of your enterprise’s production report repository AND it can keep a “history” of listings; deltas, if you prefer. This can be very handy if someone needs to compare “before” and “after” listings!

One final note if you decide to use Elibs for your listings: do NOT place those Elibs under CA-L-Serv control! Due to the manner in which Endevor writes to listing Elibs, placing them under CA-L-Serv control will actually harm your performance rather than improve it!

5) Backups

I’m surprised how many sites are solely reliant on their volume backups.

Volume backups are a good thing to have and use in the event of the need to invoke a disaster recovery plan (DRP). But they very arguably are not enough when it comes to Endevor and the manner in which it is architected.

Endevor spans a variety of volumes and stores different “pieces” on different volumes often at different times. For instance, the package dataset may be on VOLA, the base libraries on VOLB, and the delta libraries on VOLC. A site may do a backup of those volumes over the space of an hour… but during that hour, an Endevor job ran 3 packages moving 15 elements with a variety of changes. Assuming the volumes are restored to the image taken, exactly what is the state of those Endevor libraries in terms of synchronization? Was VOLA restored to the last package execution? The first? What about the element base library? Is it in sync with the delta?

Fortunately, Endevor has a VALIDATE job that can be run to see if there is a problem. And I’m sure the vast majority of times, there will not be…..

But what if there is? What are you going to do if it turns out there is a verification problem and your libraries are out of sync?

For this reason I strongly advocate the use of regularly scheduled FULL and INCREMENTAL UNLOAD as a critical part of any site’s DRP. A FULL UNLOAD takes considerable time and should be used with discretion and planning, but INCREMENTAL UNLOADS tend to be relatively quick. I recommend doing both and consolidating them into files that are accessible during a DRP exercise.

During the DRP exercise, do the volume restores first. Then run the Endevor VALIDATE job. If the job returns and says things are fine, you’re done! But if not, you have the necessary files to do a RELOAD job and put Endevor back into the state it needs to be.

4) Security Overkill or Underkill

Unfortunately, the usage of the External Security Interface continues to be a mysterious black box to many sites. This is a pity, as there are a variety of ways to use the security capabilities to your advantage!

Read through the articles I have posted on “Security Optimization” and “The APE Principle”. And if I have time, I will try to write a future article on demystifying the ESI to help the layman understand exactly how the facility really works!

3) SMF Records

Another ability that is often overlooked at installations is the fact that Endevor can cut SMF records to record each and every action taking place at the site. It’s been my experience that these records are literally a gold mine of information for Endevor administrators and, frankly, should be demanded by any auditor worth their salt!

The reporting available from the SMF records is far superior to the “Element Activity” reports that are provided by Endevor itself. While the “Element Activity” reports are better than nothing, I would argue not a lot.

To illustrate, suppose an element in Endevor is promoted 125 times in a month. Those 125 actions will be recorded and reported as such in the Endevor SMF reports… but the “Element Activity” report would show only the last action against the element (MOVE), with a count of 1.

To illustrate further, suppose an element is DELETED from Endevor. The SMF reports will show who, when, and where the element was deleted. “Element Activity” is blind; the element is no longer in existence and thus simply drops from the report!

If one of the Endevor administrator’s objectives is to measure the “load” under which Endevor is operating, SMF records provide the detail to monitor how much is flowing through in a given time period.

SMF records truly provide the definitive log of what’s going on with the Endevor inventory.

2) DONTUSE Processor

I’d like to see CA properly address this issue with a change to Endevor, and I’ve submitted the idea to the community website, but to quote the idea as recorded on the website:

“As an Endevor user/developer/person-that-actually-has-to-use-Endevor-and-is-not-an-administrator, I want Endevor to KNOW what I am adding to it is a new element and requires me to select a processor group rather than ME knowing I need to put an “*” in the PROCESSOR GROUP (because I will NOT remember I need to do that and will let it default… and inevitably the default processor group is NOT the one I want making ME do MORE work) so that I can add my new elements intelligently and proactively rather than reactively.

“As an Endevor administrator, I want to define a default processor group that automatically triggers the “Select PROCESSOR GROUP” display if my user does not enter “*” or has not entered an override value, so that they can correctly choose the right processing without having to go back and forth because they have inevitably forgotten they need to choose something and the default is wrong for their particular element.”

In essence, what I advocate is that the Endevor administrator should not assume to know what the default processor group is when there is a choice to be made. Take the example of the COBOL program I used earlier in this article. If I were to assume every new program coming in as a COBOL type is to be a subroutine with DB2, then the day someone adds a program that does not use DB2 is the day “Endevor is broken and you, Mr/Mrs Endevor Administrator, are WRONG!”. And that will happen as surely as the sun rises in the morning!

A workaround is to have your default processor be declared along the lines of the DONTUSE processor I have documented in an earlier article. In essence, if someone puts in a new program and doesn’t specify the processor group, the default DONTUSE processor will send them a message with instructions on how to choose a processor group and fail the element. It’s clumsy and awkward, but works for now until CA provides a product enhancement.
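For illustration only, a minimal DONTUSE-style processor might look something like the sketch below. It assumes the standard z/OS utilities IEBGENER and IDCAMS; the processor name, message text, and return code are placeholders, and the processor documented in the earlier article may well differ in its details:

//GDONTUSE PROC
//*-------------------------------------------------------------------
//* TELL THE USER HOW TO PICK A REAL PROCESSOR GROUP...
//*-------------------------------------------------------------------
//MSG      EXEC PGM=IEBGENER,
//              MAXRC=0
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD *
 *** NO PROCESSOR GROUP WAS CHOSEN FOR THIS ELEMENT        ***
 *** RE-ADD/GENERATE WITH '*' IN THE PROCESSOR GROUP FIELD ***
 *** AND SELECT THE GROUP THAT FITS YOUR ELEMENT           ***
/*
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
//*-------------------------------------------------------------------
//* ...AND THEN FORCE A FAILING RETURN CODE SO THE GENERATE FAILS
//*-------------------------------------------------------------------
//FAIL     EXEC PGM=IDCAMS,
//              MAXRC=0
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  SET MAXCC = 12
/*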

1) Need for X-Process

It’s surprising how often I encounter sites that still have not built or captured ACM information because “we don’t want to generate and lose our production loads”.

What’s needed is a tool I used to call the XPROCESS. In essence, what the process does is cause Endevor to generate your element (make, build, compile, whatever) and thus create the ACM, throw out the output, and then copy the current production version to the stage the generate is in, refootprinting the output accordingly. A splash title page in the listing can clearly identify this is a conversion or clean-up listing only… and the problem is solved.

This is a valuable tool to have in the Endevor administrator’s arsenal. For your reference, modification, and usage, here is a copy of a simple example:

//********************************************************************
//* *
//* PROCESSOR NAME: GCOB02X *
//* PURPOSE: SPECIAL PURPOSE COBOL PROCESSOR TO REGENERATE COBOL *
//* ELEMENTS AND THEN CREATE 'POINT-IN-TIME' COPIES OF THE *
//* 'REAL' OBJECT MEMBER FROM THE CONVERTING SYSTEMS OBJECT *
//* LIBRARY. *
//* *
//********************************************************************
//GCOB02X PROC ADMNLIB='NDVLIB.ADMIN.STG6.LOADLIB',
// COMCOP1='NDVLIB.COMMON.STG1.COPYLIB',
:
:
:
// LIB1I=NO/WHATEVER,
// LIB1O=NO/WHATEVER,
// LIB2I=NO/WHATEVER,
// LIB2O=NO/WHATEVER,
// LIB3I=NO/WHATEVER,
// LIB3O=NO/WHATEVER,
// LIB4I=NO/WHATEVER,
// LIB4O=NO/WHATEVER,
:
:
:
//*
//********************************************************************
//* DELETE 'JUST CREATED' OBJECT! *
//********************************************************************
//DELOBJ EXEC PGM=CONDELE
// IF (&C1EN = DVLP)
// OR (&C1EN = DVL2)
// OR (&C1EN = ACPT)
// OR (&C1EN = PROD) THEN
//C1LIB DD DSN=NDVLIB.&C1SY..&C1ST..OBJLIB,
// DISP=SHR
// ELSE
//C1LIB DD DSN=NDVLIB.&C1EN..&C1ST..OBJLIB,
// DISP=SHR
// ENDIF
//*
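//********************************************************************
//* FOR EACH DEFINED LIBRARY PAIR (LIBnI/LIBnO), COPY THE CURRENT    *
//* PRODUCTION MEMBER TO A TEMPORARY DATASET, THEN COPY IT INTO THE  *
//* TARGET OUTPUT LIBRARY WITH A NEW ENDEVOR FOOTPRINT               *
//* (FOOTPRNT=CREATE). STEPS ARE BYPASSED WHEN THE CORRESPONDING     *
//* SYMBOLIC IS SET TO 'NO' (EXECIF TEST ON THE FIRST 2 CHARACTERS). *
//********************************************************************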
//COPY1A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB1I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP1,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB1I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY1B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB1I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB1O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP1,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB2I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP2,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB2I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB2I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB2O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP2,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB3I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP3,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB3I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB3I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB3O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP3,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4A EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//IN1 DD DSN=&LIB4I,
// DISP=SHR
//OUT1 DD DSN=&&TEMP4,
// DISP=(NEW,PASS),
// UNIT=&WRKUNIT,
// SPACE=(CYL,(10,10,10)),
// DCB=&LIB4I
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4B EXEC PGM=IEBCOPY,
// EXECIF=(&LIB4I(1,2),NE,NO),
// MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
// FREE=CLOSE
//OUT1 DD DSN=&LIB4O,
// DISP=SHR,
// FOOTPRNT=CREATE
//IN1 DD DSN=&&TEMP4,
// DISP=(OLD,PASS)
//SYSIN DD *
COPY INDD=IN1,OUTDD=OUT1
SELECT MEMBER=((&C1ELEMENT,,R))
/*
:
:
:

An Opinion about Endevor “core” Pieces

External Security Interface (ESI)

 

Originally, using the External Security Interface was optional, and in my opinion it was always folly not to take advantage of this software. Without going into a long lecture about security, suffice it to say that it is a key component of effective configuration management.

Security in configuration management has 2 components: physical security and functional security.

Endevor does not supply physical security. This is the security that specifies who can read/write the different high-level indexes at a site, and it is handled at every site by whatever proprietary security software they have (e.g. RACF, ACF2, TOP-SECRET).

Functional security is the component that determines, once in Endevor, who is allowed to do what to which systems. Your choices are to either set up Endevor Native Security tables or interface with your current on-site security software. It makes sense for most shops to continue leveraging their current on-site security software; it provides a single point of administration and continues to leverage the investment they have already made in security at their site. If you use the Endevor Native Security tables, you must remember to reflect any general changes in system security there as well as in your “standard” shop software. Also, this means a component of your site’s software security requirement is NOT being managed by your site’s security software. This can be a favourite target for security auditors to hit.

Extended Processors

This is the heart-and-soul of Endevor. Without Extended Processors, you can’t compile, generate, check, cross-reference, or do any of the other cool, neat stuff Endevor can do for you. In essence, without Extended Processors, Endevor becomes nothing more than a source repository; a toothless tiger; a fancy version of Panvalet.

Automated Configuration Manager (ACM)

If Extended Processors are the heart-and-soul, then ACM is the brains. ACM is the piece that allows you to automatically monitor the input and output components of elements as they are being processed by an Extended Processor. ACM, then, allows effective impact analysis and ensures the integrity of your applications. The information ACM captures is what package processing uses to verify that a developer is not missing pieces when they create a promotion package for production.