I have a simple question for Broadcom concerning Endevor…
Given that years ago Endevor was adopted across CA by the mainframe tools as the source repository for the code…. are THEY adopting GIT and all the other extensions to Endevor?
I don’t get it.
Call me a boomer, call me old, call me old-fashioned, call me a dinosaur. But at least respect the 40-plus years I've spent in the IT industry doing business application development, with almost 30 years responsible for different sites' Source and Configuration Life-cycle Management (SCLM), now more fashionably called Development Operations (DevOps).
I know a thing or two about DevOps, especially its evolution from z/OS platforms and “adoption” on distributed platforms.
I do not understand the enterprises that accept the perversion of the discipline of DevOps auditing requirements on distributed platforms. I understand why it’s happening, but I point the finger squarely at the current crop of IT auditors and their ongoing hesitancy to call a halt to what is (in my opinion) going to ultimately be an unmitigated disaster.
Years ago, when I started in IT, auditors did what was known as comprehensive Software Change Management Audits. This covered various aspects of DevOps designed to mitigate risks associated with changing software. It was the early days of Trojan Horses (hidden code that benefitted a 3rd party at the expense of the enterprise), lost source, and mismatched/untracked source-to-executable (to name a few of the dawning “issues”).
Many sites failed these audits, driving a need to address these risks effectively.
Most sites looked to software vendors to provide comprehensive tools to support features, functions, and abilities to address or at least reduce the chance of issues at their site.
More importantly, however, they looked to vendor solutions to provide security, support, stability, and “bullet-proof” software for which the vendor could be held accountable.
Yes, these vendor "safes" were expensive. But they more than did the job. In conjunction with changed processes, they adequately addressed the issues that were arising. Products such as Endevor answered the need of the auditors and of those on-site responsible for delivering vetted code (albeit accepting that no code is vetted if reviewers don't actually review!).
Then along came the distributed platforms….
At first, this new crop of platforms was expected to follow the same set of rules as the old platforms. Vendors responded accordingly; each vendor had a tool that would deliver most of the same features and functionality. Legent and then CA offered “Endevor Workstation”. Later, after the purchase of Platinum, CA offered “Harvest”.
Each vendor solution offered the support of a dedicated team committed to ensuring the viability of their product as well as that of the enterprise. Each had a vested interest in the other, and most audit requirements were met.
Then we hit “Agile Development”, the rise of the young recently-graduated developer, and, finally, “open source”.
Now, as an “old guy”, I would argue there is nothing new under the sun. In the “old days”, we called “Agile” by different names; JAD (joint application design), RAD (rapid application development)… to name just a couple. But as an application developer, I got it…. I understood the pushback to waterfall methodology and the seemingly endless paperwork and red-tape needed to get anything done.
That said, the need for iterative development does not supersede the needs of proper and secure controls, processes, and configuration management. If ever there was a need to have supported back office software, it is now. Most enterprises are not in the business of “rolling their own” when it comes to process support software and the need for an identified and accountable vendor to provide that back office software has (arguably) never been greater.
Second, the rise of the young recently-graduated developer….
Most enterprises are actively (or passively) “getting off the mainframe”. That’s been the story for almost 30 of my 40 years in the industry. So I’ll leave my thoughts on that topic for some other future blog.
The colleges and universities have done no one any service by undermining the z/OS platform in their offerings. Most graduates today have little knowledge of the platform that sent man to the moon but are very acquainted with the platform that gave us Mario Kart.
I get it. Distributed platforms are sexy. There's an immediate gratification in seeing something you've coded be completely under your control, doing whatever you want, on your system. You're the operator, systems programmer, developer, and user all rolled into one!
Third, and arguably most insidiously, “open source”….
Let’s first be clear on what “open source” means.
“1. Free Redistribution
“The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
“Rationale: By constraining the license to require free redistribution, we eliminate the temptation for licensors to throw away many long-term gains to make short-term gains. If we didn’t do this, there would be lots of pressure for cooperators to defect.
“2. Source Code
“The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed…..” https://opensource.org/osd-annotated
It seems to me that the combination of "free redistribution" and "source code" has become a siren's song to the colleges and universities that are creating the IT professionals of tomorrow. As centers-of-excellence, they have instilled a sense of "rightness" about using "free software" that "anyone can modify".
When I worked at CA, we used to say "you get what you pay for", with no one expecting any enterprise would possibly stake its business on "open source" software.
My, how things have changed!!
Fast forward to today, and the distributed-software DevOps environment is rife with "open source software" providing the back office applications (and processes) that drive many business applications today.
Think about that for a moment…..
Enterprises allow, or are allowing, the introduction of software that can be changed by anyone, anywhere in the world, in an unaccountable manner. Software that controls the software developers are creating to drive the business of the enterprise. Software for which no one ensures source-to-load integrity.
I don’t get it.
Where are the auditors? Why are they accepting such a dangerous environment to make its way into their businesses?
Where is the demand for proper and secure (and arguably proprietary) storage methods? Build methods? Component tracking mechanisms? Accountability for ALL components in the development cycle?
Sure, you can find free open source software to provide those functions…. But that's the point! They're open source! So how do you know (and I mean really know) there aren't embedded "ET call home" commands buried in there? You trust an open source community that much??
It’s like HIPAA or Sarbanes-Oxley never happened!
I don’t get it.
Yes, I recognize that vendor-provided software configuration management (SCM) tools are not necessarily as "flexible" or "easy to use" as the open source solutions the young developer learned at their local college/university.
But this isn’t academia we’re talking about; this is now the real world.
My own opinion is that any site that uses open source software for more than just a pass-through to vendor-supported software running as the engine is playing with fire. The bottom line is that you do not and cannot know the integrity of open source software, and you cannot hold an "open source community" accountable if someone sneaks through a malicious code element.
With a vendor, you have an accountability trail.
I don’t get it.
So today I asked myself a question: "Of the things that are contributed to the CA Endevor Community Ideation, what's the track record of CA delivering on the ideas from its customer base?"
Some interesting stats came out of that question and I thought I would share them.
The categories of ideas I looked at were “New”, “Wishlisted”, “Currently Planned”, “Delivered”, “Under Review”, and “Not Planned”.
The total number of ideas in all these categories was 286. Of those 286, 102 were authored by CA, not by customers (about 36%).
Breaking down the stats even further….
New = 42; CA authored = 6 (about 14%)
Wishlisted = 8; CA authored = 1 (about 13%)
Currently Planned = 12; CA authored = 10 (about 83%)
Delivered = 21; CA authored = 14 (about 67%)
Under Review = 138; CA authored = 48 (about 35%)
Not Planned = 65; CA authored = 23 (about 35%)
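For transparency, the "about" figures are simple ratios. A quick sketch (using the numbers exactly as reported above; the category labels are just strings, not anything official) reproduces them:

```python
# Reproduce the "CA authored" percentages from the Ideation figures above.
stats = {
    "New": (42, 6),
    "Wishlisted": (8, 1),
    "Currently Planned": (12, 10),
    "Delivered": (21, 14),
    "Under Review": (138, 48),
    "Not Planned": (65, 23),
}

total = sum(t for t, _ in stats.values())     # all ideas across categories
ca_total = sum(c for _, c in stats.values())  # ideas authored by CA

print(f"Total: {total}; CA authored: {ca_total} ({ca_total / total:.0%})")
for name, (t, ca) in stats.items():
    print(f"{name} = {t}; CA authored = {ca} ({ca / t:.0%})")
```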
Is it just me, or does CA seem to be a little too concentrated on itself for planning and delivery of ideas? I accept that CA sometimes writes ideas on behalf of its customers, but… I don’t know…. the “stats” look a little skewed….
Thoughts and feedback more than welcome! 🙂
One of the newer utilities introduced with CA Endevor SCM is "CONPARMX". With the introduction of this utility, CA provided an opportunity for a new approach to an old, "vexing" problem: a good way of introducing things like parameter lists for compiles and link-edits without the need for extensive symbolic overrides or "symbolic substring value substitutions" (aka ZCONCAT; aka if you don't know, don't worry about it!)
I have found that, when adhering to the principles of "Process Normalization", CONPARMX fits quite nicely into a structured framework that is easier to administer. So while I have written an entire article on "Process Normalization", a quick recap is worth consideration.
To summarize, normalization seeks to identify objects in terms of “what they are” versus “how they are managed”. In other words, TYPE definitions such as COBOL, PLI, ASM, and FORTRAN would be considered normalized. Each TYPE definition is exactly that; a definition of the TYPE of language to be processed.
TYPE definitions such as COBMAIN, COBDB2, or COBSUB would NOT be considered normalized. In these examples, the TYPE of language (implicitly COBOL but who really knows?) is mixed up with part of its processing or use (mainline, DB2, subroutine…. But are they really?).
In a non-normalized environment, one finds many definitions for the same language. In the example cited above, there are at least 3 definitions for COBOL! Yet, COBOL is COBOL is COBOL.
In a normalized environment, there is generally ONE definition for a language (COBOL) and Endevor PROCESSOR GROUPS differentiate the handling of specific elements defined to the language TYPE. In other words, if I have a COBOL program that uses DB2 and is a subroutine, I associate it with the processor group that is defined to handle DB2 and subroutines.
Clearly, this approach eases the definition of TYPES but results in the need to reflect different combinations of parameters in processor group definitions. This can be an onerous task, but one that is now simplified with CONPARMX.
In the manual, CA describes the CONPARMX utility as follows:
“The CONPARMX utility lets administrators reduce processor complexity and the number of processor groups. CONPARMX dynamically builds execution parameters for processor programs such as compilers or the linkage editor. The build process uses the symbols defined in the processor and one or more of the following groups of options, which may or may not be concatenated:
Processor group options
“Using this utility avoids the need to create and update large numbers of processors or processor groups with different input parameters.”
Figure 1 – CA Endevor SCM – 18.0
Put another way, many generate processes require parameters to be specified in order to achieve desired results. DB2, for instance, requires a pre-compile. Compilers themselves have many parameters, as does the linkage-editor.
The traditional way of providing unique combinations of parameters has been as indicated earlier; different processor groups invoking different combinations. A less traditional way has been to use symbolic substring substitutions. A “sneaky way” has been when developers discover they can override the Endevor administrator and provide compile parameters in-stream with their code (something every Endevor administrator should be aware of and provide a quick program to check for. If found, I would fail the generate!).
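To illustrate the kind of quick check I mean, here is a hypothetical sketch (not Endevor-supplied tooling; CBL and PROCESS are the real COBOL statements that set compiler options in-stream, but the function and sample data are invented for illustration):

```python
# Sketch: flag COBOL source that tries to set compiler options in-stream
# via CBL or PROCESS statements, so the administrator can fail the generate.
# (Illustrative only; a real check would run against the element source,
# e.g. as a step in the generate processor before the compile.)

def find_instream_parms(source_lines):
    """Return (line_number, text) for every CBL/PROCESS statement found."""
    hits = []
    for num, line in enumerate(source_lines, start=1):
        # COBOL statements occupy columns 8-72; column 7 is the indicator area.
        stmt = line[7:72].strip().upper()
        if stmt.startswith(("CBL ", "CBL,", "PROCESS ")) or stmt in ("CBL", "PROCESS"):
            hits.append((num, line.rstrip()))
    return hits

# Usage: any hit means the developer is overriding the compile parameters.
sample = [
    "       CBL APOST,RENT",
    "       IDENTIFICATION DIVISION.",
]
violations = find_instream_parms(sample)
if violations:
    print(f"Found {len(violations)} in-stream parameter statement(s); fail the generate!")
```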
To illustrate the traditional way, consider the piece of processor below:
//COB1 EXEC PGM=IGYCRCTL,
// MAXRC=4,
// PARM=('LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF',
// 'NUMPROC(MIG),&COMPPARM')
//*
Figure 2 – Compile Step in Processor
In this example, certain compile parameters are considered “technical standards” for the site and are thus “hard-coded” in the compile step (eg. LIB, OPT, RES, X, MAP, etc). Other values are “developer’s choice” and controlled by the value(s) specified in symbolic overrides defined for the processor group in &COMPPARM.
Now consider the following “grid” of processor groups:
Figure 3 – Grid of Processor Groups
Using this key, we can determine the “extra” compile parameters to be provided for each processor group. The highlighted options are the “technical standards” and are covered by the processor automatically by being hard-coded. Note that processor group “CB2AA” only uses the highlighted options and thus does not require any values to be placed in symbolic &COMPPARM.
However, processor group "CB2AB" requires the Endevor administrator to provide the value APOST and thus must override every occurrence of CB2AB at their installation with &COMPPARM='APOST'. Processor group "CB2AC" requires &COMPPARM='APOST,RENT'… and so forth through the entire table.
This process of providing, verifying, modifying, and setting up overrides can be labour-intensive, although it is typically a "one-time effort". Batch administration allows the values to be placed quickly.
Among the challenges, however, is the fact that mistakes can easily be made and inconsistency accidentally propagated if administrators model future processor groups on one that was defined incorrectly.
CONPARMX provides for the opportunity to more cleanly and clearly define multiple processor groups without the need to provide extensive or complex symbolic overrides. The syntax for the utility is as follows:
//stepname EXEC PGM=CONPARMX,
// MAXRC=n,
// PARM=(parm1,'(parm2)',parm3,parm4,parm5,'(parm6)','parm7','parm8')
//PARMSDEF DD DSN=library.PRGRP,
// MONITOR=COMPONENTS,
// ALLOC=PMAP
Figure 4 – CONPARMX Syntax
As documented by CA, each parameter (parm) serves a different purpose.
Walking through a conversion of a processor from the “traditional” code to the new code is the easier way to understand how CONPARMX works, so let’s start with our traditional processor compile step:
//COB1 EXEC PGM=IGYCRCTL,
// MAXRC=4,
// PARM=('LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF',
// 'NUMPROC(MIG),&COMPPARM')
//*
Figure 5 – Compile step in processor
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,…)
//*
Figure 6 – Isolate program to be executed
Generally speaking, PARM2 is meant to be the "first" parameters you want CONPARMX to use with the program defined in PARM1. Based on our example, those mandatory options would be "LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG)". We COULD define those into a symbolic and then code them into the processor…. And that's very tempting…..
But then we notice PARM3…. And decide against using PARM2 at all! So for now, accept that, after Step 2, the step in the processor now looks as follows:
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,,…)
//*
Figure 7 – Ignoring PARM2
PARM3 is a good place to specify the technical default options that we WERE going to define to PARM2. PARM3 names the first member that CONPARMX will search for in the library specified in the DDNAME PARMSDEF. If we make a member named something like "$$$$DFLT", then that member in the library (controlled by Endevor!) will ultimately contain the default values for all programs being invoked by CONPARMX.
As our first entry in member $$$$DFLT, the entry looks as follows:
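A sketch of that first entry, assuming the documented program='options' syntax for PARMSDEF members (verify the exact format against the CA Endevor manual for your release):

```
IGYCRCTL='LIB,OPT,RES,X,MAP,TRUNC(BIN),OFF,NUMPROC(MIG)'
```

With this one member in place, every CONPARMX step that names $$$$DFLT as PARM3 picks up the site's technical standards automatically.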
One of the benefits of this approach is that if the technical defaults for the site should change, the Endevor administrator need only reflect the change in ONE location that is tracked and automatically picked up by all processors throughout the installation.
After making the necessary change, our processor step now looks as follows:
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,,$$$$DFLT,…)
//PARMSDEF DD DSN=library.of.process.grps,
// MONITOR=COMPONENTS,
// ALLOC=PMAP
//*
Figure 8 – Specifying $$$$DFLT
Following the example of what was done for PARM3, we want to drive additional parameters based on the name of the processor group. So, we add the symbolic for processor group as PARM4.
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,…)
//PARMSDEF DD DSN=library.of.process.grps,
// MONITOR=COMPONENTS,
// ALLOC=PMAP
//*
Figure 9 – Adding Processor Group name
Now CONPARMX will look in the library specified for PARMSDEF to find the member name that matches the processor group name. Using the grid defined earlier and looking just at processor groups CB2AA, CB2AB, and CB2AC, we see that CB2AA requires no additional parameters, CB2AB requires APOST, and CB2AC requires APOST,RENT.
Since CB2AA has no additional parameters, we don’t need to do any action. CONPARMX will look for member CB2AA in the PARMSDEF library and, when not found, will simply ignore the parameter. This is precisely what we want it to do.
Member CB2AB, however, will have an entry as follows:
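Assuming the program='options' PARMSDEF member syntax (verify against the CA manual), and using the APOST value called for in the grid:

```
IGYCRCTL='APOST'
```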
Member CB2AC will have an entry as follows:
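Again assuming the program='options' PARMSDEF member syntax (verify against the CA manual), and using the APOST,RENT values called for in the grid:

```
IGYCRCTL='APOST,RENT'
```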
PARM5 allows for “element specific” parameters to be specified. Personally, I’m not an advocate of using element-specific parameters. My philosophy is that if it’s good enough for an element, it’s good enough for a processor group! But we may still want to provide for it with the knowledge that CONPARMX will ignore member names “not found”.
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,&C1ELEMENT…)
//PARMSDEF DD DSN=library.of.process.grps,
// MONITOR=COMPONENTS,
// ALLOC=PMAP
//*
Figure 10 – Providing for element-specific
PARM6, PARM7, and PARM8 are not covered by this article. They affect the order in which the parameters are invoked and “stopping points” in terms of what to do. If the reader would like more details or has a need for invoking parameters outside of what has been documented here, I recommend referring to the CA Endevor manual for more information.
The final product of our conversion will now look as follows:
//COB1 EXEC PGM=CONPARMX,
// MAXRC=4,
// PARM=(IGYCRCTL,,$$$$DFLT,&C1PRGRP,&C1ELEMENT,,'N','N')
//PARMSDEF DD DSN=library.of.process.grps,
// MONITOR=COMPONENTS,
// ALLOC=PMAP
//*
Figure 11 – Final State of converted compile step
Figure 12 – Type definition and library usage for PARMSDEF
Figure 13 – PARMSDEF Library Member Contents
Due to work commitments and “normal life” (whatever that means to an Endevor administrator!), “in-approval” has run its course in terms of off-the-shelf topics, tips, and techniques that I had documented over the years. From this point forward, “in-approval” will contain articles that occur to me over time as I actively work at helping sites attain better implementations.
In this final blog in the regularly-scheduled series, I want to reflect on the state of Endevor and life-cycle management as a discipline and as it seems to have evolved based on my experience over the past 30+ years.
One of the "hot" trends in application development is the rise of "Agile" methodologies. As a project management professional, I have found it interesting to see the "snobbish" attitude that strict adherents to the principles of agile display toward traditional waterfall methods. As an IT professional who cut his teeth doing application development for many years before engaging as an Endevor administrator/SCLM manager, I intimately understand the frustrations developers had with waterfall, and the theoretical "freedom" and ability to react to perceived end-user needs that is more inherent in agile.
There have been those, however, who criticize Endevor as being "more for waterfall than for agile". I would counter that that criticism reflects a poor understanding of both methodologies and is actually an argument for another look at "what" a site is doing with Endevor versus "how" they are doing it; a classic need to re-examine real objectives rather than perceived subjectives.
At the end of the day, neither method changes the reality of a developer's day-to-day activity: the need to rapidly code-compile-test-repeat. Neither method changes the reality of, once the developer has created something successful, moving to a state where code-compile-test-repeat begins again with other coders bringing forward their contributions. And then moving forward again until a sprint (or phase) is ready for release.
In the distributed world, this is often achieved by moving to different machines that represent different states. In the Endevor world, welcome to environments and stages.
Agile or Waterfall; they both still need life-cycle-management.
I think the difference is that, with Endevor, many of us have gotten too tied up in the naming and “strict” usage of environments and stages. Too often, we have gotten tied up with projects or testing managers in trying to “automate” the population of the CICS region or IMS region or DB2 catalog at a specific stage because “that’s the name of that stage”.
When an environment or stage in Endevor begins to deviate from a one-to-one relationship, especially for a specific system (today it wants to go here, tomorrow there, and the next day somewhere else), I would advocate that it's time to revisit the definition of what your SCLM process is trying to do and to approach the populating of operational libraries using something other than processors or processor groups.
Years ago, I began advocating having your stages represent states of being. In other words, if there is a one-to-one stage-to-testing-environment ratio, or even if the ratio is one-to-some, then let Endevor “automatically” populate those environments.
But if your testing, QA, or development centres are creating a vast and/or complex array of catalogs and regions that they want Endevor to populate, I believe it's time to step back and call a reality check. I advocate falling back to a position of representation: the stages in Endevor provide libraries that Endevor can ensure represent a "state of being". In other words, as Endevor administrator, you can assure people that elements at Stage X are created with the complete technical integrity that Endevor brings to bear. WHERE they want to test those elements then becomes the responsibility of testing, QA, or the development centre.
And at this point, I think Package Shipment has a FAR greater part to play than it has been given so far. In essence, Package Shipment (remote AND local) has much to deliver in replicating the distributed world's current model of installing development software for testing on different machines. Package Shipment has always been there to deliver the same functionality…. so let's use it!
An imaginative leverage of Package Shipment would allow the developer to ship/deliver/install the same elements to as many different regions as may be defined. All the Endevor administrator needs to do is define the libraries and define some post-install scripts. This keeps Endevor “cleaner” and easier to administrate while at the same time delivering the ability to provide technical excellence across the enterprise.
This approach also keeps the promise of being development-method agnostic; it doesn't matter whether you got to the Endevor stage using "agile" or "waterfall". What matters is that the element, created with integrity in Endevor, is made available to whatever targets the developer needs to exercise their test or QA cycle.
So with these final thoughts, "in-approval" has now been "executed", and a new "in-approval" package of comments and articles is on the horizon!
I want to thank you for reading these musings over the past months; it’s been my privilege to correspond with some of you and I consider it an honour to work with some of the finest minds in the SCLM discipline.
The term "best practices" is often bandied about as a "catch phrase" to indicate what some people hope is a panacea of methods that will solve all their problems. Other people use it as a replacement for saying "do it my way". Still others correctly use the term to identify commonly proven practices that have stood the test of both time and review.
It is important, then, to define what "best practices" means in the context of vendor-provided software. This is particularly true of a product like Endevor, and as such, best practices with any 3rd party software can typically be identified by the following characteristics:
1 – It makes use of the software's implicit designs and methods. All software comes with an implicit intention for which the vendor designed it and, arguably, a way the vendor intended it to be used. Any software can be bent to "technically" do other things (such as holding data in Endevor versus source code), but if you start doing things it was not really intended or designed to do, you can find yourself hitting a "wall". The very fact that there is a "wall" to hit is a clear sign that what you are doing is not a best practice.
In the case of Endevor, I am a firm believer in exploiting its innate capabilities and using fields/definitions in the tool for what they are labeled to be used for.
2 – It exploits the software's native ability without undue customizations. Some customization is inevitable, although arguably the term "customization" may be a tad overused and confused with "configuration". Telon provided various "custom code" invocations, CoolGEN provides for "external action blocks", and Endevor provides for processors, interfaces, and exits when used appropriately.
The danger point comes when the customization begins to try to replace functionality already being performed by the software OR when the customization is a reflection of "how" rather than "what". "How" problems can generally be addressed through alteration of process/procedure to match the implicit design already likely considered in the software. It is the rare software solution that hasn't already encountered most, if not all, of the situations likely to arise in the discipline or field it is designed to address. Ways and methods of resolving the core problem being encountered, then, need to be adapted accordingly, not by "changing the software". Again, by virtue of "changing the software", you cannot possibly be following "best practices", as the implication would be that every site that has installed the software has had to change it the same way.
Appropriate use of exits, processors and other interfaces, then, is where it enhances the basic functionality already being performed by the software. Adding additional data to a logged event, for instance, or picking up additional data from external files for processing are generally appropriate examples.
3 – It is marked by making things obvious rather than obscure. In other words, a best practice is never a euphemism for "black box". Everything from data modeling techniques that preach data normalization to experience with effective human interfaces (VCRs, ATMs, Windows) tells us that being obvious and putting control in the hands of the user wins greater acceptance than making things a "mystery".
Hiding a vendor’s software behind “front ends” is usually done to prevent the need for education. In other words, an approach has been taken whereby the implementers feel they know what the end-user wants and needs, so they will automate everything for them. Unfortunately, this leads to heavy customizations again as they try to anticipate every need of the end-user and force the back-end software to comply. It is rather like the old “give a man a fish/teach a man to fish” syndrome. Custom-built front-ends require constant care and attention as well as retro-fitting to ensure upward compatibility. Again, by virtue of this added labour, it cannot possibly be considered a “best practice”.
4 – It is supportable by the vendor's technical support organization. When help is required, the vendor has no choice but to support what they know. What they know, by extension, is the product as shipped from the vendor's site. Since a best practice, by definition, implies rapid resolution of problems and quick support, any practice or implementation that deviates from the vendor's implicit design cannot, by definition, be considered a "best practice".
In contrast, non "best practices" can typically be identified by the opposite characteristics.
Over the years, I have reviewed almost a hundred different installations and implementations of Endevor around the world. Some are examples of simple elegance, while others are testaments to good ideas taken too far.
My overall philosophy has always been and continues to be one of “simplicity”; I’d much rather see implementations of the elegantly simple than the convoluted complex. The only way to truly achieve simplicity is to use Endevor “as it’s intended”, not in ways that result in heavy customizations or extensive use of exits. I am a big believer in “teaching a person how to fish rather than giving a person fish”. I’d much rather any problem or issue I have with an Endevor installation or implementation be “CA’s” problem rather than mine!
So, recognizing this is one person's opinion, what are the top 10 pitfalls I see sites make in their implementations of Endevor? In no particular order:
In an earlier blog, I wrote an article about something I call “process normalization”. As I like to say, in my mind “a dog is a dog is a dog”. You don’t say furrydog, you don’t say browndog…. You say “there is a dog and it is brown and it is furry”. In other words, it is a DOG and its attributes are covering (fur) and colour (brown).
The same principle needs to apply to definitions within a good Endevor implementation. When I see definitions of TYPES such as COBSUB or COBDB2, I am encountering non-normalized implementations. Both TYPES are really just COBOL…. Their attributes are role (subroutine) and dbms (DB2).
Attributes are more correctly addressed in the definition of PROCESSOR GROUPS, not TYPE names. By calling COBOL what it is, I can then easily change an attribute by merely selecting a different PROCESSOR GROUP. For instance, if I have 2 types named COBSUB and COBDB2S (for DB2 subroutine)…. and the program defined in COBSUB is altered to now contain DB2 calls, it needs to be moved to a totally new TYPE definition. However, if the site were normalized, no change to the TYPE need take place (and thus no risk to the history of changes that have ever taken place with the element). Instead, one need merely associate a new processor group, one that includes the DB2 steps, to the element.
The same principle applies to various type definitions and is often either misunderstood or purposely ignored in the interest of “giving people fish”.
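To make the contrast concrete, here is a rough sketch in Endevor SCL; the element, system, subsystem, CCID, and processor-group names are all invented for illustration. In the normalized shop, a program that picks up DB2 calls stays in TYPE COBOL and is simply regenerated under a different processor group:

```
GENERATE ELEMENT 'PAYCALC'
    FROM ENVIRONMENT 'DEV' SYSTEM 'PAYROLL' SUBSYSTEM 'CORE'
         TYPE 'COBOL' STAGE NUMBER 1
    OPTIONS CCID 'CR12345'
            COMMENT 'NOW CALLS DB2 - SWITCH PROCESSOR GROUP'
            PROCESSOR GROUP 'CIDB2' .
```

No transfer, no new TYPE, and the element’s entire change history stays intact.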
While the preferred method today is VSAM-RLS, either VSAM-RLS or CA-L-Serv can be easily implemented to ease performance issues around Endevor VSAM libraries. It often surprises me how few sites are exploiting this easy and simple method of reducing their throughput times because they have not implemented either of these available and free solutions.
As someone whose career in IT grew up in the application area rather than the systems programming area, it often astounds me how frequently I encounter setups in Endevor that are so overtly application-area-hostile. Selecting forward-base-delta (FBD) and/or Elibs as your base libraries always tends to signal to me that the person who originally set up the installation likely never worked in applications!
If I don’t see a “Source Output Library” declared, I get really concerned. At that point, I’m already guessing the application area (whether the Endevor administrator is aware of it or not) is likely keeping an entire parallel universe of code available in their bottom-drawer for the work they really need to do… and likely really really dislike Endevor!
It was always my experience that the application area needs to have clear and unfettered access to the libraries that Endevor is maintaining. It serves no “application” purpose to scramble up the names or compress the source; they NEED that source to do scans, impact analysis, business scope change analysis… in other words, do their job. If the Endevor installation is not providing easy access and views of that source (and by easy, I also mean ability that is allowed OUTSIDE Endevor control), then the implementation cannot be considered a good one.
For this reason among many, I am a huge advocate of always defining every application source type within Endevor as Reverse-Base-Delta, unencrypted/uncompressed… and PDS or PDS/E as the base library. This implementation is the friendliest you can be to the application area while at the same time maintaining the integrity of the Endevor inventory.
While I accept that, currently, Package Shipment requires a Source Output Library, this need not be any kind of constraint. It’s unlikely most sites are shipping from every environment and every stage; arguably you need only define a Source Output Library at the locations from which you actually ship. Therefore, using RBD and PDS as your base library, you replace the need for a Source Output Library everywhere else, since the application area can now use the REAL base library for their REAL work…. with the exception of editing (unless you are using Quickedit). All their scans can now make use of whatever tools your site has available.
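As a rough illustration of the application-friendly definition I am advocating, the Type Definition might look something like the following. The field names are approximated from the Type Definition panel and the dataset names are invented; verify both against your own installation:

```
TYPE: COBOL
  FWD/REV/IMG DELTA  . . . . . R      (reverse base-delta)
  COMPRESS BASE/ENCRYPT NAME . N N    (readable, uncompressed members)
  BASE LIBRARY . . . . NDVLIB.&C1SY..&C1ST..SRCLIB   (a PDS or PDS/E)
  DELTA LIBRARY  . . . NDVLIB.&C1SY..&C1ST..DELTA
  SOURCE O/P LIBRARY .                (only where Package Ship needs it)
```

The point is simply that the base library is a plain, uncompressed PDS/E the application area can browse and scan with any tool they already own.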
PDS/Es have come a long way since Endevor first began using them and are rapidly becoming the de facto standard for PDS definition. However, if you are still using the original PDS format, I also tend to recommend looking into a product named “CA-PDSMAN”. It automates compression, thus relieving that as a maintenance issue, and actually provides a series of Endevor-compatible utilities that can be exploited by the Endevor administrator.
A universal truth is that “familiarity breeds contempt”. Depending on your definition of the word “contempt”, Endevor is no exception.
As Endevor administrators, it’s important to remember that we live and breathe the screens and processes around Endevor. Most of us know the panels and how things operate like the back of our hand.
However, the application area often is completely intimidated by the myriad of screens, options, choices, and executions that happen under the umbrella known as Endevor.
A simple solution to this issue can be the introduction of Quickedit at your site. Basically, you can move people from navigating a complex set of panels and processes to “one-stop shopping”. Many application areas that see demonstrations of Quickedit often completely change their opinion of the facility.
Part of the reason for this is the change of view that comes with the Quickedit option. Endevor “base” is oriented along “action-object” execution. In other words, you have to tell Endevor what “action” (MOVE, ADD, GENERATE, etc) you want to do before it shows you the element list the action will be executed against.
Quickedit is oriented along a more natural flow of “object-action”. With Quickedit, you are first displayed a list of the elements you asked for. Once the list is displayed, you can then choose the action you want to perform. This is much closer to the manner in which we generally operate when doing application development tasks.
It surprises me how often I encounter sites that are not keeping their generation listings… or are keeping them in very strange places!
When I find they’re not keeping them at all, I generally discover the attitude is “well, if we need it, we’ll just regenerate the program”. What this ignores is the fact that the newly generated program may very well have completely different offsets or addresses than the module whose problem forced the regeneration in the first place! Such a listing has every chance of being completely useless.
Generation listings are, for all intents and purposes, audit reports. They record the offsets, addresses, and link status of the module as it was generated by Endevor. They should NOT be deleted; they should be kept.
The issue of “where” to keep generation listings, however, can be tricky. Using PDSs often results in what I refer to as “the rat in the snake”. A project at the lower levels will require a large amount of space (more than normally might be required) as it is developing and testing its changes. Then, once it moves to QA, that space in test is released but must now be accounted for in QA! And then, once QA is satisfied, it must be moved into production, where a reorganization of files might be required in order to accommodate the arriving listings.
Personally, I’m an advocate of a mix of Elibs and CA-View. Elibs take care of themselves space-wise and can easily accommodate the “rat in the snake” scenario. The downside is that the information is encrypted and compressed, making it necessary to view/print the listing information in Endevor.
CA-View, however, makes a great “final resting” place for the listings. It is an appropriate use of your enterprise’s production report repository AND it can keep a “history” of listings; deltas, if you prefer. This can be very handy if someone needs to compare “before” and “after” listings!
One final note if you decide to use Elibs for your listings: do NOT place those Elibs under CA-L-Serv control! Due to the manner in which Endevor writes to listing Elibs, placing them under CA-L-Serv control will actually harm your performance rather than improve it!
I’m surprised how many sites are solely reliant on their volume backups.
Volume backups are a good thing to have and use in the event of the need to invoke a disaster recovery plan (DRP). But they very arguably are not enough when it comes to Endevor and the manner in which it is architected.
Endevor spans a variety of volumes and stores different “pieces” on different volumes often at different times. For instance, the package dataset may be on VOLA, the base libraries on VOLB, and the delta libraries on VOLC. A site may do a backup of those volumes over the space of an hour… but during that hour, an Endevor job ran 3 packages moving 15 elements with a variety of changes. Assuming the volumes are restored to the image taken, exactly what is the state of those Endevor libraries in terms of synchronization? Was VOLA restored to the last package execution? The first? What about the element base library? Is it in sync with the delta?
Fortunately, Endevor has a VALIDATE job that can be run to see if there is a problem. And I’m sure the vast majority of times, there will not be…..
But what if there is? What are you going to do if it turns out there is a verification problem and your libraries are out of sync?
For this reason I strongly advocate the use of regularly scheduled FULL and INCREMENTAL UNLOAD as a critical part of any site’s DRP. A FULL UNLOAD takes considerable time and should be used with discretion and planning, but INCREMENTAL UNLOADS tend to be relatively quick. I recommend doing both and consolidating them into files that are accessible during a DRP exercise.
During the DRP exercise, do the volume restores first. Then run the Endevor VALIDATE job. If the job returns and says things are fine, you’re done! But if not, you have the necessary files to do a RELOAD job and put Endevor back into the state it needs to be.
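A skeleton of the kind of UNLOAD job I mean is below. Treat the program name, DD names, and SCL keywords as approximations to be verified against your release’s Utilities documentation; the dataset names are invented:

```
//UNLOAD   EXEC PGM=NDVRC1,PARM='C1BM3000'
//STEPLIB  DD DISP=SHR,DSN=NDVLIB.ADMIN.AUTHLIB
//C1MSGS1  DD SYSOUT=*
//BSTIPT01 DD *
  UNLOAD ALL
         FROM ENVIRONMENT 'PROD' SYSTEM '*' SUBSYSTEM '*'
              TYPE '*' STAGE '*'
         TO DSNAME 'NDVLIB.UNLOAD.PROD.WEEKLY' .
/*
```

Schedule a FULL variant at a quiet interval (say, weekly) and INCREMENTALs daily, and make sure the output datasets are on volumes that actually get restored at the DRP site.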
Unfortunately, the usage of the External Security Interface continues to be a mysterious black box to many sites. This is sad as there are a variety of exploitations that can take place by using the security abilities to your advantage!
Read through the articles I have posted on “Security Optimization” and “The APE Principle”. And if I have time, I will try to write a future article on demystifying the ESI to help the layman understand exactly how the facility really works!
Another ability that is often overlooked at installations is the fact that Endevor can cut SMF records to record each and every action taking place at the site. It’s been my experience that these records are literally a gold mine of information for the Endevor administrator and, frankly, should be demanded by any auditor worth their salt!
The reporting available from the SMF records is far superior to the “Element Activity” reports that are provided by Endevor itself. While the “Element Activity” reports are better than nothing, I would argue not a lot.
To illustrate: an element in Endevor is promoted 125 times in a month. Those 125 actions will be recorded and reported as such by the Endevor SMF reports… but the “Element Activity” report would show only the element’s last action (MOVE), with a count of 1.
To illustrate further, an element is DELETED from Endevor. The SMF reports will show who, when, and where the element was deleted. “Element Activity” is blind; the element is no longer in existence and thus simply drops from the report!
If one of the Endevor administrator’s objectives is to measure the “load” under which Endevor is operating, SMF records provide the detail to monitor how much is flowing through in a given time period.
SMF records truly provide the definitive log of what’s going on with the Endevor inventory.
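If memory serves, turning these records on is essentially a one-line matter in the Endevor defaults table. Roughly (the record number 230 is a common site choice, not a requirement, and the parameter spelling should be verified against your release’s documentation):

```
         C1DEFLTS TYPE=MAIN,                                           X
               SMFREC#=230,       RECORD ALL ENDEVOR ACTIONS TO SMF    X
               ...                (REMAINING MAIN PARAMETERS)
```

Coordinate the record number with your systems programmers so it lands in the user range your site has reserved, and confirm SMFPRMxx is actually collecting that record type.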
I’d like to see CA properly address this issue with a change to Endevor, and I’ve submitted the idea to the community website. To quote the idea as recorded there:
“As an Endevor user/developer/person-that-actually-has-to-use-Endevor-and-is-not-an-administrator, I want Endevor to KNOW what I am adding to it is a new element and requires me to select a processor group rather than ME knowing I need to put an “*” in the PROCESSOR GROUP (because I will NOT remember I need to do that and will let it default… and inevitably the default processor group is NOT the one I want making ME do MORE work) so that I can add my new elements intelligently and proactively rather than reactively.
“As an Endevor administrator, I want to define a default processor group that automatically triggers the “Select PROCESSOR GROUP” display if my user does not enter “*” or has not entered an override value so that they can correctly choose the right processing without having to go back and forth because they have inevitably forgotten they need to choose something and the default is wrong for their particular element.”
In essence, what I advocate is that the Endevor administrator should not assume to know what the default processor group is when there is a choice to be made. Take the example of the COBOL program I used earlier in this article. If I were to assume every new program coming in as a COBOL type is to be a subroutine with DB2, then the day someone adds a program that does not use DB2 is the day “Endevor is broken and you, Mr/Mrs Endevor Administrator, are WRONG!”. And that will happen as surely as the sun rises in the morning!
A workaround is to declare your default processor along the lines of the DONTUSE processor I have documented in an earlier article. In essence, if someone adds a new program and doesn’t specify a processor group, the default DONTUSE processor will send them a message with instructions on how to choose a processor group and fail the element. It’s clumsy and awkward, but it works for now until CA provides a product enhancement.
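A minimal sketch of such a processor follows. The step names and message text are mine; the pattern is simply “print instructions, then force a failing return code”, here via IDCAMS SET MAXCC:

```
//DONTUSE  PROC
//*------------------------------------------------------------------*
//* DEFAULT PROCESSOR: TELLS THE USER HOW TO PICK A REAL PROCESSOR   *
//* GROUP, THEN FAILS THE ACTION WITH RC=12.                         *
//*------------------------------------------------------------------*
//MSG      EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD SYSOUT=*
//SYSUT1   DD *
YOU DID NOT SELECT A PROCESSOR GROUP FOR THIS ELEMENT.
PLEASE RE-EXECUTE THE ACTION WITH '*' IN THE PROCESSOR GROUP
FIELD AND CHOOSE THE CORRECT GROUP FROM THE DISPLAYED LIST.
/*
//FAIL     EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  SET MAXCC = 12
/*
```

Because the action fails, nothing is generated under the wrong options; the user is forced to make a conscious choice.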
It’s surprising how often I encounter sites that still have not built or captured ACM information because “we don’t want to generate and lose our production loads”.
What’s needed is a tool I used to call the XPROCESS. In essence, what the process does is cause Endevor to generate your element (make, build, compile, whatever) and thus create the ACM, throw out the output, and then copy the current production version to the stage the generate is in, refootprinting the output accordingly. A splash title page in the listing can clearly identify this is a conversion or clean-up listing only… and the problem is solved.
This is a valuable tool to have in the Endevor administrator’s arsenal. For your reference, modification, and usage, here is a copy of a simple example:
//********************************************************************
//*                                                                  *
//* PROCESSOR NAME: GCOB02X                                          *
//* PURPOSE: SPECIAL PURPOSE COBOL PROCESSOR TO REGENERATE COBOL     *
//*          ELEMENTS AND THEN CREATE 'POINT-IN-TIME' COPIES OF THE  *
//*          'REAL' OBJECT MEMBER FROM THE CONVERTING SYSTEMS OBJECT *
//*          LIBRARY.                                                *
//*                                                                  *
//********************************************************************
//GCOB02X  PROC ADMNLIB='NDVLIB.ADMIN.STG6.LOADLIB',
//         COMCOP1='NDVLIB.COMMON.STG1.COPYLIB',
 :         :         :
//         LIB1I=NO/WHATEVER,
//         LIB1O=NO/WHATEVER,
//         LIB2I=NO/WHATEVER,
//         LIB2O=NO/WHATEVER,
//         LIB3I=NO/WHATEVER,
//         LIB3O=NO/WHATEVER,
//         LIB4I=NO/WHATEVER,
//         LIB4O=NO/WHATEVER,
 :         :         :
//*
//********************************************************************
//* DELETE 'JUST CREATED' OBJECT!                                    *
//********************************************************************
//DELOBJ   EXEC PGM=CONDELE
//         IF (&C1EN = DVLP)
//         OR (&C1EN = DVL2)
//         OR (&C1EN = ACPT)
//         OR (&C1EN = PROD) THEN
//C1LIB    DD DSN=NDVLIB.&C1SY..&C1ST..OBJLIB,
//         DISP=SHR
//         ELSE
//C1LIB    DD DSN=NDVLIB.&C1EN..&C1ST..OBJLIB,
//         DISP=SHR
//         ENDIF
//*
//COPY1A   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB1I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//IN1      DD DSN=&LIB1I,
//         DISP=SHR
//OUT1     DD DSN=&&TEMP1,
//         DISP=(NEW,PASS),
//         UNIT=&WRKUNIT,
//         SPACE=(CYL,(10,10,10)),
//         DCB=&LIB1I
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY1B   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB1I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//OUT1     DD DSN=&LIB1O,
//         DISP=SHR,
//         FOOTPRNT=CREATE
//IN1      DD DSN=&&TEMP1,
//         DISP=(OLD,PASS)
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2A   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB2I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//IN1      DD DSN=&LIB2I,
//         DISP=SHR
//OUT1     DD DSN=&&TEMP2,
//         DISP=(NEW,PASS),
//         UNIT=&WRKUNIT,
//         SPACE=(CYL,(10,10,10)),
//         DCB=&LIB2I
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY2B   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB2I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//OUT1     DD DSN=&LIB2O,
//         DISP=SHR,
//         FOOTPRNT=CREATE
//IN1      DD DSN=&&TEMP2,
//         DISP=(OLD,PASS)
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3A   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB3I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//IN1      DD DSN=&LIB3I,
//         DISP=SHR
//OUT1     DD DSN=&&TEMP3,
//         DISP=(NEW,PASS),
//         UNIT=&WRKUNIT,
//         SPACE=(CYL,(10,10,10)),
//         DCB=&LIB3I
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY3B   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB3I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//OUT1     DD DSN=&LIB3O,
//         DISP=SHR,
//         FOOTPRNT=CREATE
//IN1      DD DSN=&&TEMP3,
//         DISP=(OLD,PASS)
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4A   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB4I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//IN1      DD DSN=&LIB4I,
//         DISP=SHR
//OUT1     DD DSN=&&TEMP4,
//         DISP=(NEW,PASS),
//         UNIT=&WRKUNIT,
//         SPACE=(CYL,(10,10,10)),
//         DCB=&LIB4I
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
//*
//COPY4B   EXEC PGM=IEBCOPY,
//         EXECIF=(&LIB4I(1,2),NE,NO),
//         MAXRC=04
//SYSPRINT DD SYSOUT=&SYSOUT,
//         FREE=CLOSE
//OUT1     DD DSN=&LIB4O,
//         DISP=SHR,
//         FOOTPRNT=CREATE
//IN1      DD DSN=&&TEMP4,
//         DISP=(OLD,PASS)
//SYSIN    DD *
  COPY INDD=IN1,OUTDD=OUT1
  SELECT MEMBER=((&C1ELEMENT,,R))
/*
 :         :         :
External Security Interface (ESI)
Originally, using the External Security Interface was an option and in my opinion, it was always folly to not take advantage of this software. Without going into a long lecture about security, suffice it to say that it is a key component of effective configuration management.
Security and Configuration Management have 2 components: physical security and functional security.
Endevor does not supply physical security. This is the security that specifies who can read/write the different high-level indexes at a site, and it is handled at every site by whatever proprietary security software they have (e.g. RACF, ACF2, Top Secret).
Functional security is the component that determines, once in Endevor, who is allowed to do what to which systems. Your choices are to either set up Endevor Native Security tables or interface with your current on-site security software. It makes sense to most shops to continue leveraging their current on-site security software; it provides a single point of administration and continues to leverage the investment they have already made in security at their site. If you use the Endevor Native Security tables, you must remember to reflect any general changes in system security there as well as in your “standard” shop software. Also, this means a component of your site’s software security requirement is NOT being managed by your site’s security software. This can be a favourite target for security auditors to hit.
Extended Processors
This is the heart-and-soul of Endevor. Without Extended Processors, you can’t compile, generate, check, cross-reference, or do any of the other neat things Endevor can do for you. In essence, without Extended Processors, Endevor becomes nothing more than a source repository; a toothless tiger; a fancy version of Panvalet.
Automated Configuration Manager (ACM)
If Extended Processors are the heart-and-soul, then ACM is the brains. ACM is the piece that allows you to automatically monitor the input and output components of elements as they are being processed by an Extended Processor. ACM, then, allows effective impact analysis and ensures the integrity of your applications. The information ACM captures is what package processing uses to verify that a developer is not missing pieces when they create a promotion package for production.
The following case study is an investigation I conducted on the manner in which COMMENTS are reflected in the MCF of Endevor. It serves to illustrate that there’s much more to Endevor than meets the eye!
The customer is making use of EXIT02 in order to cause special processing to take place when an element is being promoted or otherwise worked on and the COMMENT contains the word “EMERGENCY”.
When there is no source change to the element, the customer has determined that the COMMENT field does not contain the comment they had entered into Endevor. Instead, the previous comment (or “level” comment) is the only one seen by the exit program. This is resulting in the customer having to make a “dummy” change to the source for the sole purpose of having the “EMERGENCY” word be contained in the COMMENT field for the exit.
One of the first things to understand about Endevor is that there is MORE than just one comment associated to an element. In fact, there are as many as 5, depending on the reason for the comment and what it is being associated with. Consider the following COPYBOOK named JRDCOPY4. This copybook is being created for the very first time and has never existed in Endevor before. Endevor is creating v1.0 of the copybook. The screen that adds the element to Endevor might look like the following:
Note that the comment I have recorded says “V1.0 BASE COMMENT”. After a successful execution of the action, the Endevor Master Control File (MCF) for the element contains the following:
Note the highlighting done in the Element Master displays; Endevor has replicated the comment across 5 different areas. These are 5 distinct and unique areas within Endevor that contain comments and are not the same field. Each field displays what it contains at different times in Endevor. In this instance, because we have only done one (1) thing, there is only one comment to display.
The next event I do is to MOVE the element to the next stage. I would then build the screen as follows:
When Endevor does the move, the MCF for the element now contains the following:
Note that the comment that changed is NOT the comment that was associated to the element when I created it; rather, the comment is associated with a unique comment field in the MCF that contains comments associated to the last action done.
The next event that may occur is to work on this element. To do so, I would execute a RETRIEVE (or a QuickEdit session). The retrieve I execute may look as follows:
The MCF for the element would now contain the following information:
For the RETRIEVE action, there is a specific comment field area in the MCF that contains the information and it has been updated with the RETRIEVE COMMENT accordingly.
I will now make a few changes to the element and add it back into Endevor with the following screen:
The MCF associated to the element now contains the following in THIS stage (note that the MCF information in the next (target) stage still contains the original comments as indicated in figures 8 and 9).
Note that these are the comments associated to the element at this location where the changes have been made. The RETRIEVE comment is blank because this is NOT where I did my RETRIEVE! This is Stage “T” and, if you will review figures 7, 8 and 9, you will see that the RETRIEVE that I did was at Stage “Q”.
The next event I want to do is to MOVE the element to Stage “Q”. My MOVE screen would look as follows:
The changes that took place to the MCF comment fields are in the following screens:
Several things are important to note at this stage.
In order to ensure all the comment fields show their purpose, I will now cause a specific GENERATE action to take place against the element in stage “Q” to see which comments change. I would expect the comment I make to be reflected in the “LAST ACTION” comment and the “GENERATE” comment. The screen I use looks as follows:
The results in Endevor now are exactly as I had hoped:
To re-iterate, the comment associated to a change is the “CURRENT SOURCE” comment. The comment associated to activity or actions taking place in Endevor is the “LAST ELEMENT ACTION” comment.
In the customer’s scenario, they have an element for which no changes to the element are detected. To recreate the scenario, I begin by retrieving the element again.
The results in the MCF are as follows:
As I would expect, only the RETRIEVE comment has been changed.
Now I will add the element back into Endevor with NO CHANGES. This exactly replicates the condition at the customer where they are adding elements in with the EMERGENCY comment. In my case, I won’t use “emergency” but a comment that continues to identify what I am doing as follows:
Note the message in the top-right corner “NO CHANGES DETECTED”. If I query the MCF, the following information shows where the comment was contained.
This is the exact result I would hope Endevor would contain as the comments are in the correct place and Endevor is ensuring the wrong comments are not associated to the wrong things.
The next thing I want to do is MOVE the element with no changes back to the next stage. I would use a screen as follows:
Note again the message in the top-right corner that shows no changes were detected. If I query the MCF, the comment fields that have been affected are shown as follows:
These results are exactly what I would expect. Each comment is contained in its appropriate area. Endevor is maintaining the integrity of the right comment to the right action.
Since we have established that Endevor is maintaining comments for the right things in the right places, the next thing to investigate is what is available to each of the exits during processing. In the case of the customer having this problem, the exit being invoked is EXIT02.
EXIT02 is invoked by Endevor before executing any actions. In other words, in Endevor, this exit is passed information before Endevor has actually done anything. All source is still where it is and no movement (for example) has begun.
During investigation of the issue, Technical Support asked the customer to provide an EXIT trace so that the information could be confirmed. The following is an extract of that trace that was provided:
Based on an understanding of how, when, and where Endevor stores comments, this trace makes complete sense. The source comments (as reflected in the ELM* fields) do not change because the source has not changed. This is correct.
The REQCOMM comment, which reflects the comment associated to the action being done, correctly shows the comment associated to the action that is being requested.
The solution to the problem the customer is having is actually very simple although does require a change to their exit program.
The problem is that the exit program is looking at the wrong comment field for the wrong reason. The comment field being looked at by the program is likely the “CURRENT SOURCE” comment.
The comment field the program SHOULD be looking at is for activity that is taking place against the element. This field will always contain the comment to trigger the event such as EMERGENCY that the client is looking for since it always contains the comment regardless of whether there are source changes or not.
Simply put, the program must be modified to look at field REQCOMM (if written in Assembler) or REQ-COMMENT (if written in COBOL) and not at any of the ELM* fields for the “EMERGENCY” word. This is the only change required by the customer to ensure their solution keeps working as designed.
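For what it’s worth, the corrected test in a COBOL exit might look like the fragment below. REQ-COMMENT is the field discussed above; WS-HIT and EMERGENCY-PROCESSING are invented names for illustration:

```
      *    SCAN THE ACTION COMMENT, NOT THE SOURCE-LEVEL COMMENT,
      *    FOR THE TRIGGER WORD.
           MOVE ZERO TO WS-HIT
           INSPECT REQ-COMMENT
               TALLYING WS-HIT FOR ALL 'EMERGENCY'
           IF WS-HIT > ZERO
               PERFORM EMERGENCY-PROCESSING
           END-IF
```

Because REQ-COMMENT always carries the comment keyed with the action, the check fires whether or not the source itself changed.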
No change is required in Endevor.
Some time ago, I polled the Endevor community to discover who might be using Endevor to manage and control the changes that the systems programming area does.
This document contains the original question and responses (without editing aside from removal of name and company). I thought you might find the content interesting and thought provoking….!
“Who might have their systems programming people also under Endevor control? Also, what components of Systems Programming do they have under control – i.e. all aspects, just parmlibs, just “their” programs, etc. I am in the process of designing and putting together a presentation on putting virtually all aspects of systems programming under Endevor control and I am curious as to the “state of the nation” today. “
“Besides, don’t you know that the SMP installer does its own configuration management? (At least that’s the excuse the systems programmers give me.)
I have tried to get some of the Endevor install into Endevor as a foot in the door, but have failed. If nothing else after the install creates the system libraries I would like Endevor to do the copies from LPAR library to LPAR library so when I need one thing changed they don’t copy the whole library and along with it those LPAR specific modules that then break the ‘TO’ instance of Endevor. I will try again when (and if) 7.0 SP1 ever goes GA. We have just outsourced most of our systems programming so who knows. Any ammunition I can get would be a great help.”
“So, that’s a big no for Endevor control for systems as long as I’m at this site. Of course, we are breaking one of the number one rules of Endevor (never let the programming staff administer Endevor), so we may just be the exception. Good luck with the presentation.”
“In addition, we have a couple of pieces of software managed by Endevor as well.
For example, we use Endevor to manage the Endevor software. A new release gets installed in a development environment. Then, we load all modules associated with the Endevor software into a Product Environment and use Endevor to migrate the new release through the testing state and on to production. This same philosophy is used whenever a PTF is applied to Endevor. We apply the PTF in development, migrate any changed load modules, source etc. through Endevor into our test states, test the PTF, then move it on to Production. This also helps us to track any changes we have made to panels, the defaults table etc.
The majority of the software installed by us is not managed by Endevor but we have been trying to recommend it as the route to go. We just put QuickStart under Endevor’s control last month.”
“All the JCL used to run our scheduled Production jobs is in Endevor. I had our procs and parms in at one time, but our database group that is ‘in charge’ of those balked, so I had to take them out, although the boss over all of us had wanted EVERYTHING in Endevor. I had intended on doing exactly that, including C-lists, database segment definitions, and PSBs. Alas, they are not (yet).”
“At xxxxxx, the z/OS team leader wanted all items under Endevor control. We had entries for just about all aspects (including SYS2 and SYS3 libraries – all CICS regions’ JCL, control cards etc.) except SYS1 libraries. We were working towards converting all components of both in-house and purchased software tools (i.e. programs, JCL, control cards etc.) to Endevor. Unfortunately, the bank was bought by xxxxx before we were able to complete that transition. 😦 Keep in mind that the Endevor administrators (myself included) were systems programmers and reported directly to the z/OS team leader who also served as our backup – in the event we were unavailable. My manager’s exposure and high level of comfort with the product played a major role in driving the team to get systems components under Endevor control. Everyone had to learn how to use the tool – no excuses.
My position at a subsequent job as Endevor administrator was in the operations area of an insurance company. They had/have as “little as possible” under Endevor control, and if the Systems people had their way, they would take it all out of Endevor and perform their mundane, space-hogging, risk-laden process of back up member A, rename previous backup of member A, rename member A, copy in member B, etc. etc. etc…. It is next to impossible to go back more than one level of change or to determine the exact nature of a change. The approval process is tied in with the change-record tool, but there is no foolproof means to reconcile the items that were actually changed with the items referenced in the change record. Most of the systems programmers have no desire to learn how to use the product and they are not obligated to do so – unless the element currently exists in an Endevor library. There didn’t seem to be any rhyme or reason as to what was put under Endevor. I think in total there were a couple of applications – programs, JCL etc., and a few unrelated jobs and control cards. My guess is that there was a programmer who was comfortable with the product (he had administrator authority), so he set up his applications and then just left them there.”
“Our ‘competitor’, xxxxxxxx, purportedly does not require them to change the way they work. You define the libraries/datasets to be monitored and audited and it just sits there tracking activity. Then, when you want to report on access and change, you run the report and ‘hey presto’. Also, if you wish to roll back to an earlier version/backup, it provides this capability. The real clincher selling point (it seems) is that it was written by a Systems Programmer for Systems Programmers (this has been mentioned to me a couple of times).
Anyway – I’ve told them that I’m not going to give up – that I’m going to get the Product Manager to evangelise why they should use the incumbent product and save spending $’s (well – at least on a competitor’s product). “