Commenting on COMMENTS

The following case study is an investigation I conducted into how COMMENTS are recorded in the Endevor Master Control File (MCF). It serves to illustrate that there is much more to Endevor than meets the eye!

Problem:

The customer uses EXIT02 to trigger special processing when an element is being promoted or otherwise acted upon and the COMMENT contains the word “EMERGENCY”.

The customer has determined that, when there is no source change to the element, the COMMENT field seen by the exit does not contain the comment they entered into Endevor; instead, the exit program sees only the previous (or “level”) comment. As a result, the customer is forced to make a “dummy” change to the source for the sole purpose of getting the word “EMERGENCY” into the COMMENT field passed to the exit.

Investigation:

Endevor Behaviour

One of the first things to understand about Endevor is that there is MORE than just one comment associated with an element. In fact, there are as many as 5, depending on the reason for the comment and what it is associated with. Consider the following COPYBOOK named JRDCOPY4. This copybook is being created for the very first time and has never existed in Endevor before; Endevor is creating V1.0 of the copybook. The screen that adds the element to Endevor might look like the following:

[Figure 1: comment01]

Note that the comment I have recorded says “V1.0 BASE COMMENT”. After a successful execution of the action, the MCF record for the element contains the following:

[Figure 2: comment02]

[Figure 3: comment03]

Note the highlighting in the Element Master displays: Endevor has replicated the comment across 5 different areas. These are 5 distinct comment fields within Endevor, not one field shown several times, and each is updated at different points in processing. In this instance, because only one action has been performed, every field shows the same comment.

The next action I perform is a MOVE of the element to the next stage. I would build the screen as follows:

[Figure 4: comment04]

When Endevor does the move, the MCF for the element now contains the following:

[Figure 5: comment05]

[Figure 6: comment06]

Note that the comment that changed is NOT the comment that was associated with the element when I created it; instead, the MOVE comment was recorded in a separate field in the MCF that holds the comment from the last action performed.

The next event is to work on this element. To do so, I execute a RETRIEVE (or a QuickEdit session). The RETRIEVE I execute might look as follows:

[Figure 7: comment07]

The MCF for the element would now contain the following information:

[Figure 8: comment08]

[Figure 9: comment09]

The MCF has a specific comment field for the RETRIEVE action, and it has been updated with the RETRIEVE comment accordingly.

I will now make a few changes to the element and add it back into Endevor with the following screen:

[Figure 10: comment10]

The MCF record for the element now contains the following in THIS stage (note that the MCF information in the next (target) stage still contains the original comments, as shown in figures 8 and 9).

[Figure 11: comment11]

[Figure 12: comment12]

Note that these are the comments associated with the element at the location where the changes were made. The RETRIEVE comment is blank because this is NOT where I did my RETRIEVE! This is Stage “T”; if you review figures 7, 8 and 9, you will see that the RETRIEVE I did was at Stage “Q”.

The next action is to MOVE the element to Stage “Q”. My MOVE screen would look as follows:

[Figure 13: comment13]

The resulting changes to the MCF comment fields are shown in the following screens:

[Figure 14: comment14]

[Figure 15: comment15]

Several things are important to note at this stage.

  • The BASE comments never change. They always reflect the original comment recorded when the element was first created in Endevor.
  • The RETRIEVE comment has now been dropped from the Stage “Q” MCF. This is because we have moved back to the place where I did my RETRIEVE.
  • The CURRENT SOURCE comment reflects the comment associated with the change. This is the field that is updated when a change is detected in the actual source lines of the program.
  • The LAST ELEMENT ACTION comment reflects the comment associated with the last action executed, in this situation “MOVE”.
  • The GENERATE comment is the same as the CURRENT SOURCE comment because I have not done a separate GENERATE aside from the one performed automatically when you ADD/UPDATE an element.

To ensure all the comment fields show their purpose, I will now execute an explicit GENERATE action against the element in Stage “Q” to see which comments change. I would expect my comment to be reflected in the “LAST ACTION” comment and the “GENERATE” comment. The screen I use looks as follows:

[Figure 16: comment16]

The results in Endevor now are exactly as I had hoped:

[Figure 17: comment17]

[Figure 18: comment18]

To reiterate: the comment associated with a source change is the “CURRENT SOURCE” comment, while the comment associated with activity or actions taking place in Endevor is the “LAST ELEMENT ACTION” comment.

In the customer’s scenario, they have an element for which no source changes are detected. To recreate the scenario, I begin by retrieving the element again.

[Figure 19: comment19]

The results in the MCF are as follows:

[Figure 20: comment20]

[Figure 21: comment21]

As I would expect, only the RETRIEVE comment has been changed.

Now I will add the element back into Endevor with NO CHANGES. This exactly replicates the customer’s condition, where they add elements with the EMERGENCY comment. In my case, I won’t use “EMERGENCY” but rather a comment that continues to identify what I am doing:

[Figure 22: comment22]

[Figure 23: comment23]

Note the message in the top-right corner: “NO CHANGES DETECTED”. If I query the MCF, the following displays show where the comment was recorded.

[Figure 24: comment24]

[Figure 25: comment25]

This is exactly the result I would hope for: the comments are in the correct places, and Endevor is ensuring that comments are not associated with the wrong things.

  • The BASE comment remains as originally coded.
  • The LAST ELEMENT ACTION and GENERATE comments record that the action was executed, along with the comment associated with that action.
  • The CURRENT SOURCE comment has not changed, and should not change, because the source did not change. By design, this field only changes when the source itself changes.

The next thing I want to do is MOVE the element, with no changes, to the next stage. I would use a screen as follows:

[Figure 26: comment26]

[Figure 27: comment27]

Note again the message in the top-right corner that shows no changes were detected. If I query the MCF, the comment fields that have been affected are shown as follows:

[Figure 28: comment28]

[Figure 29: comment29]

These results are exactly what I would expect. Each comment is contained in its appropriate area; Endevor is maintaining the integrity of the right comment for the right action.

Exit Behaviour

Since we have established that Endevor is maintaining comments for the right things in the right places, the next thing to investigate is what is available to each of the exits during processing. In the case of the customer having this problem, the exit being invoked is EXIT02.

EXIT02 is invoked by Endevor before an action is executed. In other words, the exit is passed its information before Endevor has actually done anything: all source is still where it was, and no movement (for example) has begun.

During investigation of the issue, Technical Support asked the customer to provide an EXIT trace so that the information could be confirmed. The following is an extract of that trace that was provided:

[Figure 30: comment30]

Based on an understanding of how, when and where Endevor stores comments, this trace makes complete sense. The source comments (as reflected in the ELM* fields) do not change because the source has not changed. This is correct.

The REQCOMM field, which holds the comment associated with the action being requested, correctly shows that comment.

Solution:

The solution to the customer’s problem is actually very simple, although it does require a change to their exit program.

The problem is that the exit program is looking at the wrong comment field. The field being examined by the program is most likely the “CURRENT SOURCE” comment.

The field the program SHOULD be examining is the one that records activity taking place against the element. That field always contains the comment entered for the action, regardless of whether there are source changes, so a trigger word such as EMERGENCY will always be present.

Simply put, the program must be modified to examine field REQCOMM (if written in Assembler) or REQ-COMMENT (if written in COBOL) for the “EMERGENCY” word, and not any of the ELM* fields. This is the only change required by the customer to ensure their solution keeps working as designed.

No change is required in Endevor.
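
To make the change concrete, here is a minimal COBOL sketch of the kind of test the exit could perform. It is not the customer’s program: the only field name taken from the exit interface is REQ-COMMENT (REQCOMM in the Assembler mapping), and everything else, including the paragraph names and the working-storage counter, is hypothetical.

      * Illustrative sketch only - not the customer's actual exit.
      * Assumes the Endevor exit control blocks are mapped in the
      * LINKAGE SECTION and expose the request comment as REQ-COMMENT
      * (REQCOMM in the Assembler mapping). WS-EMERGENCY-HITS is a
      * hypothetical WORKING-STORAGE counter (PIC 9(4) COMP), and the
      * paragraph names are hypothetical as well.
       CHECK-FOR-EMERGENCY.
      *    Test the action comment, NOT the ELM* source-level comments,
      *    so the trigger fires even when no source change is detected.
           MOVE ZERO TO WS-EMERGENCY-HITS
           INSPECT REQ-COMMENT
               TALLYING WS-EMERGENCY-HITS FOR ALL 'EMERGENCY'
           IF WS-EMERGENCY-HITS > 0
               PERFORM EMERGENCY-PROCESSING
           END-IF.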

Systems Programming under Endevor Control

Some time ago, I polled the Endevor community to discover who might be using Endevor to manage and control the changes that the systems programming area does.

This document contains the original question and responses (unedited aside from the removal of names and companies). I thought you might find the content interesting and thought-provoking!

Question posed:

“Who might have their systems programming people also under Endevor control? Also, what components of Systems Programming do they have under control – i.e. all aspects, just parmlibs, just “their” programs, etc. I am in the process of designing and putting together a presentation on putting virtually all aspects of systems programming under Endevor control and I am curious as to the “state of the nation” today. “

Responses:

  • “It is the same old story, but now they have SMPE so the argument has be very solid in order for us lowly Endevor admins to convince the big Systems Programmers.”
  • “You must be kidding! I consider myself lucky that I get to assemble my own routines (C1DEFLTS, BC1TNEQU, etc…) in Endevor and have them copied to the systems libraries so I have footprints to check when things go south.

Besides don’t you know that the SMP installer does its own configuration management? (at least that’s the excuse the systems programmers give me).

I have tried to get some of the Endevor install into Endevor as a foot in the door, but have failed. If nothing else after the install creates the system libraries I would like Endevor to do the copies from LPAR library to LPAR library so when I need one thing changed they don’t copy the whole library and along with it those LPAR specific modules that then break the ‘TO’ instance of Endevor. I will try again when (and if) 7.0 SP1 ever goes GA. We have just outsourced most of our systems programming so who knows. Any ammunition I can get would be a great help.”

  • “Hi John, I am one of the systems programmers here at xxxxxxxxxxxx and the Endevor Administrator and there is no way that I would put our systems under Endevor. I can’t say that we would enjoy bringing Endevor into the mix if we had a problem with a parmlib member during an IPL.

So, that’s a big no for Endevor control for systems as long as I’m at this site. Of course, we are breaking one of the number one rules of Endevor (never let the programming staff administer Endevor), so we may just be the exception. Good luck with the presentation.”

  • “Although we have some older systems programmers still using Librarian to maintain their personal JCL files, none use Endevor for this purpose (including myself) and none of our z/OS datasets are maintained by either product. SMP/E is a requirement for all app’s that can be installed that way here, so that trumps Endevor. Encouraging systems programmers to use Endevor has been a tough sell. We plan to migrate all our scheduler JCL off Librarian to Endevor probably next year and even then, I doubt many system programmers will show any interest in using Endevor. It makes sense, but doesn’t happen..”
  • “Most in-house written exits, batch jobs etc. used by systems programmers are under the control of Endevor. We also store alot of the parmlibs, syms, includes etc. under Endevor.

In addition, we have a couple of pieces of software managed by Endevor as well.

For example, we use Endevor to manage the Endevor software. A new release gets installed in a development environment. Then, we load all modules associated with the Endevor software into a Product Environment and use Endevor to migrate the new release through the testing state and onto production. This same philosophy is used whenever a PTF is applied to Endevor. We apply the PTF in development, migrate any changed load modules, source etc. through Endevor into our test states, test the ptf, then move it on to Production. This also helps use to track any changes we have made to panels, defaults table etc.

The majority of the software installed by us is not managed by Endevor but we have been trying to recommend it as the route to go. We just put QuickStart under Endevor’s control last month.”

  • “It would have to be without processors, I think, because you would want it to be as simple as possible. I should say that it really wouldn’t be much of a problem, except for the first one that popped into my head, namely trying to fix a problem during an ipl. If we can find a way to work with our data during ipl’s it would be fine. But, obviously, SOX is going to make us audit the system in far different ways than we do right now, but I don’t think Endevor (in it’s current form) is a good solution for systems.   I shouldn’t have said “never”, but definitely the current way of using Endevor for application source is not going to be viable for our systems. Thanks!”
  • “It is nice to see someone else exploring this question.  My position is Endevor has a place for in-house written “things.”  Let SMP/E do the work is was designed for.  In-house written mods for system elements belong with SMP/E. For purposes of this discussion sys-progs work with items/elements that need a separate LPAR to upgrade/test.  Totally in-house programs and other things might fit within the Endevor umbrella. The question always comes back to testing.  How does one relate a stage1 SOL to a TEST lpar?  In a pure application arena I oppose using Endevor as a “junk drawer.”  By this I mean when one does not know where to store something just “put it in Endevor.” “
  • “I’m not sure what all you include in ‘system’ programs.  Because most state agencies use xxxxxxxx, I think any true systems programmers (that work for the state) would be there.

All JCL used to run our scheduled Production jobs are in Endevor.  I had our procs and parms in at one time, but our database group that is ‘in charge’ of those balked, so I had to take them out, although the boss over all of us had wanted EVERYTHING in Endevor.  I had intended on doing exactly that, including C-lists, database segment definitions, and PSB’s.  Alas, they are not (yet).”

  • “Hi John – We have our in-house written infrastructure code managed in Endevor.  Our primary goal was to get all the “language code” converted, (Assembler, COBOL, etc), this goal has been met.  Over the years we have been chipping away at getting other types of code converted, we’re in good shape here too.  I am happy to say that we are getting requests from the systems programmers, asking…how can Endevor handle this type of code, and of course we always come up with a nice solution.  Please let me know if you have any other questions.”
  • “I have joined the wonderful world of consulting, so bear in mind that the information I am providing is from past employers, but I thought it might be helpful or useful if you get a low percentage of responses.

At xxxxxx, the z/OS team leader wanted all items under Endevor control.  We had entries for just about all aspects (including SYS2 and SYS3 libraries – all CICS regions’ JCL, control cards etc.) except SYS1 libraries.  We were working towards converting all components of both in-house and purchased software tools (i.e. programs, JCL, control cards etc.) to Endevor.  Unfortunately, the bank was bought by xxxxx before we were able to complete that transition.  😦  Keep in mind that the Endevor administrators (myself included) were systems programmers and reported directly to the z/OS team leader who also served as our backup – in the event we were unavailable.  My manager’s exposure and high level of comfort with the product played a major role in driving the team to get systems components under Endevor control.  Everyone had to learn how to use the tool – no excuses.

My position at a subsequent job as Endevor administrator was in the operations area for an insurance company.  They had/have as “little as possible” under Endevor control and if the Systems people had their way, they would take it all out of Endevor and perform their mundane, space hogging, risk laden process of back up member A, rename previous backup of member A, rename member A, copy in member B etc. etc. etc….  It is next to impossible to go back more than one level of change or to determine the exact nature of the change and the approval process is tied in with the change (change record) tool, but there is no fool proof means to reconcile the items that were actually changed with the items referenced in the change record.  Most of the systems programmers have no desire to learn how to use the product and they are not obligated to do so – unless the element currently exists in an Endevor library.  There didn’t seem to be any rhyme or reason as to what was put under Endevor.  I think in total there were a couple of applications – programs, JCL etc., and a few unrelated jobs and control cards.  My guess is that there was a programmer that was comfortable with the product (he had administrator authority) and so he setup his applications and then just left them there.”

  • “When I was the Endevor person in charge at xxxxx (seems like it was many, many years ago), we had some of the parmlib members under Endevor’s control (mainly in the network area) and set up the processors to generate some of the network executables (we had multiple sets depending on what the target system volume was). We also had all of the system programmers JCL in Endevor (including IDMS startup) and most of the IDMS homegrown utilities source, but that was about it. Have a nice weekend.”
  • “John, the only things from the Systems side of the house that is under Endevor control are items where we might need a history. Otherwise the systems programmers are controlled by a separate change control system.”
  • “The issue we’re facing, as I see it, is around resistance of change to existing work practices by the Host Systems group and what they see as an ‘intrusive’ solution that requires effort to configure.

Our ‘competitor’, xxxxxxxx, purportedly does not require them to change the way they work.  You define the libraries/datasets to be monitored and audited and it just sits there tracking activity.  Then when you want to report on access and change you run the report and ‘”hey presto”.  Also, if you wish to rollback to an earlier version/backup it provides this capability.  The real clincher selling point (it seems) is that it was written by a System Programmer for Systems Programmers (this has been mentioned to me a couple of times).

Anyway – I’ve told them that I’m not going to give up – that I’m going to get the Product Manager to evangelise why they should use the incumbent product and save spending $’s (well – at least on a competitor’s product).   “

Thoughts on Integrating EGL from IBM with Endevor

The following article is specific to a tool from IBM known as Enterprise Generation Language. I provide the information not so much as a solution specific to EGL, but rather as a model of the tenets I believe are critical to effective source and configuration management for z/OS systems, the main one being that, ultimately, the TRUE source (not just the generated or derived source) needs to be captured for auditing and management purposes. It’s not good enough for “models” on distributed platforms to be “the source” and then to import what they create, in its completeness, as an application; I believe that to truly safeguard an application on z/OS, I must be able to recreate that application from the “stuff” I’ve stored… and my place of storage for applications is Endevor.

 

In the past, I was asked to investigate the options for integrating Enterprise Generation Language (EGL) for z/OS from IBM into Endevor. What choices does an Endevor site have for securing such applications, so that the same integrity Endevor gives to “native” code can be provided for “generated” code?

Based on my research, I have been able to determine the following:

Findings:

  • Unlike some other CASE tools that generate code for execution on z/OS, the EGL ecosystem requires the target language to be generated on the workstation. Other CASE tools (such as CA GEN) provide the option of generating the code on z/OS itself.

[Figure: egl1]

  • One of the “choices” during COBOL code generation is to have the code automatically delivered, compiled, and otherwise made ready on z/OS from the Enterprise Developer on the workbench.

[Figure: egl2]

Note in this flow that at one point you can specify PREP=Y. This instruction on the workstation causes the generated COBOL, JCL and, if necessary, BIND statements to be transferred to the mainframe for execution. Otherwise, all built routines remain on the workstation for delivery to the z/OS platform by whatever means you choose to send them there.

  • All sites contacted or from whom I have been able to get information have indicated that they are storing their EGL source in a distributed solution (either Clearcase or Harvest) and are storing the z/OS source in Endevor. The mechanism for storing the generated source in Endevor (i.e. manual or automatic) has not been determined.
  • The fact that sites ARE saving something referred to as EGL source in their distributed solution is evidence (reinforced by references in the EGL manuals) that there IS EGL source that needs to be stored.

Unknowns:

  • Is there a name or label or title or something in the EGL source that correlates to the generated z/OS elements? This is key to providing a quasi-automatic solution.

Design Options:

  • EGL in Distributed Solution/Manual delivery of z/OS components.

This option appears to be the most prevalent amongst those sites that are using EGL. Note that one of the other indicators from my research is the lack of sites using or implementing EGL at this time. While this may change in the future, there is limited experience or “current designs” to draw upon. This solution would, as the title implies, store the EGL in a distributed SCM solution, do the generation on the workstation, FTP or otherwise transmit the generated source to the mainframe, and then ADD/UPDATE the source into Endevor for the compilation.

Note that the transmission of the source generated on the workstation and the ADD/UPDATE of the source into Endevor can be accomplished today, without signing on to the mainframe, by performing this step through Change Manager Enterprise Workbench (CMEW).

  • EGL in Distributed Solution/Automatic Delivery of z/OS components

In this scenario, the EGL would still be stored in the distributed SCM solution. However, if you specified PREP=Y, then the source would automatically be delivered and compiled by and in Endevor.

This scenario would require research into, and modification of, the IBM-provided z/OS Build Server. Based on the research conducted to date, the z/OS Build Server is a started task that invokes the site-specific compile, link and bind processes. This process could, theoretically, be modified to instead execute an Endevor ADD/UPDATE action, resulting in the source automatically being stored and compiled/linked/bound by Endevor instead of by the “default” process provided by IBM (see the sketch following this list of options).

  • EGL in Endevor Complete

In this scenario, the generated z/OS components remain on the workstation. All components, including the EGL source, COBOL source, link statements and anything else created by EGL, are then merged into a single “element”, with each different type of source perhaps identified by a separator line of some sort (maybe a string of “***********”). The ADD/UPDATE process of Endevor would then run the different source components through their appropriate compile/link/bind programs; that is, the first step in the processor would create temporary files that unbundle the different source types, and those temporary files would then be the source that is generated.

Note: In order for any of the following designs to work, the previously documented “unknown” must be resolved. These designs will only work if there is “something” in the EGL source that can be directly tied to the generated z/OS components.

  • EGL in Endevor / EGL Delivery to z/OS

In this scenario, code generation would take place on the workstation and PREP=Y would execute as provided by IBM, with no modifications (other than site-specific ones) to IBM’s z/OS Build Server. This results in the COBOL, link, and BIND source being delivered to a PDS on the mainframe and compiled there.

Assuming the delivery of the components to z/OS can be done to “protected” libraries, the EGL source could then be ADD/UPDATEd into Endevor using CMEW. The ADD/UPDATE process would then query the EGL source and automatically copy or otherwise bring in the COBOL, link, and bind source created and delivered earlier. The load modules created on delivery would be ignored; they would be rebuilt under Endevor’s control.

There are a variety of other options and designs and hybrids/combinations on the above ideas that I can think of. However, this paper should serve as the beginning of a discussion concerning which model or architecture best suits the needs of the site.
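
As a rough illustration of the Build Server option above, the modified started task could submit a small batch Endevor job along these lines. This is a sketch only: the element, dataset, environment, system, subsystem, type and CCID values are all hypothetical, and the DD names follow the sample batch skeleton shipped with Endevor, so verify them against your site’s standard batch JCL.

//ADDEGL   JOB (ACCT),'EGL TO ENDEVOR',CLASS=A,MSGCLASS=X
//* Hypothetical batch step: store workstation-generated COBOL in
//* Endevor and let the Endevor processor compile/link/bind it.
//ENDEVOR  EXEC PGM=NDVRC1,PARM='C1BM3000',REGION=4096K
//STEPLIB  DD DISP=SHR,DSN=ENDEVOR.AUTHLIB        SITE'S AUTH LIBRARY
//C1MSGS1  DD SYSOUT=*
//C1PRINT  DD SYSOUT=*
//BSTIPT01 DD *
  ADD ELEMENT 'MYPGM1'
      FROM DSNAME 'HLQ.EGL.GENCOB' MEMBER 'MYPGM1'
      TO   ENVIRONMENT 'DEV' SYSTEM 'PAYROLL' SUBSYSTEM 'ONLINE'
           TYPE 'COBOL' STAGE NUMBER 1
      OPTIONS CCID 'EGL00001'
              COMMENT 'GENERATED FROM EGL MODEL'
              UPDATE IF PRESENT .
/*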

A Few More Simple Tips

Get Yourself A Dumb ID

A very helpful hint for the Endevor Administrator is to have a second userid with limited access to facilities, along the same lines as your most restricted developer.

This ID is very useful for ensuring the changes you make in Endevor work for everyone. Your normal userid tends to have “god-like” access to everything. Therefore, everything always works for you!

The “dumb ID” reflects your users and is a good verification check to ensure changes you are making truly are “transparent”.

Use supportconnect

It is surprising how many sites are unaware of the excellent support provided by CA for all of its products through the Internet. Everyone using CA products should be registered and using the facility found at supportconnect.ca.com.

Through effective use of supportconnect, the Administrator has an opportunity to be more than just “reactive” to Endevor problems; they can actually be “pro-active”.

For instance, supportconnect provides the ability to list all current problems and fixes for the Endevor product. The Administrator can scan down that list, recognize circumstances that fit his/her site, and apply a fix before the problem is actually encountered.

Conversely, they can sign up to be alerted when new solutions are posted, thereby being made aware of problems and fixes without even having to request a list.

Other features of the tool include the ability to create and track issues, as well as your site-specific product improvement suggestions.

Positioning the Endevor Administrator in an Organization

One of the struggles companies often face with Endevor is trying to define where administration of the system best resides. While not offering a definitive answer to this question, I provide the following excerpt from a report written for an Australian firm, in which I have endeavored (no pun intended) to offer at least some opinion.

Correctly positioning Endevor Administration in large sites can be as strategic as the decision to implement the product itself. Endevor represents a logical extension of change management, and as such should be autonomous. The dilemma is to be accountable for implementing company standards and procedures while at the same time being responsive to the requirements of the true client: the developer. Initially the answer appears straightforward, until you consider what the actual line management should be for the Endevor Administrators themselves. General practice is to locate Endevor Administrators in one of these areas:

  • Application Development
  • Change Management / Operations
  • Systems Administration

Development – this is one of the best areas for Endevor Administrators to be physically located, because they are working with their principal stakeholder. The major drawback of this reporting line is that they are responsible for maintaining company standards and procedures, which can potentially be compromised by a management request. Care must also be taken to treat the role seriously, because otherwise it usually decays into a caretaker or part-time role that no one is really interested in. Even though Endevor Administration is within the application development structure, developers should not be the ones managing Endevor.

Change Management / Operations – usually set up as a discrete business unit within Operations, Change Management walks the balance between maintaining corporate standards and being attentive to developers’ requirements, but with reduced risk of compromising their responsibility through a management request. Sites that select this option will usually have full time resources committed to the role, and consequently enjoy the benefits of that decision.

Systems Administration – although a realistic choice through technical requirements, positioning Endevor Administrators within this area is the least advantageous. The risk here is that they will see their role as enforcers of process first, before they take developers’ requirements into account. Traditionally they will not commit full time resources to the role, so users will miss out on features and functionality as new ‘best practices’ emerge.

In summary, the optimum could be to physically locate the Endevor Administrators with application developers, but their reporting line could be to Change Management / Operations or even Audit. No matter where Endevor Administration is located and what the reporting line, it is most important that the role is full time and taken just as seriously as that of the System Security team.

Catch Alls – Some Short-and-Sweet Tips

Optional Features Table

A good practice to get into is to regularly review the values and “switches” that have been placed in the Optional Features table supplied with Endevor. The source is found in the installation TABLES library, in member ENCOPTBL.

Each optional feature is documented there, as are the various parameters that may be supplied when activating the option.

Avoid Using Endevor within Endevor

Endevor processors allow you to invoke Endevor from within Endevor. This is a common practice when you need to retrieve something related to the element being worked on in a processor. For instance, you may want to check the submission JCL against the PROC being processed, or perhaps component information is needed to ensure complete processing occurs.

Before deciding to include “Endevor within Endevor (C1BM3000)” in your processor, make sure what you want to do can’t be done with one of the many utilities specifically designed to execute in processors. In particular, CONWRITE has had many extensions added to it that allow retrieval of elements and retrieval of component information.

A utility will always execute neater, cleaner, sweeter, and faster than Endevor-within-Endevor.

Move Commonly Used JCL Statements to the Submission JCL

A very simple technique to improve Endevor performance and reduce the number of service units required by the facility is to move (or copy) commonly used JCL statements from the processor to the submission JCL.

Specifically, either move or copy the declarations for SYSUT3, SYSUT4, SYSUT5, etc. into the skeleton member XXXXXXX so that they are allocated automatically by the OS/390 operating system at submission time. Then, when Endevor requires the DD names and allocations, they already exist, saving the overhead of Endevor performing the dynamic allocation.

While this does not sound like a “big deal”, in actuality it can make a significant difference. A typical COBOL processor, for example, will need to allocate each of the SYSUTx DD names twice: once for the compile step and again for the linkage-editor step. If you are compiling 50 programs in an Endevor job, the allocations and de-allocations can occur over 300 times!

Based on personal experience, I would put the SYSUTx statements in BOTH the processor and the submission JCL. This is based on experimentation that established a baseline CPU usage with the statements only in the processor. The first alteration removed the statements from the processor and placed them only in the submission JCL; this resulted in a drop in CPU usage. I then placed the statements in both the processor AND the submission JCL, which resulted in a further drop in CPU usage (lower than the first!). Therefore, no harm is done (in fact, good may be the result!) by having the statements in both locations, so I would leave them in both.
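
As a sketch of what the addition to the submission JCL looks like (the unit and space values are illustrative only, not recommendations), the skeleton simply gains pre-allocated work-file DD statements in the step that executes Endevor:

//* Pre-allocate the compiler/linkage-editor work files once in the
//* submission JCL so Endevor does not have to dynamically allocate
//* (and de-allocate) them for every element processed.
//SYSUT3   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSUT4   DD UNIT=SYSDA,SPACE=(CYL,(5,5))
//SYSUT5   DD UNIT=SYSDA,SPACE=(CYL,(5,5))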

Some Available Traces

One of the problems with the Endevor documentation is that the different traces (and “hidden abilities”) that are available are scattered throughout different sections. Therefore, this list has been (and is being) constructed to try to capture all the different traces in one location.

  • EN$SMESI
    • ESI SMF record trace
  • EN$TRALC
    • Allocation service trace
  • EN$TRAUI
    • Alternate ID trace
  • EN$TRESI
    • ESI trace
  • EN$TRXIT
    • Exit trace
  • EN$TRITE
    • If-Then-Else trace
  • EN$TRAPI
    • API Trace
  • EN$TRLOG
    • Logon and logoff information
  • EN$TRSMF
    • Writes SMF records (needs to write to a dataset)
  • EN$TRFPV
    • Component Validation Trace
  • EN$AUSIM
    • AUTOGEN in simulation mode
  • EN$TROPT
    • Site Options Report (imho this should be a regular report not a trace)
  • EN$TRSYM
    • Symbolic resolution trace
  • EN$DYNxx
    • NOT A TRACE. A method where dynamically allocated datasets (eg done by REXX in a processor) can be monitored by Endevor

Take the time to search through the manuals looking for “EN$”. You might be surprised at the things you discover that you never knew you had!

The Endevor ESI Look-Aside Table

Many sites are unaware of, or have inadvertently disabled, Endevor’s ESI Look-Aside Table (LAT). As the manual states:

“The security look aside table (LAT) feature allows you to reduce the number of calls to the SAF interface and thereby improve performance. The result of each resource access request to SAF is stored in the LAT. ESI checks the LAT first for authorization and if the resource access request is listed ESI does not make a call to SAF.

“Note: Do not use the LAT feature in a CA-Endevor/ROSCOE environment.”

Always ensure you have assigned a value to the LAT field in the C1DEFLTS table, as this is a simple (and supplied) means of improving Endevor performance. Leaving the value blank or assigning a zero to the field turns the function off, resulting in superfluous calls to your site security software during foreground processing.

The values that can be assigned to the LAT field range from 2 to 10, with each number representing the number of 4K pages of storage. A good starting value is 4.
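
For reference, the setting is a keyword on the C1DEFLTS TYPE=MAIN macro in the C1DEFLTS source. My recollection is that the keyword is LATSIZE=, but treat that as an assumption and confirm the exact name against the C1DEFLTS documentation for your release. A fragment might look like this (all other parameters omitted):

*        Fragment of the C1DEFLTS TYPE=MAIN macro invocation.
*        LATSIZE= is assumed to be the look-aside table keyword;
*        confirm against your release's documentation.
         C1DEFLTS TYPE=MAIN,                                           X
               LATSIZE=4,               ESI LAT: 4 X 4K PAGES          X
               ...                      (REMAINING SITE PARAMETERS)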

Unlimited Allocations

Another vexing problem that large shops run into is the fixed number of dynamic allocations that MVS allows for a single job. As of the writing of this paper, that limit was set to 1,600. In the event your job requested more than 1,600, the system would abend the job with an S822 abend code.

On the surface, it appears to be very easy to exceed this number in the normal course of processing within Endevor. Since Endevor jobs execute as a single step with program NDVRC1, and since a package or batch job could easily hold 5,000 programs, the mathematics alone would seem to indicate the job will abend early in the process.

Consider a simple load module MOVE processor; a processor that moves the DBRM, Object, and Listings from one stage to the next. Each program being moved will require 2 allocations each for the DBRM, Object, and Listing libraries, 3 each of the SYSUT3 and SYSUT4 working libraries, 3 SYSIN allocations, and 3 SYSPRINT allocations. This works out to a total of 18 allocations per program. Therefore, theoretically, in our package of 5,000 programs, the system should fail us at program number 89, since during the processing of that program we will exceed the 1,600 allocation limit (program 89 x 18 allocations = 1602 allocations).

However, in reality, that doesn’t happen. In fact, Endevor will merrily continue on its way until program number 534. Although further along than program 89, the package is still not complete… and why here? Why not program 89?

The answer lies in the manner in which Endevor allocates (and de-allocates) datasets during execution. After the execution of every program in the package/batch request, Endevor de-allocates all the datasets that were used by the processor for that element. In this way, the 1,600 limit is not reached early in the processing. In essence, each program gets to start with a clean slate.

However, this is not true for any datasets destined for SYSOUT (e.g. SYSPRINT). Endevor does NOT release these dynamic allocations and, instead, accumulates them as the job executes. Therefore, the 3 SYSPRINT allocations done for each program in my example are cumulative, so that when I reach program 534, I have once again hit the 1,600 allocation ceiling (program 534 x 3 SYSOUT allocations = 1,602 allocations).

There are a couple of ways to resolve this problem, but I believe the best way is as follows.

For every processor, insert a new symbolic called something like DEBUG. Do conditional checking to see if you really need to see the output; after all, the majority of the time, the output contained in SYSOUT is not germane to the task at hand. You only need to see it if you are debugging a problem. Consider the following sample processor.

//DLOAD01S PROC DEBUG=NO,
:
//S10 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=MY.LIB.LIB1,
// DISP=SHR
//SYSUT2 DD DSN=&&TEMP1,
// DISP=(NEW,PASS),
// UNIT=&UNIT,
// SPACE=(TRK,(1,1),RLSE),
// DCB=(LRECL=80,BLKSIZE=6160,RECFM=FB)
// IF &DEBUG = 'NO' THEN
//SYSPRINT DD DUMMY
// ELSE
//SYSPRINT DD SYSOUT=*
// ENDIF
//SYSIN DD DUMMY
//*
:
:

The default value for the &DEBUG symbolic is NO. Since SYSPRINT will then resolve to DD DUMMY, the dynamic allocation will not occur and you will never incur the incremental count towards 1,600.

Again, I recommend this approach because you seldom need the output from the different SYSOUTs in your processors unless you are debugging a problem. This approach allows you to “turn on” the output when you need it, but otherwise suppress it. To turn on the output, just change the value of the &DEBUG symbolic to something other than 'NO'.

The second half of this solution is the FREE=CLOSE clause. This parameter tells the system to release the allocation of the dataset when it is closed, rather than holding it until the end of the job. Endevor does this automatically for every dataset it uses except SYSOUT; you can code the release for the SYSOUT datasets yourself.

However, be careful if you decide to place the clause on every SYSOUT without also analyzing which of the SYSOUTs you really need. It is entirely likely that you will flood your system’s JES job output queue with SYSOUT data if you do not exercise discretion.
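
Putting the two halves together, the SYSPRINT statements in the sample processor above could be coded as follows; the only addition to the earlier fragment is the FREE=CLOSE keyword on the SYSOUT DD:

// IF &DEBUG = 'NO' THEN
//SYSPRINT DD DUMMY                     OUTPUT SUPPRESSED
// ELSE
//SYSPRINT DD SYSOUT=*,FREE=CLOSE       FREED AT CLOSE, NOT AT JOB END
// ENDIF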