Getting Started
Part Two - Challenge #01

Background:

Congratulations on making it to Part 2. Now comes the exciting stuff! In this first challenge, you will be setting up your data sets for the rest of Part 2. You will also learn to reset your data sets back to their original state in case you need to start over.

As you continue to work through Part 2, you will gain a deeper understanding of what JCL is and how it is regularly used. However, for now, it is important to understand that JCL is simply a set of statements that you code to tell the z/OS operating system about the work you want it to perform.
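
To give you a first glimpse of what that looks like, here is a minimal sketch of a job (the job name and accounting field are placeholders; you do not need to submit this):

//CC#####A JOB 1,NOTIFY=&SYSUID
//* IEFBR14 is a system program that does nothing and ends immediately,
//* so this job simply starts, runs one step, and finishes.
//STEP1    EXEC PGM=IEFBR14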

Without further ado, let’s get started!

A reminder about contest IDs: The challenges described within use CC##### as a placeholder to describe instances where you should insert your contest user ID. Each country and geographic region participating in the contest will have their own unique prefix and each contestant will have their own unique numeric identifier. During this contest, wherever you see "CC#####", please change this to your specific country code and user ID number or the challenges will not work as designed.

For example: If a challenge tells you to change CC##### to your user ID, and you are participating in the China contest with an ID of CN99999, replace CC##### with CN99999.

The user ID prefixes for each country and region are as follows:

Region Prefix
India IN#####
Japan JA#####
China CN#####
UKI GB#####
DACH ZC#####
Italy IT#####
US & Canada US#####
SSA AR#####
BeNeLux BE#####
CEE HU#####
Spain ES#####
Russia & Ukraine RU#####
MEA ME#####
Brazil BR#####
Learning System AU#####

Your challenge:

Prepare yourself and your environment for the exciting challenges that are coming your way.

Issue the following command from the ISPF Command Shell command line, found at option 6 in the ISPF Primary Option Menu:

SUBMIT 'ZOS.PUBLIC.JCL(P2)'

Don't forget the quotes! You will see a notification that the job was submitted, along with a job ID. Remember to press Enter when you see ***. This job will take a moment to complete. You will see a notification message MAXCC=0000 when the job successfully finishes. Press Enter after a minute or two if you don't see it.

Now go see what goodies we've placed for you. Go to the DSLIST panel (ISPF option 3.4) and enter your user ID in the Dsname Level field. Hit Enter, and you'll see additional data sets that were not in Part 1.

What was that? You just submitted your very first JCL job, which populated your Part 2 data sets and members with some mainframe contest goodness.

You'll notice that you now have a data set called 'CC#####.P2.OUTPUT'. This data set is very important -- it will eventually contain all of your work to be graded for Part 2. From here on out, we will refer to this data set as OUTPUT or P2.OUTPUT. You'll need to keep its full name in mind to complete many of the Part 2 challenges successfully.

Note: In these instructions, when we use the word "Type", we do not want you to press Enter afterward. When we use the word "Enter", type the text (if any) and then press Enter.

Enter E to the left of P2.OUTPUT.

On the primary command line, enter:

SELECT #01

This creates a new member named "#01" in the P2.OUTPUT PDS and automatically opens the editor. Once there, on the first blank line enter:

cc##### was here 2016. wooo hooo, mainframer!!!

Use your user ID in place of cc#####. The wooo hooo, mainframer!!! part is optional, depending on your level of enthusiasm at this point. Original expressions of mainframe zeal are also acceptable. Any combination of upper and lower case is acceptable.

F3 back out, and you will see that #01 has been created in OUTPUT.

You have now successfully created a brand new member in a partitioned data set. This may not sound like much, but it's an important first step!

Note on resetting data set contents: If you need to reset any of the data sets provided by the contest team, check the reference page here.

Next: Challenge #02

Intermediate ISPF Editor
Part Two - Challenge #02

Background:

ISPF is a panel-driven interface that is commonly used by z/OS programmers and operators. Included in ISPF is an editor that has many features, but can be difficult for a novice to use.

Since editing text is so important, we have set up several challenges for you in the ISPF editor. You will often need the ability to edit text effectively: editing source code, altering data, generating documentation, and so on, so we want to show you some nifty tricks in the ISPF editor.

Reminder: The editor has two types of commands. Primary commands are entered in the primary command field, and line commands are entered in the column to the left of each line.

You can type in multiple commands at once, and then press Enter. Pressing Enter results in the execution of all typed primary and line commands simultaneously.

Note: We are careful to distinguish between Type and Enter in these challenges. An instruction to "type" means that you should not press the Enter key after keying the command. We use the word "enter" to indicate that the Enter key should be pressed after keying the command.

Your Challenge:

Successful completion of the challenge will create member #02 in P2.OUTPUT.

Enter E to edit the CC#####.SEQ data set and follow the instructions below to complete this challenge.

Note: You may encounter an "Edit Entry" panel when opening a data set. Simply press Enter to proceed to the ISPF edit session.

See the "EDIT" in the top-left corner? This indicates that the data set is opened in edit mode. Any changes that are made to the data set will be saved upon normal exit. Observe the line numbers to the left under 'Command ===>'. A combination of primary and line commands can be typed on the screen, and pressing enter will execute the typed commands.

  • First, tell the editor to delete the first seven lines of text. To do this, we'll use the line command "D", followed by the number of lines we want to delete.

    Type D7 in the first line command area.
  • Next, let's select a block of lines to move with the line command "MM". MM is a block command, so you have to type it twice: once on the first line to move, and again on the last line. We want to move the lines 8 through 29 to the bottom.

    Type MM in both line 8 and line 29 command areas.
  • Finally, tell the editor that you'd like to jump to the bottom of the data set with the primary command "BOTTOM".

    Enter BOTTOM in the primary command field.

You'll notice that the editor informs you that a "MOVE/COPY is pending" in the top-right corner. This is because you have not yet indicated where to move the block. The delete command is still pending as well, because all commands that are entered at once must complete together.

  • Indicate to the editor where to move the block with the "A" command. "A" is used in a line command area to mean "After this line".

    Type A in the line 83 command area.
  • Jump back to the top with the primary command "TOP".

    Enter TOP in the primary command line area.

As a result of the above, 7 lines were deleted, a block of lines was moved to the bottom, and the top of the data set is now displayed.

Next, you'll investigate the ISPF editor profile.

  • Use the primary command "RESET" to renumber the lines and remove any informational messages from the editor.

    Enter RESET in the primary command line area.
  • Next, display the ISPF editor profile with the "PROFILE" primary command.

    Enter PROFILE in the primary command line area.

    The ISPF editor has several profile options that can be changed as desired. Pressing F1 will display the editor tutorial with additional information on the various profile settings. F3 will exit the tutorial and return to the editor.
  • Remove the =PROF> informational messages with the "RESET" command.

    Enter RESET in the primary command line area.

Take a moment to observe the following:

  • line 4 is lower case alphabetic
  • line 9 is UPPER case alphabetic
  • line 14 contains the decimal numbers 1234567890
  • line 19 contains the hexadecimal numbers 0 to F
  • line 24 contains special characters

Extended Binary Coded Decimal Interchange Code (EBCDIC) is an eight-bit character encoding used mainly on IBM mainframe and IBM midrange computer operating systems. Like ASCII, EBCDIC has been in use for many decades.

Notice anything weird starting on line 27? While the IBM mainframe stores data in EBCDIC by default, data often gets stored in ASCII format. ASCII stands for American Standard Code for Information Interchange.

Everything inside a computer is ultimately stored as bits (1s and 0s); ASCII and EBCDIC are simply codes that map an 8-bit numeric value to a character such as 'a' or '@', or to a control action of some sort.
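
For example (these byte values are standard for the two encodings):

Data   Encoding   Stored Value
a      EBCDIC     x'81'
a      ASCII      x'61'
1      EBCDIC     x'F1'
1      ASCII      x'31'

The same bytes interpreted with the wrong code display as unrelated characters, which is exactly what you saw starting on line 27.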

  • The ISPF editor is happy to display ASCII characters for you; you just need to tell it to do so! Use the primary command "SOURCE ASCII" to switch from EBCDIC view to ASCII view.

    Enter the primary command SOURCE ASCII.

    Now the EBCDIC characters look weird and the ASCII characters are readable. Neat!

Let's take a closer look at exactly what is going on here. The editor has the ability to display the hexadecimal representation of each byte. Not to bore you with the math, but hexadecimal is often used to display raw data because it takes only two hexadecimal digits (0 through F) to represent any unique 8-bit value.

  • Turn on the hexadecimal display with the primary command "HEX ON".

    Enter HEX ON on the primary command line.

    Now each line in the data set takes up 3 lines on the screen. The first line, with the line number, is the visible representation of the data, and the next two lines show the hexadecimal value of each byte of data. Using F8 and F7 in conjunction with SOURCE ASCII and SOURCE EBCDIC, you should be able to investigate the data in detail.
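
    For example, if a line contains the EBCDIC text "abc" (x'81', x'82', x'83'), the hexadecimal display under it looks roughly like this sketch (not an exact screen capture):

    000001 abc
           888
           123

    Read each column top to bottom to get the two hex digits of each byte: 81, 82, 83.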

After investigating the data, enter a few commands to change the editor back to its initial state:

  • Enter the primary command HEX OFF.
  • Enter the primary command RESET.

Finally, type in a few more commands:

  • Enter the primary command BOTTOM.
  • Type the line command CC on lines 57 and 65.
    The CC command is used in pairs and designates multiple lines to copy.
  • Type the line command OO on lines 67 and 75.
    The OO command is used in pairs and designates the area of text to overlay the copied block into.
  • Press Enter.

It's entirely possible that you'll make a mistake at some point during this challenge. Use the primary command CANCEL to exit the editor without saving any changes, and you can try again. If you accidentally save your data set and it contains errors, you can reset the contents by performing the following steps:

  1. Edit the data set.
  2. Type D999 in the line command area for line 1.
  3. Enter COPY 'ZOS.PUBLIC.P2.SEQ'.

After successfully performing the required edits, press F3 to exit and save your work. Then enter the following on the command line:

TSO SUBMIT 'ZOS.PUBLIC.JCL(CH2)'

This job will execute, copying the contents of CC#####.SEQ into P2.OUTPUT(#02).

Remember:
If you mess up your data set, you can restart using the guidance given in the Resetting a Data Set Reference Page.
In addition to the ISPF Editor help tutorial available via the F1 key, you may also want to check out the ISPF Editor Command Summary Page.

Congratulations! You are done with this challenge and may move on to the next one.

Next: Challenge #03

More ISPF Editor
Part Two - Challenge #03

Background:

This is a continuation of the previous challenge where we continue to explore the ISPF editor.

Additional information regarding the commands found in this challenge is located on the ISPF Editor Command Summary Page.

Your challenge:

Edit CC#####.DATA(LINUXONE) and enter the primary command RESET.
RESET will clear the ==MSG> warnings and give you more space to work in.

In the primary and line command fields, perform the following actions, as indicated:

Field     Action
Line 15   Type MM
Line 23   Type MM
Line 1    Enter B
Line 29   Enter D4
Line 18   Type M4
Line 4    Enter A
Primary   Enter C ALL # ' '
Primary   Enter C ALL @ +
Primary   Enter C ALL & :

The text image should appear in a shape that resembles something like a Cylon...
Actually, it is the latest and greatest hardware offering from IBM for running Linux: the IBM LinuxONE.

Remember: If you mess up, you can restart using the guidance given in Resetting a Data Set Reference Page.

To complete this challenge, copy the unscrambled text image to your P2.OUTPUT data set as member name #03 by performing the following steps:

  1. Enter the primary command RESET
  2. On line 1, enter the line command C99
  3. Enter the primary command REPLACE P2.OUTPUT(#03)
  4. Press F3 to save and exit

Success! Move on to the next challenge now.

Optional:

You have an opportunity to provision a Red Hat, SUSE, or Ubuntu Linux instance running on an industrial-strength server at no cost. You can join the many hundreds of open source community developers and students who are already taking advantage of this opportunity.

Looking for a new adventure?
We have ONE: We have the LinuxONE Community Cloud!

Next: Challenge #04

Program Execution in the Background, aka z/OS Batch
Part Two - Challenge #04

Background:

Fifty years ago, a very clever mechanism was designed that enabled programs to execute using different input data and output destinations without the need to change the program. That mechanism is called Job Control Language, or JCL.

Controlling job execution is the most important function of JCL. While it is a programming language, it's not intended for application programming. Instead, JCL is used to stack and queue the execution of other programs and to identify the input and output pipes for each program.

Many programs can be prepped, staged, and executed in-order and at scheduled intervals. This sort of processing is often referred to as "batch" processing, as you can envision a batch, or collection, of work being performed as a single unit.

Your Challenge:

Use the editor to view and submit JCL to the system for execution. This JCL will execute a job in the background, allowing you to continue to work. When the job completes, z/OS will inform you. This sort of workflow is also called "background processing".

Useful Tip: Prefixing an ISPF panel option number with = treats the option as if it were entered on the ISPF Primary Option Menu. So, if =3.4 is entered on any panel other than the ISPF Primary Option Menu, the result is to jump directly to the 3.4 Dslist Utility panel from the current ISPF panel.

Edit CC#####.JCL, and on the PDS directory primary command line, enter SELECT NEWJCL.

Since NEWJCL did not previously exist, the result of the select command is to create a new, empty member named NEWJCL. The editor will open the new member and present blank input lines.

  • On the primary command line enter COPY 'ZOS.PUBLIC.JCL(MYJOB)'.
  • Then on the primary command line enter SUBMIT.
  • Press F3 to save and exit.

The result of the above steps is to copy MYJOB from ZOS.PUBLIC.JCL to NEWJCL and submit that JCL for execution. Upon successful execution of the JCL, P2.OUTPUT(#04) is created.

That was pretty easy! You just submitted JCL for z/OS to read and execute a nifty program called "SORT" in batch. Verify your success by viewing CC#####.P2.OUTPUT and selecting member #04.

Next: Challenge #05

SDSF & Viewing JCL Job Output
Part Two - Challenge #05

Background:

The System Display and Search Facility (SDSF) is a panel-driven interface that is commonly used to perform actions like viewing system activity and resources, entering z/OS system commands, and viewing input, execution, and output job queues.

Enter the ISPF navigation shortcut =SD to jump directly to the SDSF Primary Option Menu. See the COMMAND INPUT ===> prompt at the top? This is the SDSF primary command prompt.

Enter the SDSF primary command DA to open the "display active jobs" panel, and then enter the primary command SET DISPLAY ON.

Observe that the display filter values for PREFIX, DEST, OWNER, and SYSNAME are now showing near the top. You can control what data is displayed by manipulating these filters.

To clear all the filters, enter the following SDSF primary commands:

  • PREFIX
  • OWNER
  • SYSNAME
  • DA

To only display your jobs, enter the following SDSF primary commands:

  • OWNER CC#####
  • ST
  • DA

The DA command should display a single entry: your contest ID for the active TSO session because OWNER is set to CC#####.

Note: When the output queue has multiple jobs with the same JOBNAME, the one with the highest JOBID number is the most recently processed.

To view the z/OS System Log, enter LOG.

Enter the command COLS to view the system log columns. Columns 2 through 57 contain metadata about each log entry, and starting in column 58 is either a system message or a system command. To the right of any system message identifier you will find brief message text. Full descriptions of system messages can be found on the internet by searching for the message identifier.

Enter the following SDSF commands while in the System Log:

  • TOP
  • RIGHT 26

Use the F8, F7, F10, and F11 keys to scroll around, just like in the ISPF editor.

Next, let's explore the z/OS display commands. These are useful for gaining valuable data about the running system. Enter / in the SDSF primary command input to open the System Command Extension panel. This is similar to the =6 TSO command input in ISPF, except that commands entered into this panel are executed as z/OS system commands.

Many z/OS system commands exist and each has a rich set of command operands. Enter each of the following commands, one at a time, in the System Command Extension panel:

  • D TS,L
  • D TCPIP,,NETSTAT,HOME
  • D TCPIP,,NETSTAT,BYTE
  • D T
  • D A,L
  • D M=CPU

Next, enter the following SDSF commands while in the log:

  • BOTTOM
  • LEFT MAX
  • RIGHT 37

Use F7 to page up, watching in column 57 for your user ID. Once located, observe the output to the right: you will see the commands you just entered and the responses given.

JCL Output

SDSF is also used to view the output of JCL execution.

JCL consists of statements that tell z/OS which program to execute, followed by the program's inputs and outputs. Filename references are coded into z/OS programs; however, these filenames are abstract names without any association to a real resource. The purpose of JCL is to associate program filenames with physical system resources.

This association is achieved using JCL DD statements.

JCL statement types:
  • EXEC
  • DD

JCL statements begin with // in columns 1 and 2, followed by an internal name, then the statement type, EXEC or DD. For example, to execute the SORT program, the following JCL could be used:

//S1       EXEC PGM=SORT
//SORTIN   DD DSN=&SYSUID..DATA(HESF),DISP=SHR

Note: The &SYSUID. in JCL is automatically replaced with your user ID (CC#####) when the job is initiated.

Assuming that the SORT program includes the statement "OPEN SORTIN", the JCL DD statement is used to associate the real resource with SORTIN.

The format of a JCL DD statement follows:

//SORTIN DD Resource

SORTIN is a Data Definition Name (DDNAME) and must match the SORT program's internal filename reference. Resource describes the real resource, such as a PDS; see DSN=&SYSUID..DATA(HESF) in the example above. Submitting JCL for system interpretation and execution can be accomplished in many different ways, some of which will be explored now.

Your Challenge:

Take the described actions to submit JCL jobs JOB01, JOB02, JOB03, and JOB04 in 4 different ways:

From any SDSF or ISPF panel, enter the following primary command:

TSO SUBMIT JCL(JOB01)

Next, use the ISPF Command Shell (=6) panel to submit another JCL job:

SUBMIT JCL(JOB02)

Next, from the data set directory list (=3.4), browse into your CC#####.JCL PDS.

Enter SUBMIT to the left of member JOB03.

Finally, submit JCL from ISPF editor Browse, View, or Edit mode. Open your CC#####.JCL(JOB04) member in Browse, View, or Edit mode:

Enter SUBMIT in the primary command line.

There are many more submit options, but these are enough for now!

Have you noticed that after each submit an unsolicited message was displayed? This message contains information like the JCL jobname (JOBnn) and a uniquely defined JOB number JOB#####.

Let's take a look at the output from the JCL jobs you have just submitted. Return to the SDSF Primary Option menu (=SD) and enter the following SDSF primary command to reset your filters:

prefix;dest;owner;st

Next, enter the following commands to filter jobs owned by you, and only display their status:

  • OWNER CC#####
  • ST
SDSF STATUS DISPLAY ALL CLASSES               
COMMAND INPUT ===>                            
NP   JOBNAME  JobID    Owner    Prty Queue    
     TEST008  TSU07324 TEST008    15 EXECUTION
     JOB01    JOB07343 TEST008     1 PRINT    
     JOB02    JOB07344 TEST008     1 PRINT    
     JOB03    JOB07345 TEST008     1 PRINT    
     JOB04    JOB07346 TEST008     1 PRINT    

Notice the characters NP? The NP column to the left of each JOBNAME is used to issue actions against specific jobs. Some of the commands that can be issued are:

  • S : Selects the entire output for browsing.
  • ? : Allows you to select individual JCL DDNAMEs for browsing. Use S against the DDNAME to view individual output.
  • P : Purges (deletes) the job output.
  • SJ : Selects JCL that was used to submit the job.

At this time, enter SJ to the left of JOB01.

While viewing the JCL of JOB01, enter HILITE JCL on the primary command line. This will highlight the JCL reserved words: JOB and EXEC. The JCL JOB statement is primarily used to give the JCL job a name; in this case the jobname is JOB01. The JCL EXEC statement is used to tell z/OS to execute something; here we are executing the program IEFBR14. IEFBR14 is a simple program that starts execution and immediately returns (exits). There are no virtual filenames inside the source code of IEFBR14, therefore no DDNAMEs are required here.

Press F3 to return to the SDSF status display panel and enter S to the left of JOB01. The entire job output is displayed, including system messages.

Press F3 and enter ? to the left of JOB01. Now the JCL job DDNAMEs are displayed. Hold up! We just told you that there were no abstract filename statements inside the IEFBR14 code, and you clearly saw that JOB01 had no DD statements coded. But here are 3 DDNAMEs. What gives?

The reason these DDNAMEs exist is that z/OS dynamically creates these 3 DDNAMEs for every job that gets submitted. They are used to hold the job log summary, JCL interpretation output and various system messages.

Enter S to the left of each DDNAME and view their contents, then press F3 to return to the SDSF status display panel.


Select JOB02's output. Notice that the EXEC statement here includes an optional STEPNAME, "STEP1", coded before the EXEC keyword. A STEPNAME is used to identify blocks of JCL and is used in more advanced JCL techniques.


Select JOB03's output. Notice that JOB03 has multiple EXEC statements. It should be clear now that JCL jobs can execute more than one program, with each program being contained within a step. Multiple EXEC steps are executed in order. STEP1 must finish before STEP2 can begin, and so on.


Select JOB04's output. Notice that JOB04 failed to execute due to a JCL error. At the bottom of the output is the error message IEFC605I indicating that something is wrong on JCL statement number 3.

Press F3 then enter SJ on JOB04. SJ permits you to alter the JCL directly in the output queue. Enter HILITE JCL on the command line to help identify the error.

All JCL must be UPPERCASE, with a few exceptions. Correct the error and enter SUBMIT on the command line to re-submit the JCL for interpretation and execution.

F3 will return you to the SDSF queue.

Observe that another JOB04 entry now exists. Recall that the highest JOB##### job number is the most recently executed.

Useful Tip: Enter P to the left of any unwanted JCL jobs. This will purge them from the output queue, never to be seen again.

Once JOB04 successfully executes, enter ? on the successful job output and select the JESJCL DDNAME. Observe that the program executed in step 3 is SORT. Next, look closely at line 5, where the input to the SORT program is defined:

//SORTIN DD DSN=ZOS.PUBLIC.DATA(HESF),DISP=SHR

Note that line 6 contains the SORTOUT DD statement. This is where SORT will write its output:

//SORTOUT DD DSN=&SYSUID..P2.OUTPUT(#05),DISP=SHR

The line immediately after 6 indicates that z/OS substituted something in the previous statement. This is where &SYSUID gets resolved to CC#####.

//SYSPRINT and //SYSOUT on lines 7 and 8 are coded as DD SYSOUT=*. SYSOUT=* routes the output to the job log in the JES spool.

Finally, on line 9, the //SYSIN DDNAME references the physical resource "*", where * indicates that whatever follows is stored in the input queue and passed to the executing program.
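
Putting the statements described above together, step 3 of JOB04 looks roughly like the following (assembled from the lines quoted in this challenge; the step name and exact layout in the actual member may differ slightly):

//STEP3    EXEC PGM=SORT
//SORTIN   DD DSN=ZOS.PUBLIC.DATA(HESF),DISP=SHR
//SORTOUT  DD DSN=&SYSUID..P2.OUTPUT(#05),DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
  SORT FIELDS=(20,2,CH,A)
/*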


Press F3 to return to the JOB04 DDNAME list and enter INPUT ON on the SDSF primary command line. Observe that more system-generated DDNAMEs are now being displayed.

Select SYSIN STEP3.

 SDSF OUTPUT DISPLAY JOB04    JOB07363  DSID   
 COMMAND INPUT ===>                            
********************************* TOP OF DATA *
  SORT FIELDS=(20,2,CH,A)                      
******************************** BOTTOM OF DATA

What you are viewing is the non-JCL data that followed the //SYSIN DD * in the JCL stream. This data was stored in the queue for the system to access when the SORT program executed, and it describes the fields that the SORT program should evaluate and how they should be sorted. In this case, FIELDS=(20,2,CH,A) tells SORT to order the records by a 2-byte character (CH) field starting in column 20, in ascending (A) order.

The actual data input and output for the JOB04 SORT is a college campus survey of hair color, eye color, and sex combination frequency. The output of SORT was written to P2.OUTPUT(#05) as described in the //SORTOUT DD DSN statement and will be used to evaluate successful completion of this challenge.

Feel free to check your P2.OUTPUT data set for the output, then move on to the next challenge!

Next: Challenge #06

JCL Jobs and SDSF Revisited
Part Two - Challenge #06

Background:

In this challenge, JCL and SDSF are revisited from a different perspective. The different wording and repeated explanation of JCL and SDSF give you, the contestant, a more concrete understanding of the concepts.

Working in ISPF directly is also known as "foreground" processing, and this is a very useful way to perform tasks. But when a program takes a long time to execute, running it in the foreground means you can't do anything else with your interactive session until the program completes. The solution to this is to submit long-running programs to process in the background. Then you can continue to work interactively while your program runs. z/OS will let you know when your long-running program has completed.

In mainframe-speak, a "job" is 1 or more programs that run in the background. In order to cause work to run in the background (that is, to submit a job), you need to instruct z/OS through JCL. When you submit your JCL, it is passed to the Job Entry Subsystem (JES). JES then allocates the necessary resources for your job, then executes the work when the resources are available. z/OS background job processing is commonly referred to as "batch" processing.

Understanding JCL is a very important skill to have, so you'll be seeing more of it throughout the rest of Parts 2 and 3. Plus, IBM customers are always impressed by applicants who know something about JCL, so you can brag on your resume! Woot!

JCL is quite different from any other programming language. Most systems and applications programmers will find a piece of existing JCL code that does something very close to what they want to do, and then make a few little changes to it so that it fits their needs. You're going to do the same thing in this challenge.

As a systems programmer, you will be doing lots of work with JCL. If you make a simple JCL error, such as forgetting a comma or putting a character in column 72, your job will, in most cases, end very quickly, and the system will inform you of a JCL error. Even very experienced systems programmers make plenty of JCL errors. They know the system does a good job of explaining the mistake, so they may not spend much time checking JCL syntax before submitting, because the system will check it for them.

Your challenge:

Submit JCL to execute a SORT program. This JCL will fail with a simple error. To succeed, correct the JCL error.

Note: While most of z/OS is not case sensitive, you must use UPPERCASE letters in JCL (with a few exceptions such as comments and UNIX file system paths).

Edit CC#####.JCL(BADJCL). On the ISPF Editor primary command line, enter SUB (this is short for submit).

This will send your JCL, which you can also call your job, to JES for processing. JES manages all jobs submitted to the z/OS system. JES will put your job on an initiator queue, which helps decide when to submit your job for execution, and allocates the resources needed for the job. When your job runs, JES manages output from your job (and all other jobs that are submitted) in a special set of data sets called the spool.

Press Enter to see the results of your submission. Notice the "JCL ERROR" inside the system-generated message? This is not good news: your job failed to run. Press Enter again to dismiss the message and return to your JCL.

Let's check the output in SDSF and try to identify the error. To get to SDSF from the edit session, enter =SD on the ISPF editor primary command line. This will jump directly to the SDSF Primary Option Menu. At the SDSF Primary Option Menu, enter the following command:

OWNER CC##### ; PREFIX ; ST

The above stacked SDSF commands will display all CC##### output with any jobname prefix.

Enter S to the left of BADJCL.

SDSF is now displaying the entire BADJCL output.

To view all of the output, use the function keys F7 and F8 to scroll up and down, and F10 and F11 to scroll left and right.

The JCL job output assigns line numbers to the JCL statements read. When a JCL error occurs, a specific JCL error message near the bottom of the output will start with the line number most closely associated with the error.

Hint: The messages displayed for the error encountered are fairly cryptic. If you'll recall, the error here has been encountered previously in the contest, and there exists an ISPF Editor line command: UC. Entering UC in the appropriate location will correct the error in one easy step.

Once you have identified the problem in the JCL, jump back to the ISPF Editor (=3.4) and correct the JCL error in CC#####.JCL(BADJCL).

Once corrected, enter SUBMIT and return to SDSF status queue (=SD;ST). Remember that entering P to the left of unwanted jobs will purge those jobs. This will be useful in later challenges.

Important: To get credit on this challenge, only the SORTOUT DDNAME output from a successful execution should be written to P2.OUTPUT(#06).

To copy the SORTOUT output to your P2.OUTPUT data set, enter ? to the left of your successful BADJCL job and enter S next to the SORTOUT DDNAME. If you see sorted records in the output, then things are good! Press F3 to return to the list of DDNAMEs. Enter XDC to the left of the SORTOUT DDNAME. XDC is used to write the selected output to a specified data set name.

On the XDC menu, enter the following:

  • Data set name: P2.OUTPUT
  • Member to use: #06
  • Disposition: (SHR)

Accept all the other defaults, then press Enter.

Return to your P2.OUTPUT data set and you should find a new member #06. Guess what? You just knocked out another challenge!

Note on purging job output: z/OS will automatically purge jobs older than 24 hours, so XDC anything you'd like to keep long-term.

Next: Challenge #07

JCL DDNAME and the Program Filename Relationship
Part Two - Challenge #07

Background:

As you've seen in the previous challenges, JCL instructs the operating system to:

  • find a program to execute
  • allocate the input and output resources needed by the program
  • execute the program using the provided input and output resources

The EXEC statement

Every batch job or started task that gets executed in z/OS has at least one execute (EXEC) statement. Here's an example of perhaps the most minimal JCL possible:

//MYJOB JOB
// EXEC PGM=IEFBR14

Here's the same example JCL with an optional stepname of STEP1:

//MYJOB JOB
//STEP1 EXEC PGM=IEFBR14

It is possible to execute more than one step. Here is an example of a job that has multiple steps, each one executing a different program:

//MYJOB JOB
//STEP1 EXEC PGM=MYPGM
//STEP2 EXEC PGM=MYPGM2
//STEP3 EXEC PGM=MYPGM3

The DD statement

The JCL Data Definition (DD) statement is the key to understanding the purpose of JCL. Inside z/OS programs, the developer chooses virtual names to represent input and output (I/O) sources. In fact, z/OS programs DO NOT "hard-code" physical filenames into the source code. To map virtual I/O to real I/O, the JCL DD statement is used.

For example, consider a hypothetical program called "PAY". Inside the source code for PAY, the programmer uses the virtual name "PAYROLL" to designate a source of input. In the course of execution of PAY, PAYROLL is opened and its contents are read into memory for further processing.

The JCL that is used to execute PAY would need to specify what PAYROLL actually represents. This is done with a DD statement. Here's an example of JCL that might be used to execute the PAY program:

//PAYRUN JOB
//STEP1 EXEC PGM=PAY
//PAYROLL DD ...physical input filename and attributes...

The physical input filename and attributes designate the location of the resource and any access control methods or other information required.

For example, //PAYROLL DD DSNAME=ZOS.PUBLIC.DIV.PAYROLL,DISP=SHR maps PAYROLL to an MVS data set named ZOS.PUBLIC.DIV.PAYROLL. The disposition (DISP) attribute indicates that the physical file name already exists and the program wants shared (SHR) access; meaning other programs can read ZOS.PUBLIC.DIV.PAYROLL while this step is being executed.

DISP=OLD indicates that the resource already exists and the program wants exclusive access to the data set: no other program can read or write to the data set name during program execution.

Most batch jobs contain several DD statements. Imagine that the PAY program includes instructions to read PAYROLL and write to PAYSTUB. Since we're now dealing with two separate virtual resources, PAYROLL and PAYSTUB, the JCL used to execute PAY needs to map resources for both.

Here's an example of what this might look like:

//PAYRUN JOB
//STEP1 EXEC PGM=PAY
//PAYROLL DD DSNAME=ZOS.PUBLIC.CORP.PAYROLL,DISP=OLD
//PAYSTUB DD SYSOUT=*
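
For comparison, here is a sketch of the same job pointed at a different payroll data set, with the pay stubs written to a data set instead of the spool. The data set names below are hypothetical; the point is that only the DD statements change while the PAY program stays the same:

//PAYRUN   JOB
//STEP1    EXEC PGM=PAY
//* Hypothetical data set names, for illustration only
//PAYROLL  DD DSNAME=ZOS.PUBLIC.DIV2.PAYROLL,DISP=SHR
//PAYSTUB  DD DSNAME=CC#####.DIV2.PAYSTUB,DISP=OLD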

Therefore, the same program can execute reading different physical input and writing different physical output without changing the program source code. Only the JCL needs to change.

Take note of the SYSOUT=* in the first example.

The SYSOUT=* DD is a special physical resource known as the JES spool. The JES spool is a special data set that contains JCL queues such as input, output, and execution queues.

Did you know?
SPOOL is an acronym for Simultaneous Peripheral Operations On-Line. Often spools are used as holding areas for temporary data, or as a place to send logging and debugging messages. Whenever you look at the status of a job in SDSF, you are actually looking at the output spool.

Your challenge:

Modify some JCL to write a program's output into a new data set.

Enter the following ISPF primary command:

TSO SUBMIT JCL(PAYROLL)

This submits member PAYROLL from your CC#####.JCL library. Open SDSF with =SD; ST.

Change the SDSF filter settings to show only your personal output by entering PREFIX; OWNER CC#####.

Tab to the left of PAYROLL in the NP column and enter S to select the output for viewing.

Scan through the output and then press F3 to return to the previous screen.

This time, enter ? to the left of PAYROLL, then tab to the left of PAYSTUB in the NP column and enter S to select the PAYSTUB DDNAME output for viewing.

PAYSTUB contains output that was written to the JES spool using the JCL data definition SYSOUT=*; you are viewing that output directly from the JES spool.

Now, you need to modify the JCL to write PAYSTUBs output to a newly allocated data set instead of writing to the JES spool. Edit CC#####.JCL(PAYROLL) and look at the last four lines. Notice how they begin with "//*"? Any line in JCL that begins with these characters is treated as a comment and is not executed.

Delete the line containing //PAYSTUB DD SYSOUT=* and uncomment the last four lines by removing the * in column 3. It's important that there is no space between the // and PAYSTUB.

Notice that the last four lines are actually a single JCL DD statement continued across multiple lines. JCL is pretty particular about how you form these sorts of line continuations, and you can read all about it in the z/OS MVS JCL Reference book.

This change to the JCL will result in the PAYSTUB output being written to a 'NEW' data set named CC#####.PAYSTUB. Because this is a brand new physical data set, z/OS needs additional data set attribute information after the JCL DD reserved word.
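
If you are curious what such a continued DD statement looks like before you open the member, here is a rough sketch of a DD statement that allocates a new data set. The space and record attributes shown are illustrative placeholders; keep whatever is already coded in your PAYROLL member:

//* Illustrative attributes only; your member's values may differ
//PAYSTUB  DD DSN=&SYSUID..PAYSTUB,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(1,1)),
//            RECFM=FB,LRECL=80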

Once the JCL is changed, submit it for execution. Then return to SDSF and display your jobs. You should find a second PAYROLL job with an incremented JobID number. View this new job's output and confirm that it executed successfully. If any error was encountered, the job output will include messages to help identify the syntax error that needs to be corrected.

If PAYROLL ran successfully, meaning you see a Return Code (RC) of 0, then jump over to the Data Set List Utility =3.4 and enter CC#####.PAYSTUB in the Dsname Level field.

Edit E or View V the CC#####.PAYSTUB data set and confirm that names, pay rate, and pay amount are present; then perform the following actions:

Copy the contents of the data set by typing C99 in line 1's command area. Then, on the primary command line, enter REPLACE P2.OUTPUT(#07). This results in all the CC#####.PAYSTUB lines being copied to your P2.OUTPUT(#07) member.

Confirm that P2.OUTPUT(#07) contains the information and you're ready to move on to the next challenge!

Next: Challenge #08

MVS Data Set Names, Stored Data, and Attributes
Part Two - Challenge #08

Background:

z/OS has a massive variety of techniques for storing and accessing data. The long history of technology advancement without deprecation of earlier technology has ensured investment protection and upward compatibility. In other words, every new version of the operating system should support applications and data storage techniques that ran successfully on previous versions of the operating system. Original data storage techniques were never deprecated; newer techniques to store and retrieve data were simply added over the years.

The most critical data in the world is stored in z/OS data sets, and to claim z/OS experience you must know the differences between these fundamental storage types:

  • sequential data set (SEQ)
  • partitioned data set (PDS)
  • partitioned data set extended (PDS/E)
  • virtual storage access method data set (VSAM)

Some fundamental actions that one would perform with these types of data sets are:

  • Viewing different data set types using ISPF.
  • Copying data between different data set types using JCL.
  • Understanding EBCDIC, ASCII, and Packed Decimal data formats.
  • Allocating sequential data sets and partitioned data sets (non-VSAM) using JCL.
  • Defining VSAM data sets using JCL.
  • Writing data into non-VSAM and VSAM data sets using JCL.

Your challenge:

Open your CC#####.JCL and locate the following members. These are required to complete this challenge successfully:

  • DSNAMES
  • SEQ2SEQ
  • SEQ2PDS
  • SEQ2PDSE
  • SEQ2VSAM

Packed Decimal is commonly used for storing numbers used in arithmetic processing. Packed Decimal arithmetic operations improve computer performance.

As you saw in an earlier challenge, EBCDIC and ASCII are encoded differently. Now you can add a third encoding format for decimal values. Here's a side by side comparison of the different formats:

Data    Encoding         Stored Value
abcde   EBCDIC           x'8182838485'
abcde   ASCII            x'6162636465'
25873   EBCDIC           x'F2F5F8F7F3'
25873   ASCII            x'3235383733'
25873   Packed Decimal   x'25873C'

Note that the rightmost hexadecimal digit in packed decimal is always one of three values: "C" indicates positive, "D" indicates negative, and "F" indicates unsigned. Also, it doesn't make sense for there to be a packed decimal representation of "abcde", because packed decimal is used solely for numeric data.
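
As one more worked example in the same format, a negative value such as -4096 would be stored as:

Data    Encoding         Stored Value
-4096   Packed Decimal   x'04096D'

Each decimal digit occupies half a byte, and the final half byte holds the sign (D for negative).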

At this time, open CC#####.JCL(DSNAMES) and read the following description of the programs and data definitions:

  • IEFBR14 is a system utility that is useful for allocating sequential and partitioned data sets.
  • A DD statement exists for each of the three data set types. Take a close look at the DD operands to determine the data set name, type, attributes and space allocation.
  • IDCAMS is a system utility program used to define VSAM data sets.
  • DEFINE is the IDCAMS control statement that specifies the data set name, attributes, and space allocation.

Submit the DSNAMES JCL now:
TSO SUBMIT JCL(DSNAMES)

DSNAMES will create multiple data sets for you. Use SDSF to view the DSNAMES job output and determine the names of the newly created data sets. There is a new data set for each type: SEQ, PDS, PDS/E, and VSAM. You will need the names of these data sets to execute the remaining JCL in this challenge. Here is some information that will help you identify the relevant parts of the DSNAMES output:

  • Sequential
    Sequential data set organization is referred to as Physical Sequential (PS) and allocated in JCL with the DD operand "DSORG=PS".
  • Partitioned
    Partitioned data set organization is referred to as Partitioned Organization (PO) and is allocated using the JCL DD operand "DSORG=PO".
    Partitioned Organization Extended (PO-E) is allocated using the JCL DD operand "DSORG=PO" together with "DSNTYPE=LIBRARY".
Note: Both PO and PO-E are commonly referred to as 'libraries', implying that they contain data of a similar type. However, the JCL DD operand "DSNTYPE=LIBRARY" is used exclusively for allocating PDS/E (PO-E) data sets. A sketch of these DD operands follows this list.
  • VSAM, Virtual Storage Access Method
    VSAM organized data sets have performance benefits over sequential and partitioned data sets. DB2 tablespaces are stored in a VSAM organization where DB2 itself formats and manages the storage. Unix file systems are also stored in VSAM data sets and Unix formats and manages the internal storage.
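
For illustration, DD statements that allocate the three non-VSAM data set types might look roughly like the following sketch. The data set names, space values, and record attributes are placeholders, and depending on the system a UNIT or storage class operand may also be needed; the DSNAMES member uses its own values:

//ALLOC    EXEC PGM=IEFBR14
//* Physical sequential (PS) data set
//NEWSEQ   DD DSN=CC#####.EXAMPLE.SEQ,DISP=(NEW,CATLG,DELETE),
//            DSORG=PS,RECFM=FB,LRECL=80,SPACE=(TRK,(1,1))
//* Partitioned (PO) data set - note the directory blocks in SPACE
//NEWPDS   DD DSN=CC#####.EXAMPLE.PDS,DISP=(NEW,CATLG,DELETE),
//            DSORG=PO,RECFM=FB,LRECL=80,SPACE=(TRK,(1,1,5))
//* Partitioned extended (PO-E) data set - DSNTYPE=LIBRARY
//NEWPDSE  DD DSN=CC#####.EXAMPLE.PDSE,DISP=(NEW,CATLG,DELETE),
//            DSORG=PO,RECFM=FB,LRECL=80,SPACE=(TRK,(1,1,5)),
//            DSNTYPE=LIBRARY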

VSAM is defined using the system utility IDCAMS. IDCAMS can define, delete, and rename VSAM data sets. It can also import and export data using the "REPRO" control statement and print data using the "PRINT" control statement.
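
A minimal IDCAMS step to define a VSAM data set might look roughly like the sketch below. The cluster name, key, and record sizes here are placeholders chosen for illustration; the DSNAMES member defines its own values:

//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(CC#####.EXAMPLE.VSAM) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(80 80) -
         TRACKS(1 1))
/*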

At this time, open CC#####.JCL(SEQ2SEQ) for editing. This JCL copies some "in-stream" data from the SORTIN DD into a sequential data set. Note that on line 4 we've left the DSN= parameter incomplete. You will need to type in the sequential data set name that was created by the DSNAMES job. Once you have done this, submit the job. Review the job execution in SDSF and take note of the sequential data set you named on line 4. Then use =3.4 to open and view the contents of that data set.

Next, edit CC#####.JCL(SEQ2PDS). This job copies the contents of a sequential data set into a partitioned data set. Using the same sequential data set that SEQ2SEQ just copied into, amend line 4. Then submit the job and review the output. Ensure the job has run successfully before continuing.

Edit CC#####.JCL(SEQ2PDSE). This JCL will copy the aforementioned sequential data set to the PDS/E that was created by DSNAMES. Amend line 4 as needed, submit the job, and review its output in SDSF. Continue once the job has run successfully.

Edit CC#####.JCL(SEQ2VSAM). This JCL copies the same sequential data set into the VSAM data set that was created by DSNAMES. Again, amend line 4, and take note: line 5 has an incomplete DSN as well. This needs to be set to the VSAM data set from DSNAMES. Once ready, submit the job and review the output in SDSF. Once satisfied, continue on with this challenge.

The ISPF editor is not able to view or edit VSAM data due to its complex data structure. However, there is another interactive utility named File Manager (FM) that provides this capability. Use the ISPF primary command =F followed by option 2 to open the File Manager Edit Entry Panel.

Enter the fully qualified name for your DSNAMES created VSAM data set in the "Data set/path name" field, enclosed with single quotation marks. Just like the ISPF editor, the FM editor also has the ability to display hexadecimal values. Enter the primary commands HEX ON and HEX OFF to toggle hexadecimal character representation display.

The data inside the VSAM data set should be identical to the data found in the sequential data set.

From inside the FM editor on line 1, type in C99, then enter the primary command REPLACE '/z/cc#####/charset'. Make sure that cc##### is all lowercase. Set the Owner attribute to 6, Group to 0, and Other to 0, then press enter. Observe the message "Data set replaced" in the top-right corner.

Press F3 a few times to return to the ISPF Primary Option Menu, then jump to =3.4. Just like in part one, type in the full path to your USS home directory (/z/cc#####). Recall that your ID in your USS path is all lowercase.
Tab down to the line command area next to the filename "charset" and open it for viewing. Confirm that the contents matches the other data sets in this challenge, then press F3 to return to the z/OS UNIX Directory List.

Again, next to the "charset" filename, enter the line command C to open the Copy From z/OS Unix File dialog. In the To Name field, type in P2.OUTPUT(#08) and press enter.

Review the P2.OUTPUT(#08) member and confirm that it contains the same data as has been passed around repeatedly in this challenge. If all is well, you may move on to the next challenge!

Next: Challenge #09

Disk Storage Management, ISMF
Part Two - Challenge #09

Background:

Physical direct access storage used by z/OS today is provided on hardware such as the IBM DS8880 that can scale from 3 terabytes to more than 3 petabytes of physical storage capacity. A single mainframe, and indeed, even each LPAR on the mainframe can have many physical hardware storage controllers attached, each of which could potentially scale to more than 3 petabytes. Thus, a single mainframe is capable of processing an enormous amount of big data.

IBM provides a suite of Data Facility Storage Management Subsystem (DFSMS) products on z/OS to manage both user-managed storage (NONSMS) and Systems Managed Storage (SMS). Mainframe storage administrators use DFSMS to automate and simplify storage and data management tasks using SMS-managed storage.

SMS managed storage has several benefits. For example, when an application or user wants to allocate new data sets, instead of being required to know ahead of time which volume serial numbers can be used, and finding volumes with enough free space, SMS can do those tasks based on criteria defined by the storage administrator. SMS management also provides tools that help a storage administrator control allocation of storage between hundreds of different applications and users.

A storage administrator uses ISMF to manage DASD, tape and optical storage attached to a z/OS system or collection of z/OS systems called a sysplex. The storage administrator defines SMS constructs which define characteristics associated with or assigned to data sets, objects and volumes. Some of the constructs are:

  1. Data classes - data set properties automatically assigned by DFSMS when a data set is created.
  2. Storage classes - Availability, Accessibility and Performance requirements.
  3. Management classes - Data migration, backup and retention attributes.
  4. Storage Groups - a list of storage volumes with common properties.

The storage class constructs allow you to simply specify a storage class and how much space you need, or the storage administrator can define routines that automatically assign a storage class based on criteria like data set name. The storage administrator also defines associations between storage classes and storage groups.

On TSO/ISPF, both storage administrators and users, such as programmers, can use the Interactive Storage Management Facility (ISMF). ISMF is option (IS) on the contest system. There are two main options provided by ISMF, Storage Administrator Mode and User Mode. The default mode on the contest system is User Mode, and is the mode that will be used for this challenge.

Your challenge:

In this challenge, you will use ISMF as a user of the system. Even though you are not able to directly modify storage groups, you are able to list volumes that are associated with storage groups. First, you will list all the SMS managed volumes on the contest system. Next, you will generate a report of volumes for a storage group, and include details about the amount of free space available.

Then you will list information about your own data sets, allocate a new SMS managed data set on a volume in the storage group set up for the contest, and generate a report with selected information about your new SMS managed data set. Finally, you will store the two reports you created in your P2.OUTPUT library.

Use the F1 (Help) option to navigate the ISMF panels, and to find additional information for this challenge.

STEP 1: Use ISMF option 5 "Storage Class" to generate a list of all the SMS managed volumes on the contest system.

  1. First, submit CC#####.JCL(ISMFALC). This job allocates physical sequential output data sets where you will store reports created by ISMF.
  2. Next, jump to the ISMF panel option 5 (=IS.5), the STORAGE CLASS APPLICATION SELECTION. Enter * next to Storage Class Name.
    This will generate a list of all SMS storage classes on the contest system. Note the storage classes that start with "DB" are used later in the contest by a DB2 relational database.
  3. Press F3 twice to return to the ISMF Primary Option Menu and select option 2, Volume - Perform Functions Against Volumes.
  4. Select Option 1 - DASD, and press enter. On the VOLUME SELECTION ENTRY PANEL, specify the following options:
          
    Select Source to Generate Volume List . . 2  (1 - Saved list, 2 - New list)

    1 Generate from a Saved List          Query Name  To
        List Name . .                     Save or Retrieve

    2 Generate a New List from Criteria Below
        Specify Source of the New List . . 2  (1 - Physical, 2 - SMS)
        Optionally Specify One or More:
          Enter "/" to select option      Generate Exclusive list
          Type of Volume List . . . 1     (1-Online,2-Not Online,3-Either)
          Volume Serial Number . .  *     (fully or partially specified)
          Device Type . . . . . . .       (fully or partially specified)
          Device Number . . . . . .       (fully specified)
            To Device Number . . .        (for range of devices)
          Acquire Physical Data . . Y     (Y or N)
          Acquire Space Data . . .  Y     (Y or N)
          Storage Group Name . . .        (fully or partially specified)
          CDS Name . . . . . . .
  5. Press the enter key. You will notice DEFAULT PRIMING DONE at the top. The following fields are primed because you specified option 2 (SMS) as the source of the new list.
           
    Storage Group Name . . . *       (fully or partially specified)
    CDS Name . . . . . . . 'ACTIVE'
  6. Verify that CDS Name says 'ACTIVE' and Storage Group Name is *, then press the enter key again.

    Scroll right. Look for STORAGE GRP NAME in column 23 to see how the volumes are subdivided into various storage groups.

    Press F3 to go back to the VOLUME SELECTION ENTRY PANEL. Change the Storage Group Name * to the name of the storage group that was associated with the four MTM* volumes and press enter.

    For more information about ISMF Volume and Data set selection lists, see Understanding format and content of the lists.

STEP 2: Generate a report that would help a user or storage administrator find how much capacity is in a storage group, so they can determine whether additional capacity is needed.

  1. At the top of the VOLUME LIST panel (from the end of step one), position your cursor over the List option and press the enter key. A pull-down list will be shown. Type the number associated with Print, and press the enter key.
  2. On the VOLUME PRINT ENTRY panel specify:
    Select Format Type . . . . . 1 (1 - Standard, 2 - Roster)
    Report Data Set Name . . . . ISMF.REPORT1.OUTPUT
    Replace Report Contents . . . Y (Y or N)
  3. Next, specify the numbers for the following tags. Type the tag numbers, separated by spaces, in the order that they appear in the list below:

    You will need to scroll forward and backward to locate the tag numbers requested in the list below. However, be careful: it is easy to accidentally overtype the default values.

    • Free Space
    • Free DSCBs
    • Device Number
    • Percent Free Space
    • Storage Group

    Press enter to get to the PRINT JOB SUBMISSION ENTRY PANEL

  4. On the PRINT JOB SUBMISSION ENTRY PANEL, you should see PANEL PRIMING NOT DONE at the top right side of the panel. Change the first option to 1 Submit Job for Background Processing, then press enter. This will open the ISMF DATA SET PRINT EXECUTE STATEMENT ENTRY PANEL. Press enter again and you'll receive a message indicating that a job was just submitted.

    The job will execute, and its output will be the requested report. The report is written to your ISMF.REPORT1.OUTPUT data set. Press F3 until you are back at the ISPF Primary Option Menu, then jump to =3.4 and view the contents of your ISMF.REPORT1.OUTPUT data set. Confirm that it contains the columns you requested.
Note: You'll find an additional column was added: VOLUME SERIAL. This is expected, and for good reason: the volume serial or "VOLSER" is always relevant information!

STEP 3: Allocate an SMS-managed data set, and generate an ISMF report to list information about it.

A permanent new data set, such as most of your CC##### data sets, can be allocated on a NONSMS volume, without the need to specify a volume serial number if volumes with the "storage" attribute exist. There is no further granularity based on criteria such as data set name.

For system-managed data sets, the device dependent volume serial number (volser) and unit number information is not required, because the volser is assigned within a storage group selected by the Automatic Class Selection (ACS) routines. For more information, see z/OS V2R2 DFSMS Introduction - ACS routines.
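
As a rough illustration of the difference, here are two hedged DD sketches. The data set names, volume serial, and storage class below are hypothetical; the point is only that the SMS-managed allocation names a storage class (or lets the ACS routines assign one) instead of a specific volume and unit:

//* Non-SMS allocation: the user names a specific unit and volume
//OLDSTYLE DD DSN=CC#####.EXAMPLE.NONSMS,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,VOL=SER=VOL001,SPACE=(TRK,(1,1))
//* SMS-managed allocation: no UNIT or VOL; SMS selects the volume
//SMSSTYLE DD DSN=CC#####.EXAMPLE.SMS,DISP=(NEW,CATLG,DELETE),
//            STORCLAS=MTMSC,SPACE=(TRK,(1,1))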

For more information about the Data Class, Management Class, Storage Class and Aggregate Group applications of DFSMS, see Using Data Facility Storage Management Subsystem (DFSMS).

  1. Take a look at your CC#####.JCL member ISMFDS. This job allocates an SMS-managed data set, and uses a Data Class named MTMSEQ.

    Often a Data Class is automatically assigned according to Automatic Class Selection routines set up by the storage administrator. However, as you can see in your ISMFDS job, a Data Class set up by the storage administrator can also be specified in JCL. An SMS-managed data set is not required to have a Data Class.

    Don't submit this job just yet, you'll need to perform some other actions first.

  2. From the ISMF PRIMARY OPTION MENU (=IS), select Option 4 - Data Class.
  3. Keep the defaults listed on the DATA CLASS APPLICATION SELECTION panel and press enter.
  4. From the DATA CLASS LIST panel, type DISPLAY in the LINE OPERATOR column next to MTMSEQ and press enter.
  5. On the first DATA CLASS DISPLAY panel, you will see some of the attributes that would be used during a new data set allocation, if the MTMSEQ Data Class was selected.

                        
    CDS Name . . . : ACTIVE
    Data Class Name : MTMSEQ
    Description : SEQUENTIAL DATA SETS
    Recfm . . . . . . . . . : FB
    Lrecl . . . . . . . . . : 80
    Override Space . . . . . : NO
    Space Avgrec . . . . . . : K
      Avg Value . . . . : 1
      Primary . . . . . : 1
      Secondary . . . . : 1
      Directory . . . . :
    Retpd Or Expdt . . . . . :
    Volume Count . . . . . . : 1
    Add'l Volume Amount . . :

    A Data Class like MTMSEQ may (or may not) describe initial SPACE attributes. If Override Space is set to "YES", then the Data Class will override any SPACE settings specified in JCL. A value of "NO" allows the user to override the Data Class attributes inside JCL.

  6. Open CC#####.JCL(ISMFDS) for editing again.
  7. Update the JCL to specify the storage class name that has the same name as the storage group you specified at the end of STEP 1 on the VOLUME SELECTION ENTRY PANEL. Then submit the JCL.
  8. Look closely at the data set named in JCL DD DD1 definition. Type this data set name into =3.4 and enter an I next to the data set to open the information panel showing the data set attributes.

    You may notice that even though MTMSEQ specifies only 1 KB, the data set occupies 54 kilobytes. In case you are wondering why, it's because the minimum allocation for a data set is 1 track.

  9. From the ISMF PRIMARY OPTION MENU (=IS) enter 1 to open the Data Set application - Perform Functions Against Data Sets.
  10. Specify the following options on the DATA SET SELECTION ENTRY PANEL:
    		  
    1  Generate from a Saved List            Query Name To
         List Name  . .                      Save or Retrieve
    2  Generate a new list from criteria below
         Data Set Name . . . **
         Enter "/" to select option
            Generate Exclusive list
         Specify Source of the new list  . . 2     (1 - VTOC, 2 - Catalog)
           1 Generate list from VTOC
             Volume Serial Number . . .            (fully or partially specified)
             Storage Group Name . . . .            (fully specified)
           2 Generate list from Catalog
             Catalog Name . . .
             Volume Serial Number . . .            (fully or partially specified)
    Acquire Data from Volume . . . . . . .  Y      (Y or N)
    Acquire Data if DFSMShsm Migrated . .   Y      (Y or N)
  11. Press the enter key.
  12. Scroll right and left to see the information listed for your data sets.
    Note: You may need to set your Scroll option again on this panel.

    Notice the differences between your SMS-managed data set and your other data sets, such as VOLUME SERIAL, LAST BACKUP DATE, STORAGE CLASS NAME and DATA SET ENVIRONMENT.

  13. Use F3 to go back to the DATA SET SELECTION ENTRY PANEL, and specify the following:
              
    1  Generate from a Saved List            Query Name To
         List Name  . .                      Save or Retrieve
    2  Generate a new list from criteria below
         Data Set Name . . . **
         Enter "/" to select option
            Generate Exclusive list
         Specify Source of the new list  . . 1     (1 - VTOC, 2 - Catalog)
           1 Generate list from VTOC
             Volume Serial Number . . .            (fully or partially specified)
             Storage Group Name . . . . MTM        (fully specified)
           2 Generate list from Catalog
             Catalog Name . . .
             Volume Serial Number . . .            (fully or partially specified)
    Acquire Data from Volume . . . . . . .  Y      (Y or N)
    Acquire Data if DFSMShsm Migrated . .   Y      (Y or N)

    Press enter and you will be presented with all of your data sets that are part of the MTM storage group.

  14. Generate another report using the List pull-down option 9, Print ....
  15. On the DATA SET PRINT ENTRY panel specify:
    Select Format Type . . . . . 1 (1 - Standard, 2 - Roster)
    Report Data Set Name . . . . ISMF.REPORT2.OUTPUT
    Replace Report Contents . . . Y (Y or N)
  16. Specify Tags to be Printed in this order:
    • DS Organisation
    • Record Format
    • Record Length
    • Volume Serial Number
    • Creation Date
    • Storage Class Name
    • DS Environment
    • Entry Type

Press enter, and just as before, on the PRINT JOB SUBMISSION ENTRY PANEL, change the first option to 1 to submit the job in batch. Then press enter until you get the message about the job being submitted.

Return to ISPF =3.4 and view ISMF.REPORT2.OUTPUT to confirm that it contains the requested information. Just like in the first report, the first column contains extra information here: "DATA SET NAME". This is expected.

STEP 4: Submit JCL(ISMFP2) to copy the two generated reports into your P2.OUTPUT data set.

Note: The sequential report data sets and the SMS-managed data set that were created during this challenge are deleted upon successful execution of ISMFP2.

Take a look at P2.OUTPUT(#09) to see the results, and congratulations! You now know how to use ISMF to perform SMS and non-SMS data management tasks on z/OS! Feel free to move on to the next challenge now.

Next: Challenge #10

Data Set Space and Disk Storage Extents
Part Two - Challenge #10

Background:

It is fundamental to understand the way z/OS manages disk storage of newly allocated data sets. z/OS operations, developers, and systems staff members all use this knowledge in their daily tasks.

When z/OS data sets are stored onto a disk, the initial amount of space required is called a "primary extent". A primary extent is a contiguous area of space on a disk and the size can range from just a few bytes to filling the entire disk.

This is a different approach from how storage space is managed on your PC. Generally, file allocation and storage are unlimited within the constraints of the OS, and normally there are no other limits on the size of a single file. With z/OS data sets, size limitations are set on both the file and the disk device. This can be very beneficial to the stability of the system. For example, imagine a run-away z/OS task that was writing to a disk. This is a controlled situation, as the task would eventually terminate when the specified data set size limit is reached. On your PC, this sort of behaviour might very well consume all available space on your hard drive!

When a z/OS data set is allocated, the total amount of disk space allowed to be used is the size of the primary extent plus the size of optional "secondary extents". One or more secondary extents can be allowed and these secondary extents are automatically allocated when the primary extent is full or a previous secondary extent is full.

Every data set has exactly one primary extent plus zero or more secondary extents; the maximum number of secondary extents allowed depends on the type of data set.

Here's a list of each type of data set and the max number of extents for each type:

Data Set Type Max Extents
Sequential 16 total = 1 primary + 15 secondary
Sequential Extended 123 total = 1 primary + 122 secondary
PDS 16 total = 1 primary + 15 secondary
PDS/E 123 total = 1 primary + 122 secondary
VSAM 255 total = 1 primary + 254 secondary
VSAM (extent constraint removed) n+1 total = 1 primary + n secondary

So, how does one know where an individual data set extent begins and ends?

The answer is that each disk device has a Volume Table of Contents (VTOC). The VTOC is a single extent written to the disk device when first initialized and formatted. Without any further changes to the VTOC, the disk has one large free extent that covers the entire device. As data sets are allocated to the disk, the VTOC is amended to indicate where each extent starts and stops. Conceptually, the VTOC is similar to the File Allocation Table (FAT) in some PC disks.

Your challenge:

Amend JCL that allocates a sequential data set with no secondary extents and copies data into this newly allocated sequential data set.

You will change the JCL to allocate secondary extents, enabling the target data set to expand enough to hold all the records from the source data set.

Run the following ISPF primary command:

TSO SUBMIT JCL(COPYJCL)

Observe that an SD37 abnormal end (abend) occurred. Go ahead and take a moment to perform an internet search on the term "SD37 abend". Don't worry about reading up on it in depth; we simply want you to see that SD37 abends are commonly encountered.

Edit CC#####.JCL(COPYJCL) and look closely at line 6. See the JCL DD SPACE= operand?

The first argument "TRK" indicates that the primary and any secondary extent space will be specified in units of tracks. Depending on the model of the disk being used, a track will contain a certain number of bytes. For our system, 3390-n disk devices allow for 56,664 bytes per track.

The first number after TRK is the primary extent size. The second number is the amount of space to allocate for each secondary extent; a value of zero means no secondary extents can be taken. Since the JCL is specifying a single track for the primary extent and no secondary space, the new data set will only be allowed to consume 56,664 bytes.

As indicated by the SD37 abend, this is clearly not enough space for the source data, so you must change the secondary space allocation from zero to one track so that secondary extents can be taken. Do this now, and rerun the job.
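
As a sketch only (the DD name and data set name below are invented; your COPYJCL member will differ), the change amounts to going from a SPACE operand like this:

    //*  Before: 1 track primary, no secondary space - the data set cannot grow
    //OUTDD    DD DSN=CC#####.EXAMPLE.OUT,DISP=(NEW,CATLG),
    //            SPACE=(TRK,(1,0))

to this:

    //*  After: 1 track primary, secondary extents of 1 track each may be taken
    //OUTDD    DD DSN=CC#####.EXAMPLE.OUT,DISP=(NEW,CATLG),
    //            SPACE=(TRK,(1,1))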

Note: You may receive a MAXCC of 4. This is okay and can safely be ignored. Also, the target data set is deleted in the 3rd step of the JCL. This is to conserve space, so don't be too alarmed if you cannot locate it. If you want to take a look at the source data set, feel free to browse it in =3.4.

If the job run was successful, then P2.OUTPUT(#10) will contain disk storage information. You are not expected to understand this data now. A Part 3 challenge will explain the information in P2.OUTPUT(#10) in more detail. Feel free to move on to the next challenge now.

Next: Challenge #11

DB2 Relational Database and SQL
Part Two - Challenge #11

Background:

Relational Database Management Systems (RDBMS) are used to manage massive amounts of data being simultaneously accessed by thousands of people, browser-initiated tasks, network devices, and other software applications.

Structured Query Language (SQL) is a common and easy to learn language used by programmers to access the data that is stored in any RDBMS. Once you have learned to use SQL, this knowledge can be used to communicate with any other RDBMS with minimal changes to the language syntax.

There are 4 categories of SQL:

  1. DML -- Data Manipulation Language
    • SELECT, UPDATE, INSERT and DELETE (CRUD applications)
  2. DDL -- Data Definition Language
    • CREATE, ALTER and DROP database structures (DB architecture)
  3. DCL -- Data Control Language
    • GRANT and REVOKE privileges (security)
  4. TCL -- Transaction Control Language
    • COMMIT and ROLLBACK work
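
As a quick illustration of the four categories (the MYSCHEMA.DEMO table and the GRANT below are invented purely for this sketch; WC.FAVORITE is a contest table you will meet later in this challenge):

    -- DML: read and change data
    SELECT * FROM WC.FAVORITE;
    -- DDL: define database structures
    CREATE TABLE MYSCHEMA.DEMO (ID INTEGER NOT NULL, NAME CHAR(20));
    -- DCL: control privileges
    GRANT SELECT ON MYSCHEMA.DEMO TO PUBLIC;
    -- TCL: control units of work
    COMMIT;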

DB2 for z/OS SQL can be executed in a variety of ways, including through ISPF panels, JCL, GUI tools, FTP, and programming language APIs such as the one for Scala.

By far, the most popular RDBMS on z/OS is called DB2. Don't ask what happened to DB1... Just kidding! DB2 has been around since 1983. You can read more about it.

Here are some excellent sources to learn the capabilities of SQL:

Your challenge:

Use JCL to insert a row into an existing DB2 Table, then select that row from the table and write it out to a PDS member.

A few needed facts about SQL:

  • A dash dash '--' in columns 1 and 2 indicates that the line is a comment.
    It can be useful to keep multiple SQL statements in a single data set, then comment and uncomment specific statements as needed.
  • A semi-colon ';' is used as the SQL statement terminator in DB2 for z/OS. (A short illustration follows this list.)
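
For example, in a SPUFI input member the following minimal sketch (using the DB2-supplied dummy table SYSIBM.SYSDUMMY1) shows a commented-out statement followed by one that will run:

    -- This entire line is a comment and will not be executed:
    -- SELECT 'commented out' FROM SYSIBM.SYSDUMMY1;
    SELECT 'this statement runs' FROM SYSIBM.SYSDUMMY1;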

Press F3 until you are at the ISPF Primary Option Menu. Look closely for an option titled D2 DB2I Perform DB2 Interactive functions

Use the ISPF primary command =D2 to jump to the interactive DB2 Panels.
The following should be displayed:

DB2I PRIMARY OPTION MENU                              SSID: DBBG
COMMAND ===>

Select one of the following DB2 functions and press ENTER.

 1  SPUFI                 (Process SQL statements)
 2  DCLGEN                (Generate SQL and source language declarations)
 3  PROGRAM PREPARATION   (Prepare a DB2 application program to run)
 4  PRECOMPILE            (Invoke DB2 precompiler)
 5  BIND/REBIND/FREE      (BIND, REBIND, or FREE plans or packages)
 6  RUN                   (RUN an SQL program)
 7  DB2 COMMANDS          (Issue DB2 commands)
 8  UTILITIES             (Invoke DB2 utilities)
 D  DB2I DEFAULTS         (Set global parameters)
 X  EXIT                  (Leave DB2I)

If DBBG is not present in the top-right, then take the following actions:

  1. Enter option D to open DB2I DEFAULTS.
  2. Enter DBBG in the DB2 NAME field.
  3. When DB2I DEFAULTS PANEL 2 is displayed, press enter.
  4. DBBG should now be present in the top-right corner.

Next, from DB2I PRIMARY OPTION MENU enter 1 to open SPUFI (Process SQL statements)

Note: SPUFI is an acronym for SQL Processing Using File Input, and yes, everybody calls it "Spoofy".

Modify the panel fields as follows:

1 DATA SET NAME.... ===> SQL(INSERT)
4 DATA SET NAME.... ===> SPUFI.OUTPUT
5 CHANGE DEFAULTS...===> NO

Then press enter. You may encounter the following dialog, which can be ignored:

DSNE345I  WARNING: DB2 DATA CORRUPTION CAN RESULT       
                   FROM THIS SPUFI SESSION BECAUSE THE  
                   CCSID USED BY THE TERMINAL IS NOT THE
                   SAME AS THE CCSID USED BY SPUFI      
                                                        
                   - TERMINAL CCSID: 37                 
                   - SPUFI CCSID   : 1047               
                   NOTIFY THE DB2 SYSTEM ADMINISTRATOR. 
                                                        
PRESS:  ENTER to continue                               
        END to return                                   		
		

Press enter to proceed, and the SQL(INSERT) data set member will open in the ISPF editor. Currently only line 2 will execute and will return a result set that contains all the rows from the WC.FAVORITE DB2 table.

Press F3 to exit the edit session and return to the SPUFI panel, and then press enter to process the changed SQL. Note that you didn't actually change any of the SQL; the option to do so was simply made available to you prior to execution.

Following execution, the result set is displayed and you can use the standard F7, F8, F10 and F11 keys to browse around. Once satisfied, press F3 to close the output, returning once again to the SPUFI panel. Press enter now, and you will be back in the editor for SQL(INSERT).

Look closely at lines 7, 12, 18, 23, 27 and 29. Each of these needs to change to include your personal favorites. When you are ready to execute a statement, uncomment that particular statement. Be sure a comma exists after each favorite, and you may want to comment out line 2 to reduce the amount of output in your result set.

Your task is to use SPUFI to insert your favorite color, month, time of day (dawn or dusk) and your user ID. Use the statements provided in SQL(INSERT) as a model to insert your own favorites into the table, using F3 and enter to execute.
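
The member itself is the model to follow, but as a generic illustration of DB2 INSERT syntax only (the column names below are invented and will not match WC.FAVORITE exactly), an insert looks like this:

    INSERT INTO WC.FAVORITE
      (FAVE_COLOR, FAVE_MONTH, FAVE_TIME, USER_ID)   -- hypothetical column names
    VALUES
      ('BLUE', 'JUNE', 'DAWN', 'CC#####');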

Once the insert is successful, take a look at line 31 in SQL(INSERT). Execution of this statement returns a result set that includes only your entry. Comment out all the other lines and change cc##### to your user ID before attempting to execute line 31.

The last step in this challenge is to edit CC#####.JCL(SQLJCL), change the hard-coded "cc#####" to your user ID, and then submit the job.

Submitting this JCL will select the record with your ID from the WC.FAVORITE DB2 table and write the result set to P2.OUTPUT(#11).

Browse or view P2.OUTPUT(#11) and verify that it contains your inserted row, then move on to the next challenge!

Next: Challenge #12

SQL Table Join
Part Two - Challenge #12

Background:

You will work with 3 relational tables in this challenge, then join 2 of the tables to produce a desired result set.

Your challenge:

Several DB2 for z/OS tables exist to be used in this exercise:

  • WC.CURRENCY
  • WC.CTYCODE
  • WC.UNIV

The prefix "WC" is the schema or owner of the table and the table names are CURRENCY, CTYCODE, and UNIV. Both the schema and table name are needed when writing SQL statements.

View the rows and columns in the above tables to prepare:

  • list all rows and columns from a table of universities
  • count number of rows in a table of universities
  • list result set based upon a university name pattern
  • list result set based upon a university country code
  • list result set of university country currency name

To accomplish the above, use DB2 Interactive functions panel =D2 and the SPUFI panels. Inside SPUFI, open SQL(SELECT).

Observe all the possible SQL select statements, which can be executed once uncommented.

Only in the event you want SQL to return more than 250 lines (the default) do you need to change line 5 on the SPUFI panel to YES.

            
                          SPUFI                        SSID: DBBG
===>
Enter the input data set name:      (Can be sequential or partitioned)
 1  DATA SET NAME ... ===> SQL(SELECT)
 2  VOLUME SERIAL ... ===>            (Enter if not cataloged)
 3  DATA SET PASSWORD ===>            (Enter if password protected)

Enter the output data set name:     (Must be a sequential data set)
 4  DATA SET NAME ... ===> SPUFI.OUTPUT

Specify processing options:
 5  CHANGE DEFAULTS   ===> YES        (Y/N - Display SPUFI defaults panel?)
 6  EDIT INPUT ...... ===> YES        (Y/N - Enter SQL statements?)
 7  EXECUTE ......... ===> YES        (Y/N - Execute SQL statements?)
 8  AUTOCOMMIT ...... ===> YES        (Y/N - Commit after successful run?)
 9  BROWSE OUTPUT ... ===> YES        (Y/N - Browse output data set?)

Line 3 in the SPUFI DEFAULTS controls the maximum lines returned. In some cases, such as a request for a result set of all 9000+ universities in the world, you may want to increase this value. Additionally, the output data set SPUFI.OUTPUT may need to be manually deleted and changes made to the default SPACE UNIT and PRIMARY SPACE to allocate a larger SPUFI output data set. A recommendation would be SPACE UNIT of CYL and PRIMARY SPACE 3 and SECONDARY SPACE 1.

            
                   CURRENT SPUFI DEFAULTS              SSID: DBBG
===>
 1  SQL TERMINATOR ..  ===> ;         (SQL Statement Terminator)
 2  ISOLATION LEVEL    ===> CS        (RR=Repeatable Read, CS=Cursor Stability,
                                       UR=Uncommitted Read)
 3  MAX SELECT LINES   ===> 10000     (Max lines to be return from SELECT)
 4  ALLOW SQL WARNINGS ===> NO        (Continue fetching after sqlwarning)
 5  CHANGE PLAN NAMES  ===> NO        (Change the plan names used by SPUFI)
 6  SQL FORMAT.......  ===> SQL       (SQL, SQLCOMNT, or SQLPL)

Output data set characteristics:
 7  SPACE UNIT ......  ===> CYL       (TRK or CYL)
 8  PRIMARY SPACE ...  ===> 3         (Primary space allocation 1-999)
 9  SECONDARY SPACE .  ===> 1         (Secondary space allocation 0-999)
10  RECORD LENGTH ...  ===> 4092      (LRECL=Logical record length)
11  BLOCK SIZE ......  ===> 4096      (Size of one block)
12  RECORD FORMAT ...  ===> VB        (RECFM=F, FB, FBA, V, VB, or VBA)
13  DEVICE TYPE .....  ===> SYSDA     (Must be DASD unit name)

Output format characteristics:
14  MAX NUMERIC FIELD  ===> 33        (Maximum width for numeric fields)
15  MAX CHAR FIELD ..  ===> 80        (Maximum width for character fields)
16  COLUMN HEADING ..  ===> NAMES     (NAMES, LABELS, ANY or BOTH)

Use SPUFI to create a result set containing the full country name and university name of all the universities in the Czech Republic. The selection criterion is COUNTRY = 'Czech Republic'.

The result set must be the output of a single select statement joining both the WC.UNIV and the WC.CTYCODE tables. The result set will be 28 rows, one for each university in the Czech Republic.

The first output line of the result set looks like:

 COUNTRY           UNIVERSITY
---------+-----   -------+---------+---------+---------+---------+-       
Czech Republic     Academy of Performing Arts, Film and TV Fakulty            
            
Note to our friends in the Czech Republic: If our table data is missing your school, please send us a nice email containing the SQL insert statement(s) to add the appropriate information to the database. Thanks!
A few hints to help you out:
  • Research DB2 SQL INNER JOIN syntax.
  • COUNTRY_CODE fields exist in both tables.
  • The COUNTRY field in WC.CTYCODE provides the 'country' full name in the result set.

Use CC#####.SQL(UNIV) to contain the single SQL select statement that produces the names of universities in the Czech Republic. Initially, UNIV has a few select statements to view the data from the 2 relevant tables. You will need to comment these out and create a new select statement that joins the two tables.
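
As a generic illustration of the join syntax only (the UNIVERSITY column name is an assumption here; check WC.UNIV for the real column names before you code your statement):

    SELECT C.COUNTRY, U.UNIVERSITY
      FROM WC.UNIV U
      INNER JOIN WC.CTYCODE C
        ON U.COUNTRY_CODE = C.COUNTRY_CODE
      WHERE C.COUNTRY = 'Czech Republic';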

After successful execution, edit the SPUFI.OUTPUT data set and confirm that it has the desired result set. This result set data must be copied into P2.OUTPUT(#12). Use the line command C99 on the first line and the primary command REPLACE P2.OUTPUT(#12) to do this now.

When replacing P2.OUTPUT(#12) the following message may be safely ignored and bypassed by pressing the enter key:

Data set attributes are inconsistent. Truncation may result in the right-most positions of some records if replace is performed.

Successful completion of this challenge results in data in CC#####.P2.OUTPUT(#12).

Next: Challenge #13

Core Business Programming Language
Part Two - Challenge #13

Background:

Imagine that a new programming language needs to be developed to handle the most critical data in the world economy. This language must be easy to learn, easy to maintain, easy to debug, and auditors need to be able to read and understand it. The programming language compiler needs to generate highly optimized code for processing speed. The language needs to be upwardly compatible for many years into the future to protect application programming investments, enhancements, and applied tuning. The programs should run with new releases of the operating system without need to change code or recompile.

Given these requirements, a COmmon Business-Oriented Language would most likely get developed. With the tech industry's propensity for acronyms, we might even give the programming language a short name like... "COBOL".

This is exactly what happened in 1959.

Due to the stability and importance of COBOL applications, they tend to be very long-lived. It is not rare to hear about business-critical applications, written in COBOL 10 to 30 years ago, that are still in production today.

Do not underestimate the significance of including COBOL on a resume. The core business logic of many large enterprises is written in COBOL, and you can count on COBOL on your resume grabbing the attention of many recruiters.

Your challenge:

  • Become familiar with coding a COBOL program.
  • Compile and execute a z/OS COBOL program in batch.
  • Execute COBOL program interactively.
  • Debug and correct COBOL syntax problems.

Here's an example of a very basic COBOL program:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. Simple.
       PROCEDURE DIVISION.
           DISPLAY "COBOL is simple".
           STOP RUN.

At this time, you will compile and execute the above COBOL program by submitting JCL. Execute the following command from any ISPF primary command line:

TSO SUBMIT JCL(SIMPLE)

View the job output in SDSF: =SD ; ST

The first step in CC#####.JCL(SIMPLE) executes IGYWCL. This is a system provided 2-step procedure that runs the COBOL compile, reading the source code from CC#####.SOURCE(SIMPLE), then runs the link-edit, storing the executable load module in CC#####.LOAD(SIMPLE).

The second step in CC#####.JCL(SIMPLE) executes PGM=SIMPLE from CC#####.LOAD(SIMPLE).

It's also possible to execute COBOL programs in the foreground, which can be useful if the program requires user input. Perform the following actions:

  • Enter the ISPF primary command =6 to jump to the ISPF Command Shell panel.
  • Enter the command CALL LOAD(SIMPLE).

One of the first things you need to know about programming in COBOL is how to structure the code and how each piece of the language fits into the overall grammar. Divisions may contain Sections, which may contain Paragraphs, which may contain Sentences, which may contain Statements.

There are four Divisions in every COBOL program:

  1. Identification Division.
  2. Environment Division.
  3. Data Division.
  4. Procedure Division.

The Identification, Environment, and Data Divisions are used to declare the inputs, outputs, record field types and descriptions. The Procedure Division contains all of the execution logic instructions.

COBOL has a fixed format for each line in the source code. Each line of code uses 72 columns, which are subdivided for specific purposes:

Columns Purpose
1 - 6 Sequence Numbering
7 Indicator Area
8 - 11 Area A
12 - 72 Area B

The Indicator Area serves a few purposes, the most common being comments: an asterisk (*) in column 7 indicates that the line is a comment.

Area A is used to store all COBOL divisions, sections, paragraphs and some special entries.

Area B is used to store all COBOL statements.
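
Here is a tiny illustration-only example showing the areas in use (it is not part of the challenge source):

      * An asterisk in column 7 makes this entire line a comment.
      * The division and paragraph headers below begin in Area A (column 8),
      * while the statements begin in Area B (column 12).
       IDENTIFICATION DIVISION.
       PROGRAM-ID. Areas.
       PROCEDURE DIVISION.
           DISPLAY "Area A holds headers, Area B holds statements".
           STOP RUN.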

Let's look at a COBOL program that reads from a file and writes to a file. Remember, abstract program filenames in z/OS can be linked to real resources via JCL DDNAME definitions.
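
Before you open the member, here is a hedged sketch of how that linkage typically looks; the file name and DDNAME below are invented and will not match CBLRDWR exactly:

      * In the COBOL source, an abstract file name is tied to a DDNAME:
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT INPUT-FILE ASSIGN TO INDD.
      * In the JCL, that DDNAME is tied to a real data set, for example:
      *    //INDD   DD DSN=CC#####.SOME.DATA,DISP=SHR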

View the data set CC#####.SOURCE(CBLRDWR). Ensure you have turned on syntax highlighting with the editor command HILITE AUTO. While reviewing the CBLRDWR source code, return to the above information and see if you can locate the various components of syntax and grammar.

Take note of the filenames in the CBLRDWR source.

Jump to the data set list utility panel =3.4 and view (V) CC#####.JCL(CBLRDWR). Read the CBLRDWR JCL, keeping in mind the filenames coded in the COBOL program.

SUBMIT JCL(CBLRDWR), then jump to SDSF =SD ; ST to review the job's output. You may want to enter the SDSF command INPUT OFF to suppress unnecessary system generated DDNAMEs in the output such as JCLIN, $INTTEXT, and EVENTLOG.

Notice that the program failed to compile due to an error identified by message ID IGYPS2122-S in the job output. The text following the message ID explains the problem. You may also notice additional abnormal end (abend) error messages. Don't worry if you cannot understand the gibberish; just look for the messages that are spelled out in English. Once you've located the cause of the error, edit SOURCE(CBLRDWR) and correct it, then submit JCL(CBLRDWR) to compile, link and execute again, and review the output in SDSF. Repeat this cycle as many times as is necessary for the COMPILE step to complete successfully.

Once the program successfully compiles, note that the program execution fails. The execution failure is not due to a COBOL or JCL syntax error, as that would be detected by the system prior to execution.

The execution problem is related to one of the fundamental reasons JCL exists. We explained it previously. Review the output to identify the problem, then edit JCL(CBLRDWR) to correct it. Repeat this cycle as many times as is necessary for the RUN step to complete successfully.

Successful execution will result in the creation of P2.OUTPUT(#13). Review it for accuracy and move on to the next challenge. Great job!

Next: Challenge #14

USS and MVS switcheroo
Part Two - Challenge #14

Background:

As more and more young people are coming to work on the z/OS platform, they bring with them skills and conceptual ideas from other operating systems. One very popular operating system today is called Linux. Perhaps you've heard of it before?

Linux users are very familiar with the byte oriented command line interface (CLI) and the most commonly used shell for these users is something called the Bourne Again Shell, or "bash". Thankfully, we have bash in z/OS USS today, and many of you who have used bash in Linux will feel right at home here.

Not only is bash a nice interface to work in, it's also a programming language! For this reason, you can write a series of bash commands in a file and then execute that file as needed.

Your challenge:

In this challenge, you will explore the Unix System Services (USS) environment that you briefly interacted with in part one and delve deeper into bash programming.

You will need both your TN3270 client to connect to ISPF and your ssh client to connect to USS. Go ahead and connect to and log into both environments now.

At the USS prompt, enter the command you learned in part one to copy the file /z/public/hilow into your ~/bin directory.

hilow is a simple number guessing game that uses the read command to collect input from the user and compare it to a randomly selected number between 0 and 99, giving the user feedback and continuing to prompt for the answer until the guessed number matches the randomly selected number.
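
To give you a feel for the pattern before you look at the real file, here is a generic sketch of such a guessing loop. This is NOT the contest's hilow script, just an illustration of read inside a loop:

    #!/bin/bash
    # Illustration only: pick a semi-random number, then loop until it is guessed.
    number=$(( $$ % 100 ))        # process ID modulo 100 gives a value from 0 to 99
    guesses=0
    guess=-1
    while [ "$guess" != "$number" ]; do
        printf "Guess? "
        read guess
        guesses=$(( guesses + 1 ))
        echo "You guessed $guess"
        if [ "$guess" -lt "$number" ]; then
            echo "... bigger!"
        elif [ "$guess" -gt "$number" ]; then
            echo "... smaller!"
        fi
    done
    echo "Right!! Guessed $number in $guesses guesses."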

Once you have a copy of it, check to make sure that the executable bits are turned on and then try to execute it:

TEST008:/z/test008 > ls -la bin/hilow
-rwxr-xr-x   1 TEST008  IPGROUP      623 Sep  7 10:28 bin/hilow
TEST008:/z/test008 > which hilow
/z/test008/bin/hilow
TEST008:/z/test008 > hilow
/z/test008/bin/hilow: line 7: unexpected EOF while looking for matching `)'
/z/test008/bin/hilow: line 29: syntax error: unexpected end of file

Aha! There exists at least one problem inside the script, which bash diligently reports as a missing close parenthesis on line 7. You must edit the hilow file and make the correction.

If you are familiar with the vi editor, you are welcome to use that directly from the bash prompt to edit. Sorry kiddo, no vim, emacs or nano here!

Teaching you vi is beyond the scope of this contest, so for the rest of you, leave the ssh shell connection open and switch to your TN3270 ISPF session. Open the Data Set List Utility (=3.4) and in the Dsname Level field, enter the path to your bin directory.

Dsname Level . . . /z/CC#####/bin
Note: Your unix home directory is all lowercase!

The ability to browse the USS file system directly inside the ISPF data set listing is a fairly new feature in ISPF, so be sure to tell all your older z/OS friends about it!

Tab down to the command field next to the hilow file, and enter E to edit it. If you see an EDIT Entry dialog, simply press the Enter key again and the file will be opened for editing.

In the primary command field, enter HILITE AUTO then tab down to line 7.

Notice something strange here? The bash notation $(( ... )) means to perform some arithmetic and return the result. Here we use $$ to get the process ID (a semi-random integer value) and perform modulo division by the biggest desired value (100). This generates a semi-random number between 0 and 99.
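
In other words, a line of this shape (the variable name is just an example):

    number=$(( $$ % 100 ))    # process ID modulo 100: a value from 0 to 99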

Fix the error and press F3 to save the file, then return to your bash prompt and attempt to run the hilow program again.

Here's a sample run after correcting line 7:

TEST008:/z/test008 > hilow
Guess? 50
You guessed 101
... smaller!
Guess? 25
You guessed 101
... smaller!
Guess? 1
You guessed 101
... smaller!
Guess? -1
You guessed 101
... smaller!
Guess? uhoh
You guessed 101
... smaller!
Guess? CEE5206S The signal SIGINT was received.

Something is still wrong, but not with the syntax. We have a runtime error! Press CTRL+C to send an interrupt signal and break out of the program.

Return to ISPF and edit the hilow program again. The bash command read is used to store the input from the user into a variable. Take a moment to think critically about what is happening here, and make the required change. Use F3 to exit the edit and save the file, then run the script again at the bash prompt.
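
As a generic reminder of how read behaves (this is not the contest code): the variable named on the read command is the one that receives what the user typed, and that same variable is the one the rest of the script should reference.

    printf "Guess? "
    read guess                   # the typed value lands in $guess ...
    echo "You guessed $guess"    # ... so later lines must use the same variable name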

Here's a successful run:

TEST008:/z/test008 > hilow
Guess? 50
You guessed 50
... bigger!
Guess? 75
You guessed 75
... smaller!
Guess? 62
You guessed 62
... smaller!
Guess? 56
You guessed 56
... bigger!
Guess? 59
You guessed 59
... bigger!
Guess? 60
You guessed 60
Right!! Guessed 60 in 6 guesses.

At this point, you can return to ISPF and run the following command to submit your work on this challenge:

TSO SUBMIT JCL(BASHJCL)

You'll find the output from this challenge is stored in P2.OUTPUT(#14). Well done! You are quickly reaching the end of part two!

Next: Challenge #15

REXX, a Popular Mainframe Scripting Language
Part Two - Challenge #15

Background:

Many highly experienced mainframe technicians use a scripting language that is unfamiliar to others in the non-mainframe world, even though this language can be installed on other platforms. Just as workstations have a variety of shell scripting languages, the mainframe has its own powerful scripting language: REXX.

REstructured eXtended eXecutor (REXX) was designed for ease of learning and reading, even though its full name sounds like quasi rocket science stuff.

If you end up in a job as a z/OS System Programmer or System Administrator in a large enterprise, the chances of quickly running into REXX code are very high. The good news is that while this scripting language is very powerful, it is also easy to learn and use. It is not necessary to memorize all the REXX commands, functions, and capabilities because references and example code are everywhere on the internet.
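
To show how readable REXX is, here is a tiny stand-alone example (illustration only; it is not one of the contest members):

    /* REXX - a tiny example: prompt, read a reply, and respond */
    say 'What is your name?'
    pull name                     /* PULL reads a line and uppercases it */
    if name = '' then name = 'MAINFRAMER'
    say 'Hello,' name'!'
    exit 0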

Here are some websites you can reference when working with REXX:

Having "REXX" on your resume, similar to having "COBOL" or "JCL", can be an attention grabber. An employer that observes "REXX" on a resume will immediately think this is a potential mainframe System Programmer or System Administrator.

Your challenge:

It is time to have some fun!

Enter the ISPF command TSO XTBL, and within the running program, try entering joke, then help, then try all.

Remember: The source code for this program is located in CC#####.REXX.CLIST(XTBL). Reviewing this code as an example of correct REXX syntax along with internet searching can be helpful in completing this challenge.

Now you will evaluate REXX code that has a problem for you to correct. Jump to the data set list utility panel (=3.4) and edit CC#####.REXX.CLIST. Observe that there are a number of members here.

Enter EX to the left of XTBL to execute the REXX program. This is the same program you ran before with the TSO command. Use the end command to exit XTBL.

Enter EX to the left of HOROSCOP to execute the REXX program. This program has several problems that must be corrected.

The second screen immediately shows a problem:

			
COMMAND SA NOT FOUND
    16 *-* sa 'When where your born? '
       +++ RC(-3) +++

However, the routine continued to function.

Enter 1 and observe another problem. This one causes the program to abend.

Execute the HOROSCOP program again. This time enter 13. Yes, there's a third problem.

Execute the HOROSCOP program again. Choose option 4 and observe yet another problem. Go ahead and end the HOROSCOP program now.

Edit the HOROSCOP member and correct all the problems. REXX syntax is easy to understand; take your time and read the source code carefully. Do not forget the reference links we gave you at the start of this challenge, and you can use the XTBL program as a reference as well.

Each time you attempt a correction, use F3 to save the HOROSCOP member and return to the directory listing. Then enter EX next to HOROSCOP to try out your modification.

Once you have corrected all the problems, observe that the horoscopes for selections 1 and 2 are identical. This is another problem to correct. A hint to help you debug your REXX code can be found in the 2 lines of selection 1 output.

Note: In the event you want to restore HOROSCOP to its original state, you can find a copy in 'ZOS.PUBLIC.P2.REXX.CLIST(HOROSCOP)'. Feel free to take a copy!

Once these problems are corrected, enter TSO SUBMIT JCL(HOROSCOP). This JCL will execute your HOROSCOP program in batch and create P2.OUTPUT(#15). Upon successful execution, member #15 will have the Aquarius and Taurus horoscopes contained within.

Helpful Hint #1: Close examination of the REXX routine shows that the horoscope text messages are PDS members in 'ZOS.PUBLIC.DATA'. This is significant to correcting the problem with the TAURUS horoscope. Do NOT edit 'ZOS.PUBLIC.DATA' - only view or browse 'ZOS.PUBLIC.DATA'. The system will not permit any changes to this data set.
Helpful Hint #2: The REXX return statement is significant to execution flow.
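
For Hint #2, this illustration-only fragment (the routine names are made up) shows why RETURN matters: without it, execution falls through into whatever code comes next.

    /* REXX - illustration of RETURN and execution flow */
    call first
    say 'Back in the main code after the CALL'
    exit 0
    first:
      say 'Inside the FIRST routine'
    return   /* without this RETURN, execution would continue  */
             /* straight into the SECOND routine's code below   */
    second:
      say 'Inside the SECOND routine'
    return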

Just one more challenge to go! Stay frosty!

Next: Challenge #16

Report System Log Activity AT IPL Time Using REXX
Part Two - Challenge #16

Background:

z/OS is highly customizable using system parameters that are read during startup. z/OS startup is called an IPL, Initial Program Load. Like a PC boot, a z/OS IPL locates the disk storage that contains the IPL text, much as a PC locates the disk storage that contains the MBR, Master Boot Record. During startup of any operating system, parameters are read from disk files that result in specific behavior of the operating system.

z/OS has more system parameter options than any other operating system due to 50+ years of technology advancements. These system parameters exist as a result of business demand to shape the behavior of z/OS to meet specific business needs. In addition, the vast majority of these system parameters can be dynamically changed.

The z/OS used in this contest has a system parameter structure that is significantly more complex than most large production systems. Why? Because this z/OS is created from a model which is used to create dedicated z/OS environments for software companies that build and maintain z/OS software products used by large production systems around the world.

Therefore, do not be intimidated by the system parameter structure of this contest system, because most other systems have significantly simpler system parameter structures.

z/OS has many components that are actually a collection of executable modules. All software products such as COBOL, DB2, TCPIP, etc are considered components, and each of these are a collection of executable modules.

A key to understanding z/OS is the "message id". z/OS is very good at providing messages about normal and abnormal activity. Every z/OS core component and software product component has a unique 3 character identifier.

The general rule is that every component executable module is prefixed by an assigned unique 3 characters AND messages written by the component are prefixed with the same assigned unique 3 characters. Therefore, an experienced z/OS person can quickly recognize "what" component is writing a message and "why" the message is being written.

Highly experienced z/OS System Programmers and z/OS System Administrators will read the system log (SYSLOG) for messages when an abnormal situation is reported for them to resolve.

System Programmers and System Administrators look up these messages in manuals to get a full description of a message, beyond the summary message text written to the system log (SYSLOG), when they are uncertain about "why" a message was written and "what" action to take. If the component message is reporting an abnormal situation, then the full description of the message in the manual will include a recommended action.

Keep in mind that z/OS environments are responsible for the most critical second to second transaction activity required by the world economy. Therefore, being able to identify, manage, and resolve abnormal processing situations quickly is mandatory.

We will just restate it again: employers are clamoring to find students with exposure to z Systems technology. They want to pair young people with highly experienced technicians who are near the end of their careers. These are highly responsible and highly paid jobs for those of you who can prove yourselves capable of managing the responsibilities of these mission-critical operating environments.

Your challenge:

In this final challenge of part two, you will use REXX to report on system log activity captured at IPL time.

To begin with, you must learn to identify the z/OS Unix System Services (USS) unique 3 character component identifier.

Your first action is to review the list of unique 3 character prefixes assigned to the various z/OS and z/OS product components at the following URL:

z/OS Message Directory

The above URL lists the majority of the z/OS component unique 3 character prefix identifiers. Locate the z/OS Unix System Services unique 3 character message ID prefix.

This z/OS Message Directory table includes a "Document Title" column. The "Document Title" for the z/OS Unix System Services component is a specific z/OS MVS System Messages document, which is 1 of numerous volumes of component messages. Select the associated z/OS MVS System Messages document and scroll forward to the z/OS Unix System Services messages. Select the 'messages' heading. Locate the entire message ID for FILE SYSTEM name WAS SUCCESSFULLY MOUNTED.

Now that you know the entire z/OS Unix System Services message ID associated with FILE SYSTEM name WAS SUCCESSFULLY MOUNTED, you are ready to complete the challenge.

A copy of the z/OS System Log (SYSLOG) is available that includes system messages written during IPL, Initial Program Load.

REXX code is available that reads this SYSLOG and writes a simple report about specific IPL activity.

Jump to the data set list utility panel =3.4 and edit CC#####.REXX.CLIST.

Observe that the CC#####.REXX.CLIST PDS directory has a number of members. Enter EX to the left of IPLMSG to execute this REXX routine.

The IPL report is missing a count for the number of UNIX files successfully mounted on the last line.

Modify the IPLMSG REXX routine to include the number of UNIX files successfully mounted.
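
If you are unsure where to start, here is a generic counting pattern in REXX (illustration only; the DD name, message ID value, and stem name are invented and do not come from IPLMSG):

    /* REXX - generic pattern: count records containing a message ID */
    msgid = 'XXXnnnI'                          /* the ID you looked up        */
    "EXECIO * DISKR SYSLOG (STEM line. FINIS"  /* read every record from the  */
                                               /* already-allocated SYSLOG DD */
    count = 0
    do i = 1 to line.0
      if pos(msgid, line.i) > 0 then count = count + 1
    end
    say 'UNIX files successfully mounted:' count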

In the event you want to look at the SYSLOG data set read by the IPLMSG REXX routine, please DO NOT edit the data set. Your ID is unable to make changes to this data set, and attempting to edit it will cause other REXX executions to hang while waiting for you to exit the edit session. Browse (B) or View (V) the data set instead. In the event that you do edit this data set, your session will be cancelled to enable other REXX routines to run. If this occurs, simply log on again.

Each time you attempt a correction, press F3 to save and return to the list of directory members, where EX to the left of IPLMSG will execute the modified REXX routine.

Format of lines in the z/OS System Log



After adding a count to the last line, submit JCL(IPLMSG). This JCL will execute your IPLMSG REXX routine in batch and create P2.OUTPUT(#16). Member #16 will include the last line with the count.

One final step and you can consider yourself a part two finisher! Go on to the final instructions in the next 'challenge' to score your work.

Next: Challenge #17