
Transaction Processing Facility (TPF) is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9.

TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks. The world's largest TPF-based systems are easily capable of processing tens of thousands of transactions per second. TPF is also designed for highly reliable, sustained (24x7) operation. It is not uncommon for TPF customers to have continuous online availability of a decade or more, even across system and software upgrades. This is due in part to its multi-mainframe operating capability and environment.

While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times, for example processing VISA credit card transactions during the peak holiday shopping season.

TPF's passenger reservation application PARS, or its international version IPARS, is used by many airlines.

One of TPF's principal optional components is a high-performance, special-purpose database facility called TPF Database Facility (TPFDF).

A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS.


History

TPF evolved from the Airlines Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP, and as a priced software product. The new name reflects its greater scope and its evolution into non-airline-related entities.

TPF has traditionally been an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language, SabreTalk, was born and died on TPF.

IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools.

The GNU GCC compiler and the Dignus Systems/C++ and Systems/C compilers are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF.




Users

Current users include Sabre (reservations), VISA Inc. (authorizations), American Airlines, American Express (authorizations), DXC Technology SHARES (reservations; formerly EDS, HPES), Holiday Inn (central reservations), Amtrak, Marriott International, Travelport (Galileo, Apollo, Worldspan, Axess Japan GDS), Citibank, Air Canada, Trenitalia (reservations), Delta Air Lines (reservations and operations) and Japan Airlines.



Operating environment

Tightly coupled

TPF is capable of running on a multiprocessor, that is, on systems in which there is more than one CPU. Within the LPAR, the CPUs are referred to as instruction streams, or simply I-streams. When running on an LPAR with more than one I-stream, TPF is said to be running tightly coupled. TPF adheres to SMP concepts; no concept of NUMA-based distinctions between memory addresses exists.

The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued for the I-stream with the least demand, thus maintaining continuous load balancing among the available processors. In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complex, i.e. the physical machine packaged in one system cabinet), SMP takes place within the CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below.

In the TPF architecture, all memory (except for a 4 KB prefix area) is shared among all I-streams. In instances where memory-resident data must or should be kept separate by I-stream, the programmer typically allocates a storage area into a number of subsections equal to the number of I-streams, then accesses the desired I-stream-associated area by taking the base address of the allocated area and adding to it the product of the I-stream's relative number times the size of each subsection.
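The base-plus-offset arithmetic just described can be sketched in C; the names and sizes below are illustrative assumptions for the example, not TPF APIs or actual dimensions.

```c
#include <stddef.h>

/* Illustrative sketch (not a TPF API) of per-I-stream storage: one
 * shared allocation split into equal subsections, indexed by the
 * I-stream's relative number. Sizes are assumed for the example. */
#define NUM_ISTREAMS    4
#define SUBSECTION_SIZE 1024

static char shared_area[NUM_ISTREAMS * SUBSECTION_SIZE];

/* Base address of the subsection belonging to a given I-stream:
 * area base + (relative I-stream number * subsection size). */
char *istream_area(unsigned istream_number)
{
    return shared_area + (size_t)istream_number * SUBSECTION_SIZE;
}
```

Each I-stream then confines its writes to its own subsection, so no locking is needed for this data despite all memory being shared.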

Loosely coupled

TPF is capable of supporting multiple mainframes (of any size themselves, from a single I-stream up to multiple I-streams) connecting to and operating on a common database. Currently, 32 IBM mainframes may share the TPF database; if such a system were in operation, it would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the control program would be equally loaded into core, and each program or record on DASD could potentially be accessed by either mainframe.

In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and communicate to the requesting processors that they are waiting. Within any tightly coupled system, this is easy to manage between I-streams via the use of the Record Hold Table. However, when the lock is obtained offboard of the TPF processor in the DASD control unit, an external process must be used. Historically, record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (Extended). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run, clustered (loosely coupled) z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility.
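As a rough illustration of the hold-table idea, the toy C sketch below serializes holds on record addresses within a single processor. It is a simplification built on assumptions (the names, table size, and linear scan are invented here); real TPF locking, and especially cross-processor MPLF or coupling facility locking, is far more involved.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy record hold table: a hold on a file address succeeds only if
 * no other task currently holds that address. Illustrative only. */
#define TABLE_SIZE 64

static struct { uint64_t addr; bool held; } hold_table[TABLE_SIZE];

/* Try to obtain a hold; false means the record is already held
 * (the caller must wait) or the table is full. */
bool hold_record(uint64_t file_addr)
{
    int free_slot = -1;
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (hold_table[i].held && hold_table[i].addr == file_addr)
            return false;                 /* already held elsewhere */
        if (!hold_table[i].held && free_slot < 0)
            free_slot = i;                /* remember an empty slot */
    }
    if (free_slot < 0)
        return false;                     /* table full */
    hold_table[free_slot].addr = file_addr;
    hold_table[free_slot].held = true;
    return true;
}

/* Release a previously obtained hold. */
void unhold_record(uint64_t file_addr)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (hold_table[i].held && hold_table[i].addr == file_addr)
            hold_table[i].held = false;
}
```

In a loosely coupled complex the equivalent state must live outside any one processor, in the disk control unit or coupling facility, which is precisely what MPLF provides.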

Processor shared records

Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by using a record type and ordinal. So if you had defined a record type of 'FRED' in the TPF system and given it 100 records or ordinals, then in a processor shared scheme, record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD, clearly necessitating the use of a record locking mechanism.

All processor shared records on a TPF system will be accessed via exactly the same file address, which will resolve to exactly the same location.

Processor unique records

A processor unique record is one that is defined such that each processor expected to be in the loosely coupled complex has a record type of 'FRED' and perhaps 100 ordinals. However, if a user on any two or more processors examines the file address that record type 'FRED', ordinal '5', resolves to, a different physical address will be in use on each.
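A minimal C sketch of the distinction, under the assumption (invented purely for this example) that file addresses are laid out linearly from a base: shared records resolve identically on every processor, while processor unique records fold the processor number into the address.

```c
#include <stdint.h>

/* Illustrative sketch (not a TPF API) of resolving a (record type,
 * ordinal) pair to a file address. The linear layout is assumed
 * purely for demonstration. */
typedef struct {
    uint64_t base;    /* starting file address for this record type  */
    uint32_t count;   /* number of ordinals defined for the type     */
    int      shared;  /* 1 = processor shared, 0 = processor unique  */
} record_type;

uint64_t file_address(const record_type *t, uint32_t ordinal,
                      uint32_t processor_number)
{
    if (t->shared)
        return t->base + ordinal;   /* same address on every processor */
    /* processor unique: each processor has its own copy of the type */
    return t->base + (uint64_t)processor_number * t->count + ordinal;
}
```

Only the shared branch ever needs the record locking machinery described above; the unique branch never collides across processors.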



TPF attributes

What TPF is not

TPF is not a general-purpose operating system (GPOS). TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits. TPF has never offered a direct graphical display facility; character messages were intended to be the mode of communication with human users. These facts explain the need to readjust certain expectations common to end users and developers of a GPOS.

TPF does not have built-in graphical user interface (GUI) functionality: implementing one on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven with simple, scrolling text-display terminals.

There are no mice, windows, or icons on a TPF Prime CRAS (Computer room agent set, best thought of as the operator console). All work is accomplished via the command line, similar to UNIX without X. Several products are available that connect to the Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see screen scraping) and convert the message to/from the desired graphical form, depending on its context.

Being a special-purpose operating system, TPF does not host compilers/assemblers, text editors, or implement the concept of a desktop as one might expect to find in a GPOS. TPF application source code is typically stored in external systems and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux.

Using TPF requires a knowledge of its Command Guide, since there is no support for an online command "directory" or "man"/help facility that users might be accustomed to. Commands created and shipped by IBM for the system administration of TPF are called "functional messages", commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands.

TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counter-productive. Debugger packages have been developed by third-party vendors who took very different approaches to the "break/continue" operations required on the TPF host, implementing unique communication protocols for the traffic between the human developer running the debugger client and the server-side debug controller, and differing in the form and function of the debugger program's operations on the client side. Two examples of third-party debugger packages are Step by Step Trace from Bedford Associates, and CMSTPF, TPF/GI, and zTPFGI from TPF Software, Inc. Neither package is fully compatible with the other, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE named IBM TPF Toolkit.

What TPF is

TPF is highly optimized to permit messages from the supported network to either be switched out to another location, routed to an application (a specific set of programs), or to permit extremely efficient accesses to database records.

Data records

Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4 KB. This was due in part to the physical record sizes of the blocks located on DASD. Much overhead was saved by freeing any part of the operating system from breaking large data entities into smaller ones during file operations, and from reassembling them during read operations. Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O, all in the name of speed. Since the early days also placed a premium on the size of storage media, be it memory or disk, TPF applications evolved into doing very powerful things while using very little resource.

Today, much of these limitations have been removed. In fact, it is only because of legacy support that smaller-than-4 KB DASD records are still used. With the advances made in DASD technology, a read/write of a 4 KB record is just as efficient as one of 1055 bytes. The same advances have increased the capacity of each device, so that there is no longer a premium placed on the ability to pack data into the smallest model possible.

Programs and residency

TPF also had its program segments allocated as 381-, 1055- and 4 KB-sized records at different points in its history. Each segment consisted of a single record; a typically comprehensive application might require perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (the direct output of the assembler) was laid out in memory, had its internally relocatable (self-referencing) symbols resolved, and then the entire image was written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not address each other directly, with transfer of control between them implemented as the ENTER/BACK system service.

In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs: only the most frequently used application programs were written into memory and never removed (core residency); the rest were stored on file and read in on demand, with their backing memory buffers released after execution.

The introduction of the C language to TPF at version 3.0 was initially implemented conformant to segment conventions, including the absence of link editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully link-edited load modules were introduced to TPF. These were compiled with the z/OS C/C++ compiler using TPF-unique header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format, and then lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to the TPF segment model, creating an obvious disparity between applications written in assembler and those written in higher-level languages (HLL).

At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language, including assembler, may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object. A value point is that critical legacy applications can benefit from improved efficiency through simple repackaging: calls made between members of a single shared object module now have a much shorter pathlength at run time compared to calling the system's ENTER/BACK service. Members of the same shared object may now share writeable data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1; this coincidentally reinforces TPF's reentrancy requirements.

The concepts of file residency and core residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times.

Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can ease memory pressure and facilitate recursive programming.

All z/TPF execution programs are now packaged as ELF shared objects.

Memory usage

Historically, and in step with the above, core blocks (memory) were also of 381-, 1055- and 4 KB sizes. Since ALL memory blocks had to be of one of these sizes, most of the overhead of obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of the blocks in use and simply hand out the first block on the available list.

Physical memory was divided into sections reserved for each size, so that a 1055-byte block always came from its section and was returned there; the only overhead required was to add its address to the proper physical block table's list. No compaction or garbage collection was required.
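The get/return cycle described above amounts to a per-size free list, which the following minimal C sketch illustrates (the names, block count, and single-size pool are assumptions made for the example, not TPF internals):

```c
#include <stddef.h>

/* Toy free list for one block size (1055 bytes): getting a block
 * hands out the first entry on the available list; releasing a
 * block simply adds its address back. No compaction is needed. */
#define BLOCK_SIZE 1055
#define NUM_BLOCKS 8

static char  section[NUM_BLOCKS][BLOCK_SIZE]; /* the reserved section */
static void *avail[NUM_BLOCKS];               /* available-block list */
static int   top = -1;

void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        avail[++top] = section[i];
}

void *get_block(void)            /* first block on the available list */
{
    return top >= 0 ? avail[top--] : NULL;
}

void release_block(void *block)  /* return the address to the list */
{
    avail[++top] = block;
}
```

Because every block in a section is the same size, allocation and release are both constant-time list operations, which is the overhead saving the text describes.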

As applications came to make more sophisticated demands for memory, and particularly once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames, 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.
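The frame arithmetic this implies is a simple round-up, sketched below in C (the function name is invented for illustration; the 4 KB constant matches the classic frame size named above):

```c
#include <stddef.h>

#define FRAME_SIZE 4096u  /* classic TPF frame; z/TPF uses 1 MB frames */

/* Number of contiguous frames needed to satisfy a request of n bytes. */
size_t frames_needed(size_t n)
{
    return (n + FRAME_SIZE - 1) / FRAME_SIZE;  /* round up */
}
```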





External links

  • z/TPF (IBM)
  • TPF User Groups

Source of the article: Wikipedia
