Application Management System - Wernand Software Development Professionals
Components

PPT is the AMS component responsible for structuring, organising, and controlling applications and their execution flow. It provides a graphical, web‑based interface that allows users to design and build applications without the need for extensive preliminary flowcharts or external diagrams. The design approach is fully top‑down: you begin with the big picture and progressively drill down into individual tasks, which can be simple or highly complex.

PPT enables system architects, designers, technical leads, and developers to define and structure phases, procedures, and procedure dependencies. Complex procedures can be broken down into clear, sequential tasks, making even large systems easy to understand, maintain, and evolve.

At its core, PPT provides a development‑time framework for deconstructing complex applications into manageable units of work that together form a single, recoverable, and well‑organised system.

The Task is the lowest‑level element in this structure — and the only executable unit. A task can be anything with executable permissions, giving AMS the flexibility to integrate virtually any process into the orchestration flow.

 An application defined and organised through PPT is called a Phase.

A single system can contain one or many phases, depending on its complexity.

From an operational perspective, a job may be associated with either a Phase or a Procedure, depending on how complex the workflow is and how procedures depend on one another.

The image on the right illustrates one of the two phases in the system.

 

This phase (or application/job) contains 13 procedures.

Procedure dependencies and data flow are represented by arrows connecting the procedures. Clicking the left end of an arrow displays the list of dependencies for that procedure.
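The dependency arrows described above amount to a directed graph over procedures. As a minimal sketch (procedure names are hypothetical, not taken from the product), the arrows can be modelled as a predecessor map and resolved into a valid execution order with Python's standard-library topological sorter:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map for one phase: each procedure lists the
# procedures it depends on (the "left end" of each arrow).
dependencies = {
    "load_staging":   [],
    "validate_input": ["load_staging"],
    "transform":      ["validate_input"],
    "publish":        ["transform", "validate_input"],
}

# Resolve the arrows into an order that respects every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # load_staging first, publish last
```

The same structure also makes cycles detectable up front: `TopologicalSorter` raises an error if the arrows ever form a loop.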

Through this interface, users can also access additional operational details such as:

• logs

• run‑time multiplication

• last run time

• average run time

• execution history

 

Clicking on a specific procedure opens a window showing all tasks belonging to that procedure.

 In AMS, tasks are the only executable elements.

A task can be anything runnable and capable of returning a success code — a script, a program, a command, or any executable visible to the system.

Within a procedure:

• tasks can be selected or deselected for execution

• tasks run sequentially according to the defined workflow

• each task contributes to the recoverability and traceability of the system
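The sequential, success-code execution model above can be sketched as follows. This is an illustration only, assuming a task is any command the system can run and that an exit code of 0 means success; the task names are hypothetical:

```python
import subprocess
import sys

def run_procedure(tasks):
    """Run the selected tasks in order; stop at the first failure.

    A task is anything the system can execute; success is an exit
    code of 0, which decides whether the flow continues.
    """
    for name, cmd in tasks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return name          # the failed task, for rerun/recovery
    return None                  # every task succeeded

# Hypothetical task list: (task name, command) pairs.
ok = [sys.executable, "-c", "raise SystemExit(0)"]
bad = [sys.executable, "-c", "raise SystemExit(1)"]
failed = run_procedure([("extract", ok), ("load", bad), ("publish", ok)])
print(failed)   # -> load
```

Returning the name of the failed task, rather than just a boolean, is what makes recovery practical: a rerun can resume from that exact point.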

A job can be executed directly from the Phase window, or it can be scheduled using the Scheduler.


The SCHEDULER - provides immediate, one‑off, or repetitive scheduling capabilities based on full day/time conditions and/or event triggers. Multiple timing and event entries can be defined simultaneously, allowing even the most complex scheduling requirements to be expressed with ease. Despite its power, the Scheduler interface remains simple and intuitive.

The Scheduler controls the execution of Phases, including all associated Procedures and Tasks.
The AMS Server continuously reads the Scheduler, checking for timing conditions, event flags, and trigger rules in order to determine which Phase should run and when.


AMS supports two trigger types:

  • Timing Trigger
  • Event Trigger

Both trigger types can be used at the same time, and the number of triggers is virtually unlimited. This flexibility allows users to define sophisticated execution patterns that match real operational needs.
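A minimal sketch of how the two trigger types might combine. The schedule entries, event names, and the (weekday, hour) timing condition are all illustrative assumptions, not the product's actual schema:

```python
from datetime import datetime

def due(entry, now, events):
    """Return True when a schedule entry's triggers are satisfied.

    An entry may carry a Timing Trigger (a day/time condition), an
    Event Trigger (a named flag raised elsewhere), or both at once.
    """
    t = entry.get("at")       # timing trigger: (weekday, hour) or None
    e = entry.get("event")    # event trigger: event name or None
    time_ok = t is None or (now.weekday(), now.hour) == t
    event_ok = e is None or e in events
    return time_ok and event_ok

# Hypothetical schedule: one timed entry, one event-driven entry.
schedule = {
    "nightly_batch": {"at": (0, 2)},          # Mondays at 02:00
    "on_file_drop":  {"event": "FILE_READY"},
}
now = datetime(2024, 1, 1, 2, 0)              # a Monday, 02:00
events = {"FILE_READY"}
runnable = [p for p, e in schedule.items() if due(e, now, events)]
print(runnable)
```

A server loop that re-evaluates `due` for every entry on each pass captures the "continuously reads the Scheduler" behaviour described above.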

 

The MONITOR - provides a wide range of capabilities essential for daily operations in production environments, as well as practical support during development. It enables job and phase scheduling, real‑time monitoring, alerting, and operational control actions such as starting, stopping, and rerunning phases.

Across all applications and systems, the Operational Console collects, integrates, and manages operational metadata. This gives operational teams and business analysts the visibility they need to plan, troubleshoot, and maintain efficient operations.

The Monitor provides a summary of all currently defined jobs, grouped by system and phase. Each job is colour‑coded to indicate its current status:

• Blue – running

• Green – completed

• Red – failed

• Yellow – waiting

• Grey – bypassed

From the Monitor tab, users can drill down into any phase to explore:

• all procedures within the phase

• dependencies between procedures

• detailed job information, including:

  - reason for failure
  - what a job is waiting on
  - completion time or expected completion
  - average runtime
  - full job log
  - individual procedure and task logs

This level of detail makes it easy to understand how a job is progressing, where it may be blocked, and how internal relationships between procedures influence execution flow.

Visualising Dependencies

The monitoring screen also provides a visual representation of the dependencies between procedures within a selected phase. This diagram shows:

• the order in which procedures will run

• which procedures are currently in progress

• which have completed or failed

• how downstream tasks are affected

 

This visual map is invaluable for diagnosing issues, understanding execution flow, and verifying that the orchestration logic behaves as expected.

 

Monitor (Extended Capabilities)
The Monitor goes far beyond simple status reporting. It provides deep operational insight, full scheduling control, and a graphical environment for defining complex execution logic. Together, these capabilities form a complete end‑to‑end operational framework for managing even the most sophisticated applications.

 

Drill‑Down to Procedure and Task Detail

At any point, an operator can drill down from a phase into its procedures, and further into the individual tasks within each procedure. 

 

 


This gives full visibility into:

  • task‑level execution status
  • logs and diagnostic information
  • runtime metrics
  • failure points and dependency blockers

This granular view is essential for understanding how an application is progressing and for resolving issues quickly.

Integrated Scheduling (Day/Time/Event Based)

The Monitor also includes full day/time and event‑based scheduling capabilities. Applications can be scheduled directly through the interface without writing or maintaining traditional scripts.

 

This eliminates the overhead of external schedulers and ensures that scheduling logic is always aligned with the application’s internal structure.

Graphical Control Flow for Complex Dependencies

Large applications often involve intricate dependencies between tasks. AMS provides a fully graphical environment that allows developers to design and organise these relationships visually.

This environment supports the creation of advanced control flows, which define the detailed logic governing execution order. A control flow is built from a collection of connected Procedures (each representing a Task). The connections between procedures express the dependency rules, such as:

• run Procedure B only after Procedure A completes

• run Procedure C if Procedure A fails

• run Procedure D only if Procedure B succeeds
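The three example rules above can be expressed as data and evaluated mechanically. This sketch assumes a simple (procedure, dependency, required outcome) triple per arrow; the rule format is an illustration, not the product's internal representation:

```python
# Hypothetical control flow: each rule names a procedure, the procedure
# it depends on, and the outcome that releases it.
rules = [
    ("B", "A", "success"),   # run B only after A completes successfully
    ("C", "A", "failure"),   # run C if A fails
    ("D", "B", "success"),   # run D only if B succeeds
]

def ready(outcomes):
    """Return the procedures whose dependency rules are satisfied,
    given a map of procedure name -> observed outcome so far."""
    return [p for p, dep, cond in rules if outcomes.get(dep) == cond]

print(ready({"A": "success"}))                   # -> ['B']
print(ready({"A": "failure"}))                   # -> ['C']
print(ready({"A": "success", "B": "success"}))   # -> ['B', 'D']
```

Keeping the rules as data rather than code is what lets a graphical editor create and modify them.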

 

AMS supports a wide range of dependency scenarios, enabling developers to express complex operational logic with clarity and precision.

Phases: Structured, Recoverable Application Units

Using this dependency mechanism, AMS provides a development‑time framework for breaking down a complex application into manageable, recoverable units of work. In AMS terminology, an application defined and organised in this structured way is called a Phase.

A Phase represents a complete, self‑contained orchestration of procedures and tasks. Once defined, it becomes available for:

• scheduling

• monitoring

• operational control through the Monitor tab

 

This creates a consistent and predictable operational model across all environments.

A Sophisticated End‑to‑End Operational Environment

By combining:

• deep monitoring

• drill‑down diagnostics

• integrated scheduling

• graphical dependency modelling

• structured Phases

AMS delivers a comprehensive operational environment that supports everything from development and testing to full production workloads. It ensures that even the most complex applications remain transparent, manageable, and fully recoverable.


DQM - Data Quality Module

AMS provides two distinct areas where the Data Quality Module (DQM) can be applied. The first area focuses on Data Profiling and Data Validation & Correction, giving users a structured and intuitive way to understand data quality issues and enforce business rules before data enters downstream processes.

Data Profiling and Validation

Within this area, DQM allows users to define:

• Data Profiling requirements

• Data Validation rules

All of this is done through a user‑friendly interface designed to make data quality specification accessible and repeatable.

The Data Profiler often serves as the starting point for building a detailed Data Validation specification. Profiling results highlight anomalies, missing values, and inconsistencies, helping users identify what needs to be validated or corrected.

Iterative Testing Against Real Data

DQM supports repeated testing of validation rules against real datasets before they are integrated into the actual processing workflow. This iterative approach ensures that:

• rules behave as expected

• edge cases are identified early

• validation logic is complete and reliable

Both the testing process and the production Data Validation runs generate:

• summary reports

• detailed reports

These reports list all items that failed validation and provide enough information to pinpoint the exact problem record.
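As a minimal sketch of the summary/detailed report pairing, validation rules can be treated as per-field predicates run against a dataset. The field names, rules, and sample records are all hypothetical:

```python
# Hypothetical validation rules: field name -> predicate the value must pass.
rules = {
    "customer_id": lambda v: str(v).isdigit(),
    "email":       lambda v: "@" in str(v),
}

records = [
    {"customer_id": "1001", "email": "a@example.com"},
    {"customer_id": "10x2", "email": "b@example.com"},
    {"customer_id": "1003", "email": "no-at-sign"},
]

# Detailed report: one row per failure, pinpointing the exact problem record.
detailed = [
    {"row": i, "field": f, "value": r[f]}
    for i, r in enumerate(records)
    for f, check in rules.items()
    if not check(r[f])
]
# Summary report: failure counts per rule.
summary = {f: sum(1 for d in detailed if d["field"] == f) for f in rules}
print(summary)   # e.g. {'customer_id': 1, 'email': 1}
```

The detailed rows carry enough context (row, field, offending value) for a downstream step to filter or correct exactly the records that failed.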

Using Detailed Reports for Downstream Processing

The detailed validation report can be used in subsequent processing steps to:

• filter out invalid records

• correct problematic data

• maintain clean, high‑quality datasets

This makes DQM not just a diagnostic tool, but an integral part of the data preparation pipeline.

 

 

 


DQM as an Integrated Processing Step
The second way to apply DQM is directly within a Task Designer (TD) task as an integral step in the data‑processing flow. This approach uses the Lookup service, enabling powerful cross‑reference validation and automated data correction during transformation.

A DQM‑enabled task can perform:

• multiple cross‑reference checks

• validation against lookup tables or reference datasets

• conditional data correction

• branching logic based on validation outcomes

Just like the standalone DQM process, this task produces two exception reports:

• summary report

• detailed report

Both contain sufficient information to identify and isolate problematic records.

Developers can also configure different processing decisions depending on the validation results—for example, continue, reroute, correct, or stop the flow based on the severity or type of data issue.
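A sketch of that decision logic, assuming a single lookup set and a known-alias correction table; the field, values, and action names (continue, correct, reroute) are illustrative:

```python
# Hypothetical lookup (reference dataset) for cross-reference validation.
valid_countries = {"GB", "DE", "FR"}
known_aliases = {"UK": "GB"}

def route(record):
    """Cross-reference one field against the lookup and choose an action:
    continue, correct in place, or reroute to an exception flow."""
    code = record.get("country", "").strip().upper()
    if code in valid_countries:
        return "continue", record
    if code in known_aliases:            # recoverable issue: auto-correct
        return "correct", {**record, "country": known_aliases[code]}
    return "reroute", record             # unknown value: send to exceptions

print(route({"country": "uk"}))      # corrected to GB
print(route({"country": "XX"})[0])   # -> reroute
```

Each branch is a natural place to emit a row into the summary and detailed exception reports the text describes.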

Long‑Term Data Quality Monitoring

All DQM reports—whether generated through standalone profiling/validation or through TD‑based Lookup tasks—are stored in a central repository. This allows organisations to:

• track data quality trends over time

• analyse recurring issues

• support audit and compliance requirements

• build long‑term data quality dashboards and reporting

This historical perspective is a key strength of AMS, turning operational validation into a strategic data‑quality asset.

Metadata‑Driven Rules

All DQM specifications, including validation logic and business rules, are stored in meta tables. This ensures:

• transparency

• versionability

• reusability across tasks and phases

• consistent behaviour across environments

Because rules are metadata‑driven, changes can be made without modifying code or redeploying applications.
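The metadata-driven idea can be illustrated with an in-memory table: the rule set lives in data, so changing a rule is an update statement rather than a redeployment. The table name and columns here are invented for the sketch and do not reflect the actual AMS meta-table schema:

```python
import sqlite3

# Hypothetical meta table holding validation rules as data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dqm_rules (field TEXT, rule TEXT, param TEXT)")
db.executemany(
    "INSERT INTO dqm_rules VALUES (?, ?, ?)",
    [("amount", "max_length", "10"), ("currency", "in_list", "GBP,EUR,USD")],
)

def load_rules(conn):
    """Read the current rule set from the meta table at run time,
    so every execution picks up rule changes without a code change."""
    return [tuple(row) for row in conn.execute("SELECT * FROM dqm_rules")]

print(load_rules(db))
```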


TD - The Task Designer (TD) is an interactive, web‑based graphical interface used to design a wide range of data‑processing tasks. These tasks can include transforming inputs, generating outputs, executing SQL statements, running system commands, and orchestrating complex multi‑step operations.

 

IO Objects and Services

In AMS, all components within a TD task fall into two categories:

• IO Objects – input and output elements

• Services – processing or transformation components

Each type of IO Object and Service is visually distinguished by its colour and icon, making task structures easy to understand at a glance.

IO Objects and Services are connected with directional arrows that represent data flow, defining how data moves from source to transformation to output.
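The source-to-transformation-to-output flow can be sketched as a chain of callables, with IO Objects at the ends and a Service in the middle; the ordering of the calls plays the role of the directional arrows. The data and function names are hypothetical:

```python
def read_source():                      # IO Object: input
    """Stand-in for a table or file reader."""
    return [{"qty": 2, "price": 5.0}, {"qty": 3, "price": 4.0}]

def add_total(rows):                    # Service: transformation
    """Derive a new column from existing ones."""
    return [{**r, "total": r["qty"] * r["price"]} for r in rows]

sink = []
def write_output(rows):                 # IO Object: output
    """Stand-in for a table or file writer."""
    sink.extend(rows)

# The arrows: source -> service -> output.
write_output(add_total(read_source()))
print(sink[0]["total"])   # -> 10.0
```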

 

Flexible Data Connectivity

IO Objects can represent datasets of virtually any type or location, including:

• tables from almost all major database vendors

• files in most common formats

• local or remote data sources

This flexibility allows TD tasks to integrate seamlessly with diverse environments.

 

Parameters and Reusability

TD tasks can accept environment variables, either:

• defined globally, or

• passed in from the calling procedure

This makes tasks highly configurable and reusable. The same TD task can be invoked from multiple procedures without modification.
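A sketch of that reusability: the task reads its behaviour from variables, so two procedures can invoke the same task with different settings. `TARGET_TABLE` and `BATCH_SIZE` are invented parameter names for illustration:

```python
import os

def run_task(env=os.environ):
    """A reusable task: behaviour comes from parameters, not code edits.

    The variables may be defined globally or passed in by the
    calling procedure; defaults apply when neither supplies them.
    """
    table = env.get("TARGET_TABLE", "staging")
    batch = int(env.get("BATCH_SIZE", "100"))
    return f"loading {table} in batches of {batch}"

# Two procedures invoking the same task with different parameters:
print(run_task({"TARGET_TABLE": "sales"}))   # -> loading sales in batches of 100
print(run_task({"BATCH_SIZE": "500"}))       # -> loading staging in batches of 500
```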


Metadata and Transparency

All details related to a TD task are stored in AMS Meta Tables, which are fully accessible for viewing. This ensures:

• transparency

• traceability

• ease of maintenance

• consistent behaviour across environments

 

Interactive Design Experience

 Objects within the task pane can be freely moved and rearranged. Their positions are saved automatically, allowing users to organise workflows visually in a way that best suits their design style and understanding.

 

IO types:

  • Tables - Local, Remote, Most Vendors
  • Files
    - All common ASCII types
    - XML
    - XLSX
  • API - JSON
  • SharePoint

Service types:

  • M2O - Lookup, Funnel, Aggregate
  • O2M - Splitter, XL Splitter
  • Other - Non‑Select SQL, Select SQL, Time Variant

 


MEAS - Edit Advanced Setup is a module developed as an extension of the PHP My Edit class, providing a powerful and flexible interface to AMS and its application metadata. It offers a rich mechanism for building dynamic, interactive database‑driven forms and functions, enabling users to access and manipulate metadata directly through a highly parameterised, web‑based interface.

Purpose and Capabilities

Edit Advanced Setup is used to:

• create sophisticated forms for interacting with AMS metadata

• define functions and behaviours that support dynamic DB access

• provide a configurable interface for viewing and maintaining metadata

• support rapid development of metadata‑driven screens without coding

Because it is built on top of PHP My Edit, it inherits a strong foundation for CRUD operations while extending it with AMS‑specific logic, security, and flexibility.

Access Control and User Management

This module is also a key mechanism for controlling access to AMS and application metadata. It allows administrators to define:

• which user groups can view or modify specific metadata

• which forms or functions are available to each user

• fine‑grained permissions aligned with operational roles

This ensures that metadata is exposed only to authorised users, maintaining system integrity and compliance.

Part of the AMS Distribution

Edit Advanced Setup is included as part of the AMS distribution and is used internally to build the interface to the AMS metadata layer. This ensures:

• consistency across all metadata‑driven screens

• a unified user experience

• simplified maintenance and extensibility

Form Configuration Interface

On the right is an example of the interface used to specify the properties of a form for a given table. This interface allows developers and administrators to configure:

• field visibility

• validation rules

• display formats

• lookup behaviour

• access permissions

• layout and presentation options

All of these settings are stored in metadata, making the forms fully dynamic and environment‑independent.

Additional Features of AMS

 

AMS includes a number of built‑in features that enhance usability, governance, and the overall metadata‑driven architecture of the platform.

 

Automatic Audit Information

Any record created or modified through MEAS is automatically stamped with:

• the timestamp of the change

• the user ID of the person who initiated it

This ensures full auditability across all metadata operations without requiring manual intervention.
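The automatic stamping can be sketched as a wrapper applied to every create/modify operation; the audit field names here are illustrative, not MEAS's actual column names:

```python
from datetime import datetime, timezone

def stamp(record, user):
    """Attach audit fields automatically on every create or modify,
    so no caller can forget them."""
    return {
        **record,
        "modified_by": user,
        "modified_at": datetime.now(timezone.utc).isoformat(),
    }

row = stamp({"rule": "max_length"}, user="jsmith")
print(sorted(row))   # audit fields sit alongside the original data
```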

 

 Integrated Help and Documentation Links

Every metadata object in AMS can be assigned a help button (the “?” symbol). This button can be linked to:

• the corresponding chapter in the AMS Manual, or

• a custom/application manual created by the user

This makes documentation instantly accessible and always up to date, supporting real‑time learning and reducing onboarding time.

Users can build their own project or system documentation and link it directly to objects in the metadata layer, creating a seamless connection between implementation and reference material.

 

Dynamic Field Choices

When populating certain fields, AMS can automatically provide a list of valid choices. These can come from:

• another table (lookup method), or

• a predefined list of values

This ensures consistency, prevents invalid entries, and simplifies data entry across the system.
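A minimal sketch of the two choice sources, preferring the lookup table when one exists for the field; the field names and value lists are invented for the example:

```python
# Hypothetical choice sources: a lookup table and a predefined list.
lookup_tables = {"country": ["GB", "DE", "FR"]}
static_lists  = {"status":  ["active", "inactive"]}

def choices(field):
    """Return the permitted values for a field: lookup first,
    then the predefined list; empty means the field is unconstrained."""
    return lookup_tables.get(field) or static_lists.get(field) or []

print(choices("country"))     # from the lookup table
print(choices("status"))      # from the predefined list
print(choices("free_text"))   # -> [] (no constraint: any value allowed)
```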

 

Element‑Level Attributes

Each element within a metadata object can have a set of attributes that define:

• what actions are allowed

• how the element behaves

• what constraints or rules apply

This fine‑grained control supports flexible configuration while maintaining strong governance and predictable behaviour.