Data Quality Module (DQM)
The Data Quality Module (DQM) provides two key functional areas: Data Profiling and Data Validation & Correction. Together, they give users a structured, repeatable way to understand data quality issues and enforce business rules before data enters downstream processes.
Data Profiling
The Data Profiler helps users analyse the structure, patterns, and characteristics of their datasets. It is often the first step in defining a detailed Data Validation specification. Profiling results highlight anomalies, inconsistencies, missing values, and other quality issues that need to be addressed.
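As an illustration only (the DQM profiler's actual interface is not shown here), the following standalone Python sketch shows the kind of per-column statistics a profiler typically computes — record counts, missing values, and distinct values — which feed directly into drafting validation rules. All names and sample data are hypothetical.

```python
from collections import Counter

def profile_column(values):
    """Compute basic profile statistics for one column of data."""
    non_null = [v for v in values if v is not None and v != ""]
    return {
        "count": len(values),                       # total records
        "missing": len(values) - len(non_null),     # nulls / empty strings
        "distinct": len(set(non_null)),             # distinct non-null values
        "most_common": Counter(non_null).most_common(3),
    }

# Hypothetical sample dataset
rows = [
    {"id": "1", "country": "DE"},
    {"id": "2", "country": "de"},   # inconsistent casing shows up as 2 distinct values
    {"id": "3", "country": None},   # missing value
]
profile = {col: profile_column([r[col] for r in rows]) for col in ("id", "country")}
```

A result such as two distinct spellings of the same country code, plus one missing value, is exactly the kind of anomaly a profile surfaces before validation rules are written.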
Data Validation & Correction
DQM allows users to define Data Validation rules through a user‑friendly interface. These rules can be tested repeatedly against real datasets before being integrated into the main orchestration workflow. This iterative approach ensures that validation logic is correct, complete, and aligned with business requirements.
Both rule testing and production Data Validation runs generate summary and detailed reports. These reports identify every record that fails validation and provide enough information to pinpoint the exact cause of each issue. The detailed output can also be used in subsequent processing steps to filter out or correct invalid records.
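The rule-and-report pattern described above can be sketched in a few lines of Python. This is not the DQM rule syntax (which is defined through its own interface); it is a minimal hypothetical model in which rules are named predicates, failures carry enough detail to locate the cause, and the valid subset can flow on to later steps.

```python
def validate(records, rules):
    """Apply named rules to each record; return (valid, failures)."""
    valid, failures = [], []
    for i, rec in enumerate(records):
        errors = [name for name, check in rules.items() if not check(rec)]
        if errors:
            # Detailed report entry: row position, record, and which rules failed
            failures.append({"row": i, "record": rec, "failed_rules": errors})
        else:
            valid.append(rec)
    return valid, failures

# Hypothetical business rules
rules = {
    "age_non_negative": lambda r: r["age"] >= 0,
    "email_present": lambda r: bool(r.get("email")),
}
records = [
    {"age": 34, "email": "a@example.com"},
    {"age": -1, "email": ""},
]
valid, failures = validate(records, rules)

# Summary report: failure count per rule
summary = {name: sum(name in f["failed_rules"] for f in failures) for name in rules}
```

Here `valid` plays the role of the filtered dataset passed to downstream processing, while `failures` and `summary` correspond to the detailed and summary reports.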
Transformation Module
The Task Designer (TD) is an interactive, web‑based graphical interface used to design a wide range of data processing tasks. These tasks can include transforming inputs, generating outputs, executing SQL statements, running system commands, and orchestrating complex data workflows.
IO Objects and Services
In AMS, input and output components are called IO Objects, while processing and transformation components are called Services.
Each type of IO Object and Service is visually distinguishable by its colour and icon, making task structures easy to understand at a glance.
IO Objects and Services are connected with directional arrows that represent data flow. These connections define how data moves through the task: from source, through transformations, to final outputs.
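Conceptually, a TD task is a directed graph whose edges follow the data flow, and a valid execution order is a topological ordering of that graph. The sketch below models this with Python's standard-library `graphlib`; the node names are invented, and the real Task Designer handles this internally.

```python
from graphlib import TopologicalSorter

# Hypothetical task: a source IO Object feeds two Services, ending in an output table.
# Each edge points in the direction of data flow (source -> target).
edges = [
    ("orders.csv", "clean"),
    ("clean", "aggregate"),
    ("aggregate", "report_table"),
]

# TopologicalSorter expects each node's predecessors, so invert the edges.
deps = {}
for src, dst in edges:
    deps.setdefault(dst, set()).add(src)

# A valid execution order: every component runs after the components feeding it.
execution_order = list(TopologicalSorter(deps).static_order())
```

For this chain the ordering is unique, so the source IO Object is processed first and the output table last.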
Supported Data Sources
IO Objects can represent datasets of virtually any type or location, including:
• tables from almost all major database vendors
• files in most common formats
• cloud or on‑premises data sources
This flexibility allows TD tasks to integrate seamlessly with diverse environments.
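The IO Object idea — one abstraction over datasets in different formats and locations — can be illustrated with a small Python sketch. The reader below handles a CSV text stream and a SQLite table behind a single interface; the function name, dispatch scheme, and table are hypothetical, not part of AMS.

```python
import csv
import io
import sqlite3

def read_rows(source):
    """Yield rows as dicts from either a CSV text stream or a SQLite table.
    A sketch of the IO Object idea: one interface over different source types."""
    if isinstance(source, io.TextIOBase):            # file-like CSV source
        yield from csv.DictReader(source)
    elif isinstance(source, tuple):                  # (sqlite3.Connection, table_name)
        conn, table = source
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]
        for row in cur:
            yield dict(zip(cols, row))

# File-format source
csv_rows = list(read_rows(io.StringIO("id,name\n1,apple\n")))

# Database-table source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'apple')")
db_rows = list(read_rows((conn, "items")))
```

Downstream Services can then process rows identically regardless of where the data originated.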
Task Reusability and Parameters
TD tasks can accept environment variables, either defined globally or passed in from the calling procedure.
The same TD task can be reused across multiple procedures, supporting modular design and reducing duplication.
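The parameter-resolution behaviour described above — global defaults overridden by values passed in from the calling procedure — can be sketched as follows. The variable names and the precedence rule shown are assumptions for illustration, not the documented AMS semantics.

```python
# Hypothetical globally defined environment variables
GLOBAL_ENV = {"OUTPUT_DIR": "/data/out", "REGION": "EU"}

def run_task(task_name, caller_env=None):
    """Resolve parameters: caller-supplied values override global defaults."""
    env = {**GLOBAL_ENV, **(caller_env or {})}
    return f"{task_name} -> {env['OUTPUT_DIR']}/{env['REGION']}"

# The same task reused by two procedures with different parameters:
a = run_task("load_orders")                    # falls back to global defaults
b = run_task("load_orders", {"REGION": "US"})  # caller overrides REGION
```

Because the task body never hard-codes its environment, one definition serves every calling procedure.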
Metadata and Transparency
All details related to a TD task are stored in AMS Meta Tables, which are fully accessible for viewing. This ensures transparency, traceability, and ease of maintenance.
Interactive Design Experience
Objects within the task pane can be freely moved and rearranged. Their positions are saved automatically, allowing users to organise workflows visually in a way that best suits their understanding and design style.