End users love Business Intelligence tools. They provide graphs, moving targets, drill-downs, and drill-through. But much of the work in an analytical environment involves getting the data from operational systems into the data warehouse so that business intelligence tools can display those pretty pictures.
The ETL module is a scalable enterprise data integration platform with exceptional extract, transform, and load (ETL) capabilities. The module is based on a Data Transformation Engine that operates at a higher level of abstraction than many visual development tools. Generally, developers can accomplish far more with the engine, in far fewer statements, than they could with 3GLs.
This module consolidates data from different source systems with differing data organizations and formats. It has been specifically designed to move high data volumes and apply complex business rules to bring all the data together in a standardized, homogeneous environment. The key features of this tool are:
The XSQL language allows the definition of complex rules to transform the raw data. Some examples are:
The tool provides connectors for extraction from multiple data sources: plain text files, databases, Excel, XML, and others.
Source data is extracted as a stream and can be loaded on the fly into the destination database with no intermediate data storage. In most cases, advanced loading functions allow all ETL phases to be integrated into a single process, increasing security and reliability.
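The single-process, no-intermediate-storage pattern can be sketched as follows. This is a minimal illustration, not the product's implementation: the CSV source, the `target(code, amount)` table, and the batch size are all assumptions made for the example.

```python
import csv
from itertools import islice

def stream_rows(path):
    """Yield parsed rows one at a time so the source file is never held in memory."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            yield row

def load_streaming(db, path, batch_size=1000):
    """Extract and load in a single pass, flushing small batches to the target.

    `db` is any DB-API connection; the target table here is a hypothetical
    target(code, amount) chosen purely for illustration.
    """
    rows = stream_rows(path)
    while True:
        batch = list(islice(rows, batch_size))
        if not batch:
            break
        db.executemany("INSERT INTO target(code, amount) VALUES (?, ?)", batch)
    db.commit()
```

Because the generator feeds the loader directly, extraction and loading overlap and no staging file is written, which is the essence of the one-step approach described above.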
An intrinsic part of extraction is parsing the extracted data, checking whether it matches an expected pattern or structure. Data that does not may be rejected entirely or in part.
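A parse-and-reject step of this kind might look like the sketch below. The expected structure (a two-field row with an `AB-1234`-style code and a numeric amount) is an invented example, not a format the product prescribes.

```python
import re

# Hypothetical row schema: a code like "AB-1234" followed by a numeric amount.
CODE_PATTERN = re.compile(r"^[A-Z]{2}-\d{4}$")

def validate(rows):
    """Split parsed rows into accepted and rejected lists based on a schema check."""
    accepted, rejected = [], []
    for row in rows:
        if len(row) == 2 and CODE_PATTERN.match(row[0]):
            try:
                # Accept only rows whose second field parses as a number.
                accepted.append((row[0], float(row[1])))
                continue
            except ValueError:
                pass
        rejected.append(row)
    return accepted, rejected
```

Rejected rows are kept rather than dropped, so they can be logged or routed to an error file for later inspection.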
The transform stage applies a series of rules or functions to the extracted source data to derive the final data to be loaded into the target database. The tool provides the capability to:
Load data and anonymize in one step: anonymization and normalization are performed on the fly, during the loading of external data from files. For each line of the source file, the following steps are executed:
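A per-line anonymization step can be sketched as below. The product anonymizes with Whirlpool digests; this sketch substitutes SHA-256, which Python's `hashlib` always provides, and the semicolon delimiter and choice of sensitive column are assumptions for the example.

```python
import hashlib

def anonymize_line(line, sensitive_cols=(0,)):
    """Parse one source line and replace sensitive fields with a digest.

    The real tool uses Whirlpool; SHA-256 stands in here because it is
    guaranteed to be available in hashlib. The ";" delimiter is hypothetical.
    """
    fields = line.rstrip("\n").split(";")
    for i in sensitive_cols:
        fields[i] = hashlib.sha256(fields[i].encode()).hexdigest()
    return fields
```

Because each line is transformed as it is read, the anonymized row can be handed straight to the loader with no cleartext ever written to intermediate storage.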
The load phase stores the data in the final target, usually the data warehouse (DW) or a staging database.
Inserting new records or updating existing ones is done by searching on the primary-key (PK) columns: if the loaded data carries the PK of an existing record, that record is updated; if the PK is new, the record is inserted. To optimize performance, two different update algorithms can be chosen depending on the data type. For fact tables, an insert is tried first and, if the PK already exists, an update is performed. For master tables, an update is tried first and, if no records are updated, an insert is performed.
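The two upsert orderings can be illustrated with the sketch below. The `fact` and `master` tables with a single `pk`/`val` pair are invented for the example; the product's actual SQL is not shown here.

```python
import sqlite3

def upsert_fact(db, row):
    """Fact tables: try the insert first; fall back to an update when the PK exists.

    Efficient when most incoming rows are new, as is typical for fact data.
    """
    try:
        db.execute("INSERT INTO fact(pk, val) VALUES (?, ?)", row)
    except sqlite3.IntegrityError:
        db.execute("UPDATE fact SET val = ? WHERE pk = ?", (row[1], row[0]))

def upsert_master(db, row):
    """Master tables: try the update first; insert only when nothing was updated.

    Efficient when most incoming rows already exist, as is typical for master data.
    """
    cur = db.execute("UPDATE master SET val = ? WHERE pk = ?", (row[1], row[0]))
    if cur.rowcount == 0:
        db.execute("INSERT INTO master(pk, val) VALUES (?, ?)", row)
```

Choosing the order that succeeds on the first statement for the common case avoids one round trip per row, which is where the performance difference between the two algorithms comes from.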
The system can also handle historical data. In this case, all changes to records are kept, allowing any previous report to be reproduced.
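One common way to keep every change, sketched below, is to stamp each version with a validity interval and close the previous version when a new one arrives. This is a generic keep-all-versions scheme for illustration; the document does not specify the product's actual storage layout.

```python
import datetime

def record_change(history, pk, value, now=None):
    """Append a new version of a record and close the currently open one."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    for row in history:
        if row["pk"] == pk and row["valid_to"] is None:
            row["valid_to"] = now  # close the previous version
    history.append({"pk": pk, "value": value, "valid_from": now, "valid_to": None})

def as_of(history, pk, when):
    """Return the value a report would have seen at time `when`."""
    for row in history:
        if (row["pk"] == pk and row["valid_from"] <= when
                and (row["valid_to"] is None or when < row["valid_to"])):
            return row["value"]
    return None
```

Querying with `as_of` at any past timestamp reconstructs the state the data had then, which is exactly what reproducing a previous report requires.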
The target system can be a database connected via JDBC, a file, or even a data stream delivered by HTTP POST, FTP, or TCP.
Integration with Informix IWA: after loading data into a target Informix database, the ETL process can automatically call the process that updates the Warehouse Accelerator data mart.
The tool supports massively parallel processing (MPP) for large data volumes. Even so, simplicity is also performance: the one-step "extract, apply transformation functions, and load" method is the fastest algorithm for ETL procedures.
On our test systems, a file with 1 million records can be extracted, anonymized with Whirlpool digests, normalized, and loaded into a database table in under 8 minutes, without using parallelization.