We have seen in several recent client scenarios a common requirement to stage incoming data into database tables dynamically. In other words, as a data file is read, its contents are loaded into a table in a target database automatically. If the table does not exist, it is created based on the metadata of the inbound file; once the table is created, the data is loaded into it as the final step of the ETL process.
For example, if a client receives a HIPAA or HL7 data file and wants to load the data directly into a database, each segment of the incoming file gets its own table, and the flow loads the data into those tables directly. If a table has not been created before the incoming file arrives, the ETL flow first creates it automatically and then loads the data. If the table already exists, the data is simply merged into it.
One of the key reasons for loading the data into a database is to store all records in a staging database before further business rules are applied and the final output is created. Staging also allows clients to generate reports and dashboards on the incoming data via the custom reporting and dashboard features of the Adeptia Suite. It lets clients flag each data record with a status code such as 'open', 'processed', 'error', or 're-submission' and manage the data load into the target. Setting status flags on the records in the staging table allows re-submission of records into the final target if the initial data runs produce errors: users can correct the records, set their status to 're-submission', and re-enter the corrected records into the target incrementally.
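The staging pattern described above can be sketched in a few lines. This is not Adeptia's implementation, just a minimal illustration (using Python's built-in sqlite3 and csv modules, with an assumed CSV input whose header row supplies the column names) of creating the table from the file's metadata when it is missing and flagging every loaded record as 'open':

```python
import csv
import sqlite3

def stage_file(conn, table, csv_path):
    """Create the staging table from the file's header row if it does not
    already exist, then load the rows with an initial status of 'open'."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # column names come from the inbound file's metadata
        cols = ", ".join(f'"{c}" TEXT' for c in header)
        # "IF NOT EXISTS" mirrors "create the table only when it is missing"
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols}, status TEXT)')
        placeholders = ", ".join("?" for _ in header)
        for row in reader:
            conn.execute(
                f'INSERT INTO "{table}" VALUES ({placeholders}, ?)',
                row + ["open"],  # every record starts in the 'open' state
            )
    conn.commit()
```

A later correction pass would then update the status column to 'processed', 'error', or 're-submission' as records move through the pipeline.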
To learn more about dynamic table creation, you can read more here.
This scenario relates to routing a workflow to another task when it is determined that the previous task was not performed correctly or was left incomplete. Adeptia can route the workflow to a 'correction' flow based on any condition, such as an error threshold (in this case 30% or 40%).
We would need to design the models that are invoked when errors exceed 30% (or whatever threshold is chosen) and define what those pre-defined 'correction' flows would be.
So if the initial workflow task produces errors that exceed 30%, the application needs to identify that error condition and route the workflow to another flow that allows the user to correct their mistakes.
As part of this workflow, a report detailing those errors can be generated for the manager so that they can analyze trends along with the performance of the user working on the tasks.
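The threshold check itself is simple to express. Here is a hedged sketch (the function name, record shape, and flow names are illustrative, not Adeptia's API) of routing a batch of task records to a correction flow once the error rate crosses the configured threshold:

```python
def route_next_step(records, threshold=0.30):
    """Decide the next flow from the fraction of records that errored.

    Above the threshold, work is routed to the pre-defined 'correction'
    flow (where a manager report would also be generated); otherwise the
    workflow proceeds to its normal next step.
    """
    error_rate = sum(1 for r in records if r.get("error")) / len(records)
    if error_rate > threshold:
        return "correction_flow", error_rate
    return "next_step", error_rate
```

The same decision could equally be driven by a rules table, so the threshold and target flow are configuration rather than code.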
Here’s an example:
A Customer Care process is started when a customer calls and a representative takes the call.
Step 1: Take Call
- This first task has several data points that need to be captured. What if the call center rep fails to capture some of them? (There could be other failure scenarios as well.)
Step 2: Validation / Suggest Correction sub-workflow
- Here a set of rules can be applied to determine whether the work completed in the previous step is 100% complete. If not, the task is rerouted back to the call center rep with prompts for the data points that were not captured correctly. This is the 'correction' workflow that helps users fix their mistakes. The flow can also generate reports (including historical reports) that give managers an analysis of user performance and completion trends (similar to KPI dashboards).
Self-correction to figure out "why" something was missed in Step 1 can be based on a rules table that dynamically determines the right process to take. The Adeptia architecture supports dynamic routing and process calls based on what was done in the previous step of the workflow.
Step 3: Proceed to Technician onsite visit scheduling
- When the customer support case is routed to the next review step, to ascertain whether a technician is needed to resolve the complaint, all the information needed to complete this workflow is available.
In the screenshots below is an example that checks for errors in the previously completed task; if errors are found, the user is assigned a new task containing the originally submitted content along with instructions to correct those errors. The instructions can be a link to a video, as shown below.
One of the key questions asked about our B2B functionality concerns the translation of non-X12 (non-EDI) files within the B2B framework. The idea is that a non-X12 file, such as flat text or fixed-length, should be easily configurable as part of B2B inbound or outbound relationships with trading partners.
Let's suppose there is a requirement to create an output file based on Mutual of Omaha's electronic fixed-length Eligibility file specification, containing records for employees and dependents who are eligible, or newly ineligible, for coverage under a group insurance policy. The setup requires taking data from a back-end source system and then, based on mapping rules, creating the required output format for the receiving trading partner.
To implement this scenario, the Adeptia Suite provides an easy-to-use Trading Partner setup interface for processing these types of non-EDI files.
Step 1: Adeptia has a pre-built Mutual of Omaha electronic Eligibility format. This is a fixed-length format configured as a Positional data schema.
Step 2: Create a mapping from source to target. In this case the output is the Eligibility file format. Here's a snapshot of the graphical mapping interface.
Step 3: As part of your Trading Partner setup, define an outbound relationship as shown here. You would also create a trigger to run this translation automatically.
Adeptia has built-in Trigger & Scheduling capability.
With these steps complete, outbound files are generated automatically and sent to your trading partner.
Similar steps are involved in creating an inbound relationship, where EDI files such as HIPAA (X12), or non-X12 files such as CSV, fixed-length, Excel, and XML, are captured and translated into your backend target formats.
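To make the fixed-length (positional) output format concrete, here is a small sketch of rendering a record against a declared layout. The field names and widths below are purely illustrative, not the actual Mutual of Omaha eligibility specification:

```python
# Hypothetical positional layout: (field name, width in characters).
# These fields are illustrative, not the real eligibility spec.
LAYOUT = [("record_type", 2), ("member_id", 9), ("last_name", 20), ("coverage", 1)]

def to_fixed_length(record, layout=LAYOUT):
    """Render one record as a fixed-length line: each value is
    left-justified and padded (or truncated) to its declared width."""
    return "".join(
        str(record.get(name, "")).ljust(width)[:width]
        for name, width in layout
    )
```

A Positional schema in a tool like Adeptia plays the same role as `LAYOUT` here: it declares field positions and widths once, so every outbound record conforms to the trading partner's specification.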
Run-time logs show the status of your transactions. Here's an example of an inbound translation status.
This example shows you how to sync your backend system (such as a database) with Salesforce.
Click here to view a PDF that describes this from an end-user and a developer perspective.
From the End User perspective (Run Time): A web form triggers different process flows. (There is a simple and clean web screen for business users to select a Salesforce operation.) A form allows for the downloading and uploading of data to and from Salesforce, and for report generation.
From the Developer perspective (Design Time): Salesforce calls a Web Service running in Adeptia. (When a change in an account record occurs in Salesforce, it triggers a Web Service in Adeptia and sends the updated data to Adeptia.) Also, a Report (using iReport) process emails a report based on a selection within the web form.
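Once the inbound web-service call has been received and its payload parsed, the backend side of the sync reduces to an upsert. This sketch assumes the Salesforce account update has already been parsed into a dict with `Id` and `Name` keys (field names are illustrative) and uses a local SQLite table as a stand-in for the backend system:

```python
import sqlite3

def apply_account_update(conn, account):
    """Upsert an updated Salesforce account record into a backend table.

    Assumes the inbound web-service payload has already been parsed into
    a dict; 'Id' and 'Name' are illustrative field names.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS accounts (id TEXT PRIMARY KEY, name TEXT)"
    )
    # INSERT OR REPLACE keeps the backend row in step with Salesforce
    conn.execute(
        "INSERT OR REPLACE INTO accounts (id, name) VALUES (?, ?)",
        (account["Id"], account["Name"]),
    )
    conn.commit()
```

In the actual solution, the transport (Salesforce calling the Adeptia Web Service) and the mapping both sit in front of a step like this one.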
This example shows how dynamic Web Forms can be used to auto-fill forms based on data entered in one of the form fields. And once the form is submitted, it triggers a workflow that routes the application to the appropriate individuals or business units within an organization for review and approval.
Click here to view a PDF that shows a run-time example and explains how a solution involving web forms and workflow automation can be easily configured in Adeptia following simple steps.
Click here to download the export of the process flows that are used as part of this solution. Please use Adeptia’s migration utility to import the objects to your local Adeptia development environment.
For further information on Adeptia please go to www.adeptia.com
As I think about the various scenarios developers go through in coming up with solutions to handle multiple inbound/source data types, one typical strategy is to create unique, individual ETL data flows for each kind of inbound data.
For example, take the case of a fund administration service provider that receives data files daily from its clients and processes that data based on various financial rules. A typical approach to designing a solution like this is to create a separate ETL data flow for each type of incoming data file a client sends, and then to replicate that framework for other clients. One client may send data in Excel, another in CSV, and another in some other format. And over time these formats can change.
Now think about this: what if you have hundreds of clients, each sending their stock trade data in their own native format?
In Adeptia, the architecture of the orchestration engine allows dynamic binding of activities at run-time. This lets developers create a single dynamic process template; depending on the type of inbound data, the remaining activities in the data flow are assigned the right activity type at run-time to handle the data correctly.
So, for example, a trigger polls for new files in a particular folder (or folders); depending on the file name pattern or any other variable, the data flow can look up Adeptia's rules repository to find the right match for that condition and retrieve the related activity IDs for the read, transform, and load steps. These activity IDs are assigned to the data flow steps at run-time, enabling dynamic binding within a single data flow.
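The lookup step above can be sketched as a table of pattern-to-activity rules. This is a minimal illustration, not Adeptia's repository schema; the rule fields, activity IDs, and file-name patterns are all assumptions made for the example:

```python
import fnmatch

# Illustrative stand-in for a rules repository: each rule maps a file-name
# pattern to the activity IDs for the read, transform, and load steps.
RULES = [
    {"pattern": "clientA_*.xls*", "read": "schema_xlsA", "transform": "map_A", "load": "db_load_A"},
    {"pattern": "clientB_*.csv",  "read": "schema_csvB", "transform": "map_B", "load": "db_load_B"},
]

def resolve_activities(filename, rules=RULES):
    """Return the activity IDs to bind at run-time for this file,
    or None if no rule matches (e.g. an unrecognized sender)."""
    for rule in rules:
        if fnmatch.fnmatch(filename, rule["pattern"]):
            return {k: rule[k] for k in ("read", "transform", "load")}
    return None
```

Onboarding a new client then means adding a row of rules plus the three referenced activities, with no change to the template flow itself.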
As part of managing the process, once the dynamic flow has been designed and tested, developers can focus on creating the individual read (source schema), transform (mapping), and load (target) activities for the different data files. Promoting this type of solution from development to production is simple: developers can select individual objects and move them from one instance to another, without having to move entire data flows every time.
The rules repository is a backend Adeptia rules database that stores all the configured activities created at design time. This information can be retrieved by a lookup step at run-time in the data flow, based on any condition such as file name, file path, or trigger type.
The dynamic framework is a good fit when there is commonality in how files are processed: with one template flow, you can process any file type that conforms to those common steps.