
Automated Data Harmonization (ADH) using Artificial Intelligence (AI)

Abstract

Organizations in the Information Services business deal with very large volumes of data collected from a variety of proprietary and public sources, in multiple languages, with differing formats, naming conventions, and context. Mapping such data to enterprise master data for reporting and prediction is an effort-intensive, time-consuming process that is prone to errors, and machines cannot match these sources and map them to master data accurately on their own. Enterprises are therefore eager to automate the human-intensive tasks of data harmonization so that their resources can focus on finding the insights that drive the business. We undertook one such automation initiative for a global Market Research Major (MRM) and achieved a significant level of success by leveraging Artificial Intelligence (AI) techniques. The Automated Data Harmonization (ADH) solution follows a multi-step approach of dictionary matching, fuzzy text similarity, and different machine learning techniques. It has been implemented on the Big Data stack for better performance and scalability, and runtime rules and workflow have been implemented to streamline the overall business process. The Proof of Concept yielded an overall F-score in the range of 82–93%, depending on the variation of the dataset. The deployed version continues to deliver high accuracy and has gained adoption as a core micro-service across the organization. The Business as Usual (BAU) cycle time has been reduced by 80% (from 14 days to 3 days). While the solution is unique and tailored to meet a set of specific business requirements, it can be extended to media metadata standardization across multiple devices, author name and citation resolution in scholarly journals, lead resolution in multi-channel marketing and ad campaigns, etc.
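The first two steps named in the abstract (dictionary matching, then fuzzy text similarity as a fallback) can be sketched as follows. This is a minimal illustration under invented assumptions: the master dictionary, brand names, and 0.8 threshold are hypothetical, and the third stage (machine-learning classification) is only noted in a comment; it is not the MRM implementation.

```python
from difflib import SequenceMatcher

# Hypothetical master-data dictionary (illustrative entries only).
MASTER = {"coca cola": "BRAND_001", "pepsi": "BRAND_002"}

def normalize(name):
    """Lower-case and collapse whitespace before matching."""
    return " ".join(name.lower().split())

def harmonize(raw_name, threshold=0.8):
    """Step 1: exact dictionary match; Step 2: fuzzy-similarity fallback.

    Returns (master_id, method) or (None, "unmatched").
    """
    key = normalize(raw_name)
    if key in MASTER:                          # Step 1: dictionary matching
        return MASTER[key], "dictionary"
    best_id, best_score = None, 0.0
    for entry, master_id in MASTER.items():    # Step 2: fuzzy text similarity
        score = SequenceMatcher(None, key, entry).ratio()
        if score > best_score:
            best_id, best_score = master_id, score
    if best_score >= threshold:
        return best_id, "fuzzy"
    return None, "unmatched"                   # Step 3 (not shown): ML classifier
```

In this sketch, records that fail both the dictionary lookup and the similarity threshold would be handed to the machine-learning stage rather than discarded.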


Figures 1–3 (not shown in this preview)

Abbreviations

ADH:

Automated Data Harmonization

AI:

Artificial Intelligence

API:

Application Programming Interface

HTTP:

Hypertext Transfer Protocol

LCS:

Longest Common Subsequence
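For reference, the Longest Common Subsequence length abbreviated above is a standard building block for fuzzy string comparison. The following is a generic textbook dynamic-programming sketch, not code from the paper:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b,
    computed row by row with O(len(b)) memory."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            # Extend the subsequence on a character match,
            # otherwise carry over the best length so far.
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]
```

A similarity score can then be derived, e.g. by normalizing the LCS length by the longer string's length.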

LSH:

Locality Sensitive Hashing
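LSH for text near-duplicate detection is commonly built on MinHash signatures, whose positional agreement approximates Jaccard similarity between shingle sets. The sketch below is illustrative only (character trigram shingles and seeded MD5 hash functions are assumptions, not the production design):

```python
import hashlib

def shingles(text, k=3):
    """Set of character k-shingles of a string."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=32):
    """MinHash signature: for each seeded hash function, keep the
    minimum hash value over all shingles in the set."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

In a full LSH scheme, signatures would additionally be split into bands and hashed into buckets so that only candidate pairs sharing a bucket are compared.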

ML:

Machine Learning

MRM:

Market Research Major

NLP:

Natural Language Processing

IT:

Information Technology

PoC:

Proof of Concept

RDBMS:

Relational Database Management System

REST:

Representational State Transfer


Author information


Corresponding author

Correspondence to Anjan Dutta.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary

Agile

Software development methodology built on the principle of the ability to create and respond to change. Collaboration and self-organizing teams are key to agile development

Apache Spark or Spark

Low-latency Big Data processing framework for interactive queries and stream processing

Docker

Open source tool designed to create, deploy, and run applications by using containers

Fair Scheduler

MapReduce Scheduler that provides a way to share large clusters

GET API

HTTP method to retrieve resource information
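Since the ADH solution is exposed as a micro-service, a GET call to it might be constructed as below. The endpoint URL and query parameters are invented for illustration; the article does not specify the real API shape.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical service endpoint (not from the article).
BASE_URL = "http://adh.example.com/api/v1/match"

def build_match_request(source_name, market):
    """Build an HTTP GET request asking the harmonization service
    for the master-data match of a raw source name."""
    query = urlencode({"name": source_name, "market": market})
    # urllib.request.Request defaults to the GET method when no body is given.
    return Request(f"{BASE_URL}?{query}")

req = build_match_request("coca cola", "US")
```

The request object could then be sent with `urllib.request.urlopen(req)` and the JSON response parsed by the caller.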

Hadoop

Open Source software framework that allows storage, management and massively parallel processing of Big Data

HBase

Open source non-relational database developed by Apache Software Foundation

HDFS

Distributed file system of Hadoop providing high-throughput access to application data

Hive

Open source data warehouse software built on top of Apache Hadoop

Java

General purpose programming language

NoSQL

Database that provides a mechanism for storage and retrieval of data modeled in non-relational form

Oozie

Workflow scheduling system to manage Hadoop jobs

Python

General purpose programming language

SAS

Statistical Analysis software to access, manage, analyze and report data to aid in decision-making

Scikit-learn

Open source software machine learning library for the Python programming language
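The "different Machine Learning techniques" step of ADH could, for instance, be a scikit-learn classifier deciding match vs. non-match over string-similarity features. Everything below is an illustrative assumption (toy features, hand-made training pairs, logistic regression), not the deployed model:

```python
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def pair_features(a, b):
    """Toy features for a (source name, master name) pair:
    overall string similarity and absolute length difference."""
    return [SequenceMatcher(None, a, b).ratio(), abs(len(a) - len(b))]

# Tiny hand-made training set: 1 = same entity, 0 = different.
pairs = [("coca cola", "coca-cola", 1), ("pepsi", "pepsi co", 1),
         ("coca cola", "pepsi", 0), ("nestle", "unilever", 0)]
X = [pair_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)
```

In practice such a model would be trained on far more pairs and richer features (token overlap, phonetic codes, etc.), but the structure, featurize candidate pairs and classify them, is the same.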

Sqoop

Command-line interface application for transferring data between relational databases and Hadoop, developed by the Apache Software Foundation

YARN

Resource management solution of Apache Hadoop


About this article


Cite this article

Dutta, A., Deb, T. & Pathak, S. Automated Data Harmonization (ADH) using Artificial Intelligence (AI). OPSEARCH 58, 257–275 (2021). https://doi.org/10.1007/s12597-020-00467-4


