
Automated Data Harmonization (ADH) using Artificial Intelligence (AI)


Organizations in the information services business deal with very large volumes of data collected from a variety of proprietary and public sources, in multiple languages and with differing formats, naming conventions, and context. Mapping such data into enterprise master data for reporting and prediction is an effort-intensive, time-consuming process that is prone to errors, and simple machine matching cannot map these sources to master data accurately. Enterprises are therefore eager to automate the human-intensive tasks of data harmonization so that their resources can focus on finding the insights that drive the business. We undertook one such automation initiative for a global Market Research Major (MRM) and achieved a significant level of success by leveraging Artificial Intelligence (AI) techniques. The Automated Data Harmonization (ADH) solution takes a multi-step approach combining dictionary matching, fuzzy text similarity, and several machine learning techniques, and has been implemented on a Big Data stack for better performance and scalability. To streamline the overall business process, runtime rules and a workflow have been implemented. The proof of concept yielded an overall F-score in the range of 82–93%, depending on the variation of the dataset. The deployed version continues to deliver high accuracy and has gained adoption as a core micro-service across the organization. The business-as-usual (BAU) cycle time has been reduced by 80% (from 14 days to 3 days). While the solution is unique and tailored to a specific set of business requirements, it can be extended to media metadata standardization across multiple devices, author name and citation resolution in scholarly journals, and lead resolution in multi-channel marketing and ad campaigns.
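The multi-step approach described above (dictionary matching first, fuzzy text similarity as a fallback, with unresolved records routed onward to machine learning or human review) can be sketched as follows. The master dictionary, brand names, and threshold below are illustrative assumptions, not the MRM's actual data or the paper's exact implementation:

```python
from difflib import SequenceMatcher

# Hypothetical master-data dictionary: raw source names -> harmonized names.
MASTER = {
    "coca-cola": "The Coca-Cola Company",
    "coca cola co.": "The Coca-Cola Company",
    "pepsico inc": "PepsiCo, Inc.",
}

def harmonize(raw_name, threshold=0.8):
    """Step 1: exact dictionary lookup. Step 2: fuzzy text similarity.
    Returns None when no match clears the threshold (i.e. the record
    would be routed to the ML models or to human review)."""
    key = raw_name.strip().lower()
    if key in MASTER:                       # step 1: dictionary match
        return MASTER[key]
    best_key, best_score = None, 0.0        # step 2: fuzzy similarity
    for candidate in MASTER:
        score = SequenceMatcher(None, key, candidate).ratio()
        if score > best_score:
            best_key, best_score = candidate, score
    if best_score >= threshold:
        return MASTER[best_key]
    return None
```

In a production pipeline the dictionary lookup would be a distributed join (e.g. in Spark) and the fuzzy stage would use blocking or LSH to avoid comparing every pair, but the control flow is the same.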




Abbreviations

ADH: Automated Data Harmonization
AI: Artificial Intelligence
API: Application Programming Interface
HTTP: Hypertext Transfer Protocol
LCS: Longest Common Subsequence
LSH: Locality Sensitive Hashing
ML: Machine Learning
MRM: Market Research Major
NLP: Natural Language Processing
IT: Information Technology
PoC: Proof of Concept
RDBMS: Relational Database Management System
REST: Representational State Transfer
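One of the text-similarity measures listed above, Longest Common Subsequence (LCS), can be sketched with the classic dynamic-programming recurrence. Normalizing the LCS length by the longer string's length to get a 0–1 score is an illustrative choice here, not necessarily the paper's exact formulation:

```python
def lcs_length(a, b):
    """Length of the Longest Common Subsequence of strings a and b,
    via the standard O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalize the LCS length into a 0-1 similarity score."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))
```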



Author information



Corresponding author

Correspondence to Anjan Dutta.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Glossary

Agile: Software development methodology built on the principle of the ability to create and respond to change; collaboration and the self-organizing team are key to agile development.

Apache Spark (or Spark): Low-latency processing framework for Big Data, for interactive queries and stream processing.

Docker: Open-source tool designed to create, deploy, and run applications using containers.

Fair Scheduler: MapReduce scheduler that provides a way to share large clusters.

GET: HTTP method to retrieve resource information.

Hadoop: Open-source software framework that allows storage, management, and massively parallel processing of Big Data.

HBase: Open-source non-relational database developed by the Apache Software Foundation.

HDFS: Distributed file storage of Hadoop with high-throughput access to application data.

Hive: Open-source data warehouse software built on top of Apache Hadoop.

Java: General-purpose programming language.

NoSQL: Database that provides a mechanism for storage and retrieval of data modeled in non-relational form.

Oozie: Workflow scheduling system to manage Hadoop jobs.

Python: General-purpose programming language.

SAS: Statistical analysis software to access, manage, analyze, and report data to aid decision-making.

scikit-learn: Open-source machine learning library for the Python programming language.

Sqoop: Command-line interface application for transferring data between relational databases and Hadoop, developed by the Apache Software Foundation.

YARN: Resource management solution of Apache Hadoop.


About this article


Cite this article

Dutta, A., Deb, T. & Pathak, S. Automated Data Harmonization (ADH) using Artificial Intelligence (AI). OPSEARCH 58, 257–275 (2021).

