Detecting near duplicate dataset with machine learning

Authors

  • Marc Chevallier
  • Nicoleta Rogovschi
  • Faouzi Boufares
  • Nistor Grozavu
  • Charly Clairmont

Keywords:

Machine Learning, Entity Resolution, Record Linkage, Data Quality, Data Integration, Data Profiling

Abstract

This paper introduces the concept of the near duplicate dataset, a quasi-duplicate version of a dataset that has undergone an unknown number of row and column insertions and deletions (modifications of both schema and instance). This concept is relevant to data exploration, data integration and data quality. To formalise these insertions and deletions, two parameters are introduced. Our technique for detecting these quasi-duplicate datasets is based on feature extraction and machine learning. The method is original in that it does not rely on classical column-by-column comparison techniques but on the comparison of metadata vectors summarising the datasets. To train the classifiers, we introduce a method for artificially generating training data. We perform several experiments to determine the best parameters for creating the training data and to evaluate the performance of several classifiers. In the cases studied, these experiments yield an accuracy higher than 95%.
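The abstract outlines three ingredients: perturbing a dataset with row/column insertions and deletions governed by two parameters, summarising each dataset as a metadata vector, and feeding pairwise comparisons of those vectors to a classifier. The sketch below illustrates that pipeline under stated assumptions; the parameter names (p_row, p_col) and the profiling statistics chosen are hypothetical and do not reflect the paper's actual feature set.

```python
# Illustrative sketch only: near-duplicate generation and metadata-vector
# features for a pair classifier. Parameter names and features are assumptions.
import numpy as np
import pandas as pd

def make_near_duplicate(df, p_row=0.1, p_col=0.1, rng=None):
    """Apply random row/column deletions and insertions to a copy of df."""
    rng = rng or np.random.default_rng()
    out = df.copy()
    # Delete roughly a fraction p_row of rows and p_col of columns.
    out = out.loc[rng.random(len(out)) >= p_row]
    out = out[[c for c in out.columns if rng.random() >= p_col]]
    # Insert new rows resampled from the remaining data.
    n_new = int(p_row * len(df))
    if n_new and len(out):
        out = pd.concat([out, out.sample(n=n_new, replace=True)], ignore_index=True)
    # Insert new columns filled with random values.
    for i in range(int(p_col * df.shape[1])):
        out[f"extra_{i}"] = rng.random(len(out))
    return out

def metadata_vector(df):
    """Summarise a dataset with a fixed-length vector of profiling statistics."""
    numeric = df.select_dtypes(include=np.number)
    return np.array([
        df.shape[0],                                      # number of rows
        df.shape[1],                                      # number of columns
        numeric.shape[1],                                 # numeric columns
        df.isna().mean().mean(),                          # missing-value ratio
        numeric.mean().mean() if numeric.size else 0.0,   # mean of column means
        numeric.std().mean() if numeric.size else 0.0,    # mean of column stds
    ])

def pair_features(df_a, df_b):
    """Features for a candidate pair: element-wise gap between metadata vectors."""
    return np.abs(metadata_vector(df_a) - metadata_vector(df_b))
```

A training set could then be built by labelling pairs (original, near duplicate) as positive and pairs of unrelated datasets as negative, and fitting any standard classifier (for example a scikit-learn random forest) on the resulting pair_features matrix.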

Published

2023-07-04

How to Cite

Marc Chevallier, Nicoleta Rogovschi, Faouzi Boufares, Nistor Grozavu, & Charly Clairmont. (2023). Detecting near duplicate dataset with machine learning. International Journal of Computer Information Systems and Industrial Management Applications, 14, 12. Retrieved from https://cspub-ijcisim.org/index.php/ijcisim/article/view/589

Section

Original Articles