8/26/2023

Inverse duplicate detector

We consider the problem of duplicate detection in noisy and incomplete data: given a large data set in which each record has multiple entries (attributes), detect which distinct records refer to the same real-world entity. This task is complicated by noise (such as misspellings) and missing data, which can lead to records being different despite referring to the same entity. Our method consists of three main steps: creating a similarity score between records, grouping records together into "unique entities", and refining the groups. We compare various methods for creating similarity scores between noisy records, considering different combinations of string matching, term frequency-inverse document frequency (TF-IDF) methods, and n-gram techniques. In particular, we introduce a vectorized soft TF-IDF method, with an optional refinement step. We also discuss two methods to deal with missing data in computing similarity scores. We test our method on the Los Angeles Police Department Field Interview Card data set, the Cora Citation Matching data set, and two sets of restaurant review data. The results show that methods that use words as the basic units are preferable to those that use 3-grams. Moreover, in some (but certainly not all) parameter ranges, soft TF-IDF methods can outperform the standard TF-IDF method. The results also confirm that our method for automatically determining the number of groups typically works well and allows for accurate results in the absence of a priori knowledge of the number of unique entities in the data set.

Fast methods for matching similar or identical records in databases have growing importance as database sizes increase. Slight errors in observation, processing, or data entry may cause multiple unlinked, nearly duplicated records to be created for a single real-world entity. For example, one of the data sets we consider in this paper is a database of personal information generated by the Los Angeles Police Department (LAPD). Each record contains information such as first name, last name, and address. Misspellings, different ways of writing names, and even address changes over time can all lead to duplicate entries in the database for the same person. Furthermore, records are often made up of multiple attributes, or fields; a small error or missing entry in any one of these fields can cause duplication. Duplicate detection problems do not scale well.
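The similarity-scoring step can be illustrated with a minimal soft TF-IDF sketch. This is only an assumption-laden approximation of the idea: tokens that nearly match (rather than match exactly) still contribute to the score, weighted by their TF-IDF importance and their string similarity. The paper's vectorized formulation differs in detail; here fuzzy token matching is approximated with `difflib.SequenceMatcher` rather than a Jaro-Winkler score, and the example records are hypothetical.

```python
# Sketch of a soft TF-IDF similarity between two tokenized records.
# Assumption: fuzzy token similarity via difflib.SequenceMatcher; the
# paper's method uses a different (vectorized) formulation.
import math
from collections import Counter
from difflib import SequenceMatcher


def soft_tfidf(a_tokens, b_tokens, idf, theta=0.85):
    """Tokens in record a that have a near match (similarity >= theta)
    in record b contribute weight_a * weight_b * similarity."""
    def unit_weights(tokens):
        tf = Counter(tokens)
        w = {t: tf[t] * idf.get(t, 0.0) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
        return {t: v / norm for t, v in w.items()}

    wa, wb = unit_weights(a_tokens), unit_weights(b_tokens)
    score = 0.0
    for t, va in wa.items():
        # Find the closest token in b for each token in a.
        best_tok, best_sim = None, 0.0
        for u in wb:
            s = SequenceMatcher(None, t, u).ratio()
            if s > best_sim:
                best_tok, best_sim = u, s
        if best_sim >= theta:
            score += va * wb[best_tok] * best_sim
    return score


# Hypothetical records: two near-duplicates and one unrelated entry.
records = ["john smith 123 main st",
           "jon smith 123 main st",
           "alice jones 99 oak ave"]
docs = [r.split() for r in records]
n = len(docs)
df = Counter(t for d in docs for t in set(d))
idf = {t: math.log(1 + n / df[t]) for t in df}  # smoothed IDF

sim_dup = soft_tfidf(docs[0], docs[1], idf)   # "john" ~ "jon" still matches
sim_diff = soft_tfidf(docs[0], docs[2], idf)  # no tokens pass the threshold
```

Exact TF-IDF would score "john smith" vs. "jon smith" poorly because "john" and "jon" are distinct tokens; the soft variant lets the near match contribute, which is the motivation the abstract gives for handling misspellings.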