blocklib filters out records according to the blocking specification. It warns when a blocking schema leaves some records outside of every block, but since the blocking schema may have been produced by someone else (another party), it seems reasonable that the Anonlink service should accept the blocking data as given (even if it doesn't cover 100% of the records). Possibly the service should warn that not all records are covered, or do something else (put the strays into their own block?).
An alternative is that clients could filter out the records that are not part of any block and skip uploading the CLK encodings for those records, as sketched below.
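As a rough sketch of that client-side filtering, assuming the blocking step yields a plain dict mapping block keys to lists of record indices (the names here are illustrative, not a specific blocklib or anonlink-client API):

```python
def filter_uncovered(clks, blocks):
    """Keep only the encodings that appear in at least one block.

    `blocks` is assumed to map block key -> list of record indices,
    as produced by the candidate blocking step.
    """
    covered = sorted({i for ids in blocks.values() for i in ids})
    dropped = len(clks) - len(covered)
    if dropped:
        print(f"Dropping {dropped} records that are not in any block")
    return [clks[i] for i in covered]
```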
As an example, here is a (terrible) blocking schema for the febrl 4 dataset that excludes a few records. Blocklib notes that this could be an issue:
P-Sig: Warning! only 96.42% records are covered in blocks. Please consider to improve signatures
Statistics for the generated blocks:
Number of Blocks: 37
Minimum Block Size: 60
Maximum Block Size: 475
Average Block Size: 217.40540540540542
Median Block Size: 207
Standard Deviation of Block Size: 123.52293072306216
P-Sig: Warning! only 97.1% records are covered in blocks. Please consider to improve signatures
Statistics for the generated blocks:
Number of Blocks: 39
Minimum Block Size: 52
Maximum Block Size: 456
Average Block Size: 210.17948717948718
Median Block Size: 193
Standard Deviation of Block Size: 113.03838933250947
The Anonlink service then fails while importing the encodings:
I agree. It is not up to the entity service to decide whether this is a good thing to do or not.
As the data providers have to agree on a blocking scheme beforehand, and they get the coverage information from blocklib, they should decide together whether they want to proceed or not.
I am not a big fan of the filtering idea, as it destroys the alignment between the indices of the CLKs and the corresponding PII. You would then have to keep a mapping from the CLK indices, as returned by the server, back to the local PII indices.
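To make that objection concrete, here is a hypothetical sketch of the bookkeeping that filtering would impose on each client: the filtered upload has its own index space, so every index the server returns has to be translated back to the local PII row.

```python
def filter_with_mapping(clks, blocks):
    """Filter uncovered records but remember their original positions.

    Returns the filtered encodings plus a list such that
    new_to_original[filtered_index] == original PII row index.
    `blocks` is assumed to map block key -> list of record indices.
    """
    new_to_original = sorted({i for ids in blocks.values() for i in ids})
    filtered = [clks[i] for i in new_to_original]
    return filtered, new_to_original

# Any index in the server's results refers to the filtered list,
# so it must be mapped back before looking up the PII:
#   original_row = new_to_original[server_index]
```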
Thus, I vote for taming the server: execute the run irrespective of the coverage of the blocks, and maybe provide a warning to the analyst.
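Server-side, "taming" could be as small as replacing the hard failure with a logged warning. A minimal sketch under that assumption (the function and field names are hypothetical, not the actual entity-service code):

```python
import logging

logger = logging.getLogger(__name__)

def check_block_coverage(num_encodings, blocks):
    """Warn, rather than fail, when the blocks don't cover every encoding.

    Hypothetical helper, not the actual entity-service implementation.
    `blocks` maps block key -> list of record indices.
    """
    covered = {i for ids in blocks.values() for i in ids}
    coverage = len(covered) / num_encodings if num_encodings else 1.0
    if coverage < 1.0:
        logger.warning(
            "Only %.2f%% of %d encodings are covered by the supplied blocks; "
            "uncovered records will never be compared.",
            coverage * 100, num_encodings,
        )
    # Proceed with the run regardless of coverage.
    return coverage
```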