Faster sampling for Postgres? #387
Comments
Makes sense. BigQuery supports it as well, but in beta only. Limitations: the sample is not truly random, and the exact number of rows returned is not guaranteed.
To limit the impact of these limitations, we could take the following approach:

```java
// Overestimate the required percentage by a factor of 2, to compensate for
// TABLESAMPLE not returning a guaranteed number of rows.
int percentage = 2 * 100 * sampleSize / tableCount;
// Only use TABLESAMPLE when it has an effect: less than 50% of the records will be sampled.
if (percentage < 50) {
    // Avoid samples that are too small
    if (percentage < 2) {
        percentage = 2;
    }
    query = "SELECT * FROM " + table + " TABLESAMPLE SYSTEM (" + percentage + ") ORDER BY RANDOM() LIMIT " + sampleSize;
    // Optional: resample without TABLESAMPLE if the number of records returned is still lower than sampleSize.
} else {
    query = "SELECT * FROM " + table + " ORDER BY RANDOM() LIMIT " + sampleSize;
}
```
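To make the arithmetic concrete (a sketch; the `person` table name and the row counts are hypothetical): with sampleSize = 100,000 and tableCount = 10,000,000, percentage = 2 * 100 * 100000 / 10000000 = 2, so the generated query would be:

```sql
-- Hypothetical example: sample ~100,000 rows from a 10M-row table.
-- TABLESAMPLE SYSTEM (2) reads roughly 2% of the table's pages (~200,000 rows);
-- ORDER BY RANDOM() then shuffles only that subset before the LIMIT is applied.
SELECT * FROM person TABLESAMPLE SYSTEM (2) ORDER BY RANDOM() LIMIT 100000;
```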
Yes, that sounds like a good idea. The fact that the sample isn't truly random and is not guaranteed to return the exact number of rows should be ok for this purpose (IMHO).
@schuemie do you think this is worth including in the upcoming 1.0 release of WhiteRabbit? I have some doubt about the combination of taking a TABLESAMPLE and then ordering the result by RANDOM(). I think I get why it is done, but wouldn't that cancel part of the performance improvement?
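For what it's worth, in the combined query the sort only covers the sampled subset, not the whole table: TABLESAMPLE is applied before ORDER BY. One way to verify this on a real table (a sketch; `person` and the numbers are hypothetical) is to compare the plans:

```sql
-- Current approach: the Sort node processes every row of the table.
EXPLAIN (ANALYZE) SELECT * FROM person ORDER BY RANDOM() LIMIT 100000;

-- Proposed approach: the Sort node processes only the ~2% of rows
-- that survive the sample.
EXPLAIN (ANALYZE) SELECT * FROM person TABLESAMPLE SYSTEM (2) ORDER BY RANDOM() LIMIT 100000;
```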
Currently, sampling on Postgres uses `ORDER BY RANDOM() LIMIT` (see here), which means the server first has to order the entire table randomly, and then take the top n rows. Perhaps TABLESAMPLE could be used for better performance? (Similar to MSSQL.)
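For background, Postgres ships two built-in sampling methods, which is where the "not truly random" caveat above comes from (a sketch; the `person` table is hypothetical):

```sql
-- SYSTEM: block-level sampling. Fast, because whole pages are skipped,
-- but rows from the same page are returned together, so the sample is
-- not truly random.
SELECT * FROM person TABLESAMPLE SYSTEM (2);

-- BERNOULLI: row-level sampling. More uniform, but it still scans every page.
SELECT * FROM person TABLESAMPLE BERNOULLI (2);
```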