genscrape is currently architected to be injected on every page the user visits. This means the page is scraped (and sometimes AJAX endpoints are hit) even if the user doesn't intend to use the data. RootsSearch is implemented this way, and the user initiates an action on that data only 0.3% of the time that genscrape is injected. That's a lot of wasted resources for everyone.
Two reasons this is advantageous:

- Notify the user when data is available
- Track how often the extension is used vs how often it could be used
We can still meet those objectives if we just match the current URL against all registered scrapers.
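A minimal sketch of what URL matching could look like: scrapers register the URL patterns they support, and only scrapers whose patterns match the current URL are run. The names `registerScraper` and `matchScrapers` are illustrative, not genscrape's actual API.

```javascript
// Registry of scrapers, each with the URL patterns it supports.
const scrapers = [];

// A scraper registers its URL patterns (as regex strings) and a run function.
function registerScraper(urlPatterns, run) {
  scrapers.push({
    patterns: urlPatterns.map((p) => new RegExp(p)),
    run,
  });
}

// Return only the scrapers whose patterns match the given URL;
// nothing else on the page is touched.
function matchScrapers(url) {
  return scrapers.filter((s) => s.patterns.some((re) => re.test(url)));
}
```

With this in place, a page with no matching scraper costs nothing beyond one pass over the registry.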
Benefits of not always running:

- Consume fewer resources
- Easier implementation for scrapers (they never have to support single-page apps or listen for URL changes)
It does mean that genscrape and all scrapers will have to be written such that they can be injected or run multiple times on the page. But that shouldn't be a problem when we get rid of all URL change listeners.
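The "can be injected or run multiple times" requirement can be sketched with a simple idempotency guard. The flag and function names here are hypothetical, just illustrating the pattern:

```javascript
// Guard so the scrape logic runs at most once per page, even if the
// script is injected repeatedly. In a browser this flag would typically
// live on `window`; a module-level variable is used here for simplicity.
let hasRun = false;

// Run the given scrape function once; later calls are no-ops.
// Returns true if the scrape ran, false if it was skipped.
function runScraperOnce(scrape) {
  if (hasRun) return false;
  hasRun = true;
  scrape();
  return true;
}
```

Repeated injections then become harmless: the first call does the work, every later call returns immediately.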
This is a breaking change, though, and I'm generally not in favor of breaking changes. Thankfully the only consumers are products I support, so migration is trivial.