When loading and displaying large datasets, I have noticed that something within the session (I assume trackedObjects) grows almost continuously. Even after objects are no longer needed and have left scope, they remain in trackedObjects.
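Here is a minimal sketch of the pattern where I see the growth (assumes a store is already configured; `Measurement` and `renderChart` are hypothetical stand-ins for my actual entity and display code):

```js
// Hypothetical entity, standing in for my real schema.
var Measurement = persistence.define('Measurement', {
  value: "INT",
  takenAt: "DATE"
});

Measurement.all().list(null, function (results) {
  renderChart(results); // display only -- nothing is modified

  // `results` goes out of scope after this callback, but every
  // loaded instance is still referenced from the session's
  // trackedObjects map, so the GC can never reclaim it:
  console.log(Object.keys(persistence.trackedObjects).length);
});
```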
For large datasets this begins to cause serious performance issues: manipulating roughly 100 MB of data uses almost 500 MB of memory, which to my knowledge can only be reclaimed by calling persistence.clean(). Calling persistence.clean() is effective, but it kills all tracking, which could break other areas of the codebase that expect to be able to persist data normally.
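The only workaround I have found looks roughly like this (flushing first so pending changes are not lost, then wiping the tracking table wholesale):

```js
// Flush any pending writes, then drop the whole tracking table.
// This reclaims the memory, but it also untracks objects that
// other parts of the app still expect to persist on flush.
persistence.flush(function () {
  persistence.clean();
});
```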
From what I understand, I am using persistencejs correctly, and the growth of trackedObjects is a natural consequence of the mechanisms persistencejs uses to keep everything tracked and synchronized. If that is the case, I wonder whether there should be some sort of noTrack() or readOnly() filter so that large datasets can be loaded and displayed without permanently residing in memory, something like the sketch below.
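This is a purely hypothetical API, not something persistencejs offers today:

```js
// Proposed: a readOnly()/noTrack() modifier on query collections
// that hydrates plain objects without registering them in
// trackedObjects, so they can be garbage collected after display.
Measurement.all().readOnly().list(null, function (rows) {
  renderChart(rows); // rows are untracked; GC can reclaim them
});
```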
Has anyone else encountered similar issues with 100 MB+ datasets? It's possible I'm using the library wrong, but it's definitely something in the persistencejs session that's eating up the memory.