The problem is that when we try to clean events by tag, the Cassandra journal runs a stream of current events by tag and over-fetches data by loading the event payload for every event. This makes cleaning tag_views take a long time.
It would be better to have a stream that fetches the events without their payloads; that data is enough for deleteFromTagView.
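For illustration, the difference could be just which columns the tag_views query selects. A rough sketch (the column names and table layout here are assumptions for illustration, not the actual schema):

```scala
// Rough sketch only: the column names below approximate the tag_views layout and
// are not taken from the actual schema. The point is that the delete path only
// needs the key columns, not the serialized event payload.
def eventsByTagSelect(fetchPayload: Boolean): String = {
  val keyColumns = "tag_name, timebucket, timestamp, persistence_id, sequence_nr"
  val payloadColumns = if (fetchPayload) ", event, ser_id, ser_manifest, writer_uuid" else ""
  s"SELECT $keyColumns$payloadColumns FROM tag_views WHERE tag_name = ? AND timebucket = ?"
}
```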
I ran into this problem when cleaning events for a tag with millions of events and large payloads.
Can we fix it, please?
Thanks
I would be very grateful for your help!
I can try, but I need to discuss how to implement it.
Would it be better to add a new method, or to change the current currentEventsByTagInternal(...) and use a flag to choose between fetching all data and skipping the event payload? @patriknw
I guess, since most things are the same, it would be easiest with a flag. In the end it's a different CQL (prepared statement) and a change in deserializeEventsByTagRow. The payload in the PersistentRepr could be set to NotUsed for this case.
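A minimal sketch of that idea, assuming a flag name, a deserialization function passed in as a parameter, and row column names that are not taken from the actual internals:

```scala
import akka.NotUsed
import akka.persistence.PersistentRepr
import com.datastax.oss.driver.api.core.cql.Row

// Hypothetical sketch: when the payload was not fetched, put NotUsed into the
// PersistentRepr instead of deserializing the event. The flag name, column names
// and the deserializeEvent parameter are assumptions for illustration.
def deserializeEventsByTagRowSketch(
    row: Row,
    fetchPayload: Boolean,
    deserializeEvent: Row => Any): PersistentRepr = {
  val payload: Any =
    if (fetchPayload) deserializeEvent(row) // full path, reads the serialized event columns
    else NotUsed                            // deleteFromTagView only needs the key columns
  PersistentRepr(
    payload,
    sequenceNr = row.getLong("sequence_nr"),
    persistenceId = row.getString("persistence_id"),
    writerUuid = row.getString("writer_uuid"))
}
```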
Reference: akka-persistence-cassandra/core/src/main/scala/akka/persistence/cassandra/reconciler/DeleteTagViewForPersistenceId.scala, line 40 at commit 8006539