The documentation says that exceeding the estimated size will cause rehashing; in practice, exceeding the estimate by more than a little crashes like this (a minimal repro sketch follows the trace):
This is with 3.25ea6.
```
Caused by: java.lang.IllegalStateException: ChronicleMap{name=chunks, file=/home/jonathan/Projects/colbert-pq/chunks.dat, identityHashCode=235195640}: Attempt to allocate #2 extra segment tier, 1 is maximum.
Possible reasons include:
- you have forgotten to configure (or configured wrong) builder.entries() number
- same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
- keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
at net.openhft.chronicle.hash.impl.VanillaChronicleHash.allocateTier(VanillaChronicleHash.java:869)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.nextTier(CompiledMapQueryContext.java:3115)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.alloc(CompiledMapQueryContext.java:3476)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.initEntryAndKey(CompiledMapQueryContext.java:3494)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.putEntry(CompiledMapQueryContext.java:3987)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.doInsert(CompiledMapQueryContext.java:4176)
at net.openhft.chronicle.map.MapEntryOperations.insert(MapEntryOperations.java:153)
at net.openhft.chronicle.map.impl.CompiledMapQueryContext.insert(CompiledMapQueryContext.java:4099)
at net.openhft.chronicle.map.MapMethods.put(MapMethods.java:89)
at net.openhft.chronicle.map.VanillaChronicleMap.put(VanillaChronicleMap.java:901)
at org.example.Main.loadFile(Main.java:120)
at org.example.Main.lambda$main$1(Main.java:64)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:556)
at java.base/java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:759)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:507)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1491)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:2073)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:2035)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:187)
```
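For context, here is a minimal sketch of the pattern that triggers this. Everything not visible in the trace above (key/value types, sizes, counts) is an assumption for illustration, not the actual Main.java:

```java
import net.openhft.chronicle.map.ChronicleMap;

import java.io.File;
import java.io.IOException;

public class Repro {
    public static void main(String[] args) throws IOException {
        // Sized for 1,000 entries, then loaded with 100x that. The types
        // and sizes here are assumptions; only the map name and file path
        // come from the trace above.
        try (ChronicleMap<CharSequence, byte[]> chunks = ChronicleMap
                .of(CharSequence.class, byte[].class)
                .name("chunks")
                .averageKeySize(16)
                .averageValueSize(512)
                .entries(1_000)
                .createPersistedTo(new File("chunks.dat"))) {
            // With the default maxBloatFactor of 1.0, the map can allocate
            // only a limited number of extra segment tiers; pushing well
            // past the entries() estimate throws IllegalStateException
            // rather than rehashing.
            for (int i = 0; i < 100_000; i++) {
                chunks.put("key-" + i, new byte[512]);
            }
        }
    }
}
```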
Thanks for bringing this to our attention. The documentation is definitely misleading here; we will review it. Feel free to open a pull request yourself. Best practice is to provision your maps so they are sized for the largest possible number of entries.
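For anyone landing here, a minimal sketch of that provisioning, assuming the same key/value shape as in the repro above; the numbers are placeholders, not recommendations:

```java
import net.openhft.chronicle.map.ChronicleMap;

import java.io.File;
import java.io.IOException;

public class Provisioned {
    public static void main(String[] args) throws IOException {
        // Size for the largest realistic entry count up front, and leave
        // some headroom via maxBloatFactor. All numbers are placeholders.
        try (ChronicleMap<CharSequence, byte[]> chunks = ChronicleMap
                .of(CharSequence.class, byte[].class)
                .name("chunks")
                .averageKeySize(16)       // average serialized key size, bytes
                .averageValueSize(512)    // average serialized value size, bytes
                .entries(10_000_000)      // worst-case number of entries
                .maxBloatFactor(2.0)      // tolerate up to 2x growth before failing
                .createPersistedTo(new File("chunks.dat"))) {
            chunks.put("key-0", new byte[512]);
        }
    }
}
```

Note that maxBloatFactor is a safety margin, not a growth strategy: the Chronicle Map documentation warns that performance degrades once a map grows beyond its configured entries(), so a generous entries() estimate is the primary fix.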