We can also allow referencing any address, even one that does not yet exist in the current storage.
Then we can create a separate file, or allocate a separate section, for every new N links.
Later, we will be able to combine all read and write operations across the combined storage.
Each section of the links storage can be allocated in a separate heap block or in a separate file, and can always be accessed by one specific thread. That means all read and write operations can be spread across multiple threads, distributing the load without any locks at all: just lock-free queues that stream requests and results to and from the threads.
Each request is mapped to all threads; when the results are ready, they are reduced to a single result.
In the case of heap allocation, each section (64 MB, or any user-defined size) can be allocated separately without needing to copy data, which saves additional CPU resources.
In the case of mmap allocation, there is no need to close files (only new ones are opened), so there is no need to force-flush data to disk; this also saves resources during regular operation of the storage.
All the trees in all the sections are smaller, which also helps scaling.
There is no need to use more than C + 1 threads, where C is the number of memory channels in the system.
We have the ability to set the minimum value of the internal references range.
Data/csharp/Platform.Data/LinksConstants.cs
Line 138 in 25bcf2c