Naive (potential) user question here. I'm looking for a good, up-to-date language detection library for Annif - see this issue. Lingua seems promising, but it appears to require quite a lot of memory, especially when all supported languages are considered - this is pointed out in the README. I tested detecting the language of the example sentence "languages are awesome" and it required 1.8 GB of memory. When I chose to preload all models, this increased to 2.6 GB.
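Roughly, the measurement looked like the sketch below. The builder calls (`from_all_languages`, `with_preloaded_language_models`) are the ones documented in the README; the peak-RSS readout via `resource.getrusage` is just one rough way to get a number and is Unix-specific.

```python
import resource

from lingua import LanguageDetectorBuilder

# Detector over all supported languages; models are loaded lazily by default.
detector = LanguageDetectorBuilder.from_all_languages().build()

# Variant with all language models preloaded up front:
# detector = (
#     LanguageDetectorBuilder.from_all_languages()
#     .with_preloaded_language_models()
#     .build()
# )

print(detector.detect_language_of("languages are awesome"))

# Peak resident memory of the process so far
# (kilobytes on Linux, bytes on macOS).
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
```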
I tested doing the same with pycld3 and langdetect, and their memory usage was much, much lower - too little to bother measuring accurately. I don't see anything in the README that would explain such a large difference in RAM usage compared to the other implementations. Having the rule-based engine is certainly good, but I don't think the rules themselves account for much RAM.
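The comparison calls were equally trivial; something along these lines, assuming the usual pycld3 and langdetect entry points:

```python
import cld3  # pycld3
from langdetect import detect

text = "languages are awesome"

# pycld3: returns a named tuple (language, probability, is_reliable, proportion).
print(cld3.get_language(text))

# langdetect: returns an ISO 639-1 code string such as 'en'.
print(detect(text))
```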
I'm wondering if there's some trick that the other language detection libraries are using to reduce their memory requirements. Could Lingua do the same? Or is this simply a trade-off you have to accept to get the higher accuracy? For my purposes, good accuracy is nice to have but not a top priority, so it would also help to be able to choose smaller and faster models with slightly reduced accuracy.
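For what it's worth, the only memory-reducing option I can see in the README is restricting the set of languages the detector is built with, along these lines (a sketch; the language subset here is just an example):

```python
from lingua import Language, LanguageDetectorBuilder

# Load models only for the languages actually needed (example subset).
detector = LanguageDetectorBuilder.from_languages(
    Language.ENGLISH, Language.FINNISH, Language.SWEDISH, Language.GERMAN
).build()

print(detector.detect_language_of("languages are awesome"))
```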
Repository owner locked and limited conversation to collaborators on Aug 4, 2022.