Hi there,
Regarding the quality signals, only the fastText model trained on Wikipedia is provided in the README.
Would it be possible to share the PaLM-style version (books, wiki, OWT) as well?
Or could you share the data-proportion recipe used to train the classifier?
The classifier behaves very differently depending on the corpus mixture.
Thanks in advance.
Cheers,
Before getting into the mixture version, let me refine my question to: "What metrics or standards were used to select the classifier?"
Even for the 'wiki version' classifier, the score distribution of the model I trained differs from the provided meta information (my training sets ranged from a few thousand to a few million docs, with a 50/50 ratio of high-quality to CC data).
Since several factors influence the scores (e.g., word counts and hence document counts, learning rate, and so on), the classifier's output scores can vary considerably even with the same wiki source.
I therefore expect the data-mixture version's scores to diverge even more. So before moving on to the mixture part, I'd like to know how you decided to select the classifier!
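For context, here is a minimal sketch of how I prepared the balanced training file described above. The label names, helper function, and paths are my own illustrative choices, not from this repo; fastText expects one document per line prefixed with `__label__`:

```python
# Sketch: build a fastText training file with a 50/50 mixture of
# "high quality" (e.g. wiki) and Common Crawl documents.
# Labels and function names are illustrative assumptions.
import random

def build_training_lines(hq_docs, cc_docs, seed=0):
    """Label docs fastText-style (__label__hq / __label__cc),
    truncate to the smaller class to enforce the 50/50 ratio, shuffle."""
    n = min(len(hq_docs), len(cc_docs))
    lines = (
        ["__label__hq " + d.replace("\n", " ") for d in hq_docs[:n]]
        + ["__label__cc " + d.replace("\n", " ") for d in cc_docs[:n]]
    )
    random.Random(seed).shuffle(lines)
    return lines

lines = build_training_lines(
    ["A well-sourced encyclopedia article."] * 3,
    ["random crawl text"] * 5,
)
# Classes are balanced at 3/3 despite unequal inputs.
```

Even with the ratio fixed like this, the resulting score distribution still depends heavily on corpus size and hyperparameters, which is why I am asking about the selection criteria.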