My understanding is that only the training dataset is downsampled. For details, see the `top_n_per_group` function in `create_train_test_set.py`.
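For reference, here is a minimal sketch of what per-class downsampling of the training split could look like. The function name mirrors `top_n_per_group`, but the DataFrame library, column names, and cap value are assumptions; the actual implementation in `create_train_test_set.py` may differ.

```python
import pandas as pd

def top_n_per_group(df: pd.DataFrame, label_col: str, n: int) -> pd.DataFrame:
    # Keep at most `n` rows per class label so dominant classes
    # do not overwhelm the training set (hypothetical sketch).
    return df.groupby(label_col).head(n)

# Usage: downsample only the training split; leave the test split untouched.
# train_df = top_n_per_group(train_df, label_col="app_label", n=10_000)
```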
The blog post https://blog.munhou.com/2020/04/05/Pytorch-Implementation-of-Deep-Packet-A-Novel-Approach-For-Encrypted-Tra%EF%AC%83c-Classi%EF%AC%81cation-Using-Deep-Learning/ states: "For each of the application and traffic classification tasks, the dataset is first stratified split into train set and test set with the ratio of 80:20."
However, for the dataset provided at
https://drive.google.com/file/d/1EF2MYyxMOWppCUXlte8lopkytMyiuQu_/view?usp=sharing
the ratio is actually 20:80, so the test set is much larger than the train set.
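A quick way to check the observed ratio is to count rows in both splits of the downloaded archive. The file paths and format below are assumptions (adjust to however the provided dataset is laid out); with the linked data, the printed proportions come out near 20% train / 80% test rather than the 80:20 described in the blog post.

```python
import pandas as pd

# Hypothetical paths; replace with the actual train/test files from the archive.
train = pd.read_parquet("application_classification/train.parquet")
test = pd.read_parquet("application_classification/test.parquet")

total = len(train) + len(test)
print(f"train: {len(train) / total:.0%}, test: {len(test) / total:.0%}")
```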