Version 0.2.1: (master)
Major refactor of the data.py API: it now uses Python generators and only converts to a tf.data.Dataset when to_tfDataset() is called (see the sketch below)
Changing the logging mechanism: it now uses a global library logger instead of the BaseLogger class (which was deprecated)
Changing the logger default level to INFO and adding an env variable that controls its value
Adding a profiler that traces the current tf train graph and also writes its operations
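
A minimal sketch of the generator-based pattern described above: data stays a plain Python generator and only becomes a tf.data.Dataset inside to_tfDataset(). The DataLoader class, its constructor, and the output signature below are illustrative assumptions, not the actual polus data.py API.

    import tensorflow as tf

    class DataLoader:
        """Illustrative stand-in for the refactored data.py API (not the real class)."""

        def __init__(self, samples):
            self.samples = samples  # plain Python data, no TF objects yet

        def __iter__(self):
            # Python generator: samples are produced lazily, one at a time
            for x, y in self.samples:
                yield x, y

        def to_tfDataset(self):
            # The conversion to tf.data.Dataset happens only here, on demand
            return tf.data.Dataset.from_generator(
                lambda: iter(self),
                output_signature=(
                    tf.TensorSpec(shape=(None,), dtype=tf.float32),  # assumed feature spec
                    tf.TensorSpec(shape=(), dtype=tf.int32),         # assumed label spec
                ),
            )

    # Usage: everything stays lazy until TF actually needs a Dataset
    loader = DataLoader([([0.1, 0.2], 0), ([0.3, 0.4], 1)])
    ds = loader.to_tfDataset().batch(2)
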
Version 0.2.0:
Adding multi-GPU training through the integration of the Horovod library, which handles the communication between GPUs or machines (sketch below)
Adding a Dockerfile for installing Horovod, since installing it with the correct dependencies can be a hassle
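
The usual Horovod + TensorFlow wiring behind this kind of multi-GPU training looks roughly like the sketch below; the model and optimizer are placeholders, not the polus trainer itself. Such a script is typically launched with horovodrun -np <num_gpus> python train.py.

    import tensorflow as tf
    import horovod.tensorflow as hvd

    hvd.init()

    # Pin each process to a single GPU (Horovod runs one process per GPU)
    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])    # placeholder model
    optimizer = tf.keras.optimizers.Adam(1e-3 * hvd.size())    # scale lr by worker count
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    @tf.function
    def train_step(x, y, first_batch):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        # Average gradients across all workers
        tape = hvd.DistributedGradientTape(tape)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if first_batch:
            # Make sure every worker starts from the same weights
            hvd.broadcast_variables(model.variables, root_rank=0)
            hvd.broadcast_variables(optimizer.variables(), root_rank=0)
        return loss
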
Version 0.1.9:
Refactor of the validation callback, adding more default modes of operation
Integration of an HPO (hyperparameter optimization) module into the polus lib (see the optuna sketch at the end of this version block)
- Implementing optuna as the default backend for HPO
- Implementing a prune callback
Adding add_lookup_data to the CachedDataloader, which converts it into a CachedDataloaderwLookup instance
Adding a deep_copy method to CachedDataloader
Adding a tutorials folder with Python example scripts
- Creation of a classifier_example
Adding more unit tests
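
For reference, this is roughly how an optuna-backed search with pruning works; the objective below is a placeholder, and how polus wraps trial.report / trial.should_prune inside its prune callback is an assumption.

    import optuna

    def objective(trial):
        # Hyperparameters to search over (illustrative choices)
        lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
        dropout = trial.suggest_float("dropout", 0.0, 0.5)

        score = 0.0
        for epoch in range(10):
            score += lr * 10 - dropout * 0.01  # stand-in for a real validation metric

            # A prune callback reports the intermediate metric after each epoch...
            trial.report(score, step=epoch)
            # ...and stops unpromising trials early
            if trial.should_prune():
                raise optuna.TrialPruned()
        return score

    study = optuna.create_study(direction="maximize",
                                pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, n_trials=20)
    print(study.best_params)
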
Version 0.1.8:
Adding a mask input to the CRF layer for masking impossible transitions (binary potentials) (sketch below)
Adding a complex JSON serializer and deserializer
Adding split_bert_model, which splits a TFBertModel into two Keras models based on the given TFBertLayer index
Adding the PolusModel abstraction, which implements the init_from_data function
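
A generic sketch of the transition-mask idea (not the polus CRF layer's actual interface): disallowed entries of the [num_tags, num_tags] transition matrix are pushed to a large negative score, so the CRF never decodes through them.

    import tensorflow as tf

    tags = ["O", "B", "I"]  # toy BIO tag set
    num_tags = len(tags)

    # 1 = allowed transition, 0 = impossible transition
    mask = tf.constant([
        # to:  O  B  I
        [1, 1, 0],  # from O: O -> I is impossible in BIO
        [1, 1, 1],  # from B
        [1, 1, 1],  # from I
    ], dtype=tf.float32)

    # Learned (here random) transition scores, i.e. the binary potentials
    transitions = tf.random.normal((num_tags, num_tags))

    # Masked transitions get a very low score before being handed to the CRF
    masked_transitions = transitions * mask + (1.0 - mask) * -1e4
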
Version 0.1.7:
Adding tests for some of the data and models functions
Many minor changes, not individually listed
Refactoring and bug fixes
Version 0.1.6:
Introduction of an information retrieval package with trainers focused on dense retrieval
CacheDataLoaderWlookup:
- Extension of the CacheDataLoader that additionally saves a Python object associated with the cached dataset
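
One generic way to realize "save a Python object next to the cached dataset" is to pickle the lookup object beside the cache files, as sketched below; this only illustrates the idea and is not the CacheDataLoaderWlookup implementation.

    import pickle
    from pathlib import Path

    class CacheWithLookup:
        """Illustration only: keeps an arbitrary Python object next to a cached dataset."""

        def __init__(self, cache_dir):
            self.cache_dir = Path(cache_dir)
            self.cache_dir.mkdir(parents=True, exist_ok=True)

        def save_lookup(self, obj):
            # The lookup object (e.g. a vocabulary dict) is pickled beside the cache files
            with open(self.cache_dir / "lookup.pkl", "wb") as f:
                pickle.dump(obj, f)

        def load_lookup(self):
            with open(self.cache_dir / "lookup.pkl", "rb") as f:
                return pickle.load(f)
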
Version 0.1.5:
- experimental.data moved to data
CacheDataLoader:
- adding a clean method that deletes the index and associated files
- in case of error, automatically cleans up any files that were created
training:
- Creation of the BaseTrainer abstraction
- Creation of a ClassifierTrainer instance that holds the logic for training models on classification problems
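
Such a trainer abstraction usually boils down to an abstract base class that owns the loop while subclasses define the step, roughly as sketched below; the class names mirror this changelog, but every method and signature here is an assumption, not the polus implementation.

    from abc import ABC, abstractmethod
    import tensorflow as tf

    class BaseTrainer(ABC):
        """Generic training loop; subclasses define what a single step does."""

        def __init__(self, model, optimizer):
            self.model = model
            self.optimizer = optimizer

        @abstractmethod
        def train_step(self, x, y):
            ...

        def train(self, dataset, epochs=1):
            for _ in range(epochs):
                for x, y in dataset:
                    self.train_step(x, y)

    class ClassifierTrainer(BaseTrainer):
        """Holds the logic specific to classification problems."""

        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

        def train_step(self, x, y):
            with tf.GradientTape() as tape:
                loss = self.loss_fn(y, self.model(x, training=True))
            grads = tape.gradient(loss, self.model.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.model.trainable_variables))
            return loss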