
Releases: FederatedAI/FATE-Serving

Release v2.0.2

26 Oct 12:03

Fix bugs to improve compatibility with version 1.x

Release v2.0.1

22 Oct 12:11

Add applyId to support FDN

Release v1.3.3

27 Oct 07:45

Major Features and Improvements

  • Secure inter-party gRPC connections with TLS

Release v2.0.0

24 Aug 08:18
  1. For single inference, in version 2.0.x the guest side and the host side compute in parallel, reducing time consumption.

  2. Batch inference, a new feature in version 2.0.x: a batch of data to be predicted is submitted in a single request, which greatly improves throughput.

  3. Parallel computing: in version 1.3.x, guest-side inference and host-side inference run serially. From version 2.0, the guest and the host predict in parallel, and each party's inference can be split into subtasks according to the number of features and computed in parallel.

  4. A new component, serving-admin, provides a visual operation interface for the cluster, including model management, traffic monitoring, configuration viewing, and service management.

  5. A new model persistence/recovery mode: when a serving instance restarts, version 1.3.x replays push-model requests to restore the model, whereas version 2.0.x restores the model by directly recovering the in-memory data.

  6. Java SDK. With this SDK, you can use FATE-Serving's service-governance features, such as automatic service discovery and routing.

  7. A new extension module: user-defined development (for example, the host-side feature-acquisition adapter interface) goes into this module, separating it from the core source code.

  8. Support for multiple caching methods. FATE-Serving strongly depends on Redis in version 1.3.x, but no longer does since version 2.0.x; you can choose no cache, a local in-memory cache, or Redis.

  9. A reworked internal prediction process: the core code is refactored, the pre-processing and post-processing components are removed, unified exception handling is applied, and algorithm components are no longer tightly coupled to the RPC interface.

  10. Command-line tools to query configuration and model information.
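The serial-to-parallel change in item 3 can be sketched with plain `CompletableFuture`s. All names below are illustrative assumptions, not FATE-Serving's real API: each party's features are split into chunks, each chunk is scored as an independent subtask, and the guest and host scores are computed concurrently before being merged.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

/**
 * Hypothetical sketch of the v2.0.x parallel-inference idea: guest-side
 * and host-side scoring run concurrently instead of serially, and each
 * side's work is split into per-feature-chunk subtasks.
 */
public class ParallelInferenceSketch {
    /** Scores one chunk of features; stands in for a real partial model. */
    static double scoreChunk(List<Double> featureChunk) {
        return featureChunk.stream().mapToDouble(Double::doubleValue).sum();
    }

    /** Splits a party's features into subtasks and scores them in parallel. */
    static CompletableFuture<Double> scoreParty(List<List<Double>> chunks) {
        List<CompletableFuture<Double>> tasks = chunks.stream()
                .map(c -> CompletableFuture.supplyAsync(() -> scoreChunk(c)))
                .toList();
        return CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0]))
                .thenApply(v -> tasks.stream()
                        .mapToDouble(CompletableFuture::join).sum());
    }

    /** Guest and host partial scores are computed concurrently, then merged. */
    static double predict(List<List<Double>> guestChunks,
                          List<List<Double>> hostChunks) {
        CompletableFuture<Double> guest = scoreParty(guestChunks);
        CompletableFuture<Double> host = scoreParty(hostChunks);
        return guest.join() + host.join();
    }

    public static void main(String[] args) {
        double score = predict(
                List.of(List.of(1.0, 2.0), List.of(3.0)),
                List.of(List.of(4.0)));
        System.out.println("score = " + score);
    }
}
```

In the serial 1.3.x flow the host score would only be requested after the guest finished; here both futures start immediately, so total latency approaches the slower party's time rather than the sum of both.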
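Item 8's pluggable caching can be pictured as one cache interface with interchangeable backends. The interface and class names below are hypothetical, not FATE-Serving's own; a Redis-backed variant would implement the same interface on top of a Redis client such as Jedis.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch of a pluggable cache: one interface, with
 * "no cache" and "local memory" backends shown (a Redis backend
 * would be a third implementation).
 */
public class CacheSketch {
    interface ModelCache {
        Optional<String> get(String key);
        void put(String key, String value);
    }

    /** "No cache" backend: every lookup misses, every write is dropped. */
    static class NoOpCache implements ModelCache {
        public Optional<String> get(String key) { return Optional.empty(); }
        public void put(String key, String value) { /* intentionally drop */ }
    }

    /** Local in-memory backend. */
    static class LocalCache implements ModelCache {
        private final Map<String, String> store = new ConcurrentHashMap<>();
        public Optional<String> get(String key) {
            return Optional.ofNullable(store.get(key));
        }
        public void put(String key, String value) { store.put(key, value); }
    }

    public static void main(String[] args) {
        ModelCache cache = new LocalCache(); // swap in NoOpCache or a Redis impl
        cache.put("score:42", "0.97");
        System.out.println(cache.get("score:42").orElse("miss"));
    }
}
```

Because callers only depend on the interface, the deployment can switch between no cache, local memory, and Redis by configuration, which is what removing the hard Redis dependency amounts to.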

Release v1.3.2

07 Jul 03:36

Major Features and Improvements

  • Add input-feature hit-rate counting for HeteroLR and Hetero SecureBoost

Release v1.3.1

16 Jun 06:45

Upgrade fastjson to 1.2.70

Release v1.3.0

08 Jun 07:20

Major Features and Improvements

  • Hetero SecureBoost communication optimization: communication rounds are reduced to 1 by having the host send a pre-computed host node route, used for inference, to the guest.
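The node-route idea above can be sketched as follows. Everything here is a hypothetical simplification, not FATE-Serving's actual classes: the host pre-computes, for each tree node whose split feature it owns, which child a record goes to, and sends that route to the guest once; the guest then walks the whole tree locally with no further round trips.

```java
import java.util.Map;

/**
 * Hypothetical sketch of single-round Hetero SecureBoost inference:
 * the guest traverses the tree using its own split decisions plus a
 * node route pre-computed and sent by the host.
 */
public class NodeRouteDemo {
    // A binary tree stored in arrays: node i has children LEFT[i], RIGHT[i];
    // leaves are marked with LEFT[i] == -1.
    static final int[] LEFT  = { 1,  3, -1, -1, -1};
    static final int[] RIGHT = { 2,  4, -1, -1, -1};
    // OWNED_BY_HOST[i]: true if node i's split feature belongs to the host.
    static final boolean[] OWNED_BY_HOST = {false, true, false, false, false};

    /** Guest-side traversal: one local pass, zero extra communication. */
    static int predictLeaf(Map<Integer, Boolean> guestGoLeft,
                           Map<Integer, Boolean> hostRoute) {
        int node = 0;
        while (LEFT[node] != -1) {
            boolean goLeft = OWNED_BY_HOST[node]
                    ? hostRoute.get(node)     // pre-computed by the host
                    : guestGoLeft.get(node);  // evaluated locally by the guest
            node = goLeft ? LEFT[node] : RIGHT[node];
        }
        return node; // leaf index
    }

    public static void main(String[] args) {
        // Guest decides node 0 (go left); host pre-computed node 1 (go right).
        int leaf = predictLeaf(Map.of(0, true), Map.of(1, false));
        System.out.println("leaf = " + leaf);
    }
}
```

Without the pre-computed route, the guest would have to ask the host which branch to take at every host-owned node, costing one communication round per tree level instead of one per prediction.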

Fix issues #41, #64, #65

Release v1.2.5

22 May 13:51

Fix issue #60

Release v1.2.4

16 Apr 07:07

Fix issues #53 and #51

Release v1.2.3

06 Mar 13:26

Fix issue #48; improve program exit behavior