Releases: FederatedAI/FATE-Serving
Release v1.2.2
- Fixed a bug where the host side registered its service too many times when a model was pushed from FATE-Flow
Release v1.2.1
Major Features and Improvements
- Add prediction module for Factorization Machine
- Fixed a bug in obtaining the local IP address
Release 1.2.0
Major Features and Improvements
- Replace the router with a brand-new service, serving-proxy, which supports authentication and inference requests over HTTP or gRPC
- Decouple FATE-Serving from Eggroll; models are read directly from FATE-Flow
- Fixed a bug in retrieving remote inference results from the cache
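To illustrate the HTTP inference path through serving-proxy, here is a minimal sketch of assembling a request body. The field names (`serviceId`, `featureData`) and the service id value are illustrative assumptions, not the documented FATE-Serving wire format.

```python
import json

def build_inference_request(service_id, feature_data):
    """Assemble a JSON body for a single-record inference request.

    NOTE: field names are hypothetical placeholders; consult the
    FATE-Serving docs for the actual request schema.
    """
    return {
        "serviceId": service_id,      # identifies the published model
        "featureData": feature_data,  # raw features supplied by the caller
    }

body = json.dumps(build_inference_request("svc-001", {"x0": 1.5, "x1": 0.2}))
```

A caller would POST this body to the serving-proxy HTTP endpoint; the proxy then authenticates the request and forwards it for inference.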
Release 1.1.2
Major Features and Improvements
- Add metrics components and expose monitoring through JMX
- The host supports binding its gRPC interface to model information and registering it in ZooKeeper, so that requests can be routed to different instances by model information
- The guest adds a gRPC interface for model binding: a model can be bound to a service id and registered in ZooKeeper, and callers can route to different instances through that service id. The service id is specified by fate_flow and uniquely identifies a model.
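The registration-and-routing idea above can be sketched with an in-memory registry. In a real deployment the registrations live in ZooKeeper and are discovered by clients; here a plain dict stands in for the registry, and round-robin selection is an illustrative choice, not necessarily FATE-Serving's routing policy.

```python
class ServiceRouter:
    """Toy stand-in for ZooKeeper-backed service-id routing."""

    def __init__(self):
        self._registry = {}  # service id -> list of "host:port" instances
        self._cursors = {}   # service id -> next round-robin index

    def register(self, service_id, address):
        """Register one serving instance under a service id."""
        self._registry.setdefault(service_id, []).append(address)

    def route(self, service_id):
        """Pick an instance for the service id, round-robin."""
        instances = self._registry[service_id]
        cursor = self._cursors.get(service_id, 0)
        self._cursors[service_id] = (cursor + 1) % len(instances)
        return instances[cursor]

router = ServiceRouter()
router.register("model-abc", "10.0.0.1:8000")
router.register("model-abc", "10.0.0.2:8000")
```

Because the service id uniquely identifies a model, adding a second instance under the same id scales out serving for that model transparently to callers.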
Release 1.1.1
Major Features and Improvements
- Support specifying a subset of columns in the OneHot Encoder
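A minimal sketch of what "partial columns" means here: only the listed columns are expanded into one-hot indicators, while everything else passes through untouched. The column names and category lists are made-up examples, not FATE's implementation.

```python
def one_hot_transform(record, categories_by_column):
    """One-hot encode only the columns listed in categories_by_column;
    all other columns pass through unchanged."""
    out = {}
    for col, value in record.items():
        if col in categories_by_column:
            # Emit one 0/1 indicator per known category of this column.
            for cat in categories_by_column[col]:
                out[f"{col}_{cat}"] = 1 if value == cat else 0
        else:
            out[col] = value
    return out

row = {"color": "red", "age": 30}
encoded = one_hot_transform(row, {"color": ["red", "green", "blue"]})
# "color" is expanded into three indicators; "age" is left as-is.
```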
Release 1.1
Major Features and Improvements
- Add Online OneHotEncoder transform
- Add Online heterogeneous FeatureBinning transform
- Add heterogeneous SecureBoost online inference for binary classification, multi-class classification, and regression
- Add service governance: obtain the IP and port of all gRPC interfaces through ZooKeeper
- Support automatically restoring loaded models when the service restarts
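The online feature-binning transform listed above maps each raw value at inference time into the bin learned during offline training. The sketch below shows that lookup step only, using `bisect` over sorted split points; the split values are made-up examples, not output from FATE.

```python
import bisect

def bin_value(value, split_points):
    """Return the index of the bin containing value.

    split_points are the sorted right edges of all bins except the last,
    so n split points define n + 1 bins.
    """
    return bisect.bisect_left(split_points, value)

splits = [0.0, 1.5, 3.0]  # 4 bins: (-inf, 0], (0, 1.5], (1.5, 3], (3, inf)
```

At serving time, the guest and host each apply this lookup to their own features, so only bin indices (not raw values) feed the downstream model.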
Release 1.0
Major Features and Improvements
- Add online federated modeling pipeline DSL parser for online federated inference