SSWG Proposal
- Proposal: SSWG-NNNN
- Authors: Daniel Alm, George Barnett, Tim Burks, Michael Rebello
- Sponsor(s): TBD
- Review Manager: TBD
- Status: Implemented
- Implementation: grpc-swift
- Forum Threads: TODO
A gRPC client and server library with code generation.
|  |  |
|---|---|
| Package Name | GRPC |
| Module Name | GRPC |
| Proposed Maturity Level | Sandbox |
| License | Apache 2 |
| Dependencies | SwiftNIO 2.2, SwiftNIO HTTP2 1.5, SwiftNIO SSL 2.4, SwiftNIO Transport Services 1.1, SwiftProtobuf 1.5, SwiftLog 1.0, Commander 0.8 |
gRPC is a widely used protocol with implementations in a number of languages. Most of these implementations wrap a core C library, which can lead to memory-safety issues and is difficult to debug. There are further rough edges on iOS, where clients have to deal with network connectivity changes (e.g. switching from LTE to WiFi). Having a gRPC server and client implementation in Swift built on top of NIO will help to eliminate or reduce each of these issues.
TODO
Describe the reasoning for this package to be proposed to the Swift Server Working Group to be recommended for use across the Swift Server ecosystem.
If there are missing capabilities or flexibility with existing packages, outline them with clear examples.
We will use SwiftNIO to provide the network layer, SwiftNIO SSL for TLS, SwiftNIO HTTP2 for HTTP/2, and SwiftProtobuf will be used for message serialization. We will also use SwiftNIO Transport Services to provide first-class support for Apple Platforms.
Since gRPC generates client and server bindings from an interface definition language (Protocol Buffers), we should lay out an example service so that the rest of the proposal is more concrete. gRPC has four call types:
- unary (client sends one request, server sends one response),
- client streaming (client sends zero or more requests, server sends one response),
- server streaming (client sends one request, server sends zero or more responses), and
- bidirectional streaming (client sends zero or more requests, server sends zero or more responses).
As an example we will consider an Echo service with one function for each of the four call types. Each function takes `Echo_EchoRequest`(s) as input and returns `Echo_EchoResponse`(s). The functions are:

- `get` (unary),
- `collect` (client streaming),
- `expand` (server streaming), and
- `update` (bidirectional streaming).
The semantics of each call are explained in the server section below and the code is available in the gRPC Swift repository.
Users implement a service's business logic in a handler conforming to a generated "provider" protocol, which transitively conforms to `CallHandlerProvider`. The implementation of the generated protocol is passed to the server on initialization. For the Echo service the generated protocol is:
```swift
protocol Echo_EchoProvider: CallHandlerProvider {
func get(
request: Echo_EchoRequest,
context: StatusOnlyCallContext
) -> EventLoopFuture<Echo_EchoResponse>
func collect(
context: UnaryResponseCallContext<Echo_EchoResponse>
) -> EventLoopFuture<(StreamEvent<Echo_EchoRequest>) -> Void>
func expand(
request: Echo_EchoRequest,
context: StreamingResponseCallContext<Echo_EchoResponse>
) -> EventLoopFuture<GRPCStatus>
func update(
context: StreamingResponseCallContext<Echo_EchoResponse>
) -> EventLoopFuture<(StreamEvent<Echo_EchoRequest>) -> Void>
}
```
- `get` accepts a single request and a `context` which has access to the event loop and request head; the function returns a future response.
- `collect` provides a `context` which has access to the event loop, request head and a response promise; the function returns a future stream event handler which should fulfil the response promise.
- `expand` accepts a single request and a streaming `context` which provides a `sendResponse` method in addition to the event loop and request head provided to `get`; the call is terminated by returning the future status of the call.
- `update` provides the same context as `expand` and returns a future stream event handler which should fulfil the status promise provided by the context.
A server can be started by instantiating a `Server` with a configuration which includes a list of providers:
```swift
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let configuration = Server.Configuration(
target: .hostAndPort("localhost", 8080),
eventLoopGroup: group,
serviceProviders: [EchoProvider()]
)
let server: EventLoopFuture<Server> = Server.start(configuration: configuration)
```
To make a unary call to the Echo service:
```swift
// Initialize an Echo client
let echo: EchoClient = ...
// Call the Get function
let get = echo.get(Echo_EchoRequest.with { $0.text = "foo bar baz" })
// The server may send back initial metadata.
get.initialMetadata.whenSuccess { (headers: HTTPHeaders) in
print("Get initial metadata: \(headers)")
}
// get is unary, so it has a response future.
get.response.whenSuccess { (response: Echo_EchoResponse) in
print("Get response: \(response)")
}
// Each call also has a status future containing the gRPC status code
// and (optional) message.
get.status.whenSuccess { (status: GRPCStatus) in
print("Get status: \(status)")
}
// The server may also send back trailing metadata:
get.trailingMetadata.whenSuccess { (trailers: HTTPHeaders) in
print("Get trailing metadata: \(trailers)")
}
```
To make a bidirectional streaming call to the Echo service:
// Call the Update function, providing a response handler.
let update = echo.update { (response: Echo_EchoResponse) in
print("Update response: \(response)")
}
// The client is streaming so has methods to send messages to the server:
update.sendMessage(Echo_EchoRequest.with { $0.text = "foo bar baz" }, promise: nil)
// It is also responsible for closing the request stream:
update.sendEnd(promise: nil)
// Note: there are versions of the above methods which return EventLoopFuture<Void>
// instead of accepting a promise. There is also a method to send a batch of messages.
```

`initialMetadata`, `trailingMetadata`, and `status` also exist on the call object as in the unary call.
The client streaming and server streaming calls are just a combination of the unary and bidirectional streaming calls above (see the sketches below):

- The client streaming call provides methods for sending messages and has a response future.
- The server streaming call accepts a single request on initialization and has a response handler.
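For illustration, the two remaining call types might be used like this. This is a sketch which assumes the generated Echo stubs expose `collect` and `expand` methods, mirroring the unary and bidirectional examples above:

```swift
// Client streaming: send several requests, then wait for the single response.
let collect = echo.collect()
collect.sendMessage(Echo_EchoRequest.with { $0.text = "foo" }, promise: nil)
collect.sendMessage(Echo_EchoRequest.with { $0.text = "bar" }, promise: nil)
collect.sendEnd(promise: nil)
collect.response.whenSuccess { (response: Echo_EchoResponse) in
  print("Collect response: \(response)")
}

// Server streaming: send a single request and handle each response as it arrives.
let expand = echo.expand(Echo_EchoRequest.with { $0.text = "foo bar baz" }) { (response: Echo_EchoResponse) in
  print("Expand response: \(response)")
}
expand.status.whenSuccess { (status: GRPCStatus) in
  print("Expand status: \(status)")
}
```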
We will provide plugins for the Protobuf compiler `protoc` to generate code for the client and server. SwiftProtobuf provides a plugin library which we will use to do most of the heavy lifting.
The following are out of scope for this proposal:
- Supporting different serialization formats (e.g. FlatBuffers); only the Protocol Buffers format (via SwiftProtobuf) is supported.
- Support for QUIC.
Servers are started with some configuration:
```swift
let server: EventLoopFuture<Server> = Server.start(configuration: configuration)
```
The configuration (`Server.Configuration`) includes what the server should bind to (a host and port, or a Unix domain socket), the event loop group it should use, an optional TLS configuration, an optional error delegate, and a list of `CallHandlerProvider`s which it may use to serve requests.
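For example, a server bound to a Unix domain socket rather than a host and port might be configured like this (a sketch; the `.unixDomainSocket` case name is an assumption based on the description above):

```swift
let configuration = Server.Configuration(
  target: .unixDomainSocket("/tmp/echo.sock"),
  eventLoopGroup: group,
  serviceProviders: [EchoProvider()]
)
let server = Server.start(configuration: configuration)
```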
The server also supports gRPC-Web via a handler which configures the pipeline based on the HTTP version of the request.
When the server receives a request, the `GRPCChannelHandler` checks the path of the request (i.e. the RPC being called) and determines whether a `CallHandlerProvider` exists which may service the request. If one exists then an appropriate `GRPCCallHandler` is returned which delegates logic to a method implemented by the user. The NIO pipeline for the server's child channels follows:
- `NIOSSLHandler` (if TLS is being used)
- If HTTP/1 (i.e. gRPC-Web):
  - `HTTPServerPipelineHandler`
  - `WebCORSHandler`
- Otherwise (i.e. standard gRPC):
  - `NIOHTTP2Handler`
  - `HTTP2StreamMultiplexer`
  - `HTTP2ToHTTP1ServerCodec`
- `HTTP1ToRawGRPCServerCodec`: translates HTTP/1 types to gRPC metadata and length-prefixed messages; it also handles request/response state, compression (not yet implemented) and message buffering, since messages may span multiple frames. It is "Raw" since the emitted messages are just bytes and not yet typed.
- `GRPCChannelHandler`: configures the pipeline on receiving the request head by looking at the request URI and finding an appropriate service provider. This handler is removed from the pipeline once it has configured the rest of the pipeline.
- `GRPCCallHandler`: handles the delivery of requests and responses to and from the user-implemented call handlers.
Where the `GRPCCallHandler` is one of:

- `UnaryCallHandler`,
- `ClientStreamingCallHandler`,
- `ServerStreamingCallHandler`, or
- `BidirectionalStreamingCallHandler`.
An example implementation follows:
```swift
class EchoProvider: Echo_EchoProvider {
// get: Return a prefixed version of the request message
func get(request: Echo_EchoRequest, context: StatusOnlyCallContext) -> EventLoopFuture<Echo_EchoResponse> {
// Return the response future.
return context.eventLoop.makeSucceededFuture(Echo_EchoResponse.with {
$0.text = "Swift echo get: \(request.text)"
})
}
// collect: Join the requests on " ", send the joined requests in a single response
func collect(context: UnaryResponseCallContext<Echo_EchoResponse>) -> EventLoopFuture<(StreamEvent<Echo_EchoRequest>) -> Void> {
var parts: [String] = []
return context.eventLoop.makeSucceededFuture({ event in
switch event {
case .message(let message):
// Buffer the message.
parts.append(message.text)
case .end:
// We have a complete response now. Join the parts and succeed the promise.
context.responsePromise.succeed(Echo_EchoResponse.with {
$0.text = "Swift echo collect: " + parts.joined(separator: " ")
})
}
})
}
// expand: Split the request message on " ", send each part in a separate response
func expand(request: Echo_EchoRequest, context: StreamingResponseCallContext<Echo_EchoResponse>) -> EventLoopFuture<GRPCStatus> {
var endOfSendOperationQueue = context.eventLoop.makeSucceededFuture(())
// Split the request and add each part to a queue of messages to respond with.
for (i, part) in request.text.components(separatedBy: " ").enumerated() {
let response = Echo_EchoResponse.with {
$0.text = "Swift echo expand (\(i)): \(part)"
}
endOfSendOperationQueue = endOfSendOperationQueue.flatMap {
context.sendResponse(response)
}
}
// Once the queue is done, return a status OK future.
return endOfSendOperationQueue.map {
GRPCStatus.ok
}
}
// update: Return a prefixed version of each message in the request stream
func update(context: StreamingResponseCallContext<Echo_EchoResponse>) -> EventLoopFuture<(StreamEvent<Echo_EchoRequest>) -> Void> {
var endOfSendOperationQueue = context.eventLoop.makeSucceededFuture(())
var count = 0
return context.eventLoop.makeSucceededFuture({ event in
switch event {
case .message(let message):
let response = Echo_EchoResponse.with {
$0.text = "Swift echo update (\(count)): \(message.text)"
}
// Queue the response message.
endOfSendOperationQueue = endOfSendOperationQueue.flatMap {
context.sendResponse(response)
}
count += 1
case .end:
// End of request stream: fulfill the status promise.
endOfSendOperationQueue.map {
GRPCStatus.ok
}.cascade(to: context.statusPromise)
}
})
}
}
```
## Making Calls
Codifying what has been said previously about the four call types, each of them builds on top of `ClientCall`:
```swift
public protocol ClientCall {
associatedtype RequestMessage: Message
associatedtype ResponseMessage: Message
/// Initial response metadata.
var initialMetadata: EventLoopFuture<HTTPHeaders> { get }
/// Status of this call which may be populated by the server or client.
var status: EventLoopFuture<GRPCStatus> { get }
/// Trailing response metadata.
var trailingMetadata: EventLoopFuture<HTTPHeaders> { get }
/// Cancel the current call.
func cancel()
}
```
The calls which have a single response from the server (unary and client streaming) implement `UnaryResponseClientCall`, which extends `ClientCall` to include a future response:
```swift
public protocol UnaryResponseClientCall: ClientCall {
/// The response message returned from the service if the call is successful.
/// This may be failed if the call encounters an error.
var response: EventLoopFuture<ResponseMessage> { get }
}
```
For calls which have any number of responses from the server (server streaming and bidirectional streaming), constructing the call requires a response handler of type `(ResponseMessage) -> Void`.
Calls sending a single request to the server (unary and server streaming) accept a single request on initialization. Calls which send any number of requests to the server (client streaming and bidirectional streaming) return a call which conforms to `StreamingRequestClientCall`, which extends `ClientCall` to provide methods for sending messages to the server:
```swift
public protocol StreamingRequestClientCall: ClientCall {
/// Sends a message to the service.
func sendMessage(_ message: RequestMessage) -> EventLoopFuture<Void>
func sendMessage(_ message: RequestMessage, promise: EventLoopPromise<Void>?)
/// Sends a sequence of messages to the service.
func sendMessages<S: Sequence>(_ messages: S) -> EventLoopFuture<Void> where S.Element == RequestMessage
func sendMessages<S: Sequence>(_ messages: S, promise: EventLoopPromise<Void>?) where S.Element == RequestMessage
/// Terminates a stream of messages sent to the service.
func sendEnd() -> EventLoopFuture<Void>
func sendEnd(promise: EventLoopPromise<Void>?)
}
```
Each of the call types can be made from factory methods on the `GRPCClient` protocol. The call signatures are:
```swift
public func makeUnaryCall<Request: Message, Response: Message>(
path: String,
request: Request,
callOptions: CallOptions? = nil,
responseType: Response.Type = Response.self
) -> UnaryCall<Request, Response>
public func makeServerStreamingCall<Request: Message, Response: Message>(
path: String,
request: Request,
callOptions: CallOptions? = nil,
responseType: Response.Type = Response.self,
handler: @escaping (Response) -> Void
) -> ServerStreamingCall<Request, Response>
public func makeClientStreamingCall<Request: Message, Response: Message>(
path: String,
callOptions: CallOptions? = nil,
requestType: Request.Type = Request.self,
responseType: Response.Type = Response.self
) -> ClientStreamingCall<Request, Response>
public func makeBidirectionalStreamingCall<Request: Message, Response: Message>(
path: String,
callOptions: CallOptions? = nil,
requestType: Request.Type = Request.self,
responseType: Response.Type = Response.self,
handler: @escaping (Response) -> Void
) -> BidirectionalStreamingCall<Request, Response>
```
This keeps the code generation straightforward: the generated client stubs call these functions with some static information, such as the path (e.g. `"/Echo/Get"`) and the appropriate request and response types.
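As a rough sketch (illustrative only, not the exact generated code), a generated client with a unary stub might look like this:

```swift
// Illustrative: the real generated code may differ in naming and detail.
final class EchoClient: GRPCClient {
  let connection: ClientConnection
  var defaultCallOptions: CallOptions

  init(connection: ClientConnection, defaultCallOptions: CallOptions = CallOptions()) {
    self.connection = connection
    self.defaultCallOptions = defaultCallOptions
  }

  /// Unary call to "/Echo/Get".
  func get(_ request: Echo_EchoRequest, callOptions: CallOptions? = nil) -> UnaryCall<Echo_EchoRequest, Echo_EchoResponse> {
    return self.makeUnaryCall(
      path: "/Echo/Get",
      request: request,
      callOptions: callOptions ?? self.defaultCallOptions
    )
  }
}
```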
In cases where no client has been generated, an `AnyServiceClient` can be used. It provides the above methods but has no stubs for a service:
```swift
let anyServiceClient: AnyServiceClient = ...
// Equivalent to: echoClient.get(Echo_EchoRequest.with { $0.text = "foo bar baz" })
let get = anyServiceClient.makeUnaryCall(
path: "/Echo/Get",
request: Echo_EchoRequest.with { $0.text = "foo bar baz" },
responseType: Echo_EchoResponse.self
)
```
## Making Clients
So far we have glossed over how to construct a client. A client requires a connection to a gRPC server; in other gRPC implementations this is typically called a `Channel`. To avoid confusion with NIO it is named `ClientConnection`.
A `ClientConnection` is initialized with some configuration:
```swift
let configuration = ClientConnection.Configuration(
target: .hostAndPort("localhost", 8080),
eventLoopGroup: group,
// Delegates for observing errors and connectivity state changes:
errorDelegate: nil,
connectivityStateDelegate: nil,
// TLS configuration, a subset of NIO's TLSConfiguration:
tls: nil,
// Connection backoff configuration:
connectionBackoff: ConnectionBackoff()
)
let connection = ClientConnection(configuration: configuration)
```
Clients take a `ClientConnection` and an optional `CallOptions` struct on initialization:
```swift
// Call options used for each call unless specified at call time.
// Has support for custom metadata (headers) and call timeouts amongst a few
// other things.
let defaultCallOptions = CallOptions(timeout: try .seconds(90))
// Create a client, this would usually be generated from a proto.
let echo = EchoClient(
connection: connection,
defaultCallOptions: defaultCallOptions // optional
)
```
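Per-call options can then override these defaults at call time. A sketch, assuming `CallOptions` exposes a `customMetadata` collection of headers and that generated stubs accept a `callOptions` parameter:

```swift
// Add custom metadata (headers) for a single call only.
var options = echo.defaultCallOptions
options.customMetadata.add(name: "x-request-id", value: "123")

let get = echo.get(Echo_EchoRequest.with { $0.text = "foo bar baz" }, callOptions: options)
```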
### `ClientConnection` lifecycle

The `ClientConnection` is initialized with a `Configuration` as described above. During initialization the NIO `Channel` for the connection is created and stored in an `EventLoopFuture`. The connection is created using the exponential backoff algorithm described by gRPC. The state of the connection is monitored (using the states defined by gRPC: idle, connecting, ready, transient failure, and shutdown) and the connection will automatically reconnect (with backoff) if the channel is closed but the close was not initiated by the user.
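The reconnect behaviour is controlled by the `ConnectionBackoff` value in the configuration. A sketch of tuning it, where the parameter names are assumptions modelled on the gRPC connection backoff specification:

```swift
// Retry quickly at first, backing off exponentially up to two minutes,
// with jitter applied to each delay.
let backoff = ConnectionBackoff(
  initialBackoff: 1.0,    // seconds
  maximumBackoff: 120.0,  // seconds
  multiplier: 1.6,
  jitter: 0.2
)
// Passed as `connectionBackoff` in the ClientConnection.Configuration shown earlier.
```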
Users may optionally provide a connectivity state delegate to observe these changes:
```swift
public protocol ConnectivityStateDelegate {
func connectivityStateDidChange(from oldState: ConnectivityState, to newState: ConnectivityState)
}
```
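A minimal sketch of such a delegate, using SwiftLog (already a dependency) to log state transitions; it would be passed as the `connectivityStateDelegate` in the configuration shown earlier:

```swift
import Logging

final class LoggingConnectivityDelegate: ConnectivityStateDelegate {
  private let logger = Logger(label: "grpc.connectivity")

  func connectivityStateDidChange(from oldState: ConnectivityState, to newState: ConnectivityState) {
    logger.info("connectivity state changed: \(oldState) -> \(newState)")
  }
}
```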
### NIO Pipeline
The client’s channel pipeline follows:
- `NIOSSLHandler` (if TLS is being used)
- `NIOHTTP2Handler`
- `HTTP2StreamMultiplexer`
Each call is made on an HTTP/2 stream channel whose pipeline is:
- `HTTP2ToHTTP1ClientCodec`
- `HTTP1ToRawGRPCClientCodec`: translates HTTP/1 types into gRPC metadata and length-prefixed messages; it also handles request/response state, compression (not yet implemented) and message buffering, since messages may span multiple frames. It is "Raw" since the emitted messages are just bytes and not yet typed.
- `GRPCClientCodec`: handles encoding/decoding of messages.
- `ClientRequestChannelHandler`: handles sending messages from the client; has unary and streaming versions.
- `ClientResponseChannelHandler`: handles receiving messages from the server; has unary and streaming versions. Holds the promises for the various futures exposed in the `ClientCall` protocols. It also holds the logic for timing out calls and handling errors.
A few notes:

- The HTTP/2 to HTTP/1 translation simplifies the implementation of the `HTTP1ToRawGRPCClientCodec`.
- Other handlers exist in the pipeline for error handling and verification (i.e. that the TLS handshake was successful and a valid protocol was negotiated) but were omitted for brevity.
- The differences between the four call types are just in their construction and their request and response handlers.
The library also provides a means to run on NIO Transport Services instead, where it’s supported on Apple platforms. The user only has to provide a correctly typed `EventLoopGroup` in their `Configuration` and we’ll pick the appropriate bootstrap. To aid this we provide some utility functions:
```swift
public enum NetworkPreference {
// NIOTS when available, NIO otherwise
case best
// Pick manually
case userDefined(NetworkImplementation)
}
public enum NetworkImplementation {
// i.e. NIOTS (this has the appropriate @available/#if canImport(Network))
case networkFramework
// i.e. NIO
case posix
}
public enum PlatformSupport {
// Returns an EventLoopGroup of the appropriate type based on user preference.
public static func makeEventLoopGroup(
loopCount: Int,
networkPreference: NetworkPreference = .best
) -> EventLoopGroup {
// ...
}
}
```
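For example, a client might be set up like this (a sketch which assumes the remaining `Configuration` fields have sensible defaults, as in the earlier example):

```swift
// Use Network.framework (via NIO Transport Services) where available,
// otherwise fall back to the POSIX-based NIO implementation.
let group = PlatformSupport.makeEventLoopGroup(loopCount: 1, networkPreference: .best)

let configuration = ClientConnection.Configuration(
  target: .hostAndPort("localhost", 8080),
  eventLoopGroup: group
)
let connection = ClientConnection(configuration: configuration)
```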
One thing which should be called out is that TLS support is provided by SwiftNIO SSL, even when Network.framework is being used. Ideally we would provide TLS via Network.framework when it’s being used; however, abstracting over the different configuration for the two interfaces is not trivial.
- Much of the configuration in the gRPC core library is via "channel arguments". Some of these options are surfaced via `Configuration` and `CallOptions`; however, providing a mechanism where new options can be added without breaking API (i.e. adding an additional field to a struct) would be beneficial.
- Removing the HTTP/1 code from the client pipeline may yield a small performance improvement. This is purely an implementation detail, however, and could be done at any time.
- The server cannot be configured to choose the level of support for gRPC-Web (that is, gRPC-Web is always supported); making this configurable would be better for users who only want to support standard gRPC.
- Neither the client nor the server can have its bootstrap easily configured.
- When using NIO Transport Services on Apple platforms, TLS is always provided via NIO's `NIOSSLHandler` and not via Network.framework.
TODO
Explain why this solution should be accepted at the proposed maturity level.
TODO
Describe alternative approaches to addressing the same problem, and why you chose this approach instead.