Describe large file chunking as a part of protocol #24
Comments
It exposes it as a configuration option to the developer. I'm not convinced that we need this kind of feature negotiation. I mean, I can see how it would be nice to have, but for now I'd like to avoid the complexity this introduces. What do you think?
On the one hand it adds some complexity and maybe makes the protocol less flexible.
@wronglink thinking a little more about this, I think I can be convinced that this is a good idea. @vayam what do you think?
@felixge @wronglink It is probably useful for integrating with legacy frameworks. The client can then adjust the …
@vayam I thought about that. It sounds reasonable, but there is a problem: a 413 error could be returned by some middleware (the web server itself, or something inside the framework), and that can lead to a situation where the client has to resend the beginning of the file, while the main idea of the protocol is the opposite: to avoid extra data exchange. I still think it's a good idea to warn the client about any server restrictions before it tries to send all the data. BTW, I forgot to note that …
@wronglink The initial POST could be optional. If the server closes the connection right away It can be fast failed. I agree it is an extra step. If 413 is not acceptable how about using 422 Unprocessable Entity |
Sorry, I missed that. But I think I've got it: the server could detect a too-large request by …
Well, this is tricky business. Generally speaking, HTTP clients do not expect to be sent a reply before finishing their body transmission (unless …). Anyway, I spent a bunch of time yesterday thinking about hypermedia/REST and a potentially different way of approaching this protocol. The end result could be something that's compatible with RFC 1867 by default, and could also easily allow the server to control chunk sizes by providing the client with appropriate "form" documents (these could be defined as a JSON media type). I'm not convinced that this is the way to go, but I'll try to come up with a proof of concept for this soon so we can at least consider it.
The protocol mustn't define a maximum size by default. The current way we plan to approach your issue is by adding the discovery mechanism discussed in #29. It will provide a way to get the maximum upload size from the server.
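A minimal sketch of what such a discovery exchange could look like, assuming an OPTIONS request against the upload endpoint (the host, header name, and limit below are only placeholders, not the final spec):

```
# sketch only: host, header name, and the 1 GiB limit are placeholders
OPTIONS /files HTTP/1.1
Host: tus.example.com

HTTP/1.1 204 No Content
Tus-Max-Size: 1073741824
```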
I am facing this exact issue and like the `Max-Content-Length` proposal by wronglink.
Many frameworks and web servers have a predefined maximum request size, and handling a huge file upload in one request is not a trivial task. The current version of the protocol says:

But how can the client know the maximum size of one chunk? In most cases, if the client tries to send too big a request, the server will return a 413 error or something like it, and the client will not know what to do next.
I think that an additional header (let's say `Max-Content-Length`) that the server returns on the initial POST and on HEAD requests could help us with that. I haven't found any existing header for such a task, so I suggest using a custom one. Here is a small example (we want to send a 50 MB file):
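The exchange could look roughly like the sketch below; apart from the proposed `Max-Content-Length`, the endpoint and the other headers are only illustrative assumptions.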
Request:
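```
# illustrative request: everything except the Max-Content-Length idea is assumed
# 52428800 bytes = 50 MB file we want to upload
POST /files HTTP/1.1
Host: tus.example.com
Content-Length: 0
Final-Length: 52428800
```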
Response:
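```
# illustrative response: 10485760 bytes = 10 MB allowed per request
HTTP/1.1 201 Created
Location: http://tus.example.com/files/24e533e0
Max-Content-Length: 10485760
```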
OK. Now the client knows that only 10 MB per request is allowed. Then it sends the chunks. If something goes wrong, it makes a `HEAD` request, detects the offset, and continues.
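To make the chunked part concrete, here is a sketch of the follow-up requests (again only illustrative; the `Offset` header, the PATCH semantics, and the URLs are assumptions based on the resumable flow described above):

```
# upload the first 10 MB chunk (illustrative header names and URL)
PATCH /files/24e533e0 HTTP/1.1
Host: tus.example.com
Content-Length: 10485760
Offset: 0

[bytes 0 .. 10485759]

# after an interruption, ask the server how much it already has
HEAD /files/24e533e0 HTTP/1.1
Host: tus.example.com

HTTP/1.1 200 OK
Offset: 31457280

# resume from the reported offset with the next chunk of at most 10 MB
PATCH /files/24e533e0 HTTP/1.1
Host: tus.example.com
Content-Length: 10485760
Offset: 31457280
```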