S3 multipart uploads to text streams are committed on exception #763
Comments
This issue affects … The problem boils down to how garbage collection of both … If writer was properly marked as … Hence it will always return …
TextIOWrapper was patched in v7.0.0, ref 42682e7 (#783), so using the with-statement will now also abort the upload on exception when in text mode. I think that fixes this issue.
@ddelange regardless of patches for …
@donsokolone Can you think of a way we can avoid the unwanted writes?
Problem description
If I `smart_open` an S3 file in text mode, write some data, and then raise an exception, I expect the multipart upload to be terminated and no key to be created. That's what I see for binary-mode `smart_open` streams.

This is almost the same issue as #684, but in the context of text streams instead of compression. It suggests there might be some broader architectural fix, although I'm not sure what it would be.
Steps/code to reproduce the problem
Given this code:
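A minimal sketch based on the description (the bucket name `my-bucket` and the `RuntimeError` are placeholders, not from the issue):

```python
# Text-mode write to S3 that fails partway through.
import smart_open

try:
    with smart_open.open("s3://my-bucket/key.txt", "w") as fout:  # text mode
        fout.write("hello\n")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass
```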
I expect `key.txt` not to be created. Instead, I see it created with the content "hello\n".

For the almost-identical binary version:
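A sketch of the binary-mode variant, under the same placeholder assumptions:

```python
# Binary-mode write to S3 that fails partway through.
import smart_open

try:
    with smart_open.open("s3://my-bucket/key.txt", "wb") as fout:  # binary mode
        fout.write(b"hello\n")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass
```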
it works as intended.
The proximate problem is that while `s3.MultipartWriter` implements cancellation on exception, when we open in text mode we wrap the object in a `TextIOWrapper`. `TextIOWrapper` inherits from `IOBase`, and `IOBase.__exit__` unconditionally calls `self.close()`, which completes the write and creates the partial object.
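To make the mechanism concrete, here is a minimal, self-contained sketch (not smart_open code; `DemoRaw` is a hypothetical stand-in for a multipart writer whose `close()` commits the upload), showing that leaving a `with` block on a `TextIOWrapper` closes the stream even while an exception is propagating:

```python
import io


class DemoRaw(io.RawIOBase):
    """Hypothetical stand-in for a multipart writer: closing it "commits" the upload."""

    def writable(self):
        return True

    def write(self, b):
        return len(b)

    def close(self):
        if not self.closed:
            print("close() called -> the multipart upload would be completed here")
        super().close()


try:
    with io.TextIOWrapper(io.BufferedWriter(DemoRaw()), encoding="utf-8") as f:
        f.write("hello\n")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass
# Prints the close() message: IOBase.__exit__ closed the stream despite the exception.
```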
Versions
Please provide the output of:
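A typical version-reporting snippet of the kind the template asks for (an assumption, not the template's exact text):

```python
import platform
import sys

import smart_open

print(platform.platform())
print("Python", sys.version)
print("smart_open", smart_open.__version__)
```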
Checklist
Before you create the issue, please make sure you have: