Tx results too large2 #1148
Conversation
…ach transaction result at a time and append it to the array
…: code = ResourceExhausted desc = grpc: trying to send message larger than max (26319669 vs. 20971520)
…e, for instance block 55179362
@@ -148,6 +148,11 @@ func (g *EmulatorGateway) GetTransactionResultsByBlockID(_ flow.Identifier) ([]*
	panic("GetTransactionResultsByBlockID not implemented")
}

func (g *EmulatorGateway) GetTransactionResultByIndex(_ flow.Identifier, _ uint32) (*flow.TransactionResult, error) {
	// TODO: implement
Should this be implemented before merge?
There are plenty of other operations in EmulatorGateway that are not implemented, and I am not even sure how to implement this one. It would also involve a PR in the emulator to expose the missing configuration. Since the error only happens because of gRPC, which the emulator does not use, I think it is OK to merge this without implementing it.
	// The combined results exceeded the gRPC message size limit, so fall back
	// to fetching one transaction result at a time by its index in the block.
	if strings.Contains(errorMessage, "received message larger than max") || strings.Contains(errorMessage, "trying to send message larger than max") {
		txRes := []*flow.TransactionResult{}
		for index := range tx {
			txr, err := f.gateway.GetTransactionResultByIndex(blockID, uint32(index))
			if err != nil {
				return nil, nil, errors.Wrapf(err, "failed getting result for index %d", index)
			}
			txRes = append(txRes, txr)
		}
		return tx, txRes, nil
	}
I wonder if it would make sense to implement this inside the GetTransactionResultsByBlockID
gRPC method. First, matching this error is very gRPC specific. Second, it looks like we always want to fetch the results by index when this error occurs, so there may be no need to expose that implementation detail outside the gRPC gateway, since it stems from limitations there.
We would have no way of knowing which transactions to iterate over in that case. We could fetch the block and the collections again, but personally I do not think it is worth it.
Why? If you put the logic added above inside GetTransactionResultsByBlockID,
you should then be able to call GetTransactionResultByIndex
with that same block ID, no?
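For illustration only, here is a rough sketch of the reviewer's suggestion: keep the "results too large" fallback inside the gRPC gateway, and recover the missing transaction count by re-fetching the block and its collections (the extra round-trips discussed above). The accessClient interface, the helper name, and the exact SDK signatures are assumptions for this sketch, not the actual flow-cli or flow-go-sdk code, and the system chunk transaction is ignored.

package gateway

import (
	"context"
	"strings"

	"github.com/onflow/flow-go-sdk"
)

// accessClient captures just the Access API calls this sketch needs.
// It stands in for the SDK's gRPC client, whose exact type is assumed here.
type accessClient interface {
	GetTransactionResultsByBlockID(ctx context.Context, blockID flow.Identifier) ([]*flow.TransactionResult, error)
	GetBlockByID(ctx context.Context, blockID flow.Identifier) (*flow.Block, error)
	GetCollection(ctx context.Context, colID flow.Identifier) (*flow.Collection, error)
	GetTransactionResultByIndex(ctx context.Context, blockID flow.Identifier, index uint32) (*flow.TransactionResult, error)
}

// getTransactionResultsByBlockID is a hypothetical sketch: the fallback to
// per-index fetches lives inside the gRPC gateway instead of the caller.
func getTransactionResultsByBlockID(ctx context.Context, client accessClient, blockID flow.Identifier) ([]*flow.TransactionResult, error) {
	results, err := client.GetTransactionResultsByBlockID(ctx, blockID)
	if err == nil {
		return results, nil
	}

	// Only fall back on the gRPC message-size error; other errors pass through.
	if !strings.Contains(err.Error(), "larger than max") {
		return nil, err
	}

	// Re-fetch the block and its collections to learn how many transaction
	// results to expect (the extra round-trips discussed above).
	block, err := client.GetBlockByID(ctx, blockID)
	if err != nil {
		return nil, err
	}
	total := 0
	for _, guarantee := range block.CollectionGuarantees {
		col, err := client.GetCollection(ctx, guarantee.CollectionID)
		if err != nil {
			return nil, err
		}
		total += len(col.TransactionIDs)
	}

	// Fetch each result individually so no single response exceeds the limit.
	results = make([]*flow.TransactionResult, 0, total)
	for index := 0; index < total; index++ {
		res, err := client.GetTransactionResultByIndex(ctx, blockID, uint32(index))
		if err != nil {
			return nil, err
		}
		results = append(results, res)
	}
	return results, nil
}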
@@ -36,7 +36,7 @@ import (

// maxGRPCMessageSize 20mb, matching the value set in onflow/flow-go
update comment
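For context on where that cap comes from: a Go gRPC client limits message sizes per call (the default receive limit is 4 MB), and the limit is normally raised through dial options. A minimal, generic sketch follows; it is not the actual flow-cli wiring, and the access node address is only an example. Note that 20 * 1024 * 1024 is the 20971520 that appears in the error above.

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// maxGRPCMessageSize is illustrative; the real value lives in the flow-cli source.
const maxGRPCMessageSize = 20 * 1024 * 1024 // 20 MB, the 20971520 from the error above

func main() {
	// Raise the per-message caps on the client connection.
	conn, err := grpc.Dial(
		"access.mainnet.nodes.onflow.org:9000",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxGRPCMessageSize), // responses the client will accept
			grpc.MaxCallSendMsgSize(maxGRPCMessageSize), // requests the client will send
		),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}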
Left some comments
I'm closing this as discussed with @bjartek, since the increase of the limit with PR #1150 will unblock the issues we are currently facing. If similar issues come up in the future, I feel streaming APIs will help with fetching huge amounts of data. This implementation might also open up other issues, such as rate limiting on the ANs, since it replaces one big request with many sub-requests.
Duplicate of #1118 to get all the data.