Cold start and Duration times #429
Comments
Hey @T-Klug, thanks for reaching out. Can you share which versions of all our layers you are using, including which runtime you're using? I suspect that most of your added init time is coming from the tracing layer, but I can't know for sure unless I get a bit more info about your setup.
Thanks @T-Klug. Can you test with just the extension please, and no tracing layer? To do this, you'll set […]. Datadog's next-gen extension shouldn't add nearly as much overhead as you're seeing; other customers have reported overhead in the tens of milliseconds compared with their base implementation. So I'd like to confirm that this overhead is coming from the tracer, and not actually from the extension.
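The isolation test the maintainer suggests could be sketched roughly as follows. This is an assumption about the intended setup, not the maintainer's exact instructions: the layer ARN versions below are hypothetical placeholders, and `DD_TRACE_ENABLED` is Datadog's documented switch for turning the tracer off while keeping the extension in place.

```typescript
// Sketch of the isolation test: keep only the extension layer and turn
// tracing off, so any remaining cold-start overhead can be attributed
// to the extension alone. The <version> placeholders are hypothetical;
// substitute real layer versions when deploying.
const layers = [
  "arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Extension:<version>",
  // Tracing layer deliberately removed for this test:
  // "arn:aws:lambda:us-east-1:464622532012:layer:Datadog-Node18-x:<version>",
];

const environment = {
  // Documented Datadog setting: disables the tracer without removing
  // the extension, which is exactly the comparison being asked for.
  DD_TRACE_ENABLED: "false",
};

console.log({ layers, environment });
```

Comparing `initDurationMs` from this configuration against the full (extension + tracing layer) deployment would show how the overhead splits between the two.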
After reviewing your documentation on the CDK, that is not the correct way of running it.
There's still a 200 ms difference, though, between the 600 ms and the 800 ms.
The first number is `durationMs`, and the second is `initDurationMs`.
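These two numbers come from Lambda's standard `REPORT` log line, where `Duration` is the invocation time and `Init Duration` (present only on cold starts) is the initialization time. A minimal sketch of pulling both out of a log line, using the standard REPORT field names (the request ID and timings in the sample are made up):

```typescript
// Parse a CloudWatch REPORT line into the two numbers discussed above.
// "Init Duration" only appears on cold starts, so it is optional.
function parseReportLine(line: string): { durationMs: number; initDurationMs?: number } {
  // Negative lookbehind skips the "Billed Duration" and "Init Duration" fields.
  const duration = line.match(/(?<!Billed |Init )Duration: ([\d.]+) ms/);
  const init = line.match(/Init Duration: ([\d.]+) ms/);
  if (!duration) throw new Error("not a REPORT line");
  return {
    durationMs: parseFloat(duration[1]),
    initDurationMs: init ? parseFloat(init[1]) : undefined,
  };
}

// Sample line with made-up timings, shaped like Lambda's REPORT output.
const report =
  "REPORT RequestId: abc123 Duration: 602.51 ms Billed Duration: 603 ms " +
  "Memory Size: 1024 MB Max Memory Used: 80 MB Init Duration: 812.40 ms";
console.log(parseReportLine(report)); // { durationMs: 602.51, initDurationMs: 812.4 }
```

Tracking both fields separately, as done here, makes it clear whether added latency lands in init (cold start) or in the invocation itself (e.g. the flush).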
Even with the experimental flag, these numbers are pretty abysmal for cold starts, and even for durations (due to the flush).
Is the only solution to this writing a delivery mechanism ourselves? What other options exist? This is a big hit on performance.