New functionality, show query real execution plan #149
Hi @banlex73, I agree that having plan information would be tremendously useful. Unfortunately I don't think that this approach could work reliably. There's no guarantee that the logs are actually available locally (or at least to the powa / powa-collector user), and in a well-known location, so this would be mostly incompatible with anyone using syslog for instance.

Then there's the question of log retention, log format and adequate auto_explain configuration. At best, you would get partial information, which won't really help for the really problematic cases like a fast query that suddenly becomes a bit slower but is still too fast to be logged with any reasonable auto_explain.log_min_duration value. Indeed, the useful information isn't the plan in itself but detecting a plan change that impacts the query runtime enough that a DBA should investigate and fix the root issue. Unfortunately I don't see any solution being really great, as retrieving and storing the plan will always be quite expensive.

Have you looked at https://github.com/ossc-db/pg_store_plans? This is probably the most promising approach. Unfortunately, I think that this extension is for now incompatible with powa, as it's not really compatible with pg_stat_statements. Maybe things will change there with postgres 14, as there's a new API to reuse the queryid calculation in other extensions, but as-is I don't see how we could integrate it. It would also be interesting to do some benchmarking of that extension, as it may be too expensive for OLTP workloads.
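For reference, an auto_explain setup that gets plans into the log typically looks like this in postgresql.conf (the threshold values here are illustrative, not recommendations; the trade-off described above still applies — anything faster than the threshold is never logged):

```
# postgresql.conf — illustrative auto_explain configuration
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '250ms'  # only plans slower than this are logged
auto_explain.log_analyze = on            # include actual row counts and timings
auto_explain.log_buffers = on            # include buffer usage
auto_explain.log_format = 'json'         # easier to parse than the default text
```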
Hi Julien,

I understand your concern. Here is what I have so far:

On a monitored host, a python script runs as a service and reads the postgres log; it extracts execution plans and SQL queries and inserts them into a local postgres DB. From the remote repository host, powa-collector connects and copies the plans/queries to the repository database, and finally powa-web presents them through the UI.

What I as a DBA get: for any period of time I can list all SQL queries with their real execution plans (sorted, filtered, per database).

[image: image.png]
[image: image.png]

Interesting, I missed pg_store_plans, need to try it!

Best regards,
Andriy
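The log-scraping step described above can be sketched roughly as follows (this is a hypothetical sketch, not the actual script: the function name, the record layout and the parsing logic are my own, and it assumes auto_explain's default text log format):

```python
import re

# Matches the header auto_explain emits, e.g.
# "LOG:  duration: 1234.567 ms  plan:"
PLAN_HEADER = re.compile(r"duration: ([\d.]+) ms\s+plan:")

def extract_plans(log_text):
    """Return one record per auto_explain entry found in a chunk of postgres log."""
    records = []
    lines = log_text.splitlines()
    i = 0
    while i < len(lines):
        m = PLAN_HEADER.search(lines[i])
        if m is None:
            i += 1
            continue
        duration_ms = float(m.group(1))
        body = []
        i += 1
        # auto_explain indents the continuation lines; collect them until the
        # indentation stops, i.e. the next regular log line begins
        while i < len(lines) and lines[i].startswith(("\t", " ")):
            body.append(lines[i].removeprefix("\t"))
            i += 1
        query = body[0].removeprefix("Query Text: ") if body else ""
        records.append({
            "duration_ms": duration_ms,
            "query": query,
            "plan": "\n".join(body[1:]),
        })
    return records
```

Each record would then be inserted into the local staging table with the usual metadata (timestamp, database, user, client host).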
Have you already implemented this, or is it a general approach that you would like? Unfortunately the images are not displayed.

As far as data collection goes, you could do it entirely in a pluggable way. If your python script inserts the explain plans with a timestamp on your local instance (or anywhere a query source could retrieve them, like via a foreign data wrapper or something else), you could write custom snapshot / coalesce / purge functions and therefore implement a custom datasource that powa and powa-collector will handle like the rest of the data sources (see https://powa.readthedocs.io/en/latest/components/powa-archivist/development.html). Maybe that's what you already did? Note that I don't personally maintain such custom datasources and I don't know if anyone does, so maybe there are some small issues with that, but nothing that can't be fixed.

The bigger problem would come from the UI. For now it's not possible to have custom widgets on powa-web, but that's also probably doable. If you want to be able to do that, I can try to make that work.
Yes, I have already implemented it as a draft version.

The python script inserts plans into this table:

```
\d+ powa_execution_plan
                       Table "public.powa_execution_plan"
      Column      |           Type           | Collation | Nullable | Default | Storage
------------------+--------------------------+-----------+----------+---------+----------
 srvid            | bigint                   |           | not null |         | plain
 dbid             | oid                      |           | not null |         | plain
 ts               | timestamp with time zone |           | not null |         | plain
 datname          | text                     |           | not null |         | extended
 application_name | text                     |           |          |         | extended
 usename          | text                     |           |          |         | extended
 rhost            | text                     |           |          |         | extended
 duration         | numeric                  |           | not null |         | main
 query            | text                     |           |          |         | extended
 plan             | text                     |           |          |         | extended
```
On the powa repository database, I teach powa-collector how to get that data from the monitored postgres cluster:

```sql
INSERT INTO powa_functions (srvid, "module", operation, function_name,
    query_source, query_cleanup, added_manually, enabled, priority, extname)
VALUES (20, 'powa_execution_plan', 'snapshot', 'powa_execution_plan_snapshot',
    'powa_execution_plan_src', 'SELECT powa_execution_plan_reset()',
    true, true, 10, NULL);
```
and 3 functions:

- powa_execution_plan_reset
- powa_execution_plan_snapshot
- powa_execution_plan_src

I also managed to add a widget to powa-web to represent this data in the UI.
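To give an idea of how the three functions fit together, here is a minimal hypothetical sketch (only the function names come from the comment above; the bodies, signatures, and the staging table `powa_execution_plan_staging` are invented for illustration and assume a local-mode setup where the python script fills the staging table):

```sql
-- Source: expose the rows the python script has staged since the last snapshot.
CREATE OR REPLACE FUNCTION powa_execution_plan_src(_srvid integer)
RETURNS SETOF powa_execution_plan_staging AS $$
    SELECT * FROM powa_execution_plan_staging;
$$ LANGUAGE sql;

-- Snapshot: copy the staged rows into the history table, stamped with the server id.
CREATE OR REPLACE FUNCTION powa_execution_plan_snapshot(_srvid integer)
RETURNS void AS $$
    INSERT INTO powa_execution_plan
        (srvid, dbid, ts, datname, application_name, usename, rhost,
         duration, query, plan)
    SELECT _srvid, dbid, ts, datname, application_name, usename, rhost,
           duration, query, plan
    FROM powa_execution_plan_src(_srvid);
$$ LANGUAGE sql;

-- Cleanup (referenced by query_cleanup above): discard the consumed rows.
CREATE OR REPLACE FUNCTION powa_execution_plan_reset()
RETURNS void AS $$
    TRUNCATE powa_execution_plan_staging;
$$ LANGUAGE sql;
```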
I just installed pg_store_plans. Looks great, there is everything I need. Hope one day it will support PG14!
Ah nice! So would you like to be able to do that using a system of custom plugins, and maybe an additional API for enabling custom extensions on the powa side, rather than forking powa-web and using plain INSERTs?

Oh, I didn't know that it wasn't compatible with postgres 14. I will try to ping some of the developers to see if they plan to do something about it or would welcome some PRs.
> Yes, I have already implemented it as a draft version.
>
> Ah nice! So would you like to be able to do that using a system of custom plugins and maybe an additional API for enabling custom extensions on the powa side rather than forking powa-web and plain INSERTs?

I am not sure that I can do custom plugins... Also, I tested pg_store_plans a little bit and found it very useful! Now I am thinking of getting rid of my python script and using pg_store_plans as a source instead.
@banlex73 good news, the current development version of pg_store_plans is now compatible with pg14. It should also be way easier to integrate with powa, as they kindly agreed to have the extension rely on a single queryid source.
Great news!
Thank you for letting me know
Hello wonderful powa-team,

I've been using powa for 3 years already and feel like we are missing one thing here: the query's execution plan. It would be nice to have real execution plans from targets available in powa.

I tried the pg_show_plans extension, but it only provides execution plans for active sessions, with no historical data. I think auto_explain is the best fit in this case: execution plans can be extracted from a postgres log file and loaded into a table.

My vision is something like:

Wondering what you think about it?