cmd/worklog: add postgres support #114

Merged 6 commits on Sep 28, 2024
66 changes: 66 additions & 0 deletions .github/workflows/ci.yml
@@ -75,3 +75,69 @@ jobs:
          sudo apt-get remove -qq libxss-dev libxres-dev libx11-dev
          sudo apt-get autoremove -qq
          go build -tags no_xorg ./cmd/watcher

  postgres-test:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: postgres
        options: >-
          --name postgres
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    env:
      PGHOST: localhost
      PGPORT: 5432
      PGUSER: test_user
      PGPASSWORD: password
      POSTGRES_DB: postgres

    steps:
      - name: install Go
        uses: actions/setup-go@v4
        with:
          go-version: ${{ matrix.go-version }}

      - name: checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: non-Go linux dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -qq libudev-dev

      - name: set up postgres users
        run: |
          psql --host $PGHOST \
            --username="postgres" \
            --dbname="postgres" \
            --command="CREATE USER $PGUSER PASSWORD '$PGPASSWORD'" \
            --command="ALTER USER $PGUSER CREATEDB" \
            --command="CREATE USER ${PGUSER}_ro PASSWORD '${PGPASSWORD}_ro'" \
            --command="\du"
          echo ${PGHOST}:${PGPORT}:*:${PGUSER}:${PGPASSWORD} >> ~/.pgpass
          echo ${PGHOST}:${PGPORT}:*:${PGUSER}_ro:${PGPASSWORD}_ro >> ~/.pgpass
          chmod 600 ~/.pgpass

      - name: unit tests postgres
        run: |
          go test ./cmd/worklog/pgstore

      - name: integration tests postgres
        uses: nick-fields/retry@v3
        with:
          timeout_minutes: 10
          max_attempts: 3
          command: |
            go test -tags postgres -run TestScripts/worklog_load_postgres
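The job exposes the database to the tests through libpq-style `PG*` variables rather than a single connection URL. As a rough sketch of how a test could assemble a DSN from that environment — the actual mechanism used by the `pgstore` tests is not visible in this diff, so everything here is illustrative:

```go
package main

import (
	"fmt"
	"net/url"
	"os"
)

// dsnFromEnv assembles a postgres URL from the libpq-style
// environment set in the workflow above.
func dsnFromEnv(dbname string) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(os.Getenv("PGUSER"), os.Getenv("PGPASSWORD")),
		Host:   os.Getenv("PGHOST") + ":" + os.Getenv("PGPORT"),
		Path:   "/" + dbname,
	}
	return u.String()
}

func main() {
	// With the workflow's env this prints
	// postgres://test_user:password@localhost:5432/postgres
	fmt.Println(dsnFromEnv("postgres"))
}
```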
12 changes: 9 additions & 3 deletions cmd/worklog/README.md
@@ -1,6 +1,6 @@
# `worklog`

-`worklog` is a module that records screen activity, screen-saver lock state and AFK status. It takes messages from the `watcher` module and records them in an SQLite database and serves a small dashboard page that shows work activity.
+`worklog` is a module that records screen activity, screen-saver lock state and AFK status. It takes messages from the `watcher` module and records them in an SQLite or PostgreSQL database and serves a small dashboard page that shows work activity.

Example configuration fragment (requires a kernel configuration fragment):
```
@@ -37,7 +37,7 @@ log_mode = "log"
log_level = "info"

[module.worklog.options]
-database_dir = "worklog"
+database = "sqlite:worklog"

[module.worklog.options.rules.afk]
name = "afk-watcher"
@@ -160,7 +160,7 @@ In addition to the dashboard endpoint provided by the `worklog` server, there ar
- `GET` `/data/`: accepts `date` and `tz` query parameters for the day of the data to collect, and a `raw` parameter to return un-summarised data.
- `GET` `/summary/`: accepts `start`, `end` and `tz` query parameters for time ranges, a `cooldown` parameter to ignore brief AFK periods, an `other` parameter which is a list of other worklog instance URLs to collate into the result, and a `raw` parameter to return un-summarised data.
- `GET` `/dump/`: accepts `start` and `end` query parameters.
-- `GET` `/backup/`: accepts `pages_per_step` and `sleep` query parameters corresponding to the [SQLite backup API](https://www.sqlite.org/backup.html)'s [`sqlite3_backup_step` `N` parameter](https://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupstep) and the time between successive `sqlite3_backup_step` calls. The backup endpoint is only available when the server address is a loop-back address.
+- `GET` `/backup/`: when using SQLite for data storage, accepts `pages_per_step` and `sleep` query parameters corresponding to the [SQLite backup API](https://www.sqlite.org/backup.html)'s [`sqlite3_backup_step` `N` parameter](https://www.sqlite.org/c3ref/backup_finish.html#sqlite3backupstep) and the time between successive `sqlite3_backup_step` calls; when using PostgreSQL for storage, accepts `directory` indicating the destination directory to write the backup to. The backup endpoint is only available when the server address is a loop-back address.
- `GET`/`POST` `/query`: takes an SQLite SELECT statement (content-type:application/sql or URL parameter, sql) or a CEL program (content-type:application/cel) that may use a built-in `query(<sql select statement>)` function. The query endpoint is only available when the server address is a loop-back address.
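As an illustration of the PostgreSQL form of the backup endpoint described above, a loop-back client might request a backup like this; the port and destination directory are placeholders, not values from this PR:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The server address must be a loop-back address for /backup/
	// to be available; the port here is illustrative.
	const addr = "http://127.0.0.1:8080"

	// For a PostgreSQL-backed store, directory names the server-side
	// destination for the backup.
	resp, err := http.Get(addr + "/backup/?directory=/tmp/worklog-backup")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```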

A potentially useful configuration for debugging rules is
@@ -187,3 +187,9 @@ The CEL environment enables the CEL [optional types library](https://pkg.go.dev/
## CEL extensions

The CEL environment provides the [`Lib`](https://pkg.go.dev/github.com/kortschak/dex/internal/celext#Lib) and [`StateLib`](https://pkg.go.dev/github.com/kortschak/dex/internal/celext#StateLib) extensions from the celext package. `StateLib` is only available in `module.*.options.rules.*.src`.

## PostgreSQL store

When using PostgreSQL as a store, the `~/.pgpass` file MAY be used for password look-up for the primary connection to the database and MUST be used for the read-only connection.

The read-only connection is made on start-up. Before connecting, the read-only user, which is `${PGUSER}_ro` where `${PGUSER}` is the user for the primary connection, is checked for its ability to read the tables used by the store and for its ability to perform any non-SELECT operations. If the user cannot read the tables, a warning is emitted but the connection is still made. If non-SELECT operations are allowed for the user, or the user can read other tables, no connection is made. Since this check is only made at start-up, there is a TOCTOU concern here, but exploiting it would require the user to have ALTER and GRANT privileges, at which point you have already lost the game.
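A minimal sketch of the kind of start-up privilege check described above, using PostgreSQL's `has_table_privilege`; the table name `events`, the driver, and the exact queries are assumptions for illustration, not the PR's code:

```go
package pgcheck

import (
	"context"
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Assumed driver; the store may use another.
)

// checkReadOnlyUser verifies that the read-only role can SELECT from
// a store table and holds no write privilege on it. The table name
// "events" is illustrative.
func checkReadOnlyUser(ctx context.Context, db *sql.DB, user string) error {
	var canRead, canWrite bool
	err := db.QueryRowContext(ctx,
		`SELECT has_table_privilege($1, 'events', 'SELECT'),
		        has_table_privilege($1, 'events', 'INSERT, UPDATE, DELETE')`,
		user).Scan(&canRead, &canWrite)
	if err != nil {
		return err
	}
	if !canRead {
		// Warn, but allow the connection to proceed.
		fmt.Printf("warning: %s cannot read events\n", user)
	}
	if canWrite {
		// Refuse the connection entirely.
		return fmt.Errorf("%s has non-SELECT privileges", user)
	}
	return nil
}
```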
19 changes: 13 additions & 6 deletions cmd/worklog/api/api.go
@@ -23,12 +23,19 @@ type Config struct {
	LogLevel  *slog.Level `json:"log_level,omitempty"`
	AddSource *bool       `json:"log_add_source,omitempty"`
	Options   struct {
-		DynamicLocation *bool           `json:"dynamic_location,omitempty"`
-		Web             *Web            `json:"web,omitempty"`
-		DatabaseDir     string          `json:"database_dir,omitempty"` // Relative to XDG_STATE_HOME.
-		Hostname        string          `json:"hostname,omitempty"`
-		Heartbeat       *rpc.Duration   `json:"heartbeat,omitempty"`
-		Rules           map[string]Rule `json:"rules,omitempty"`
+		DynamicLocation *bool `json:"dynamic_location,omitempty"`
+		Web             *Web  `json:"web,omitempty"`
+		// Database is the URL location of the worklog
+		// database. When the scheme is sqlite, the location
+		// is a directory relative to XDG_STATE_HOME as
+		// URL opaque data.
+		Database  string          `json:"database,omitempty"`
+		Hostname  string          `json:"hostname,omitempty"`
+		Heartbeat *rpc.Duration   `json:"heartbeat,omitempty"`
+		Rules     map[string]Rule `json:"rules,omitempty"`
+
+		// Deprecated: Use Database with sqlite scheme.
+		DatabaseDir string `json:"database_dir,omitempty"` // Relative to XDG_STATE_HOME.
	} `json:"options,omitempty"`
}
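The sqlite form of `Database` relies on `net/url` opaque data: `sqlite:worklog` has no `//`, so the directory is carried in `URL.Opaque` rather than as a host or path. A small standard-library demonstration of this behavior (not code from this PR):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// "sqlite:worklog" has no "//", so the directory is carried
	// as opaque data rather than as a host or path.
	u, err := url.Parse("sqlite:worklog")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Scheme) // sqlite
	fmt.Println(u.Opaque) // worklog
}
```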

15 changes: 7 additions & 8 deletions cmd/worklog/dashboard.go
@@ -22,7 +22,6 @@ import (
	"time"

	worklog "github.com/kortschak/dex/cmd/worklog/api"
-	"github.com/kortschak/dex/cmd/worklog/store"
)

func (d *daemon) dashboardData(ctx context.Context) http.HandlerFunc {
@@ -91,7 +90,7 @@ func dateQuery(u *url.URL, loc *time.Location) (time.Time, error) {
	return time.ParseInLocation(time.DateOnly, d, loc)
}

-func (d *daemon) eventData(ctx context.Context, db *store.DB, rules map[string]map[string]ruleDetail, date time.Time, raw bool) (map[string]any, error) {
+func (d *daemon) eventData(ctx context.Context, db storage, rules map[string]map[string]ruleDetail, date time.Time, raw bool) (map[string]any, error) {
	if raw {
		return d.rawEventData(ctx, db, rules, date)
	}
@@ -189,7 +188,7 @@ func (d *daemon) eventData(ctx context.Context, db *store.DB, rules map[string]m
	return events, nil
}

-func (d *daemon) rawEventData(ctx context.Context, db *store.DB, rules map[string]map[string]ruleDetail, date time.Time) (map[string]any, error) {
+func (d *daemon) rawEventData(ctx context.Context, db storage, rules map[string]map[string]ruleDetail, date time.Time) (map[string]any, error) {
	start, end := day(date)
	events := map[string]any{
		"date": zoneTranslatedTime(start, date.Location()),
@@ -220,14 +219,14 @@ return events, nil
	return events, nil
}

-func (d *daemon) dayData(ctx context.Context, db *store.DB, rules map[string]map[string]ruleDetail, start, end time.Time) (atKeyboard []worklog.Event, dayEvents, windowEvents map[string][]worklog.Event, transitions graph, err error) {
+func (d *daemon) dayData(ctx context.Context, db storage, rules map[string]map[string]ruleDetail, start, end time.Time) (atKeyboard []worklog.Event, dayEvents, windowEvents map[string][]worklog.Event, transitions graph, err error) {
	dayEvents = make(map[string][]worklog.Event)
	windowEvents = make(map[string][]worklog.Event)
	transitions = newGraph(rng{min: 5, max: 30}, rng{min: 1, max: 5})
	for srcBucket, ruleSet := range rules {
		for dstBucket, rule := range ruleSet {
			var nextApp worklog.Event // EventsRange is sorted descending.
-			err := db.EventsRangeFunc(db.BucketID(srcBucket), start, end, -1, func(m worklog.Event) error {
+			err := db.EventsRangeFunc(ctx, db.BucketID(srcBucket), start, end, -1, func(m worklog.Event) error {
				// canonicalise to the time zone that the event was
				// recorded in for the purposes of the dashboard.
				// See comment in atKeyboard.
@@ -411,7 +410,7 @@ type summary struct {
	Warnings []string `json:"warn,omitempty"`
}

-func (d *daemon) rangeSummary(ctx context.Context, db *store.DB, rules map[string]map[string]ruleDetail, start, end time.Time, raw bool, req *url.URL) (summary, error) {
+func (d *daemon) rangeSummary(ctx context.Context, db storage, rules map[string]map[string]ruleDetail, start, end time.Time, raw bool, req *url.URL) (summary, error) {
	events := summary{
		Start: start,
		End:   end,
@@ -590,11 +589,11 @@ func mergeSummaries(summaries []summary, cooldown time.Duration) (summary, error
	return sum, nil
}

-func (d *daemon) atKeyboard(ctx context.Context, db *store.DB, rules map[string]map[string]ruleDetail, start, end time.Time) ([]worklog.Event, error) {
+func (d *daemon) atKeyboard(ctx context.Context, db storage, rules map[string]map[string]ruleDetail, start, end time.Time) ([]worklog.Event, error) {
	var atKeyboard []worklog.Event
	for srcBucket, ruleSet := range rules {
		for dstBucket, rule := range ruleSet {
-			err := db.EventsRangeFunc(db.BucketID(srcBucket), start, end, -1, func(m worklog.Event) error {
+			err := db.EventsRangeFunc(ctx, db.BucketID(srcBucket), start, end, -1, func(m worklog.Event) error {
				// atKeyboard is used for week and year intervals which
				// may involve work spanning multiple time zones. We
				// canonicalise to the time zone that the event was
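The signature changes in this file replace the concrete `*store.DB` with a `storage` interface so that the SQLite and PostgreSQL stores are interchangeable. The interface itself is defined elsewhere in the PR; the following is a sketch consistent with the calls visible in this diff — the method set and the bucket ID type are assumptions:

```go
package worklog

import (
	"context"
	"time"

	worklog "github.com/kortschak/dex/cmd/worklog/api"
)

// storage abstracts the SQLite and PostgreSQL stores. Only the
// methods used in dashboard.go are sketched here; the real interface
// in the PR may be larger.
type storage interface {
	// BucketID maps a bucket name to its store identifier.
	BucketID(name string) string
	// EventsRangeFunc calls fn for each event in the bucket between
	// start and end, to at most limit events (-1 for no limit).
	EventsRangeFunc(ctx context.Context, bucket string, start, end time.Time, limit int, fn func(worklog.Event) error) error
}
```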