Releases: event-driven-io/Pongo

0.16.1

09 Oct 12:10

📝 What's Changed

Full Changelog: 0.16.0...0.16.1

0.16.0

09 Oct 11:37

🚀 What's New

  1. Switched to UUID v7 as the sequential random id generator. This should make indexing documents by id more efficient. You can also generate ids using the new ObjectId helper. by @oskardudycz in 94
  2. Added an option to run custom SQL directly on the Pongo database. You can access it as db.sql.query or db.sql.command, the same way as on a collection. by @oskardudycz in 93
  3. Usability improvements for the Pongo shell. Added:
  • a PostgreSQL connection check,
  • an option to set prettifying and the log level through shell params and helper methods,
  • direct access to collections in the shell (if you provide them with the --collection option),
  • optional printing of results as a table, disabled by default. You can enable it by calling printResultsAsTable() in the shell.
    by @oskardudycz in 93, 95
  4. Added printing the migration SQL based on the config file, the same way as when you run migrations. by @oskardudycz in 93
  5. Added pretty JSON print options to visualise logs better. Set the DUMBO_LOG_STYLE=PRETTY variable together with DUMBO_LOG_LEVEL=INFO to see all logs prettified. You can also call the prettyJSON function in your code from @event-driven-io/dumbo. by @oskardudycz in 92
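
The indexing benefit comes from UUID v7's layout: the first 48 bits are a Unix millisecond timestamp, so ids generated later sort later and land near each other in a B-tree index instead of scattering like UUID v4. A minimal, self-contained sketch of that layout (for illustration only; Pongo's ObjectId helper may differ in its details):

```typescript
// Illustrative UUID v7 generator: 48-bit millisecond timestamp prefix,
// version/variant bits set per RFC 9562, remaining bits random.
// Sketch only — not Pongo's actual implementation.
import { randomBytes } from 'node:crypto';

function uuidv7(): string {
  const bytes = randomBytes(16);
  const ts = BigInt(Date.now());

  // Bytes 0..5: big-endian 48-bit Unix millisecond timestamp.
  for (let i = 0; i < 6; i++) {
    bytes[i] = Number((ts >> BigInt(8 * (5 - i))) & 0xffn);
  }
  // Byte 6 high nibble: version 7.
  bytes[6] = (bytes[6] & 0x0f) | 0x70;
  // Byte 8 top bits: RFC 4122 variant (10xx).
  bytes[8] = (bytes[8] & 0x3f) | 0x80;

  const hex = bytes.toString('hex');
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

const id = uuidv7();
```

Because the timestamp prefix increases monotonically across milliseconds, inserts append near the right edge of the primary-key index rather than touching random pages.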

Full Changelog: 0.15.3...0.16.0

0.15.3

08 Oct 10:38

📝 What's Changed

Full Changelog: 0.15.2...0.15.3

0.15.2

02 Oct 14:25

📝 What's Changed

  • Fixed custom SQL operations typing in PongoCollection and the typing of Find methods' results by @oskardudycz in #90

Full Changelog: 0.15.1...0.15.2

0.15.0

01 Oct 12:14

🚀 What's New

  • Added the Pongo shell. You can now use the REPL without installing Pongo in your project. by @oskardudycz in 89
  • Added Optimistic Concurrency handling. No need for Mongo-like retries. by @oskardudycz in #84
  • Added the possibility to pass custom SQL to Pongo operations, plus an SQL tagged template literal to make that easier, by @oskardudycz in #87
  • Added custom query and command operations to the Pongo Collection by @oskardudycz in #88
  • Added basic tracing based on console logging and printed SQL queries by @oskardudycz in #86
  • Added a sample of the typed client by @oskardudycz in #83
  • Added a CockroachDB Docker Compose setup for the Simple Pongo sample by @oskardudycz in #85
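
To illustrate what a tagged template literal buys you here (this is a conceptual sketch, not Pongo's actual SQL helper): a template tag receives the literal fragments and the interpolated values separately, so it can emit `$1`, `$2`, ... placeholders and pass the values as parameters instead of concatenating user input into the query string.

```typescript
// Conceptual sketch of an SQL tagged template literal. The tag turns each
// interpolation into a positional placeholder; values travel separately,
// which prevents SQL injection by construction.
type SqlQuery = { text: string; values: unknown[] };

function sql(strings: TemplateStringsArray, ...values: unknown[]): SqlQuery {
  // Interleave literal fragments with $1, $2, ... placeholders.
  const text = strings.reduce(
    (acc, fragment, i) => acc + (i > 0 ? `$${i}` : '') + fragment,
    '',
  );
  return { text, values };
}

const name = 'Anita';
const query = sql`SELECT * FROM users WHERE name = ${name} AND age > ${21}`;
// query.text:   SELECT * FROM users WHERE name = $1 AND age > $2
// query.values: ['Anita', 21]
```

The resulting `{ text, values }` pair is exactly the shape parameterized-query drivers expect, which is why the tagged-template form composes well with raw SQL execution.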

Full Changelog: 0.14.4...0.15.0

0.14.4

12 Sep 11:42

📝 What's Changed

  • Added the possibility to use regular database methods on the strongly-typed database by @oskardudycz in #82

Full Changelog: 0.14.3...0.14.4

0.14.3

12 Sep 11:41

📝 What's Changed

Full Changelog: 0.14.2...0.14.3

0.14.2

12 Sep 11:39

📝 What's Changed

Full Changelog: 0.14.1...0.14.2

0.14.1

12 Sep 11:38

📝 What's Changed

  • Added missing bin config in Pongo to make CLI work correctly by @oskardudycz in #79

Full Changelog: 0.14.0...0.14.1

0.14.0

12 Sep 11:22

🚀 What's New

1. Added the pongo CLI tool.

You can either install it globally:

```shell
npm install -g @event-driven-io/pongo
```

And run it with:

```shell
pongo
```

Or run it without installing it globally by using npx:

```shell
npx @event-driven-io/pongo
```

by @oskardudycz in 78

2. Added a strongly-typed client.

Now, if you define a schema like:

```typescript
type User = {
  _id?: string;
  name: string;
  age: number;
  address?: Address;
  tags?: string[];
};

const schema = pongoSchema.client({
  database: pongoSchema.db({
    users: pongoSchema.collection<User>('users'),
    customers: pongoSchema.collection<Customer>('customers'),
  }),
});
```

And pass it to the client, you get the typed version:

```typescript
const typedClient = pongoClient(postgresConnectionString, {
  schema: { definition: schema },
});
// 👇 the client has the same database as we defined above, and the collections
const users = typedClient.database.users;

const doc: User = {
  _id: randomUUID(),
  name: 'Anita',
  age: 25,
};
const inserted = await users.insertOne(doc);

// 👇 yup, the collection is fully typed!
const pongoDoc = await users.findOne({
  name: 'Anita',
});
```

You can generate the sample config by calling:

```shell
npx @event-driven-io/pongo config sample --generate --file ./src/pongoConfig.ts --collection users --collection orders
```

Or just print it with:

```shell
npx @event-driven-io/pongo config sample --print --collection users --collection customers
```

Then, you can use the existing typing or adjust the generated one, and import it into your application.

by @oskardudycz in 73

3. Added the capability to run database migrations.

You can run them based on your config file with:

```shell
npx @event-driven-io/pongo migrate run --config ./pongoConfig.ts \
  --connectionString postgresql://postgres:postgres@localhost:5432/postgres
```

It'll automatically run the migrations based on the defined collections. Running it multiple times is safe, as each migration will only be run once.

Instead of passing the connection string as a param, you can also set the DB_CONNECTION_STRING environment variable and run it as:

```shell
npx @event-driven-io/pongo migrate run --config ./pongoConfig.ts
```

You can also run it by providing a collections list:

```shell
npx @event-driven-io/pongo migrate run --collection users --collection customers \
  --connectionString postgresql://postgres:postgres@localhost:5432/postgres
```

If you want to try it out first, add the --dryRun param; it'll run the migration in a transaction and roll it back without making changes.

You can also see what SQL will be generated by calling:

```shell
npx @event-driven-io/pongo migrate sql --print --collection users --collection customers
```

4. Added the possibility to disable generating the Pongo schema upfront.

If you run migrations manually, you can skip the automated migration in the Pongo client and get a performance boost:

```typescript
const typedClient = pongoClient(postgresConnectionString, {
  schema: { autoMigration: 'None', definition: schema },
});
```

by @oskardudycz in 71, 72

5. Added an option to define Schema Components.

Schema components define the database schema as a tree structure. They're used for database collections, allowing migration through code, and are exposed in the schema property. In the longer term, it'll be possible to add your own, like indexes, migrations, etc.

by @oskardudycz in 75, 77

6. Added PostgreSQL AdvisoryLocks in Dumbo.

Now you can also use them in your application:

```typescript
import { dumbo, AdvisoryLock } from '@event-driven-io/dumbo';

const pool = dumbo({ connectionString });

const carName = await AdvisoryLock.withAcquire(
  pool.execute,
  () =>
    single(
      pool.execute.query<{ name: string }>(
        rawSql(`SELECT name FROM cars LIMIT 1;`),
      ),
    ),
  { lockId: 1023 },
);
```

Internally, they're used to ensure that no parallel migrations are run.

by @oskardudycz in 74

Full Changelog: 0.13.1...0.14.0