Plugin framework v2 + new Big Object Archive plugin + new Log Retention Rules plugin
## Core Package Changes
This release of the core unlocked package is primarily focused on improving how plugins can interact with the core package - while working towards that goal, several enhancements & bugfixes are also included.
- Replaced the checkbox field `LoggerSettings__c.IsPlatformEventStorageEnabled__c` with a new text field `LoggerSettings__c.DefaultPlatformEventStorageLocation__c`, providing a way for plugins (the Big Object plugin in particular - see below) to provide additional storage locations.
  - ⚠️ For orgs that are upgrading to this version: you will need to review & update your org's settings to configure the new field `LoggerSettings__c.DefaultPlatformEventStorageLocation__c`. If you previously had `LoggerSettings__c.IsPlatformEventStorageEnabled__c` set to `true`, then you should set `LoggerSettings__c.DefaultPlatformEventStorageLocation__c` to `CUSTOM_OBJECTS`. This is the default value going forward, and it can be updated using the "Logger Settings" tab available in the Logger Console app (see screenshot below)
- Added new picklist field `Log__c.LogPurgeAction__c` - out of the box, only the value 'Delete' is included/used, but plugins (like the Big Object plugin below) can add new picklist values to support new actions.
- Fixed an issue in `LogEntryEventBuilder` where some stack trace lines would be duplicated
- Renamed class `LoggerEmailUtils` to `LoggerEmailSender`
- Added additional fixes for #276, which was partially fixed in `v4.7.0` - some of the unit tests had not been updated to check if deliverability was enabled, resulting in tests still failing in orgs with deliverability disabled. Thanks to @gjslagle12 for reporting these test failures!
- Added new public method `LogBatchPurger.setChainedBatchSize(Integer)` that's used internally to ensure any chained batch jobs use the same batch size as the original job. Previously, only the first job would use the specified batch size, and any chained jobs then used the default of 200.
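As a rough illustration, running the purge job with a custom batch size might look like the sketch below - the batch size of 500 is an arbitrary example, and the exact way your org invokes `LogBatchPurger` may differ:

```apex
// Hypothetical usage sketch: run LogBatchPurger with a custom batch size of 500.
// setChainedBatchSize(Integer) ensures any chained jobs reuse this size,
// instead of falling back to the platform default of 200.
LogBatchPurger purgeJob = new LogBatchPurger();
purgeJob.setChainedBatchSize(500);
Database.executeBatch(purgeJob, 500);
```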
- Started adding data classifications to custom fields throughout the data model to start progress on #292
- New fields `DatabaseResultCollectionSize__c` and `RecordCollectionSize__c` (originally planned as part of #222)
- Partially implemented #240 by adding new methods `LogEntryEventBuilder.setHttpRequestDetails(request)` and `LogEntryEventBuilder.setHttpResponseDetails(response)`, which populate new fields on `LogEntryEvent__e` and `LogEntry__c`. In a future release, I am going to consider also adding overloads to the logging methods in `Logger`. The new fields on `LogEntryEvent__e` and `LogEntry__c` are:
  - `HttpRequestBody__c`
  - `HttpRequestBodyMasked__c`
  - `HttpRequestCompressed__c`
  - `HttpRequestEndpoint__c`
  - `HttpRequestMethod__c`
  - `HttpResponseBody__c`
  - `HttpResponseBodyMasked__c`
  - `HttpResponseHeaderKeys__c`
  - `HttpResponseStatus__c`
  - `HttpResponseStatusCode__c`
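A minimal sketch of logging an HTTP callout with these new builder methods - the endpoint URL is a placeholder, and this assumes the new methods are chainable like other `LogEntryEventBuilder` methods:

```apex
// Sketch: capture HTTP callout details in a log entry.
// The endpoint is a placeholder; chaining assumes the builder methods
// return the LogEntryEventBuilder instance, like other builder methods.
HttpRequest calloutRequest = new HttpRequest();
calloutRequest.setEndpoint('https://api.example.com/some-resource');
calloutRequest.setMethod('GET');
HttpResponse calloutResponse = new Http().send(calloutRequest);

Logger.info('Example callout completed')
    .setHttpRequestDetails(calloutRequest)
    .setHttpResponseDetails(calloutResponse);
Logger.saveLog();
```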
## Version 2 of Plugin Framework
This release includes a new & improved approach for building plugins for Nebula Logger. The first plugin framework beta (referred to as `plugin-v1-beta`) was originally released last year in the `v4.5.0` release of the unlocked package. Since then, it's remained largely unchanged, but there has been a lot of feedback in the last ~9 months. The new beta of the plugin framework (`plugin-v2-beta`) is a complete overhaul of how plugins are built for Nebula Logger, allowing much more control and functionality for plugins.
- Redesigned plugins for Nebula Logger's trigger handler framework, and added the ability to create plugins for the batch class `LogBatchPurger`. The old Apex class `LoggerSObjectHandlerPlugin` has been removed - Apex plugins can now be created by implementing one (or both) of the new interfaces:
  - `LoggerPlugin.Batchable` - this interface is used to define & run plugins within the batch job `LogBatchPurger`
  - `LoggerPlugin.Triggerable` - this interface is used to define & run plugins within Nebula Logger's trigger framework, `LoggerSObjectHandler`
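As a rough sketch, a trigger-focused plugin implements `LoggerPlugin.Triggerable`. The method name and parameter types below are illustrative assumptions - check the `LoggerPlugin` interface definitions in this release for the exact signatures:

```apex
// Illustrative sketch only: the execute() signature and the configuration/context
// parameter types shown here are assumptions - consult the LoggerPlugin
// interfaces in the package for the real definitions.
public class ExampleTriggerablePlugin implements LoggerPlugin.Triggerable {
    public void execute(LoggerPlugin__mdt configuration, LoggerTriggerableContext context) {
        // Custom plugin logic runs here, within LoggerSObjectHandler's trigger framework
    }
}
```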
- Reintroduced `LoggerSObjectHandler__mdt` custom metadata type. This can be used to enable/disable some of Nebula Logger's trigger handler classes, as well as a way to override the default trigger handlers with a custom one
## New Plugin: Log Entry Archive Plugin
This new plugin provides archiving of logging data in a Big Object, allowing you to clear up data storage used by the custom objects (`Log__c`, `LogEntry__c`, and `LogEntryTag__c`) while still housing your logging data within your Salesforce org. A huge 'thank you' to @jamessimone for all of his work on this - he and I originally started work on this over a year ago, and it unfortunately was put on hold for several months while other technical debt & enhancements were first prioritized. It's incredible to finally see this being released!
ℹ️ This plugin is considered to be in beta. It has been tested & could be deployed to a production org, but due to some severe limitations with Big Objects, this is going to be considered in beta so that additional community feedback can be collected & any related changes can be implemented. In the meantime, upgrading to new versions of the Log Entry Archive plugin may involve some additional manual steps - if this becomes necessary for future upgrades, I'll include details in future releases for any manual steps needed. If/when you run into any issues with this in the future, feel free to start a discussion to ask for help!
- The Big Object `LogEntryArchive__b` contains all of the same fields (or comparable fields) as `LogEntryEvent__e`, `Log__c`, and `LogEntry__c` combined.
- Closes #117 - a huge thanks to @jamessimone for implementing this via PR #287 (and for creating Big Object prototypes last year!). The plugin provides 2 custom save methods that can be used to bypass platform events (`LogEntryEvent__e`) and custom objects (`Log__c`, `LogEntry__c`, and `LogEntryTag__c`) and instead use the Big Object `LogEntryArchive__b` as the primary storage location. This also closes #128 - implemented via PR #288, the plugin can also archive `Log__c`, `LogEntry__c` and `LogEntryTag__c` data before the batch job deletes any records where `Log__c.LogPurgeAction__c == 'Archive'`. This means that the plugin can be configured in 4 ways:
  - `LoggerSettings__c.DefaultSaveMethod__c` = `EVENT_BUS`, `LoggerSettings__c.DefaultPlatformEventStorageLocation__c` = `BIG_OBJECT` - with these options, Nebula Logger will still leverage the Event Bus, which ensures that log entries are saved, even if an exception is thrown. This may not be ideal for all orgs/users due to org limits for platform events, but this would provide the most reliable way of logging directly to `LogEntryArchive__b` & circumventing the custom objects `Log__c`, `LogEntry__c` and `LogEntryTag__c`
  - `LoggerSettings__c.DefaultSaveMethod__c` = `BIG_OBJECT_IMMEDIATE` - with this option, Nebula Logger will skip the Event Bus, and instead try to write directly to the Big Object `LogEntryArchive__b`. Any Big Object records that are saved will not be rolled back if there are any exceptions in the transaction - however, this option only works if you save the Big Objects before performing DML on any "normal" SObjects. If you perform DML on another SObject first, and then attempt to save directly to the Big Object, the platform will throw a mixed DML exception, and no Big Object records will be saved.
  - `LoggerSettings__c.DefaultSaveMethod__c` = `BIG_OBJECT_QUEUEABLE` - with this option, Nebula Logger will asynchronously save Big Object records using a queueable job. This is helpful in avoiding hitting limits in the original transaction, and also avoids the mixed DML exception that can occur when using `BIG_OBJECT_IMMEDIATE` (above). However, if an exception occurs in the current transaction, then the queueable job will not be enqueued.
  - `LoggerSettings__c.DefaultSaveMethod__c` = `EVENT_BUS`, `LoggerSettings__c.DefaultPlatformEventStorageLocation__c` = `CUSTOM_OBJECTS`, `LoggerSettings__c.DefaultLogPurgeAction__c` = `'Archive'` - with these options configured, Nebula Logger will utilize the Event Bus to ensure any log entries are published (even if an exception occurs), and the data is then initially stored in the custom objects `Log__c`, `LogEntry__c` and `LogEntryTag__c`. Once the log's retention date has passed (`Log__c.LogRetentionDate__c <= System.today()`), the plugin will archive the custom object data into `LogEntryArchive__b` before the custom object data is deleted.
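The first configuration above could be applied programmatically with a few lines of Apex - a sketch using the standard custom settings API, shown here for org-wide defaults (adjust for your org's settings hierarchy, or use the "Logger Settings" tab instead):

```apex
// Sketch: configure the org-wide defaults for the Big Object save path.
// LoggerSettings__c is a custom setting, so getOrgDefaults() and upsert
// use the standard Salesforce custom settings API.
LoggerSettings__c settings = LoggerSettings__c.getOrgDefaults();
settings.DefaultSaveMethod__c = 'EVENT_BUS';
settings.DefaultPlatformEventStorageLocation__c = 'BIG_OBJECT';
upsert settings;
```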
- The included permission set `LoggerLogEntryArchiveAdmin` provides all of the permissions needed for `LogEntryArchive__b` and the included `Log Entry Archives` tab
- Includes a custom tab 'Log Entry Archives' to display the LWC `logEntryArchives`. This LWC provides a datatable view of `LogEntryArchive__b` data, with the ability to filter on the `Timestamp__c`, `LoggingLevel__c`, and `Message__c` fields.
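Outside of the LWC, archived data can also be queried with SOQL, keeping in mind that Big Object queries can only filter on the object's index fields, in index order. A sketch, assuming `Timestamp__c` is the leading field in `LogEntryArchive__b`'s index:

```apex
// Sketch: query recent archived entries directly from the Big Object.
// Big Object SOQL can only filter on index fields in their defined order -
// this assumes Timestamp__c is the leading index field of LogEntryArchive__b.
List<LogEntryArchive__b> recentArchives = [
    SELECT Timestamp__c, LoggingLevel__c, Message__c
    FROM LogEntryArchive__b
    WHERE Timestamp__c >= :System.now().addDays(-7)
    LIMIT 100
];
```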
## New Plugin: Log Retention Rules
This new plugin closes #226 by adding the ability to create & deploy advanced, configurable rules for setting the retention date of `Log__c` records, using custom metadata types `LogRetentionRule__mdt` and `LogRetentionRuleCondition__mdt`. This provides a way to create more advanced log retention policies for logging data stored in `Log__c`, `LogEntry__c`, and `LogEntryTag__c`, using functionality that's similar to adding custom logic to list views. This plugin's code is based on another open source project of mine, ApexValidationRules.
ℹ️ This plugin is considered to be in beta. It has been tested & could be deployed to a production org, but since it's new, this is going to be considered in beta so that additional community feedback can be collected & any related changes can be implemented. In the meantime, upgrading to new versions of the Log Retention Rules plugin may involve some additional manual steps - if this becomes necessary for future upgrades, I'll include details in future releases for any manual steps needed. If/when you run into any issues with this in the future, feel free to start a discussion to ask for help!
And for anyone wondering "can I use the Log Retention Rules plugin with the Log Entry Archive plugin?" - the answer is yes! If you install & configure both plugins in your org, then you can create log retention rules to better control when logs are purged - the Log Entry Archive plugin will then run within the `LogBatchPurger` job, and any `Log__c` records with `LogPurgeAction__c == 'Archive'` will be archived into the Big Object `LogEntryArchive__b` on the log's retention end date.
## Updated Plugin: Logger Slack Plugin
Only a few small changes have been included in this release of the Slack plugin, but upgrading is required in order to work with the latest changes to Nebula Logger's plugin framework.
- Added the fields `LogEntry__c.StackTrace__c` and `LogEntry__c.ExceptionStackTrace__c` to the Slack notification message
- Bugfix for orgs that have the plugin enabled but no Slack endpoint configured
## Test Improvements
- Closes #193 by leveraging 3 new test classes:
  - `LoggerMockDataCreator`: Utility class used to help with generating mock data when writing Apex tests for Nebula Logger. These methods are generic, and should work in any Salesforce org. They can be used when writing Apex tests for plugins.
  - `LoggerMockDataStore`: Utility class used to mock any data-related operations, including DML statements, Event Bus publishing, and enqueuing async queueable jobs. These methods are generic, and should work in any Salesforce org. They can be used when writing Apex tests for plugins.
  - `LoggerTestConfigurator`: Utility class used to help with setting up Nebula Logger's configurations within a test context. These methods are specific to metadata implemented within Nebula Logger. They can be used when writing Apex tests for plugins.
- Reduced average test speed to 150ms by converting some integration tests into true unit tests, using the new class `LoggerMockDataStore`. More improvements to come in future releases that will further reduce the average test speed, and improve the overall quality of the tests.
- Started using annotation `@IsTest(IsParallel=true)` wherever possible to try to further improve test speeds. Some classes that perform DML on `User` records cannot leverage parallel test runs, so a few test classes use `@IsTest(IsParallel=false)`.
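The parallel-testing annotation is standard Apex - a minimal sketch of a test class opting into parallel runs (the class name and assertion are purely illustrative):

```apex
// Illustrative sketch: opting a test class into parallel test execution.
// Test classes that perform DML on setup objects like User must instead
// use @IsTest(IsParallel=false) to avoid row-lock conflicts between runs.
@IsTest(IsParallel=true)
private class ExampleParallelSafeTest {
    @IsTest
    static void itShouldRunInParallel() {
        System.assertEquals(2, 1 + 1);
    }
}
```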