Logging to the Database with Lucee 5.3

At Rasia, we often work with clients who need specific features developed for their use of Lucee. In this case, a client wanted to be able to store Lucee logs in a database. We provided them with a cost estimate, they approved it, and we got to work. They also generously agreed to let us make this new functionality available to everyone in Lucee (the alternative was to keep it as a separate, private Lucee extension that only they used).

When running Lucee, especially in a clustered environment, you can end up with log entries scattered across different servers and machines, and on each machine the entries are further split over several individual logs. This new feature allows you to aggregate the logs and work with them in a more streamlined manner if you want.

The Lucee logs all have a similar structure:

| Field | Description |
|---|---|
| Severity | The log level, which determines whether a log entry is generated at all. |
| ThreadID | The ID of the thread that generated the log entry. You can see these thread IDs if you use a monitoring tool like SeeFusion. |
| Date | The date of the log entry. |
| Time | The time of the log entry. |
| Application | The name of the application that generated the log entry. |
| Message | The actual information that is logged. For errors it might be the stack trace, for the scheduler it might be information about scheduled tasks that have run, etc. |

So instead of having your logs spread across the servers, database logging lets you capture the exact same information into a database of your choice. Any database you connect to Lucee through the Administrator as a JDBC datasource can be used as the storage engine for the logs. Let's have a look at how this works:
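Under the hood, the appender writes one row per log entry. A table holding these fields might look roughly like this — a sketch only: the column names and types below are assumed from the field list above, and the table Lucee actually creates for you may differ.

```sql
-- Hypothetical schema mirroring the log fields above.
-- Lucee creates its own table, so treat this purely as illustration.
CREATE TABLE logs (
    severity    VARCHAR(16),    -- log level (ERROR, INFO, ...)
    threadid    VARCHAR(64),    -- ID of the thread that wrote the entry
    logdate     DATE,           -- date of the entry
    logtime     TIME,           -- time of the entry
    application VARCHAR(255),   -- name of the application
    message     VARCHAR(2048),  -- the logged text (currently capped at 2048 bytes)
    custom      VARCHAR(255)    -- free-form tag; see the "Custom" setting below
);
```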

Create a Datasource

First you create a datasource as usual. In my case I named it "logging".

Edit the log settings

Then you edit your log definition details. Select "datasource" from the appender tab (Lucee uses Log4j in the background, which is where the term "appender" comes from).

Configuring the datasource appender

You can define different things in this dialog:

1. Select the log level (see Severity above for details). This will obviously vary with your specific situation, but when you control the logging from your application, you know which level you need.

2. Next, select the datasource you created. It may make sense to log everything to a separate database, which you might have set up on a different disk (RAID 5 is fine), since the logging happens sequentially and there is no need for random access on that disk (ha ha... I never thought this line would be obsolete one day, but now, in the age of SSDs... a few years ago this was a very good performance-tuning tip). Nonetheless, I configure my logging to go to a separate database, and sometimes into different tables.

3. Next, select the table where you want your logging to happen. By default it is called "logs". You can of course log everything to the same table, which makes searching easier.

4. Custom. The custom field is an arbitrary string you can enter, which is useful for identifying your log entries later. Here I typically use things like the server the log came from and possibly the log type, unless I use different tables for different logs.
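On the application side, nothing changes once the appender is configured: the usual logging calls go to whichever appender the log definition is mapped to. A minimal sketch, assuming a log definition named "application" mapped to the datasource appender:

```cfml
<cfscript>
// writeLog() behaves exactly as before; only the destination changes.
// With the "application" log mapped to the datasource appender,
// this entry ends up as a row in the logs table instead of a log file.
writeLog(
    log  = "application",
    type = "error",
    text = "Payment gateway timed out for order ##1234"
);
</cfscript>
```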

Let's move our exception logging to the database and see what happens. I simply called a URL like the following:

[Screenshot: An error is generated]

[Screenshot: Log entries are now created]

As you can see, the data is now logged in the table. All set: your logs are now stored in a database for rapid searching and retrieval, instead of being scattered around the server in different locations and formats.
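With the rows in place, you can query them like any other table. A sketch, assuming the column names from the field list above — check the actual table Lucee generated in your database:

```sql
-- Most recent errors for one application (column and table names assumed)
SELECT severity, application, message
FROM logs
WHERE severity = 'ERROR'
  AND application = 'myapp'
ORDER BY logdate DESC, logtime DESC;
```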

Some notes regarding the current implementation:

* If you run any SELECTs against the table, remember that the data is not indexed

* Currently, this feature stores only 2048 bytes of the message

* We will continue to improve the logging features to make them simpler and easier to use, and to store all the data a stack trace provides
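Since the data is not indexed, it may be worth adding indexes yourself if you query the table regularly. For example (column names assumed; adjust to the actual schema in your database):

```sql
-- Speeds up the typical "recent errors for application X" query.
-- Column names are assumed and may differ in your schema.
CREATE INDEX idx_logs_app_severity ON logs (application, severity);
CREATE INDEX idx_logs_date ON logs (logdate);
```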

You can set up the logging to store all the logs in one table in one database if you want, but remember to put an entry into the custom string field so that you can distinguish things like:

* Instance

* Server

* Type of Log

* etc.
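If, say, you prefix the custom string with the server name and log type ("web01-exception"), separating the streams later is a simple WHERE clause — again, the column name is assumed:

```sql
-- All exception-log entries that came from server web01,
-- assuming a "<server>-<logtype>" convention in the custom field
SELECT *
FROM logs
WHERE custom LIKE 'web01-exception%';
```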

We appreciate any feedback on this feature. We also encourage you to tell us about improvements you would like to see! 
