With loggers we have something called an appender. Simply defined, an appender is the component responsible for taking a log request, building the log message in a suitable format and sending it to the output device.
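To make the idea concrete, here is a minimal sketch of wiring an appender by hand with the Logback API (the class name AppenderDemo and the pattern string are mine, for illustration; the post itself configures appenders through XML, as shown further down):

```java
import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;

public class AppenderDemo {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // The encoder builds the log message in a suitable format
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n");
        encoder.start();

        // The appender sends the formatted message to the output device (here, the console)
        ConsoleAppender<ILoggingEvent> appender = new ConsoleAppender<ILoggingEvent>();
        appender.setContext(context);
        appender.setEncoder(encoder);
        appender.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(appender);
        root.info("Logged through a programmatically attached appender");
    }
}
```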
So when we need to log to the console we have a ConsoleAppender. For file-based logging we use a FileAppender, and for logging to a database we have the DBAppender. Logging to a database? What for? Well, you log to files; when you want better persistence, you log to a database.
There are in fact applications where the logs are of high priority and warrant being placed in a database. For instance, I once worked with a client who logged all his web service request and response XML to a database table. He provided a screen over the table that allowed the support team to look up a particular request and its corresponding XML when debugging issues.
I decided to try the database appender with the Logback API. The first step was to download the necessary jars:
- logback-classic-1.0.13.jar
- logback-core-1.0.13.jar
- slf4j-api-1.7.5.jar
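If you manage dependencies with Maven instead of downloading the jars by hand, the equivalent coordinates would be roughly the following (logback-classic pulls in logback-core and slf4j-api transitively; the PostgreSQL driver version below is illustrative, not from the original post):

```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.13</version>
</dependency>
<!-- JDBC driver needed by the database appender -->
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.2-1004-jdbc4</version>
</dependency>
```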
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoders are assigned the type
         ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="db" class="ch.qos.logback.classic.db.DBAppender">
    <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
      <driverClass>org.postgresql.Driver</driverClass>
      <url>jdbc:postgresql://localhost:5432/simple</url>
      <user>postgres</user>
      <password>root</password>
    </connectionSource>
  </appender>
  <!-- the level of the root logger is set to DEBUG by default -->
  <root level="TRACE">
    <appender-ref ref="stdout" />
    <appender-ref ref="db" />
  </root>
</configuration>
```

As seen here, I have created two appenders: a console appender and a database appender.
The database appender requires the JDBC driver class, the JDBC URL and the database credentials. These are nested inside the connectionSource property, which indicates the type of connection wrapper we would like to use. Logback provides a few options here, and I went with the DriverManagerConnectionSource class.
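The other options include DataSourceConnectionSource and JNDIConnectionSource. As a hedged sketch, a pooled setup, which becomes relevant for the performance numbers quoted at the end of this post, could look roughly like this. The c3p0 data source is my choice for illustration; it is the one used in the Logback manual, but any pooling DataSource should work:

```xml
<appender name="db" class="ch.qos.logback.classic.db.DBAppender">
  <connectionSource class="ch.qos.logback.core.db.DataSourceConnectionSource">
    <!-- pooled connections instead of one connection per log event -->
    <dataSource class="com.mchange.v2.c3p0.ComboPooledDataSource">
      <driverClass>org.postgresql.Driver</driverClass>
      <jdbcUrl>jdbc:postgresql://localhost:5432/simple</jdbcUrl>
      <user>postgres</user>
      <password>root</password>
    </dataSource>
  </connectionSource>
</appender>
```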
The next step was to write a test class to exercise the configuration:
```java
package test.log;

import java.text.DateFormat;
import java.util.Date;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestDbAppender {
    private static final Logger logger = LoggerFactory.getLogger(TestDbAppender.class);

    public TestDbAppender() {
        logger.info("Class instance created at {}", DateFormat.getInstance().format(new Date()));
    }

    public void doTask() {
        logger.trace("In doTask");
        logger.trace("doTask complete");
    }

    public static void main(String[] args) {
        logger.warn("Running code...");
        new TestDbAppender().doTask();
        logger.debug("Code execution complete.");
    }
}
```

This would work, but if it did, where would the logs go? I looked into the Logback documentation:
"The DBAppender inserts logging events into three database tables in a format independent of the Java programming language. These three tables are logging_event, logging_event_property and logging_event_exception. They must exist before DBAppender can be used."Accordingly I picked up the script files for the same.
This resulted in three tables: logging_event, logging_event_property and logging_event_exception.
If we run the sample code now (with the JDBC driver jar on the classpath), we can see that the records were created:
```
10:04:25.943 [main] WARN  t.l.TestDbAppender - Running code...
10:04:26.001 [main] INFO  t.l.TestDbAppender - Class instance created at 11/22/13 10:04 AM
10:04:26.036 [main] TRACE t.l.TestDbAppender - In doTask
10:04:26.071 [main] TRACE t.l.TestDbAppender - doTask complete
10:04:26.104 [main] DEBUG t.l.TestDbAppender - Code execution complete.
```

While this was the console appender's output, the db appender wrote entries to the tables:
- Every log call is treated as a log event. An entry for it is added to the logging_event table. This entry includes the timestamp, the actual log message and the logger name, among other fields.
- If the logger under consideration has any Mapped Diagnostic Context (MDC) or context-specific values associated with it, then these will be placed in the logging_event_property table. Too many values here will definitely slow down the log process (see the sketch after this list).
- Messages logged at the error level often have stack traces associated with them. These are logged separately in the logging_event_exception table, with each trace line treated as a separate record.
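To see the logging_event_property table in action, we can put values into the MDC before logging. A minimal sketch reusing the logger from the test class above (the key names are made up for illustration):

```java
import org.slf4j.MDC;

public void doTaskWithMdc() {
    // Every MDC entry present when the log call executes becomes one row
    // in the logging_event_property table for that event.
    MDC.put("requestId", "REQ-1001"); // hypothetical keys, for illustration only
    MDC.put("user", "jdoe");
    try {
        logger.info("Processing request"); // 1 row in logging_event, 2 rows in logging_event_property
    } finally {
        // Clear the entries so they do not leak into unrelated log events
        MDC.remove("requestId");
        MDC.remove("user");
    }
}
```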
To test the exception flow, I modified the main method:

```java
public static void main(String[] args) {
    logger.error("Running code...", new RuntimeException("This is crazy"));
}
```

The console appender printed:
```
10:31:30.198 [main] ERROR t.l.TestDbAppender - Running code...
java.lang.RuntimeException: This is crazy
    at test.log.TestDbAppender.main(TestDbAppender.java:23) ~[bin/:na]
```

The exception details also showed up in the logging_event_exception table, one row per trace line.
Is this a must-have feature for our projects? Logging, as it is, has costs, and database logging is more expensive still. In the case of Logback, entries will be created in at least one table for every log message. If it is an exception, that becomes multiple entries in a second table. And if we have added values to our MDC map, there will be a correspondingly high number of insert calls on the logging_event_property table.
From the Logback docs:
"Experiments show that writing a single event into the database takes approximately 10 milliseconds, on a 'standard' PC. If pooled connections are used, this figure drops to around 1 millisecond."

As seen, the result depends on how standard your PC is. If your hardware can afford the hit and the application warrants this level of log maintenance, then this is a cool feature. With my client, the database appender eased his debugging headaches to a large extent. This feature is also available in Log4j.
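One way to soften the per-event cost, which the post does not cover, is to wrap the database appender in Logback's AsyncAppender so that the JDBC inserts happen on a background thread. A hedged configuration sketch:

```xml
<appender name="asyncDb" class="ch.qos.logback.classic.AsyncAppender">
  <!-- events are queued here and written out by a worker thread;
       note that by default AsyncAppender starts discarding TRACE, DEBUG
       and INFO events once the queue is running low on capacity -->
  <queueSize>512</queueSize>
  <appender-ref ref="db" />
</appender>

<root level="TRACE">
  <appender-ref ref="stdout" />
  <appender-ref ref="asyncDb" />
</root>
```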
How do you store exceptions in the database?

Check the last part of the post.
The problem comes when, say, you have been logging for a whole year and for some reason (performance, you don't need the 2010 logs, whatever) you are requested to delete or move the records from that year, and you have millions of them. That would definitely kill your application, since deleting several million records may take more than an hour.
How do you face this requirement?

Hi, I do not see this as a logging problem. The same issue could occur with any table in your application. In a scenario where you would like to periodically move or delete a massive number of records, one solution could be to run batch jobs at low-load times, or perhaps to use database-driven procedures. At worst you would probably need application downtime. There are sure to be other approaches too.
DeleteA good workmanlike introduction to logback. Full marks for that!
Please note that the schema files provided by the Logback team are to be taken with a grain of salt. Use them as inspiration, but be aware that the char field lengths in particular are somewhat arbitrary and should definitely be subject to change.
If you look at, say, how logging_event.formatted_message is implemented in the various database schemas in the Logback distribution, you'll see an amazing difference from database to database. At least one of them is really stupid, using a text blob on a platform where that data type incurs a serious performance overhead.
As you are apparently using Postgres, which is particularly strict about data integrity, you'll have to be extra careful. I was really hit by a unit test of mine, where an expected Postgres database exception was never logged in the logging_event_exception table. It took me more than half a day to figure out that the logging_event_exception.trace_line field is defined as character varying(254) in the supplied Postgres schema. Considering the rest of the schema, this is really silly, as the Postgres JDBC driver is inordinately verbose when it comes to error messages. The first line of the stack trace, which in fact was the driver error message, had a length of almost 800 chars. Try fitting that into 254 chars!
The varied database schema point is interesting. Thanks for sharing your Postgres experience. I have been looking to explore it as part of my blog posts, so your story helps :)
Changing the table columns might not be possible without impacting the db code, so I guess when using the logger-provided API there is a good chance of encountering these kinds of problems. Writing a custom wrapper over your logger could be a solution, but that is a big effort.
I am guessing you were able to solve your problem by increasing the column size.
Enlarging the size of char-based fields in the underlying database structure has no impact at all on the Logback system! So it was actually a trivial change to modify the schema. Luckily this is work in progress and not even close to a production stage, so I can recreate the logging tables at any time to my heart's content.
For a start I've just made logging_event_exception.trace_line a "text" field. I may reconsider that decision later, but for now it suffices. As said, it's work in progress.

Oh ok. I was wondering how closely coupled the Logback db code is. But yes, with insertion of string-based fields I think you should be fine. Do let me know how your final solution works out.
Great post, thanks!

Can you give a running example as a zip?
ReplyDeleteHow will you call a Stored procedure that ultimately log to some custom tables?
ReplyDeleteThanks,
Manoj
How can we retrieve the date from the timestamp in the logging events table?