Logging

This section describes the logging features that Flexter provides while it runs.

While Flexter runs, it collects and displays relevant logs as a list of recorded events:

10:09:34.227 INFO Initialized in 3055 milliseconds

The example above shows the event's timestamp, its level (here INFO, which indicates normal progress), and the corresponding message.

Other levels you may see while running Flexter:

- WARN: the application encountered a potentially harmful situation
- ERROR: a possible cause of failure was detected

Flexter logging is built on top of the Log4j APIs, a package which is freely distributed under the Apache Software License.

Its configuration can be specified in a properties file of key-value pairs, e.g.

log4j.rootCategory=WARN,stdout,stderr
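For reference, the stdout and stderr appenders named in the root category are typically defined in the same properties file. A minimal sketch (the appender definitions below follow standard log4j 1.x conventions and are assumptions, not Flexter's shipped defaults; the pattern mirrors the timestamp/level/message format of the example log line above):

```properties
# Root logger: WARN level, writing to the stdout and stderr appenders
log4j.rootCategory=WARN,stdout,stderr

# Console appender for standard output
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss.SSS} %-5p %m%n

# Console appender for standard error
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.Target=System.err
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=%d{HH:mm:ss.SSS} %-5p %m%n
```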

You can customize this configuration by passing the path of your own log4j.properties file with the -L parameter.

To turn on writing log files, add the file appender to the root category:

log4j.rootCategory=WARN,stdout,stderr,file

By default, log files are generated per job, in one of these locations:

1. /var/log/flexter
2. ~/.cache/flexter/log
3. %LOCALAPPDATA%/flexter/log
4. ./
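The file appender itself also needs a definition in the properties file. A sketch using the standard log4j 1.x FileAppender (the appender name matches the root category above; the path is illustrative, not Flexter's default):

```properties
# Plain file appender: appends all log events to a single file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/var/log/flexter/flexter.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss.SSS} %-5p %m%n
```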

Other log4j appenders, such as RollingFileAppender and DailyRollingFileAppender, can also log to files; they are recommended when you want to keep all jobs in the same log files.
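For example, a RollingFileAppender keeps a bounded set of log files across jobs (a sketch; the size, backup count, and path below are illustrative values, not Flexter defaults):

```properties
# Roll the log over at 10 MB, keeping up to 5 backup files
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/flexter/flexter.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss.SSS} %-5p %m%n
```

A DailyRollingFileAppender works the same way but rolls on a date pattern (e.g. DatePattern='.'yyyy-MM-dd) instead of a size limit.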

Another handy option is the -c/--commands parameter, which is useful for debugging: when set, all the SQL commands are written to the logs, showing what is actually happening behind the scenes.

For instance:

10:09:34.227 INFO Initialized in 3055 milliseconds
10:09:34.232 INFO Parsing data
10:09:35.044 INFO calculating xpath statistics
INSERT INTO schma(sch_name, id_org) VALUES('donut', 1) ;

WITH rps AS (SELECT 20 AS jump) ,setget AS (SELECT jump ,setval('seq_du_stat', nextval('seq_du_stat')+jump-1) as end_id FROM rps) SELECT end_id-jump+1 AS begin_id FROM setget ;

10:09:40.996 INFO writing xpath statistics
batching 19 statements: 1: INSERT INTO "du_stat" ("du_addr","du_ns","id_du","id_parent_du","found_qty","found_ne_qty","max_length_char_num","numeric_max_decimal_precision","per_parent_min_occurs","per_parent_min_ne_occurs","per_parent_max_occurs","per_parent_max_ne_occurs","min_val","max_val","mixed_flg","id_du_stat","id_parent_du_stat","du_name","du_type_cd","stat_stdtype","datetime_field_ord","id_job","id_sch") VALUES ('/items/item/fillings/filling',NULL,NULL,NULL,7,0,0,NULL,3,0,4,0,NULL,NULL,false,3945823,3945822,'filling','e',NULL,NULL,16019,4639) 
2: INSERT INTO "du_stat" ("du_addr","du_ns","id_du","id_parent_du","found_qty","found_ne_qty","max_length_char_num","numeric_max_decimal_precision","per_parent_min_occurs","per_parent_min_ne_occurs","per_parent_max_occurs","per_parent_max_ne_occurs","min_val","max_val","mixed_flg","id_du_stat","id_parent_du_stat","du_name","du_type_cd","stat_stdtype","datetime_field_ord","id_job","id_sch") VALUES ('/items',NULL,NULL,NULL,1,0,0,NULL,0,0,0,0,NULL,NULL,false,3945820,NULL,'items','e',NULL,NULL,16019,4639)
...
10:09:43.252 INFO updating schema origin 4639
10:09:43.253 INFO calculating document statistics
10:09:44.879 INFO writing document statistics
batching 1 statements: 1: INSERT INTO "doc_stat" ("id_doc","id_doc32","id_doc_num","status_flg","du_addr_qty","converter_calls_qty","converter_errors_qty","converter_warns_qty","truncations_qty","empty_notnull_qty","missing_maps_qty","fks_qty","missing_fks_qty","parser_warns_qty","doc_size","id_job","id_sch") VALUES ('Documents/flexter/samples/donut.zip!/donut.xml','ter/samples/donut.zip!/donut.xml',NULL,'C',154,0,0,0,0,0,154,0,0,0,4644,16019,4639) ;