Target Connections

This section goes through the Flexter target connection creation procedure.

Target Connections are based on the URI standard, which means Flexter can access different locations and protocols defined by the path.

protocol://user:password@hostname:port/path?param=value#param=value
file://
ftp:// ftps://
hdfs:// maprfs://
s3a://
jdbc:xyz://
snowflake://
bigquery://
mongodb://
redshift://

Supported JDBC protocols:

jdbc:bigquery:
jdbc:derby:
jdbc:sqlserver:
jdbc:mariadb:
jdbc:mysql:
jdbc:oracle:
jdbc:postgresql:
jdbc:redshift:
jdbc:snowflake:
jdbc:teradata:

Some of these protocols require you to provide a username and password. In all cases, it is possible to define them using the parameters or configuration files. For some protocols, e.g. FTP, the username and password can be specified directly in the URI path, as shown in the example below.

-u, --out-user USER            Output user
                          	
-p, --out-password PASSWORD    Output password
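
As a hypothetical sketch, the credentials can either be passed with the parameters above or embedded directly in the output URI (the host, user and path below are placeholders):

# credentials passed as parameters
$ xml2er|json2er -o ftp://ftphost/output/path -u USER -p PASS …

# credentials embedded in the FTP URI
$ xml2er|json2er -o ftp://USER:PASS@ftphost/output/path …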

The Target Formats can be split into four types:

1. File based
2. Hive
3. JDBC
4. Spark Connector

All four types have the following parameters in common:

-S, --savemode SAVEMODE     Save Mode when table, directory or file
                            exists or when getting the output tables' DDL
                            [e]rror, [a]ppend, [o]verwrite, [i]gnore, [w]riteschema, [p]rintschema
                            default: append

-Y, --out-part PARTITIONS   Number of partitions for writing data

-N, --unified-fks           Unified FKs in composite columns (applies to "reuse" optimization)

--reset-pks                 Sequential reset primary keys starting from 1

--out-opt PROP=VALUE,...    Extra options for Spark dataFrame writer, ex: prop1=value1,prop2=value2

-r, --out-prefix PREFIX     Append a prefix in each output table name
                              
-y, --out-suffix SUFFIX     Append a suffix in each output table name

--rename [IDX=|NAME=]NEW_NAME,* 
                            Rename a table or multiple tables by index (1,2,3...) or by its original name

--namemode MODE             Change the table/column original names to:
                            - "": Keep original names
                            - [l]ower: to lower case.
                            - [u]pper: to upper case.
                            - [c]amelcase: Remove separators and separate words by case (camelCase).
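
For illustration, a hypothetical run that prefixes every output table with "stg_" and lower-cases all table and column names could look like this:

# hypothetical example: prefix output tables with "stg_" and lower-case all names
$ xml2er|json2er -o <Output Directory> -r stg_ --namemode l …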

Unified foreign keys are an alternative to the ER normalization. They merge all foreign keys into two columns: one to store the table name and another to store the foreign key value.

Resetting the primary keys is an option for generating primary keys that restart from 1. It can conflict with other generated batches, but it can be useful for the first run or for the overwrite save mode.
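
A hypothetical first run that overwrites any existing output and restarts the primary keys from 1 might look like this:

# hypothetical example: overwrite existing output and restart primary keys from 1
$ xml2er|json2er -o <Output Directory> -S o --reset-pks …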

Flexter can generate a partitioned table as output. The --partition-by parameter generates partitioned output for each table. It can use a static value, a random value (date, time, timestamp, uuid), or an input column value for partitioning.

The partition column name can be auto-generated based on the --partition-by parameter, or it can be set explicitly with the --partition-by-name parameter.
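
As a rough sketch (the exact value syntax accepted by --partition-by may differ), partitioning each output table by a generated date column could look like this:

# hypothetical example: partition output by a generated date column named "load_date"
$ xml2er|json2er -o <Output Directory> --partition-by date --partition-by-name load_date …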

File Based Target Connections

For file-based Target Connections, we need to define the Target Format and the location of the output directory.

$ xml2er|json2er -f <File Format> -o <Output Directory> …

Useful parameters

-f, --out-format FORMAT         Output format:
                                - jdbc, parquet, orc, json, csv, tsv, psv, avro...
                                default: orc
                          	
-z, --compression COMPRESSION   Compression mode for filesystem output
                                - none, snappy, gzip, bzip2, deflate, lzo, lz4, zlib, xz...
                                default: snappy

Similar to a Source Connection, the format parameter also accepts the full class name if the extra jars are included in the spark-submit library path. Here is an example for working with Avro.

# example writing to avro files
$ xml2er|json2er -f avro -o <Output Directory> …
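
As a hypothetical variant, if the external spark-avro package jars are on the spark-submit library path, the full format class name could be used instead of the short name:

# hypothetical example using the full format class name of the external spark-avro package
$ xml2er|json2er -f com.databricks.spark.avro -o <Output Directory> …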

Hive as Target Connection

Hive as a Target Connection behaves similarly to Target Connections that output files. This mode is activated by the parameter -V or --hive-create, and the output location is optional.

$ xml2er|json2er -V …
When the output location is not defined, the tables will be created as managed tables, following the database/schema default location.

With a defined output location, the tables will be created as external tables.

$ xml2er|json2er -V -o <Output Directory> …
The target schema may be provided; otherwise, the Hive "default" schema is used implicitly.

The useful options are listed below:

-V, --hive-create              Enable creating hive tables
                          	
-E, --out-schema SCHEMA        Creating hive or jdbc tables into schema
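
For example, a hypothetical run that creates external Hive tables in a schema called "staging" (both the schema name and output directory are placeholders):

# hypothetical example: external Hive tables in the "staging" schema
$ xml2er|json2er -V -E staging -o <Output Directory> …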

JDBC as Target Connection

When specifying JDBC as a Target Connection, you need to provide an output path that uses the jdbc: protocol.

$ xml2er|json2er -o jdbc:<database url> -u USER -p PASS …
An important parameter:
-B, --batchsize BATCHSIZE   	Batch size to write into databases
                                default: 1000

The batch size can be changed to group the inserts into bigger or smaller packages before they are sent to the database.
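
For instance, a hypothetical run against PostgreSQL with a larger batch size (host, database and credentials are placeholders):

# hypothetical example: write to PostgreSQL using batches of 5000 rows
$ xml2er|json2er -o jdbc:postgresql://dbhost:5432/x2er -u flex2er -p PASS -B 5000 …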

Because of the parallel nature of the Apache Spark architecture, Flexter cannot guarantee that all inserts are made in the same transaction.

Each table requires a new transaction, and the write can be performed by one or more executors. Each executor across the cluster uses its own transaction.

Printing out the DDL to the console or writing it to the target JDBC database

Whether the target is a JDBC Connection or a Hive schema, you can print out the output tables' DDL to the console using the [p]rintschema save mode.
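
A hypothetical invocation (with a placeholder JDBC URL and credentials) could look like this, producing output similar to the log below:

# hypothetical example: print the generated DDL instead of writing data
$ xml2er|json2er -o jdbc:postgresql://dbhost:5432/x2er -u flex2er -p PASS -S p …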

...
17:38:30.827 INFO Initialized in 2889 milliseconds
17:38:30.829 INFO getting schema definition
17:38:30.831 INFO Loading metadata
17:38:31.407 INFO printing schema definition

CREATE TABLE items ("PK_items" NUMERIC(38) NOT NULL, "FILENAME" VARCHAR(4000) NOT NULL, "FILEPATH" TEXT NOT NULL, PRIMARY KEY("PK_items"))
CREATE TABLE item ("PK_item" NUMERIC(38) NOT NULL, "FK_items" NUMERIC(38) NOT NULL, "id" SMALLINT , "name" VARCHAR(10) , "ppu" FLOAT , "type_" VARCHAR(6) , PRIMARY KEY("PK_item"))
CREATE TABLE topping ("FK_item" NUMERIC(38) NOT NULL, "id" SMALLINT , "topping" VARCHAR(24) )
CREATE TABLE batter ("PK_batter" NUMERIC(38) NOT NULL, "FK_item" NUMERIC(38) NOT NULL, "id" SMALLINT , "name" VARCHAR(12) , PRIMARY KEY("PK_batter"))
CREATE TABLE size_ ("FK_batter" NUMERIC(38) NOT NULL, "size_" VARCHAR(6) NOT NULL)
CREATE TABLE filling ("FK_item" NUMERIC(38) NOT NULL, "addcost" FLOAT , "id" SMALLINT , "name" VARCHAR(16) )

17:38:32.966 INFO checking foreign keys

ALTER TABLE "item" ADD FOREIGN KEY("FK_items") REFERENCES "items"("PK_items");
ALTER TABLE "topping" ADD FOREIGN KEY("FK_item") REFERENCES "item"("PK_item");
ALTER TABLE "batter" ADD FOREIGN KEY("FK_item") REFERENCES "item"("PK_item");
ALTER TABLE "size_" ADD FOREIGN KEY("FK_batter") REFERENCES "batter"("PK_batter");
ALTER TABLE "filling" ADD FOREIGN KEY("FK_item") REFERENCES "item"("PK_item");

Alternatively, you can create the empty target tables using the [w]riteschema mode.

These options can be extremely useful for setting up and preparing the database before writing the data.
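
A hypothetical run using the [w]riteschema mode (placeholder JDBC URL and credentials), followed by checking the created tables in psql:

# hypothetical example: create the empty target tables without writing data
$ xml2er|json2er -o jdbc:postgresql://dbhost:5432/x2er -u flex2er -p PASS -S w …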

$ psql -U flex2er -d x2er
Password for user flex2er:
psql 
Type "help" for help.

x2er=# \dt
        List of relations
 Schema |   Name   | Type  |  Owner  
--------+----------+-------+---------
 public | batter   | table | flex2er
 public | filling  | table | flex2er
 public | item     | table | flex2er
 public | items    | table | flex2er
 public | size_    | table | flex2er
 public | topping  | table | flex2er

Spark Connector as Target Connection

Apache Spark offers the option to create custom Spark connectors, which vendors usually build themselves.

Flexter supports only a few of these connectors; others can be tried out if they can operate by setting the correct parameters.

The supported ones are:

  • Spark Snowflake Connector: snowflake://
  • Spark BigQuery Connector: bigquery://
  • Spark MongoDB Connector: mongodb://
  • Spark Redshift Connector: redshift://

These connectors aren’t shipped with Flexter, but the Flexter Modules / download.sh tool can help to install them into Spark, or they can be downloaded using the xml2er|json2er|merge2er --pkg PKG parameter, analogous to the spark-submit --packages parameter.

Spark Snowflake Connector

Example using the Spark Snowflake connector. It follows the same standard as the JDBC URL jdbc:snowflake://<snowflake_url>.

# user/password based
$ xml2er|json2er|merge2er -o 'snowflake://<snowflake_url>[?param=value[&param=value]][;param=value[&param=value]]' -u USER -p PASS …

# password-less private key
$ xml2er|json2er|merge2er -o 'snowflake://<snowflake_url>[?param=value[&param=value]][;param=value[&param=value]]' -u USER --out-opt private_key_file=pathTo/file.key …

# password based private key
$ xml2er|json2er|merge2er -o 'snowflake://<snowflake_url>[?param=value[&param=value]][;param=value[&param=value]]' -u USER --out-opt private_key_file=pathTo/file.key --out-opt private_key_file_pwd=PASS …

Important parameters:

-E, --out-schema SCHEMA     Creating hive or jdbc tables into schema

--out-opt PROP=VALUE,...    Extra options for Spark dataFrame writer, ex: prop1=value1,prop2=value2
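
As a hypothetical sketch, additional Spark Snowflake connector writer options, such as sfWarehouse, could be passed through --out-opt, assuming the connector accepts them via the dataFrame writer (schema and warehouse names below are placeholders):

# hypothetical example: write into a named schema and warehouse
$ xml2er|json2er|merge2er -o 'snowflake://<snowflake_url>' -u USER -p PASS -E PUBLIC --out-opt sfWarehouse=MY_WH …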

Spark BigQuery Connector

Example using Spark BigQuery connector.

# direct writing method
$ xml2er|json2er|merge2er -o 'bigquery://project[:dataset][?param=value[&param=value]][;param=value[&param=value]]' -u USER -p PASS …

# indirect writing method
$ xml2er|json2er|merge2er -o 'bigquery://project[:dataset]?temp=<gcs_bucket>[&param=value][;param=value[&param=value]]' -u USER -p PASS …

Important parameters:

-E, --out-schema DATASET    Creating BigQuery tables into dataset

--out-opt PROP=VALUE,...    Extra options for Spark dataFrame writer, ex: prop1=value1,prop2=value2

Spark MongoDB Connector

Example using Spark MongoDB connector.

$ xml2er|json2er|merge2er -o 'mongodb://HOST[:PORT]/DB[.COLLECTION][?param=value[&param=value]][;param=value[&param=value]]' -u USER -p PASS …

Important parameters:

-E, --out-schema COLLECTION Creating MongoDB tables into collection

--out-opt PROP=VALUE,...    Extra options for Spark dataFrame writer, ex: prop1=value1,prop2=value2

Custom Spark Connector

Other, unsupported connectors can also be tried. If they don’t require code changes and operate entirely through parameters, it may be worth trying them out.

$ xml2er|json2er|merge2er -f FORMAT_NAME_OR_CLASS --out-opt PROP=VALUE --conf PROP=VALUE

Important parameters:

-f FORMAT_NAME_OR_CLASS     The custom format name or class.   

-o OUTPUT_PATH              It sets the dataFrame writer option path=value

-E, --out-schema SCHEMA     It appends the schema_name. in each written table.

--conf PROP=VALUE           It sets Spark Configuration settings, some connectors require it. 
                            ex: prop1=value1,prop2=value2

--out-opt PROP=VALUE,...    Extra options for Spark dataFrame writer.
                            ex: prop1=value1,prop2=value2
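
As a hypothetical sketch of trying an unsupported connector, the open-source Delta Lake format could be attempted by installing its package and setting the Spark configuration it requires (the package coordinates and settings shown are those of the Delta Lake connector itself and are not verified against Flexter):

# hypothetical example: trying the Delta Lake connector as a custom format
$ xml2er|json2er|merge2er -f delta -o <Output Directory> \
    --pkg io.delta:delta-core_2.12:2.4.0 \
    --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
    --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog …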