Command Line Interface¶
The Command Line Interface (CLI) lets you use DaRT Reader programmatically and offers features that are unavailable in the GUI, such as validating checksums, writing to databases that are not bundled by default, and renaming database fields.
Prerequisites¶
The CLI is an executable named beye. It is located in the installation directory on Windows
and under Contents/MacOS on Mac OS. On Windows, it is added to the PATH automatically as part of the
installation process. If you want to add beye to the PATH on Mac OS, add a line such as the following to ~/.zshrc:
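The exact path depends on where the application is installed; the line below is a sketch that assumes a default installation under /Applications with a hypothetical bundle name:

export PATH="$PATH:/Applications/DaRT Reader.app/Contents/MacOS" # adjust the bundle name to match your installation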
Open a shell and type the following to verify that you can execute the CLI (assuming that beye prints usage information when invoked without arguments):
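beye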
You will also need a valid license key to use the CLI commands: open the GUI, go to File > Settings in the menu bar, and enter the license key.
CLI commands¶
The CLI provides the following commands:
inspect¶
Inspect a DaRT extract:
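A minimal sketch of the invocation, assuming the command takes the path to the directory file as its only argument:

beye inspect /path/to/directory-file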
The argument directory-file is a path to the directory file. All
payload files belonging to the extract must be in the same directory as the directory file.
The inspect command returns meta information about the extract and the payload files. The following is an example output:
Client (key)..............................
Logical system..................S10CLNT300
DaRT release...........................2.7
SAP release............................740
User..............................abcabcab
Request date....................2025-08-08
Request time......................10:09:08
Fiscal year...........................2019
Compressed..............................No
Separator used...........................;
Unicode mode...........................Yes
Number of end of line bytes..............2
Codepage..............................4103
t2
File exists................true
Expected file size.....11840096
Actual file size.......11840096
Difference....................0
list¶
List segments in an extract:
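A minimal sketch of the invocation, assuming segment-name is an optional positional argument and sep-size is passed as an option (the option spelling is an assumption):

beye list /path/to/directory-file [segment-name] [--sep-size n]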
This command lists the segments contained in the extract or, if segment-name is provided,
prints the field names and descriptions of that segment. The sep-size option increases the whitespace
between columns.
For example, running:
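beye list some-directory-file_DR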
will write the segment names, a description of each segment, and the number of rows of each segment of the extract described by some-directory-file_DR.
Example output:
FTR_GDPDU_XSTR_AT02 Financial transaction activity types 372
FTR_GDPDU_XSTR_AT07 Type of flows and conditions 90
FTR_GDPDU_XSTR_AT10 Financial transaction type name 145
FTR_GDPDU_XSTR_AT10B Customizing valuation 244
FTR_GDPDU_XSTR_AT30 Formula table for financial mathematics 3
FTR_GDPDU_XSTR_AT40 Calculation types of the cash flow calculator 60
FTR_GDPDU_XSTR_ATMA Type of a master agreement 1
FTR_GDPDU_XSTR_TRACC_AA_REF Account assignment reference of the parallel valuation areas 20
FTR_GDPDU_XSTR_TRDC_DFLOWTYPE Definition of update types 882
FTR_GDPDU_XSTR_TRGC_VAL_AREA Valuation areas 3
FTR_GDPDU_XSTR_TWPOB Portfolio position 13
FVD_GDPDU_XSTR_T056P Reference interest rate table 9
FVD_GDPDU_XSTR_TD01 Collateral types 15
FVD_GDPDU_XSTR_TZFB Calculation basis 19
FVD_GDPDU_XSTR_TZST Reversal reasons 7
TXW_ACCCAT Account assignment types 15
...
Executing the command with a segment name will return the field position, the field name, and the field description:
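For example, a sketch that lists the fields of the TXW_ACCCAT segment from the listing above (assuming the segment name is passed after the directory file):

beye list some-directory-file_DR TXW_ACCCAT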
Here, as throughout the CLI, it is important that the payload files are in the same directory as the directory file.
save¶
Save extract to a database:
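A minimal sketch of the invocation, assuming the command takes the path to the config file as its only argument:

beye save /path/to/config-file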
This command saves the extract in a relational database. The config-file is written in Human-Optimized Config Object Notation (HOCON), a superset
of JSON. If you want to use one of the bundled database drivers, it has the following structure:
{
  "path-to-directory-file": "/path/to/directory-file",
  "robust": false,
  // this property is currently not honoured but needs to be provided
  "chunk-size-per-connection": 5000,
  // the number of rows that each chunk contains per connection
  "number-connections": 5,
  // the number of database connections
  "table-prefix": null,
  // optionally prefix all tables with this. Useful for writing to schemas
  "encoding": null,
  // optional hint for the encoding of the extract
  "selected-segments": {
    "type": "include", // use "include" to include segments and "exclude" to exclude segments
    "segments": ["TXW_BI_HD", "TXW_BI_POS"] // an array of segments to include or exclude
  },
  // optional; if null, all segments will be written
  "table-rename": {
    "TXW_FI_POS": { // old table name, e.g. TXW_FI_POS
      "BSEG": { // new table name, e.g. BSEG
        "DMBTR": "AMOUNT" // old field name (e.g. DMBTR) to new field name (e.g. AMOUNT)
      }
    }
  },
  // use this if you want to rename tables and/or fields
  "auto-commit": false,
  // true if SQL statements should be auto-committed, false otherwise. Default false
  "connection-timeout": 600000,
  // maximum number of milliseconds that a client will wait for a connection. Default 600000
  "max-life-time": 150000,
  // maximum lifetime of a connection in the connection pool in milliseconds. Default 150000
  "validation-timeout": 300000,
  // maximum number of milliseconds after which a connection is declared dead
  "database-connection": {
    "type": "bundled-driver",
    "connection-string": "jdbc:postgresql://localhost:5432/mydb",
    "username": "auser", // optional user name. Can also be passed on the command line
    "password": "secret", // optional password. Better to pass on the command line
    "driver": "postgres" // one of: sqlite, h2, mssql, postgres, mysql, duckdb, access
  }
}
If you want to use your own driver, the structure is the same except for the database-connection property:
{
  "path-to-directory-file": "/path/to/directory-file",
  "robust": false,
  // this property is currently not honoured but needs to be provided
  "chunk-size-per-connection": 5000,
  // the number of rows that each chunk contains per connection
  "number-connections": 5,
  // the number of database connections
  "table-prefix": null,
  // optionally prefix all tables with this. Useful for writing to schemas
  "encoding": null,
  // optional hint for the encoding of the extract
  "selected-segments": {
    "type": "include", // use "include" to include segments and "exclude" to exclude segments
    "segments": ["TXW_BI_HD", "TXW_BI_POS"] // an array of segments to include or exclude
  },
  // optional; if null, all segments will be written
  "table-rename": {
    "TXW_FI_POS": { // old table name, e.g. TXW_FI_POS
      "BSEG": { // new table name, e.g. BSEG
        "DMBTR": "AMOUNT" // old field name (e.g. DMBTR) to new field name (e.g. AMOUNT)
      }
    }
  },
  // use this if you want to rename tables and/or fields
  "auto-commit": false,
  // true if SQL statements should be auto-committed, false otherwise. Default false
  "connection-timeout": 600000,
  // maximum number of milliseconds that a client will wait for a connection. Default 600000
  "max-life-time": 150000,
  // maximum lifetime of a connection in the connection pool in milliseconds. Default 150000
  "validation-timeout": 300000,
  // maximum number of milliseconds after which a connection is declared dead
  "database-connection": {
    "type": "own-driver",
    "connection-string": "jdbc:postgresql://localhost:5432/mydb",
    "driver-class-name": "org.postgresql.Driver",
    // the class name of the JDBC driver
    "username": "auser",
    // optional user name. Can also be passed on the command line
    "password": "secret",
    // optional password. Better to pass on the command line
    "field-mapping": {
      "BigDecimal": {
        "type-name": "DECIMAL", // map the decimal type in the extract to the postgres DECIMAL type
        "has-length": false // true if length information should be included, as in CHAR(3), i.e. length = 3
      },
      "String": {
        "type-name": "VARCHAR", // map the string type in the extract to the postgres VARCHAR type
        "has-length": true
      },
      "Int": {
        "type-name": "INTEGER",
        "has-length": false
      },
      "Date": {
        "type-name": "DATE",
        "has-length": false
      },
      "Float": {
        "type-name": "FLOAT",
        "has-length": false
      },
      "BigInt": {
        "type-name": "BIGINT",
        "has-length": false
      },
      "LocalTime": {
        "type-name": "TIME",
        "has-length": false
      },
      "YearMonth": {
        "type-name": "DATE",
        "has-length": false
      },
      "Char": {
        "type-name": "CHAR",
        "has-length": true
      }
    }
  }
}
With number-connections and chunk-size-per-connection you can tune the throughput. For example,
with number-connections = 5 and chunk-size-per-connection = 5000, batches of 5000 rows will
be written over 5 parallel connections. The higher the throughput, the more memory is consumed. If
you encounter memory exceptions, you need to provide the underlying JVM process with more
memory (see Advanced configuration).
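As a sketch, the two properties from the example config combine as follows:

"number-connections": 5,           // 5 parallel connections
"chunk-size-per-connection": 5000  // 5000 rows per batch, i.e. up to 5 × 5000 = 25000 rows in flight at once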
DaRT Reader will use the capabilities of the JDBC driver to escape reserved keywords of the particular SQL
dialect. However, not all drivers provide methods for escaping and therefore it can happen that DaRT Reader
will report SQL errors. To work around these problems, you could either exclude the offending segment
using the selected-segments property or rename the field/table using the table-rename property.
Use the table-prefix property to write to a specific schema. Specifying table-prefix = "myschema." will prepend
"myschema." to each table name, which on most databases has the effect of writing to the myschema schema.
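For example, a config fragment that excludes a problematic segment and writes the remaining tables to a schema could look like this (a sketch; the segment and schema names are illustrative):

"table-prefix": "myschema.",
"selected-segments": {
  "type": "exclude",
  "segments": ["TXW_BI_POS"] // hypothetical offending segment
}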
validate¶
Validate database against extract:
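A minimal sketch of the invocation, assuming the command takes the same config file that was used for save:

beye validate /path/to/config-file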
The DaRT extract contains sums over the values of fields in some segments. These
are called checksums. The validate command calculates these sums in the database and
compares them to the fixed numbers contained in the extract. Reuse the config-file
that was used to save the extract.
In an example run, the output shows that the extract contains two checksums (the SAP default) and that both match.
write-to-file¶
Export segments to CSV:
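A minimal sketch of the invocation, assuming the directory file and the output directory are passed before the segment names:

beye write-to-file /path/to/directory-file /path/to/output-dir [segment-names...]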
segment-names... is a whitespace-separated list of segments to write.
For example, running a command along these lines (a sketch; here "." stands for the current working directory):
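beye write-to-file some-directory-file_DR . TXW_CUST TXW_COMPC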
will write two CSV files (TXW_CUST.csv and TXW_COMPC.csv) containing the segments TXW_CUST and TXW_COMPC to the current working directory.
If segment-names is omitted,
all segments are written to output-dir. The file name used for each segment is the segment name followed
by .csv. If a file of the same name already exists in output-dir, the operation aborts
with a message that the file already exists. Note, however, that this command does not check for
conflicting files before it starts writing; it writes the segments in batches as it encounters
them in the extract. As a result, some segments may already have been written successfully when
the file conflict occurs.
Advanced configuration¶
The CLI can be configured through the config file described in the save command. However, it is sometimes necessary to provide extra parameters to the underlying JVM process. Two use cases in particular are common:
1. You want to provide the JVM process with more memory
2. You want to put your own JDBC driver on the classpath
From the installation directory, follow the relative path app on Windows and Contents/app on Mac OS and open the
file beye.cfg. At the end of this file, in a section called [JavaOptions], make the following additions:
- To put the JDBC driver on the class path, add a class-path option pointing at it, substituting /path/to/JDBC/driver/* with the path to the directory containing the JDBC driver (see the sketch below).
- To increase memory, add options for a 512mb initial heap size and a 4g maximum heap size (see the sketch below). Increase these parameters if the JVM process runs out of memory.
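A sketch of what the additions might look like, assuming the cfg file accepts standard JVM options through a java-options= prefix (the prefix and the exact class-path option are assumptions and may differ in your installation):

java-options=-Djava.class.path=/path/to/JDBC/driver/*
java-options=-Xms512m
java-options=-Xmx4g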