In addition to the WbExport, WbImport and WbCopy commands, SQL Workbench/J implements a set of additional SQL commands that are not part of the SQL standard. These commands can be used like any other SQL command (such as UPDATE) inside SQL Workbench/J, i.e. inside the editor or as part of a SQL script that is run through SQL Workbench/J in batch mode.
As those commands are implemented by SQL Workbench/J, you will not be able to use them when running your SQL scripts with a different client program (e.g. psql, SQL*Plus or phpMyAdmin).
Creates an XML report of selected tables. This report can be used to generate HTML documentation of the database (e.g. using the XSLT command). The report can also be generated from within the Database Object Explorer.
The resulting XML file can be transformed into an HTML documentation of your database schema. Sample stylesheets can be downloaded from http://www.sql-workbench.net/xstl.html. If you have XSLT stylesheets that you would like to share, please send them to <support@sql-workbench.net>.
Note: To see table and column comments with an Oracle database, you need to enable remarks reporting for the JDBC driver, otherwise the driver will not return comments.
The command supports the following parameters:
Parameter | Description |
---|---|
-file | The filename of the output file. |
-tables | A (comma separated) list of tables to report. Default is
all tables. If this parameter is specified -schemas is ignored.
If you want to generate the report on tables from different users/schemas you have
to use fully qualified names in the list (e.g. -tables=MY_USER.TABLE1,OTHER_USER.TABLE2 )
You can also specify wildcards in the table name: -tables=CONTRACT_% will create
an XML report for all tables that start with CONTRACT_ .
|
-excludeTableNames |
A (comma separated) list of tables to exclude from reporting. This is only used if
-tables is also specified. To create a report on all tables, but exclude those that start
with 'DEV', use -tables=* -excludeTableNames=DEV*
|
-schemas | A (comma separated) list of schemas to generate the report from.
For each user/schema all tables are included in the report. e.g.
-schemas=MY_USER,OTHER_USER would generate a report
for all tables in the schemas MY_USER and OTHER_USER .
|
-includeTables | Control the output of table information for the report. The default is
true . Valid values are true , false .
|
-includeTableGrants | If tables are included in the output, the grants for each table can also be included with
this parameter. The default value is false .
|
-includeProcedures | Control the output of stored procedure information for the report. The default is
false . Valid values are true , false .
|
-includeTriggers |
This parameter controls if table triggers are added to the output.
The default value is true .
|
-includeSequences | Control the output of sequence information for the report. The default is
false . Valid values are true , false .
|
-reportTitle |
Defines the title for the generated XML file. The specified title is written
into the tag <report-title> and can be used when
transforming the XML e.g. into a HTML file.
|
-stylesheet | Apply a XSLT transformation to the generated XML file. |
-xsltOutput | The name of the generated output file when applying the XSLT transformation. |
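Putting these parameters together, a typical invocation might look like the following sketch. The file names are placeholders, and the stylesheet name assumes one of the sample stylesheets downloaded from the homepage:

```sql
-- Report all tables starting with CONTRACT_, including their grants,
-- and transform the XML into HTML in the same run.
WbSchemaReport -file=contract_tables.xml
               -tables=CONTRACT_%
               -includeTableGrants=true
               -stylesheet=wbreport2html.xslt
               -xsltOutput=contract_tables.html;
```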
WbSchemaDiff
analyzes two schemas (or a list of tables)
and outputs the differences between those schemas as an XML file. The XML file
describes the changes that need to be applied to the target schema to have
the same structure as the reference schema, e.g. modify column definitions,
remove or add tables, remove or add indexes.
The output is intended to be transformed using XSLT (e.g. with the XSLT command). Sample XSLT transformations can be found on the SQL Workbench/J homepage.
The command supports the following parameters:
Parameter | Description |
---|---|
-referenceProfile | The name of the connection profile for the reference connection. If this is not specified, then the current connection is used. |
-referenceGroup | If the name of your reference profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter. |
-targetProfile |
The name of the connection profile for the target connection (the one that needs to be migrated). If this is not specified, then the current connection is used.
If you use the current connection for reference and target,
then you should prefix the table names with the schema/user, or
use the -referenceSchema and -targetSchema parameters. |
-targetGroup | If the name of your target profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter. |
-file | The filename of the output file. If this is not supplied the output will be written to the message area |
-referenceTables | A (comma separated) list of tables that are the reference tables, to be checked. |
-targetTables |
A (comma separated) list of tables in the target
connection to be compared to the source tables. The tables
are "matched" by their position in the list: the first table in the
-referenceTables list is compared to the first table in this list, and so on.
If you omit this parameter, then all tables from the
target connection with the same names as those listed in
-referenceTables are used.
If you omit both parameters, then all tables that the user can access are retrieved from the source connection and compared to the tables with the same name in the target connection. |
-referenceSchema | Compare all tables from the specified schema (user) |
-targetSchema | A schema in the target connection to be compared to the tables from the reference schema. |
-encoding | The encoding to be used for the XML file. The default is UTF-8 |
-includePrimaryKeys | Select whether primary key constraint definitions should be compared as well.
The default is true .
Valid values are true or false .
|
-includeForeignKeys | Select whether foreign key constraint definitions should be compared as well.
The default is true .
Valid values are true or false .
|
-includeTableGrants |
Select whether table grants should be compared as well.
The default is false .
|
-includeTriggers |
Select whether table triggers are compared as well.
The default value is true .
|
-includeConstraints |
Select whether table and column (check) constraints should be compared as well. SQL Workbench/J compares the constraint definition (SQL) as stored in the database.
The default is to compare table constraints ( true ). |
-useConstraintNames |
When including check constraints this parameter controls whether constraints should be matched by name, or only by their expression. If comparing by names is enabled, the diff output will contain elements for constraint modification otherwise only drop and add entries will be available.
The default is to compare by names ( true ). |
-includeViews |
Select whether views should also be compared. When comparing
views, the source as it is stored in the DBMS is compared. This comparison
is case-sensitive, which means that differences in upper/lowercase are reported as differences.
The default is true . |
-includeProcedures |
Select whether stored procedures should also be compared. When comparing procedures, the source as it is stored in the DBMS is compared. This comparison is case-sensitive. A comparison across different DBMS will also not work!
The default is |
-includeIndex |
Select whether indexes should be compared as well. The default
is to not compare index definitions.
Valid values are true or false .
|
-includeSequences |
Select whether sequences should be compared as well. The default is
to not compare sequences. Valid values are true , false .
|
-useJdbcTypes |
Define whether to compare the DBMS specific data types, or
the JDBC data type returned by the driver. When comparing
tables from two different DBMS it is recommended to use
-useJdbcTypes=true.
Valid values are true , false . |
-stylesheet | Apply a XSLT transformation to the generated XML file. |
-xsltOutput | The name of the generated output file when applying the XSLT transformation. |
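A minimal sketch of how these parameters combine (the profile and file names are illustrative, not taken from the manual):

```sql
-- Compare the structure of the Staging database against Production
-- and write the required changes to an XML file.
WbSchemaDiff -referenceProfile="Production"
             -targetProfile="Staging"
             -file=migrate_staging_schema.xml
             -includeIndex=true
             -includeViews=false;
```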
The WbDataDiff
command can be used to generate SQL scripts
that update a target database such that the data is identical to a reference
database. This is similar to the WbSchemaDiff
but compares
the actual data in the tables rather than the table structure.
For each table the command will create up to three script files, depending on
the needed statements to migrate the data. One file for UPDATE
statements,
one file for INSERT
statements and one file for DELETE
statements (if -includeDelete=true
is specified)
Note: As this command needs to read every row from the reference and the target table, processing large tables can take quite some time, especially if -includeDelete=true is specified.
WbDataDiff
requires that all involved tables have a primary key
defined. If a table does not have a primary key, WbDataDiff
will
stop the processing.
To improve performance (a bit), the rows are retrieved in chunks from the
target table by dynamically constructing a WHERE clause for the rows
that were retrieved from the reference table. The chunk size
can be controlled using the property workbench.sql.sync.chunksize.
The chunk size defaults to 25. This is a conservative setting to avoid
problems with long SQL statements when processing tables that have
a PK with multiple columns. If you know that your primary keys
consist only of a single column and the values won't be too long, you
can increase the chunk size, possibly increasing the performance when
generating the SQL statements. As most DBMS have a limit on the length
of a single SQL statement, be careful when setting the chunksize too high.
The same chunk size is applied when generating DELETE
statements by the WbCopy
command,
when syncDelete mode is enabled.
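If your primary keys are single, short columns, the chunk size could be raised for the current session, e.g. using WbSetConfig. This is a sketch; the value 100 is only an example and should stay within your DBMS's statement length limit:

```sql
-- Raise the sync chunk size from the default of 25 to 100 rows
WbSetConfig workbench.sql.sync.chunksize=100;
```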
The command supports the following parameters:
Parameter | Description |
---|---|
-referenceProfile | The name of the connection profile for the reference connection. If this is not specified, then the current connection is used. |
-referenceGroup | If the name of your reference profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter. If the profile's name is unique you can omit this parameter |
-targetProfile |
The name of the connection profile for the target connection (the one that needs to be migrated). If this is not specified, then the current connection is used.
If you use the current connection for reference and target,
then you should prefix the table names with the schema/user, or
use the -referenceSchema and -targetSchema parameters. |
-targetGroup | If the name of your target profile is not unique across all profiles, you will need to specify the group in which the profile is located with this parameter. |
-file |
The filename of the main script file. The command creates two
scripts per table. One script named update_<tablename>.sql
that contains all needed UPDATE or INSERT
statements. The second script is named delete_<tablename>.sql
and will contain all DELETE statements for the target table.
The main script merely calls (using WbInclude)
the generated scripts for each table.
|
-referenceTables |
A (comma separated) list of tables that are the reference
tables, to be checked. You can specify the table with wildcards,
e.g. -referenceTables=P% to compare all tables
that start with the letter P .
|
-targetTables |
A (comma separated) list of tables in the target
connection to be compared to the source tables. The tables
are "matched" by their position in the list: the first table in the
-referenceTables list is compared to the first table in this list, and so on.
If you omit this parameter, then all tables from the
target connection with the same names as those listed in
-referenceTables are used.
If you omit both parameters, then all tables that the user can access are retrieved from the source connection and compared to the tables with the same name in the target connection. |
-referenceSchema | Compare all tables from the specified schema (user) |
-targetSchema | A schema in the target connection to be compared to the tables from the reference schema. |
-checkDependencies |
Valid values are true , false .
Sorts the generated scripts in order to respect foreign key dependencies for deleting and inserting rows.
The default is true . |
-includeDelete |
Valid values are true , false .
Generates DELETE statements for rows that are present in the target table, but not in the reference table.
The default is false . |
-type |
Valid values are sql , xml .
Defines the type of the generated files. |
-encoding |
The encoding to be used for the SQL scripts. The default depends
on your operating system; it will be displayed when you run
WbDataDiff without any parameters.
XML files are always stored in UTF-8. |
-sqlDateLiterals |
Valid values: jdbc , ansi , dbms , default
Controls the format in which the values of DATE, TIME and TIMESTAMP columns are written into the generated SQL statements. For a detailed description of the possible values, please refer to the WbExport command. |
-ignoreColumns |
With this parameter you can define a list of column names that should not be considered when comparing data. You can e.g. exclude columns that store the last access time of a row, or the last update time if that should not be taken into account when checking for changes. |
-showProgress |
Valid values: true, false, <numeric value>
Control the update frequency in the statusbar (when running in
GUI mode). The default is that every 10th row is reported. To disable
the display of the progress, specify a value of 0 (zero) or the
value false . |
Compare all tables between two connections, and write the output to the
file migrate_staging.sql
, but do not generate
DELETE
statements.
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -file=migrate_staging.sql -includeDelete=false
Compare a list of matching tables between two databases and write the output to the
file migrate_staging.sql
including DELETE
statements.
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -referenceTables=person,address,person_address -file=migrate_staging.sql -includeDelete=true
Compare three tables that are named differently in the target database and
ignore all columns (regardless of the table in which they appear) that are named
LAST_ACCESS or LAST_UPDATE :
WbDataDiff -referenceProfile="Production" -targetProfile="Staging" -referenceTables=person,address,person_address -targetTables=t_person,t_address,t_person_address -ignoreColumns=last_access,last_update -file=migrate_staging.sql -includeDelete=true
The command WbGrepSource
can be used to search
in the source code of the specified database objects.
The command basically retrieves the source code for all selected objects and does a simple search on that source code. The source code that is searched is identical to the source code that is displayed in the "Source" tab in the various DbExplorer panels.
The search values can be regular expressions. When searching the source code the specified expression must be found somewhere in the source. The regex is not used to match the entire source.
The command supports the following parameters:
Parameter | Description |
---|---|
-searchValues |
A comma separated list of values to be searched for. |
-useRegex |
Valid values are true , false .
If this parameter is set to true, the values specified with -searchValues are treated as regular expressions.
The default for this parameter is false . |
-matchAll |
Valid values are true , false .
This specifies if all values specified with -searchValues have to match, or only one of them.
The default for this parameter is false . |
-ignoreCase |
Valid values are true , false .
When set to true, the comparison is done case-insensitively ("ARTHUR" will match "Arthur" or "arthur").
The default for this parameter is true . |
-types |
Specifies the object types to be searched. The values for this
parameter are the same as in the "Type" drop down of DbExplorer's
table list. Additionally the types function , procedure and trigger are supported.
When specifying a type that contains a space, the type name needs to be enclosed in quotes, e.g. -types="materialized view"
The default for this parameter is view , table .
To search in all available object types, use -types=% . |
-objects |
A list of object names to be searched. These names may contain
SQL wildcards, e.g. -objects=PER%,NO% |
-schemas |
Specifies a list of schemas to be searched (for DBMS that support schemas). If this parameter is not specified the current schema is searched. |
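As a simple sketch (the search value LAST_UPDATE is an invented example), the following searches the source of all default object types for a column name, ignoring case:

```sql
WbGrepSource -searchValues=LAST_UPDATE
             -ignoreCase=true;
```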
The functionality of the WbGrepSource
command is also
available through a GUI at
→
The command WbGrepData can be used to search for occurrences of a certain value in all columns of multiple tables. It is the commandline version of the (client side) Search Table Data tab in the DbExplorer. A more detailed description of how the searching is performed is available in that chapter.
Note: To search the data of a table, a SELECT * FROM the_table is executed and processed on a row-by-row basis. Although SQL Workbench/J only keeps one row at a time in memory, it is possible that the JDBC driver caches the full result set in memory. Please see the chapter Common problems for your DBMS to check if the JDBC driver you are using caches result sets.
The command supports the following parameters:
Parameter | Description |
---|---|
-search |
The value to be searched for |
-ignoreCase |
Valid values are true , false .
When set to true, the comparison is done case-insensitively ("ARTHUR" will match "Arthur" or "arthur").
The default for this parameter is true . |
-compareType |
Valid values are contains , equals , matches , startsWith
When specifying matches , the search value is applied as a regular expression.
The default for this parameter is contains . |
-tables |
A list of table names to be searched. These names may contain
SQL wildcards, e.g. -tables=PER%,NO% |
-types |
By default, tables and views are searched. This parameter can be used to restrict the search to certain object types. |
-excludeTables |
A list of table names to be excluded from the search. If e.g. the wildcard for -tables would select too many tables, you can exclude individual tables with this parameter. The parameter values may include SQL wildcards.
|
-excludeLobs |
If this parameter is set to true, CLOB and BLOB columns will not be retrieved at all. This is useful if you retrieve a lot of rows from tables with columns of those types, to reduce the memory that is needed.
If this switch is set to false , CLOB columns will be retrieved and searched as well. |
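As an illustration (the table names and search value are invented), the following searches two tables for the value Dent, skipping LOB columns to save memory:

```sql
WbGrepData -search=Dent
           -tables=person,address
           -ignoreCase=true
           -excludeLobs=true;
```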
This defines an internal variable which is used for variable substitution during SQL execution. Details can be found in the chapter Variable substitution.
The syntax for defining a variable is: WbVarDef variable=value
The variable definition can also be read from a file. The file should list
each variable definition on one line (this is the format of a normal Java properties
file). Lines beginning with a #
sign are ignored.
The syntax is: WbVarDef -file=<filename>
You can also specify a file when starting SQL Workbench/J with the
parameter -vardef=filename.ext
. When specifying a filename
you can also define an encoding for the file using the -encoding
switch. The specified file has to be a regular Java properties file.
For details see Reading variables from a file.
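A short sketch of the workflow (the variable name and query are examples; $[...] is the default placeholder syntax described in the Variable substitution chapter):

```sql
-- Define a variable, then reference it in a query
WbVarDef current_id=42;
SELECT * FROM person WHERE id = $[current_id];
```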
This removes an internal variable from the variable list. Details can be found in the chapter Variable substitution.
This lists all defined variables from the variable list. Details can be found in the chapter Variable substitution.
The WbConfirm
command pauses the execution of the
current script and displays a message. You can then choose to stop
the script or continue. The message can be supplied as a parameter of
the command. If no message is supplied, a default message is
displayed.
This command can be used to prevent accidental execution of a script even if confirm updates is not enabled.
This command has no effect in batch mode.
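A sketch of how this might guard a destructive script (the message text and statements are examples):

```sql
WbConfirm Do you really want to delete all person rows?;
DELETE FROM person;
COMMIT;
```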
If you want to run a stored procedure that has OUT
parameters, you have to use the WbCall
command to correctly see the returned value of the parameters.
Consider the following (Oracle) procedure:
```sql
CREATE OR REPLACE PROCEDURE return_answer(answer OUT integer)
IS
BEGIN
  answer := 42;
END;
/
```
To call this procedure you need to supply a placeholder indicating that a parameter is needed.
```
SQL> WbCall return_answer(?);

PARAMETER | VALUE
----------+------
ANSWER    | 42

(1 Row)

Converted procedure call to JDBC syntax: {call return_answer(?)}

Execution time: 0.453s
SQL>
```
If the stored procedure has a REF CURSOR (as an output parameter), WbCall
will detect this, and retrieve the result of the ref cursors.
Consider the following (Oracle) stored procedure:
```sql
CREATE PROCEDURE ref_cursor_example(pid number,
                                    person_result OUT sys_refcursor,
                                    addr_result OUT sys_refcursor)
IS
BEGIN
  OPEN person_result FOR
    SELECT * FROM person WHERE person_id = pid;
  OPEN addr_result FOR
    SELECT a.*
    FROM address a
      JOIN person p ON a.address_id = p.address_id
    WHERE p.person_id = pid;
END;
/
```
To call this procedure you use the same syntax as with a regular OUT parameter:
WbCall ref_cursor_example(42, ?, ?);
SQL Workbench/J will display two result tabs, one for each cursor returned by the procedure. If you use
WbCall ref_cursor_example(?, ?, ?)
you will be prompted to enter a
value for the first parameter (because that is an IN parameter).
When using ref cursors in PostgreSQL, normally such a function can simply be used inside a SELECT
statement, e.g. SELECT * FROM refcursorfunc();
. Unfortunately the PostgreSQL JDBC driver
does not handle this correctly and you will not see the result set returned by the function.
To display the result set returned by such a function, you have to use WbCall
as well
```sql
CREATE OR REPLACE FUNCTION refcursorfunc()
  RETURNS refcursor
AS $$
DECLARE
  mycurs refcursor;
BEGIN
  OPEN mycurs FOR SELECT * FROM person;
  RETURN mycurs;
END;
$$ LANGUAGE plpgsql;
/
```
You can call this function using
WbCall refcursorfunc();
This will then display the result from the SELECT inside the function.
With the WbInclude command you can run SQL scripts without actually loading them into the editor, or call other scripts from within a script. The format of the command is WbInclude -file=filename;
For DBMS other than MS SQL, the command can be abbreviated using the @ sign: @filename; is equivalent to WbInclude -file=filename;
The called script may also include other scripts. Relative filenames (e.g. as parameters for SQL Workbench/J commands) in the script are always resolved relative to the directory where the script is located, not the current directory of the application.
The reason for excluding MS SQL is that when creating stored procedures in MS SQL, the procedure parameters are identified using the @ sign, so SQL Workbench/J would interpret lines with a variable definition as the WbInclude command. If you want to use the @ shorthand with MS SQL, you can configure this in your workbench.settings configuration file.
Note: If the included SQL script contains …
The long version of the command accepts additional parameters. When using the long version, the filename needs to be passed as a parameter as well.
Only files up to a certain size will be read into memory. Files exceeding this size will be processed statement by statement. In this case the automatic detection of the alternate delimiter will not work. If your scripts exceed the maximum size and do use the alternate delimiter, you will have to use the "long" version so that you can specify the actual delimiter used in your script.
The command supports the following parameters:
Parameter | Description |
---|---|
-file | The filename of the file to be included. |
-continueOnError |
Defines the behaviour if an error occurs in one of the statements.
If this is set to true then script execution will continue
even if one statement fails. If set to false script execution
will be halted on the first error. The default value is false
|
-delimiter |
Specify the delimiter that is used in the script. This defaults
to ; . If you want to define a delimiter that
will only be recognized when it's the only text in a line, append
:nl to the value, e.g.: -delimiter=/:nl
|
-encoding | Specify the encoding of the input file. If no encoding is specified, the default encoding for the current platform (operating system) is used. |
-verbose |
Controls the logging level of the executed commands.
-verbose=true has the same effect as adding a
WbFeedback on inside the called script.
-verbose=false has the same effect as adding
the statement WbFeedback off to the called script.
|
-useSavepoint |
Control if each statement from the file should be guarded with a savepoint
when executing the script. Setting this to true will make
execution of the script more robust, but also slows down the processing
of the SQL statements.
|
-ignoreDropErrors | Controls if errors resulting from DROP statements should be treated as an error or as a warning. |
Execute my_script.sql
@my_script.sql;
Execute my_script.sql
but abort on the first error
wbinclude -file="my_script.sql" -continueOnError=false;
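Combining the long version's parameters, a script that uses / on a line of its own as the statement delimiter could be run like this (the filename is an example):

```sql
WbInclude -file=create_procs.sql
          -delimiter=/:nl
          -continueOnError=false;
```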
If you manage your stored procedures in Liquibase ChangeLogs, you can use this command to run the necessary SQL directly from the XML file, without the need to copy and paste it into SQL Workbench/J. This is useful when testing and developing stored procedures that are managed by a Liquibase changeLog.
Note: This is NOT a replacement for Liquibase. It will not convert any of the Liquibase tags to "real" SQL. It is merely a convenient way to extract and run SQL statements stored in a Liquibase XML file!
The attribute splitStatements
for the sql
tag is evaluated. The delimiter used to split the statements follows the usual SQL Workbench/J rules (including the use
of the alternate delimiter).
WbRunLB
supports the following parameters:
Parameter | Description |
---|---|
-file |
The filename of the Liquibase changeLog (XML) file. The <include> tag is NOT supported! SQL statements stored in files
that are referenced using Liquibase's include tag will not be processed.
|
-changeSet |
A list of changeSet ids to be run. If this is omitted, the SQL from all changeSets is executed. The value
specified can include the value for the author attribute as well: -changeSet="Arthur;42" selects the changeSet
where author="Arthur" and id="42" . This parameter can be repeated in order to select
multiple changeSets: -changeSet="Arthur;42" -changeSet="Arthur;43" .
|
-author |
Select all changeSets with a given author, e.g. -author=Arthur . If this parameter is specified, -changeSet
is ignored. This parameter can be repeated in order to select changesets from multiple authors: -author=Arthur -author=Zaphod .
|
-continueOnError |
Defines the behaviour if an error occurs in one of the statements.
If this is set to true then script execution will continue
even if one statement fails. If set to false script execution
will be halted on the first error. The default value is false
|
-encoding | Specify the encoding of the input file. If no encoding is specified, UTF-8 is used. |
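For example (the file name and changeSet identifiers are invented), to run two specific changeSets by the author Arthur from a changeLog:

```sql
WbRunLB -file=changelog.xml
        -changeSet="Arthur;42"
        -changeSet="Arthur;43"
        -continueOnError=false;
```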
To be able to directly edit data in the result set (grid) SQL Workbench/J needs
a primary key on the underlying table. In some cases these primary keys are not present or
cannot be retrieved from the database (e.g. when using updateable views).
To still be able to automatically update a result based on those tables (without always
manually defining the primary key) you can manually define a primary
key using the WbDefinePk
command.
Assuming you have an updateable view called v_person
where
the primary key is the column person_id
. When you simply do a
SELECT * FROM v_person
, SQL Workbench/J will prompt you for the
primary key when you try to save changes to the data. If you run
WbDefinePk v_person=person_id
before retrieving the result, SQL Workbench/J will automatically
use the person_id
as the primary key (just as if this
information had been retrieved from the database).
To delete a definition simply call the command with an empty column list:
WbDefinePk v_person=
If you want to define certain mappings permanently, this can be done using a mapping file that is specified in the configuration file. The specified file has to be a text file with each line containing one primary key definition, in the same format as passed to this command. The global mapping will automatically be saved when you exit the application if a filename has been defined. If no file is defined, then all PK mappings that you define are lost when exiting the application (unless you explicitly save them using WbSavePkMap).
A mapping file containing the lines

v_person=person_id
v_data=id1,id2

will define a primary key for the view v_person and one for the view v_data. The definitions stored in that file can be overwritten using the WbDefinePk command, but those changes won't be saved to the file. This file will be read for all database connections and is not profile specific. If you have conflicting primary key definitions for different databases, you'll need to execute the WbDefinePk command each time, rather than specifying the keys in the mapping file.
When you define the key columns for a table through the GUI, you have the option to remember the defined mapping. If this option is checked, then that mapping will be added to the global map (just as if you had executed WbDefinePk manually).
Note: The mappings will be stored internally with lowercase table names, regardless of how you specify them.
To view the currently defined primary keys, execute the command
WbListPkDef
.
To load the additional primary key definitions from a file, you can
use the WbLoadPKMap
command. If a filename is defined
in the configuration file then that
file is loaded. Alternatively if no file is configured, or if you want to
load a different file, you can specify the filename using the -file
parameter.
To save the current primary key definitions to a file, you can
use the WbSavePKMap
command. If a filename is defined
in the configuration file then the
definition is stored in that file. Alternatively if no file is configured, or if you want to
store the current mapping into a different file, you can specify the filename
using the -file
parameter.
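A sketch of saving the current definitions to an explicitly named file and loading them again later (the filename is an example):

```sql
WbDefinePk v_person=person_id;
WbSavePkMap -file=/temp/pkmapping.properties;

-- later, e.g. in another session:
WbLoadPkMap -file=/temp/pkmapping.properties;
```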
The default fetch size for a connection can be defined in the connection profile. Using the
command WbFetchSize
you can change the fetch size without changing the connection profile.
The following script changes the default fetch size to 2500 rows and then runs a WbExport
command.
```sql
WbFetchSize 2500;
WbExport -sourceTable=person
         -type=text
         -file=/temp/person.txt;
```
WbFetchSize
will not change the current connection profile.
To send several SQL Statements as a single "batch" to the database server, the two commands WbStartBatch and WbEndBatch can be used.
All statements between these two will be sent as a single statement (using executeBatch()
) to the server.
Note that not all JDBC drivers support batched statements, and the flexibility what kind of statements can be batched varies between the drivers as well. Most drivers will not accept different types of statements e.g. mixing DELETE and INSERT in the same batch.
To send a group of statements as a single batch, simply use the command WbStartBatch
to mark the beginning and
WbEndBatch
to mark the end. You have to run all statements together either by using "Execute all" or by selecting all
statements (including WbStartBatch and WbEndBatch) and then using "Execute selected". The following example sends all INSERT statements
as a single batch to the database server:
```sql
WbStartBatch;

INSERT INTO person (id, firstname, lastname) VALUES (1, 'Arthur', 'Dent');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Ford', 'Prefect');
INSERT INTO person (id, firstname, lastname) VALUES (3, 'Zaphod', 'Beeblebrox');
INSERT INTO person (id, firstname, lastname) VALUES (4, 'Tricia', 'McMillian');

WbEndBatch;
COMMIT;
```
To save the contents of a BLOB
or CLOB
column
into an external file the WbSelectBlob
command can be used. Most DBMS
support reading of CLOB
(character data) columns directly, so depending
on your DBMS (and JDBC driver) this command might only be needed for binary data.
The syntax is very similar to the regular SELECT
statement, an additional
INTO
keyword specifies the name of the external file into which the
data should be written:
```sql
WbSelectBlob blob_column
INTO c:/temp/image.bmp
FROM theTable
WHERE id = 42;
```
Even if you specify more than one column in the column list, SQL Workbench/J will only use the first column. If the SELECT returns more than one row, then one output file will be created for each row. Additional files will be created with a counter indicating the row number from the result. In the above example, image.bmp, image_1.bmp, image_2.bmp and so on, would be created.
WbSelectBlob
is intended for an ad-hoc retrieval of a single LOB column.
If you need to extract the contents of several LOB rows and columns it is recommended to
use the WbExport command.
You can also manipulate (save, view, upload) the contents of BLOB columns in a result set. Please refer to BLOB support for details.
Normally SQL Workbench/J prints the results for each statement
into the message panel. As this feedback can slow down the execution
of large scripts, you can disable the feedback using the WbFeedback
command. When WbFeedback OFF
is executed, only a summary of the
number of executed statements will be displayed, once the script execution has
finished. This is the same behaviour as selecting "Consolidate script log" in the
options window. The only difference is that the setting through WbFeedback
is temporary and does not affect the global setting.
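As an illustration, feedback can be switched off for the bulk of a script and re-enabled at the end (the table and data below are placeholders, not part of the manual):

```sql
-- suppress per-statement feedback for the rest of the script
WbFeedback off;

-- bulk statements: only a summary is printed once the script finishes
INSERT INTO person (id, firstname, lastname) VALUES (1, 'Arthur', 'Dent');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Ford', 'Prefect');
COMMIT;

-- restore per-statement feedback
WbFeedback on;
```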
The SET
command is passed on directly to the driver,
except for the parameters described in this chapter as they
have an equivalent JDBC call which will be executed instead.
Oracle does not have a SQL SET command. The SET command available in SQL*Plus is a SQL*Plus-specific command and will not work with other client software. Most of the SQL*Plus SET commands only make sense within SQL*Plus (e.g. formatting of the results). To be able to run SQL scripts that are intended for Oracle SQL*Plus, any error reported from the SET command when running against an Oracle database will be silently ignored and only logged as a warning.
SET feedback ON/OFF
is equivalent to the WbFeedback
command, but mimics the syntax of Oracle's SQL*Plus utility.
SET serveroutput on
is equivalent to the ENABLEOUT
command and SET serveroutput off
is equivalent to the DISABLEOUT command.
With the command SET autocommit ON/OFF
autocommit can be turned on or
off for the current connection. This is equivalent to setting the autocommit property
in the connection profile or toggling
the state of the corresponding menu item.
Limits the number of rows returned by the next statement. The behaviour of this command
is a bit different between console mode and GUI mode. In console mode, the maxrows setting
stays in effect until you explicitly change it back using SET maxrows
again.
In GUI mode, the maxrows setting is only in effect for the script currently being executed and will only temporarily overwrite any value entered in the "Max. Rows" field.
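For example, to limit the next query to ten rows (the table name is only a placeholder):

```sql
-- limit the result of the following statement to 10 rows
SET maxrows 10;
SELECT * FROM person;
```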
In the connection profile two options can be specified to define the behaviour when running commands that might change the database: a "read only" mode that ignores such commands and a "confirm all" mode, where you need to confirm any statement that might change the database.
These states can temporarily be changed without actually changing the profile
using the WbMode
command.
Note: this changes the mode for all editor tabs, not only for the one where you run the command.
Parameters for the WbMode
command are:
Parameter | Description
---|---|
reset | Resets the flags to the profile's definition |
normal | Makes all changes possible (turns off read only and confirmations) |
confirm | Enables confirmation for all updating commands |
readonly | Turns on the read only mode |
The following example will turn on read only mode for the current connection, so that any subsequent statement that updates the database will be ignored:
WbMode readonly;
To change the current connection back to the settings from the profile use:
WbMode reset;
Describe shows the definition of the given table. It can be abbreviated with DESC. The command expects the table name as a parameter. The output of the command will be several result tabs to show the table structure, indexes and triggers (if present). If the "described" object is a view, the message tab will additionally contain the view source (if available).
DESC person;
If you want to show the structure of a table from a different user, you need
to prefix the table name with the desired user: DESCRIBE otheruser.person;
This command lists all available tables (including views and synonyms). This output is equivalent to the left part of the Database Object Explorer's Table tab.
You can limit the displayed objects by specifying a wildcard for the
names to be retrieved: WbList P%
will list all tables or
views starting with the letter "P".
The command supports two parameters to select the objects in a more detailed manner. If you want to limit the result by specifying a wildcard for both the name and the object type, you have to use the parameter switches:
Parameter | Description |
---|---|
-objects | Select the objects to be returned using a wildcard name. |
-types | Limit the result to specific object types. |
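A sketch of how the two switches can be combined; the wildcard and type values here are only illustrative:

```sql
-- list all views whose name starts with V_
WbList -objects=V_% -types=VIEW;
```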
This command will list all stored procedures available to the current user. The output of this command is equivalent to the Database Explorer's Procedure tab.
You can limit the list by supplying a wildcard search for the name, e.g.:
WbListProcs public.p%
This command will list all stored triggers available to the current user. The output of this command is equivalent to the Database Explorer's Triggers tab (if enabled).
This command will show the source for a single stored procedure (if the current DBMS is supported by SQL Workbench/J). The name of the procedure is given as an argument to the command:
WbProcSource theAnswer
Lists the available catalogs (or databases). It is the same information that is shown in the DbExplorer's "Database" dropdown.
The output of this command depends on the underlying JDBC driver and DBMS.
For MS SQL Server this lists the available databases (which can then be changed
with the command USE <dbname>).
For Oracle this command returns nothing as Oracle does not implement the concept of catalogs.
This command calls the JDBC driver's getCatalogs()
method and will
return its result. If on your database system this command does not display
a list, it is most likely that your DBMS does not support catalogs (e.g. Oracle)
or the driver does not implement this feature.
This command ignores the filter defined for catalogs in the connection profile and always returns all databases.
Lists the available schemas from the current connection. The output of this command depends on the underlying JDBC driver and DBMS. It is the same information that is shown in the DbExplorer's "Schema" dropdown.
This command ignores the filter defined for schemas in the connection profile and always returns all schemas.
With the WbConnect
command, the connection for the script that is
currently being executed can be changed.
When this command is run in GUI mode, the connection is only
changed for the remainder of the script execution. Therefore at least one other
statement should be executed together with the WbConnect
command, either by running
the complete script in the editor or by selecting the WbConnect
command
together with other statements. Once the script has finished, the connection is closed
and the "global" connection (selected in the connect dialog) is active again. This also applies
to scripts that are run in batch mode or
scripts that are started from within the console using
WbInclude
.
When this command is entered directly in the commandline of the
console mode, the current connection is closed and the
new connection is kept open until the application ends, or a new connection is established
using WbConnect
on the commandline again.
The command supports the following parameters:
Parameter | Description |
---|---|
-profile | Defines the profile to connect to. If this parameter is specified all other parameters are ignored. |
or | |
-url | The JDBC connection URL |
-username | Specify the username for the DBMS |
-password | Specify the password for the user |
-driver | Specify the full class name of the JDBC driver |
-driverJar | Specify the full pathname to the .jar file containing the JDBC driver |
-autocommit | Set the autocommit property for this connection. You can also control the autocommit mode from within your script by using the SET AUTOCOMMIT command. |
-rollbackOnDisconnect | If this parameter is set to true, a ROLLBACK will be sent to the DBMS before the connection is closed. This setting is also available in the connection profile. |
-trimCharData | Turns on right-trimming of values retrieved from CHAR columns. See the description of the profile properties for details. |
-removeComments | This parameter corresponds to the Remove comments setting of the connection profile. |
-fetchSize | This parameter corresponds to the Fetch size setting of the connection profile. |
-ignoreDropError | This parameter corresponds to the Ignore DROP errors setting of the connection profile. |
If none of the parameters is supplied when running the command, it is assumed that any value
after WbConnect
is the name of a connection profile, e.g.:
WbConnect production
will connect using the profile name production
, and is equivalent to
WbConnect -profile=production
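A connection without a profile might look like the following sketch; the URL, driver class, jar path, and credentials are all placeholders and have to be replaced with your own values:

```sql
-- connect using explicit JDBC settings instead of a profile
-- (all values shown are examples only)
WbConnect -url=jdbc:postgresql://localhost/mydb
          -username=scott
          -password=tiger
          -driver=org.postgresql.Driver
          -driverJar=c:/drivers/postgresql.jar;
```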
Transforms an XML file via an XSLT stylesheet. This can be used to format XML input files into the correct format for SQL Workbench/J or to transform the output files that are generated by the various SQL Workbench/J commands.
Parameters for the XSLT command:
Parameter | Description |
---|---|
-inputfile | The name of the XML source file. |
-xsltoutput | The name of the generated output file. |
-stylesheet | The name of the XSLT stylesheet to be used. |
-xsltParameters | A list of parameters (key/value pairs) that should be passed to the XSLT processor. When using e.g. the wbreport2liquibase.xslt stylesheet, the value of the author attribute can be set using -xsltParameters="authorName=42". |
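Putting the parameters together, an invocation could look like this; the input, stylesheet, and output file names are placeholders:

```sql
-- transform a WbSchemaReport XML file into a Liquibase changelog
-- (file names are examples only)
WbXslt -inputfile=tablereport.xml
       -stylesheet=wbreport2liquibase.xslt
       -xsltoutput=changelog.xml
       -xsltParameters="authorName=42";
```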
To turn on support for Oracle's DBMS_OUTPUT
package you have to use the
(SQL Workbench/J specific) command ENABLEOUT
.
After running ENABLEOUT
the DBMS_OUTPUT
package is enabled,
and any message written with dbms_output.put_line()
is displayed in the message
pane after executing a SQL statement. It is equivalent to calling the dbms_output.enable() procedure.
You can control the buffer size of the DBMS_OUTPUT
package by passing the
desired buffer size as a parameter to the ENABLEOUT
command:
ENABLEOUT 32000;
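To illustrate, after enabling output any message written by an anonymous PL/SQL block appears in the message pane; the block below is only an example (depending on your settings, a PL/SQL block may require the alternate delimiter):

```sql
ENABLEOUT;

-- the message is shown in the message pane after the block runs
BEGIN
  dbms_output.put_line('Hello from PL/SQL');
END;
```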
Note: due to a bug in Oracle's JDBC driver, you cannot retrieve columns with
the |
To disable the DBMS_OUTPUT
package again, use the (SQL Workbench/J specific)
command DISABLEOUT
. This is equivalent to calling
dbms_output.disable()
procedure.