The writer expects that the mapping of types and indexes already exists in your Elasticsearch. If it is missing and automatic index creation is enabled, a new mapping will be created.
- Configuration has 2 parts: `elastic` and `tables`.
- The `elastic` section defines connection info and import config:
  - `host` - server address
  - `port` - Elasticsearch listening port
  - `username` - Elasticsearch username
  - `#password` - Elasticsearch password
  - `bulkSize` (optional) - size of a batch uploaded to Elasticsearch (default is 10,000)
  - `ssh` - SSH tunnel configuration:
    - `enabled` - enable the SSH tunnel for the connection to Elasticsearch
    - `sshHost` - address of the SSH server
    - `sshPort` (optional) - SSH listening port (default is 22)
    - `user` - SSH login
    - `keys`
      - `#private` - your private key used for authentication. Note that keys MUST maintain linebreaks every 72 bytes, according to RFC 4716, section 3. When copying the contents of a key file, replace all real linebreaks with "\n" so that the key is accepted by the configuration editor and parsed correctly when establishing the SSH tunnel.
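The linebreak replacement described above can be done with a small script. The helper below is purely illustrative (it is not part of the writer), and the key shown is a shortened dummy:

```python
def escape_key(key_text: str) -> str:
    """Replace real linebreaks with literal \\n sequences so the key
    can be pasted into the configuration editor as a single line."""
    return key_text.strip().replace("\n", "\\n")

# Shortened dummy key, for illustration only:
key = "-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----"
print(escape_key(key))
# -----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----
```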
- The `tables` section defines database tables, their columns and their data types:
  - `file` or `tableId`:
    - `file` - CSV file of the table to write into Elasticsearch (see https://github.com/keboola/docker-bundle/blob/master/ENVIRONMENT.md#input-mapping for more info about Input Mapping)
    - `tableId` (deprecated) - Storage API table ID of the table to write into Elasticsearch (see https://github.com/keboola/docker-bundle/blob/master/ENVIRONMENT.md#input-mapping for more info about Input Mapping; works only if the `destination` attribute is not set in the table configuration)
  - `index` - index name in ES
  - `type` - type of the data; determines the type in ES
  - `id` (optional) - which column of the table contains the document's ID/primary key
  - `export` - whether this table shall be exported to ES
- The optional `items` section defines the column mapping:
  - `name` string (required) - name of the column in the CSV file
  - `dbName` string (required) - name in the database
  - `type` string (required) - type in the database; the special type "ignore" serves to ignore a column present in the CSV file
  - `nullable` bool (required) - is the column nullable?
Example configuration:

```json
{
  "elastic": {
    "host": "my.hostname.com",
    "port": 9200,
    "bulkSize": 10000
  },
  "tables": [
    {
      "file": "products.csv",
      "index": "production",
      "type": "products",
      "id": "id",
      "export": true
    }
  ]
}
```
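As a rough illustration of what `bulkSize` controls, rows are grouped into batches of at most that many documents before each bulk request. The sketch below is hypothetical, not the writer's actual implementation:

```python
def chunks(rows, bulk_size=10000):
    """Yield successive batches of at most bulk_size rows each."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == bulk_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly smaller, batch
        yield batch

# e.g. 25 rows with bulk_size=10 -> batches of 10, 10 and 5 rows
print([len(b) for b in chunks(range(25), bulk_size=10)])  # [10, 10, 5]
```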
Example configuration with an SSH tunnel:

```json
{
  "elastic": {
    "host": "my.hostname.com",
    "port": 9200,
    "bulkSize": 10000,
    "ssh": {
      "enabled": true,
      "sshHost": "10.112.1.1",
      "sshPort": 22,
      "user": "extractor",
      "keys": {
        "private": "YOUR\nPRIVATE\nKEY\nWITHOUT\nPASSPHRASE"
      }
    }
  },
  "tables": [
    {
      "file": "products.csv",
      "index": "production",
      "type": "products",
      "id": "id",
      "export": true
    }
  ]
}
```
Example configuration with column mapping:

```json
{
  "elastic": "...",
  "tables": [
    {
      "file": "products.csv",
      "index": "production",
      "type": "products",
      "id": "id",
      "export": true,
      "items": [
        {
          "name": "order",
          "dbName": "order",
          "type": "integer",
          "nullable": false
        },
        {
          "name": "vat",
          "dbName": "vat-usa",
          "type": "double",
          "nullable": true
        }
      ]
    }
  ]
}
```
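For intuition, the `index`, `type`, and `id` settings map onto the metadata line of the Elasticsearch bulk API format. The helper below is a hypothetical sketch of that mapping, not the writer's code:

```python
import json

def bulk_action(row, index, doc_type, id_column=None):
    """Build one bulk-API action/document pair (two NDJSON lines) for a
    CSV row; if id_column is given, that column becomes the document _id."""
    meta = {"index": {"_index": index, "_type": doc_type}}
    if id_column is not None:
        meta["index"]["_id"] = row[id_column]
    return json.dumps(meta) + "\n" + json.dumps(row)

row = {"id": "42", "name": "widget"}
print(bulk_action(row, "production", "products", id_column="id"))
```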
The Elasticsearch Writer is integrated into Keboola Connection and is available with the standard KB Docker Generic UI.

MIT licensed, see the LICENSE file.