
es2csv

A CLI tool for exporting data from Elasticsearch into a CSV file

Command-line utility, written in Python, for querying Elasticsearch in Lucene query syntax or Query DSL syntax and exporting the results as documents into a CSV file. The tool can query bulk documents across multiple indices and fetch only selected fields, which reduces query execution time.
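
Under the hood this is the standard scroll-and-export pattern. A minimal sketch of the idea, assuming elasticsearch-py and the standard csv module (an illustration, not es2csv's actual code):

import csv

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch('http://localhost:9200')

# scan() wraps the scroll API and yields every matching document.
hits = scan(es,
            index='logstash-*',
            query={'query': {'query_string': {'query': 'host: localhost'}}},
            size=100)  # documents fetched per scroll request

with open('database.csv', 'w') as f:
    writer = None
    for hit in hits:
        doc = hit['_source']
        if writer is None:
            # Simplification: derive the CSV header from the first document.
            writer = csv.DictWriter(f, fieldnames=sorted(doc), extrasaction='ignore')
            writer.writeheader()
        writer.writerow(doc)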

Quick Look Demo

https://cloud.githubusercontent.com/assets/7491121/12016825/59eb5f82-ad58-11e5-81eb-871a49e39c37.gif

Installation

From source:

$ pip install git+https://github.com/taraslayshchuk/es2csv.git

From pip:

$ pip install es2csv

Usage

$ es2csv [-h] -q QUERY [-u URL] [-a AUTH] [-i INDEX [INDEX ...]]
         [-D DOC_TYPE [DOC_TYPE ...]] [-t TAGS [TAGS ...]] -o FILE
         [-f FIELDS [FIELDS ...]] [-d DELIMITER] [-m INTEGER]
         [-s INTEGER] [-k] [-r] [-e] [--verify-certs]
         [--ca-certs CA_CERTS] [--client-cert CLIENT_CERT]
         [--client-key CLIENT_KEY] [-v] [--debug]

Arguments:
 -q, --query QUERY                        Query string in Lucene syntax.               [required]
 -o, --output_file FILE                   CSV file location.                           [required]
 -u, --url URL                            Elasticsearch host URL. Default is http://localhost:9200.
 -a, --auth                               Elasticsearch basic authentication in the form of username:password.
 -i, --index-prefixes INDEX [INDEX ...]   Index name prefix(es). Default is ['logstash-*'].
 -D, --doc_types DOC_TYPE [DOC_TYPE ...]  Document type(s).
 -t, --tags TAGS [TAGS ...]               Query tags.
 -f, --fields FIELDS [FIELDS ...]         List of selected fields in output. Default is ['_all'].
 -d, --delimiter DELIMITER                Delimiter to use in CSV file. Default is ",".
 -m, --max INTEGER                        Maximum number of results to return. Default is 0.
 -s, --scroll_size INTEGER                Scroll size for each batch of results. Default is 100.
 -k, --kibana_nested                      Format nested fields in Kibana style.
 -r, --raw_query                          Switch query format to the Query DSL.
 -e, --meta_fields                        Add meta-fields in output.
 --verify-certs                           Verify SSL certificates. Default is False.
 --ca-certs CA_CERTS                      Location of CA bundle.
 --client-cert CLIENT_CERT                Location of Client Auth cert.
 --client-key CLIENT_KEY                  Location of Client Cert Key.
 -v, --version                            Show version and exit.
 --debug                                  Debug mode on.
 -h, --help                               Show this help message and exit.

Examples

Searching on localhost and saving to database.csv

$ es2csv -q 'host: localhost' -o database.csv

Same in Query DSL syntax

$ es2csv -r -q '{"query": {"match": {"host": "localhost"}}}' -o database.csv

Very long queries can be read from a file

$ es2csv -r -q @'~/query string file.json' -o database.csv
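
Here the file holds the query body itself; for a raw Query DSL query (-r) it might contain, for example, the same query as above:

{
  "query": {
    "match": {
      "host": "localhost"
    }
  }
}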

With tag

$ es2csv -t dev -q 'host: localhost' -o database.csv

More tags

$ es2csv -t dev prod -q 'host: localhost' -o database.csv

On custom Elasticsearch host

$ es2csv -u my.cool.host.com:9200 -q 'host: localhost' -o database.csv

Using secure Elasticsearch behind nginx? No problem!

$ es2csv -u http://my.cool.host.com/es/ -q 'host: localhost' -o database.csv

With enabled SSL certificate verification (off by default)

$ es2csv --verify-certs -u https://my.cool.host.com/es/ -q 'host: localhost' -o database.csv

With your own certificate authority bundle

$ es2csv --ca-certs '/path/to/your/ca_bundle' --verify-certs -u https://host.com -q '*' -o out.csv
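
These flags correspond to the usual elasticsearch-py connection options. Roughly, and with hypothetical certificate paths (a sketch, not the exact wiring inside es2csv):

from elasticsearch import Elasticsearch

# --verify-certs / --ca-certs / --client-cert / --client-key map onto
# keyword arguments of the elasticsearch-py client; paths are made up.
es = Elasticsearch('https://my.cool.host.com/es/',
                   verify_certs=True,
                   ca_certs='/path/to/your/ca_bundle',
                   client_cert='/path/to/client.crt',  # hypothetical path
                   client_key='/path/to/client.key')   # hypothetical path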

Non-default port?

$ es2csv -u my.cool.host.com:6666/es/ -q 'host: localhost' -o database.csv

With Authorization

$ es2csv -u http://login:password@my.cool.host.com:6666/es/ -q 'host: localhost' -o database.csv

With explicit Authorization

$ es2csv -a login:password -u http://my.cool.host.com:6666/es/ -q 'host: localhost' -o database.csv

Specifying index

$ es2csv -i logstash-2015-07-07 -q 'host: localhost' -o database.csv

More indices

$ es2csv -i logstash-2015-07-07 logstash-2015-08-08 -q 'host: localhost' -o database.csv

Or index mask

$ es2csv -i logstash-2015-* -q 'host: localhost' -o database.csv

And now together

$ es2csv -i logstash-2015-01-0* logstash-2015-01-10 -q 'host: localhost' -o database.csv

Collecting all data on all indices

$ es2csv -i _all -q '*' -o database.csv

Specifying document type

$ es2csv -D log -i _all -q '*' -o database.csv

Selecting only the fields you are interested in, if you don't need all of them (the query runs faster)

$ es2csv -f host status date -q 'host: localhost' -o database.csv

Or field mask

$ es2csv -f 'ho*' 'st*us' '*ate' -q 'host: localhost' -o database.csv

Selecting all fields (the default)

$ es2csv -f _all -q 'host: localhost' -o database.csv

Selecting meta-fields: _id, _index, _score, _type

$ es2csv -e -f _all -q 'host: localhost' -o database.csv

Selecting nested fields

$ es2csv -f comments.comment comments.date comments.name -q '*' -i twitter -o database.csv

Max results count

$ es2csv -m 6283185 -q '*' -i twitter -o database.csv

Retrieving 2000 results in just 2 requests (two scrolls of 1000 each)

$ es2csv -m 2000 -s 1000 -q '*' -i twitter -o database.csv

Changing the column delimiter in the CSV file (default is ',')

$ es2csv -d ';' -q '*' -i twitter -o database.csv

Changing the nested-column output format to Kibana style

$ es2csv -k -q '*' -i twitter -o database.csv

An example JSON document

{
  "title": "Nest eggs",
  "body":  "Making your money work...",
  "tags":  [ "cash", "shares" ],
  "comments": [
    {
      "name":    "John Smith",
      "comment": "Great article",
      "age":     28,
      "stars":   4,
      "date":    "2014-09-01"
    },
    {
      "name":    "Alice White",
      "comment": "More like this please",
      "age":     31,
      "stars":   5,
      "date":    "2014-10-22"
    }
  ]
}

The CSV file in Kibana-style format

body,comments.age,comments.comment,comments.date,comments.name,comments.stars,tags,title
Making your money work...,"28,31","Great article,More like this please","2014-09-01,2014-10-22","John Smith,Alice White","4,5","cash,shares",Nest eggs

The CSV file in the default format

body,comments.0.age,comments.0.comment,comments.0.date,comments.0.name,comments.0.stars,comments.1.age,comments.1.comment,comments.1.date,comments.1.name,comments.1.stars,tags.0,tags.1,title
Making your money work...,28,Great article,2014-09-01,John Smith,4,31,More like this please,2014-10-22,Alice White,5,cash,shares,Nest eggs
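
The difference between the two layouts comes down to how nested structures are flattened into column names. A rough sketch of both styles in plain Python (not es2csv's actual implementation):

def flatten_default(value, path=''):
    # Default style: list items get numeric indices in the column name,
    # e.g. comments.0.name, comments.1.name, tags.0, tags.1.
    out = {}
    if isinstance(value, dict):
        for key, val in value.items():
            out.update(flatten_default(val, '%s.%s' % (path, key) if path else key))
    elif isinstance(value, list):
        for i, val in enumerate(value):
            out.update(flatten_default(val, '%s.%s' % (path, i) if path else str(i)))
    else:
        out[path] = value
    return out

def flatten_kibana(value, path=''):
    # Kibana style: list items share one column and their values are
    # joined with commas, e.g. comments.name -> "John Smith,Alice White".
    out = {}
    if isinstance(value, dict):
        for key, val in value.items():
            out.update(flatten_kibana(val, '%s.%s' % (path, key) if path else key))
    elif isinstance(value, list):
        for item in value:
            for key, val in flatten_kibana(item, path).items():
                out[key] = '%s,%s' % (out[key], val) if key in out else val
    else:
        out[path] = value
    return out

For the document above, flatten_default yields columns like tags.0 and comments.1.stars, while flatten_kibana yields tags = "cash,shares" and comments.stars = "4,5".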

Release Changelog

5.2.1 (2017-04-02)

  • Added --verify-certs, --ca-certs, --client-cert, --client-key arguments for SSL configuration. (Issue #11 and #24, Pull #22)

  • Added --scroll_size(-s) argument to specify the scroll size of requests. (Pull #27)

5.2.0 (2017-02-16)

  • Updated elasticsearch-py to 5.2.* and added support for Elasticsearch 5. (Issue #19)

2.4.3 (2017-02-15)

  • Updated docs to reflect wildcard support in field names.

  • Added support for older pip versions. (Issue #16)

2.4.2 (2017-02-14)

  • Added wildcard support in field names.

  • Removed column sorting. (Issue #21)

2.4.1 (2016-11-10)

  • Added --auth(-a) argument for Elasticsearch basic authentication. (Pull #17)

  • Added --doc_types(-D) argument for specifying document type. (Pull #13)

2.4.0 (2016-10-26)

  • Added JSON validation for raw query. (Issue #7)

  • Added checks to exclude hangs during connection issues. (Issue #9)

  • Updated elasticsearch-py to 2.4.0 and froze this dependency to the 2.4.* mask. (Issue #14)

  • Updated progressbar2 to fix a visibility issue.

1.0.3 (2016-06-12)

  • Added option to read the query string from a file: --query(-q) @'~/filename.json'. (Issue #5)

  • Added --meta_fields(-e) argument for selecting meta-fields: _id, _index, _score, _type. (Issue #6)

  • Updated elasticsearch-py to 2.3.0.

1.0.2 (2016-04-12)

  • Added --raw_query(-r) argument for using the native Query DSL format.

1.0.1 (2016-01-22)

  • Fixed support for elasticsearch-1.4.0.

  • Added --version argument.

  • Added history changelog.

1.0.0.dev1 (2016-01-04)

  • Fixed encoding in CSV to UTF-8. (Issue #3, Pull #1)

  • Added better progressbar unit names. (Pull #2)

  • Added pip installation instruction.

1.0.0.dev0 (2015-12-25)

  • Initial registration.

  • Added first dev-release on github.

  • Added first release on PyPI.

Download files

Source distribution: es2csv-5.2.1.tar.gz (9.0 kB)

Built distribution: es2csv-5.2.1-py27-none-any.whl (12.7 kB, Python 2.7)
