Algolia Search API Client for Python

Algolia Search is a hosted full-text, numerical, and faceted search engine capable of delivering realtime results from the first keystroke.

Our Python client lets you easily use the Algolia Search API from your backend. It wraps the Algolia Search REST API.


Table of Contents

Getting Started

  1. Setup

  2. Quick Start

  3. Online documentation

  4. Tutorials

Commands Reference

  1. Add a new object

  2. Update an object

  3. Search

  4. Multiple queries

  5. Get an object

  6. Delete an object

  7. Delete by query

  8. Index settings

  9. List indices

  10. Delete an index

  11. Clear an index

  12. Wait indexing

  13. Batch writes

  14. Security / User API Keys

  15. Copy or rename an index

  16. Backup / Retrieve all index content

  17. Logs

Setup

To set up your project, follow these steps:

  1. Install AlgoliaSearch using pip: pip install algoliasearch.

  2. Initialize the client with your ApplicationID and API Key. You can find both on your Algolia account.

from algoliasearch import algoliasearch

client = algoliasearch.Client("YourApplicationID", "YourAPIKey")

Quick Start

In 30 seconds, this quick start tutorial will show you how to index and search objects.

Without any prior configuration, you can start indexing 500 contacts in the contacts index using the following code:

import json

index = client.init_index("contacts")
batch = json.load(open('contacts.json'))
index.add_objects(batch)
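For reference, contacts.json is simply a JSON array of records. A minimal sketch (the records below are made up; only the field names follow the search examples in this guide):

```python
import json

# Hypothetical contacts.json content; the records themselves are made up.
contacts = [
    {"firstname": "Jimmie", "lastname": "Barninger", "company": "California Paint"},
    {"firstname": "Warren", "lastname": "Speach", "company": "Dorchester Inc"},
]

with open("contacts.json", "w") as f:
    json.dump(contacts, f)

# This mirrors the loading step above.
batch = json.load(open("contacts.json"))
print(len(batch))  # 2 records ready for add_objects
```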

You can now search for contacts using firstname, lastname, company, etc. (even with typos):

# search by firstname
print(index.search("jimmie"))
# search a firstname with typo
print(index.search("jimie"))
# search for a company
print(index.search("california paint"))
# search for a firstname & company
print(index.search("jimmie paint"))

Settings can be customized to tune the search behavior. For example, you can add a custom sort by number of followers to the already great built-in relevance:

index.set_settings({"customRanking": ["desc(followers)"]})

You can also configure the list of attributes you want to index by order of importance (first = most important):

index.set_settings({"attributesToIndex": ["lastname", "firstname", "company",
                                         "email", "city", "address"]})

Since the engine is designed to suggest results as you type, you’ll generally search by prefix. In this case, the order of attributes is very important for deciding which hit is the best:

print(index.search("or"))
print(index.search("jim"))

Notes: If you are building a web application, you may be more interested in using our JavaScript client to perform queries. It brings two benefits:

  • Your users get a better response time by not going through your servers
  • It will offload unnecessary tasks from your servers

<script src="//cdn.jsdelivr.net/algoliasearch/3/algoliasearch.min.js"></script>
<script>
var client = algoliasearch('ApplicationID', 'apiKey');
var index = client.initIndex('indexName');

// perform query "jim"
index.search('jim', searchCallback);

// the last optional argument can be used to add search parameters
index.search(
  'jim', {
    hitsPerPage: 5,
    facets: '*',
    maxValuesPerFacet: 10
  },
  searchCallback
);

function searchCallback(err, content) {
  if (err) {
    console.error(err);
    return;
  }

  console.log(content);
}
</script>

Documentation

Check our online documentation:

  • Initial Import
  • Ranking & Relevance
  • Indexing
  • Search
  • Sorting
  • Filtering
  • Faceting
  • Geo-Search
  • Security
  • REST API

Tutorials

Check out our tutorials:

  • Search bar with autocomplete menu
  • Search bar with multi category autocomplete menu
  • Instant search result pages

Commands Reference

Add a new object to the Index

Each entry in an index has a unique identifier called objectID. There are two ways to add an entry to the index:

  1. Using automatic objectID assignment; the generated objectID is returned in the answer.

  2. Supplying your own objectID.

You don’t need to explicitly create an index: it will be created automatically the first time you add an object. Objects are schema-less, so you don’t need any configuration to start indexing. If you wish to configure things, the settings section provides details about advanced settings.

Example with automatic objectID assignment:

res = index.add_object({"firstname": "Jimmie",
                       "lastname": "Barninger"})
print("ObjectID=%s" % res["objectID"])

Example with manual objectID assignment:

res = index.add_object({"firstname": "Jimmie",
                       "lastname": "Barninger"}, "myID")
print("ObjectID=%s" % res["objectID"])

Update an existing object in the Index

You have three options when updating an existing object:

  1. Replace all its attributes.

  2. Replace only some attributes.

  3. Apply an operation to some attributes.

Example on how to replace all attributes of an existing object:

index.save_object({"firstname": "Jimmie",
                  "lastname": "Barninger",
                  "city": "New York",
                  "objectID": "myID"})

You have many ways to update an object’s attributes:

  1. Set the attribute value

  2. Add an element to an array

  3. Remove an element from an array

  4. Add an element to an array if it doesn’t exist

  5. Increment an attribute

  6. Decrement an attribute

Example to update only the city attribute of an existing object:

index.partial_update_object({"city": "San Francisco",
                           "objectID": "myID"})

Example to add a tag:

index.partial_update_object({"_tags": { "value": "MyTag", "_operation": "Add"},
                           "objectID": "myID"})

Example to remove a tag:

index.partial_update_object({"_tags": { "value": "MyTag", "_operation": "Remove"},
                           "objectID": "myID"})

Example to add a tag if it doesn’t exist:

index.partial_update_object({"_tags": { "value": "MyTag", "_operation": "AddUnique"},
                           "objectID": "myID"})

Example to increment a numeric value:

index.partial_update_object({"price": { "value": 42, "_operation": "Increment"},
                           "objectID": "myID"})

Example to decrement a numeric value:

index.partial_update_object({"price": { "value": 42, "_operation": "Decrement"},
                           "objectID": "myID"})

Multiple queries

You can send multiple queries with a single API call using a batch of queries:

# perform 3 queries in a single API call:
# - 1st query targets index `categories`
# - 2nd and 3rd queries target index `products`
results = client.multiple_queries([
    {"indexName": "categories", "query": myQueryString, "hitsPerPage": 3},
    {"indexName": "products", "query": myQueryString, "hitsPerPage": 3, "tagFilters": "promotion"},
    {"indexName": "products", "query": myQueryString, "hitsPerPage": 10}])

print(results["results"])

The resulting JSON answer contains a results array storing the answers to the underlying queries, in the same order as the requests.
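To illustrate the ordering guarantee, here is a mocked answer with the same shape as the real response (the hit contents are made up):

```python
# Mocked multiple_queries answer: results[i] answers the i-th request.
results = {
    "results": [
        {"index": "categories", "nbHits": 1, "hits": [{"name": "Paint"}]},
        {"index": "products", "nbHits": 2, "hits": [{"name": "Wall paint"}, {"name": "Brush"}]},
        {"index": "products", "nbHits": 0, "hits": []},
    ]
}

# Pair each answer back with its position in the request list.
for position, answer in enumerate(results["results"]):
    print(position, answer["index"], answer["nbHits"])
```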

You can specify a strategy to optimize your multiple queries:

  • none: Execute the sequence of queries until the end.
  • stopIfEnoughMatches: Execute the sequence of queries until the sum of hits reaches the requested number.

Get an object

You can easily retrieve an object using its objectID, optionally specifying a comma-separated list of the attributes you want:

# Retrieves all attributes
index.get_object("myID")
# Retrieves firstname and lastname attributes
res = index.get_object("myID", "firstname,lastname")
# Retrieves only the firstname attribute
res = index.get_object("myID", "firstname")

You can also retrieve a set of objects:

res = index.get_objects(["myID1", "myID2"])

Delete an object

You can delete an object using its objectID:

index.delete_object("myID")

Delete by query

You can delete all objects matching a single query with the following code. Internally, the API client performs the query, deletes all matching hits, and waits until the deletions have been applied.

params = {}
index.delete_by_query("John", params)

Index Settings

You can retrieve all settings using the get_settings function. The result will contain the following attributes:

Indexing parameters

  • attributesToIndex: (array of strings) The list of fields you want to index. If set to null, all textual and numerical attributes of your objects are indexed. Be sure to update it to get optimal results. This parameter has two important uses:

    • Limit the attributes to index. For example, if you store a binary image in base64, you want to store it and be able to retrieve it, but you don’t want to search in the base64 string.

    • Control part of the ranking. (See the ranking parameter for a full explanation.) Matches in attributes at the beginning of the list are considered more important than matches in attributes further down the list. Within one attribute, matching text at the beginning of the attribute is considered more important than text after it. You can disable this behavior by wrapping your attribute in unordered(AttributeName). For example, attributesToIndex: ["title", "unordered(text)"]. You can give two attributes the same priority by passing them in the same string, separated by a comma. For example, title and alternative_title have the same priority in attributesToIndex: ["title,alternative_title", "text"], which is different from the priority of text.

  • numericAttributesToIndex: (array of strings) All numerical attributes are automatically indexed as numerical filters. If you don’t need filtering on some of your numerical attributes, you can specify this list to speed up the indexing. If you only need to filter on a numeric value with the operator ‘=’, you can speed up the indexing by specifying the attribute with equalOnly(AttributeName). The other operators will be disabled.

  • attributesForFaceting: (array of strings) The list of fields you want to use for faceting. All strings in the attribute selected for faceting are extracted and added as a facet. If set to null, no attribute is used for faceting.

  • attributeForDistinct: The attribute name used for the Distinct feature. This feature is similar to the SQL “distinct” keyword. When enabled in queries with the distinct=1 parameter, all hits containing a duplicate value for this attribute are removed from results. For example, if the chosen attribute is show_name and several hits have the same value for show_name, then only the best one is kept and others are removed. Note: This feature is disabled if the query string is empty and there aren’t any tagFilters, facetFilters, nor numericFilters parameters.

  • ranking: (array of strings) Controls the way results are sorted. We have nine available criteria:

  • typo: Sort according to number of typos.

  • geo: Sort according to decreasing distance when performing a geo location based search.

  • words: Sort according to the number of query words matched by decreasing order. This parameter is useful when you use the optionalWords query parameter to have results with the most matched words first.

  • proximity: Sort according to the proximity of the query words in hits.

  • attribute: Sort according to the order of attributes defined by attributesToIndex.

  • exact:

    • If the user query contains one word: sort objects having an attribute that is exactly the query word before others. For example, if you search for the TV show “V”, you want to find it with the “V” query and avoid getting all popular TV shows starting by the letter V before it.

    • If the user query contains multiple words: sort according to the number of words that matched exactly (not as a prefix).

  • custom: Sort according to a user defined formula set in the customRanking attribute.

  • asc(attributeName): Sort according to a numeric attribute using ascending order. attributeName can be the name of any numeric attribute in your records (integer, double or boolean).

  • desc(attributeName): Sort according to a numeric attribute using descending order. attributeName can be the name of any numeric attribute in your records (integer, double or boolean).

The standard order is ["typo", "geo", "words", "proximity", "attribute", "exact", "custom"].

  • customRanking: (array of strings) Lets you specify part of the ranking. The syntax of this condition is an array of strings containing attributes prefixed by the asc (ascending order) or desc (descending order) operator. For example, "customRanking": ["desc(population)", "asc(name)"].

  • queryType: Select how the query words are interpreted. It can be one of the following values:

  • prefixAll: All query words are interpreted as prefixes.

  • prefixLast: Only the last word is interpreted as a prefix (default behavior).

  • prefixNone: No query word is interpreted as a prefix. This option is not recommended.

  • separatorsToIndex: Specify the separators (punctuation characters) to index. By default, separators are not indexed. Use +# to be able to search Google+ or C#.

  • slaves: The list of indices on which you want to replicate all write operations. In order to get response times in milliseconds, we pre-compute part of the ranking during indexing. If you want to use different ranking configurations depending on the use case, you need to create one index per ranking configuration. This option enables you to perform write operations only on this index and automatically update slave indices with the same operations.

  • unretrievableAttributes: The list of attributes that cannot be retrieved at query time. This feature allows you to have attributes that are used for indexing and/or ranking but cannot be retrieved. Defaults to null.

  • allowCompressionOfIntegerArray: Allows compression of big integer arrays. We recommend enabling this feature and then storing the list of user IDs or rights as an integer array. When enabled, the integer array is reordered to reach a better compression ratio. Defaults to false.
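To tie the indexing parameters above together, here is an illustrative settings payload (the attribute names such as title, text, company and show_name are made up, not required fields):

```python
# Illustrative settings payload combining several indexing parameters; the
# attribute names (title, text, company, show_name, ...) are made up.
settings = {
    "attributesToIndex": ["title,alternative_title", "unordered(text)"],
    "attributesForFaceting": ["company"],
    "attributeForDistinct": "show_name",
    "ranking": ["typo", "geo", "words", "proximity", "attribute", "exact", "custom"],
    "customRanking": ["desc(population)", "asc(name)"],
    "queryType": "prefixLast",
    "separatorsToIndex": "+#",
    "allowCompressionOfIntegerArray": False,
}
# index.set_settings(settings)  # would apply the payload in one call
print(sorted(settings))  # the parameter names being configured
```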

Query expansion

  • synonyms: (array of arrays of strings considered equal). For example, you may want to retrieve the black ipad record when your users search for dark ipad, even though the word dark is not part of the record. To do this, you need to configure black as a synonym of dark: "synonyms": [ [ "black", "dark" ], [ "small", "little", "mini" ], ... ]. The synonyms feature also supports multi-word expressions like "synonyms": [ ["NY", "New York"] ].

  • placeholders: (hash of array of words). This is an advanced use case: define a token that can be substituted by a list of words, without making the original token searchable. It is defined by a hash associating placeholders with lists of substitutable words. For example, "placeholders": { "<streetnumber>": ["1", "2", "3", ..., "9999"]} would match all street numbers. We use the < > tag syntax to define placeholders in an attribute. For example:

  • Push a record with the placeholder: { "name": "Apple Store", "address": "<streetnumber> Opera street, Paris" }.

  • Configure the placeholder in your index settings: "placeholders": { "<streetnumber>" : ["1", "2", "3", "4", "5", ... ], ... }.

  • disableTypoToleranceOn: (string array) Specify a list of words on which automatic typo tolerance will be disabled.

  • altCorrections: (object array) Specify alternative corrections that you want to consider. Each alternative correction is described by an object containing three attributes:

  • word: The word to correct.

  • correction: The corrected word.

  • nbTypos: The number of typos (1 or 2) that will be considered by the ranking algorithm (1 typo is better than 2 typos).

For example "altCorrections": [ { "word" : "foot", "correction": "feet", "nbTypos": 1 }, { "word": "feet", "correction": "foot", "nbTypos": 1 } ].
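Similarly, the query-expansion parameters above can be combined into one settings payload (the words and placeholders below are illustrative):

```python
# Illustrative query-expansion settings; words and placeholders are made up.
query_expansion = {
    "synonyms": [["black", "dark"], ["small", "little", "mini"]],
    "placeholders": {"<streetnumber>": ["1", "2", "3", "4", "5"]},
    "disableTypoToleranceOn": ["iPad"],
    "altCorrections": [
        {"word": "foot", "correction": "feet", "nbTypos": 1},
        {"word": "feet", "correction": "foot", "nbTypos": 1},
    ],
}
# index.set_settings(query_expansion)  # would apply them
print(len(query_expansion["synonyms"]))  # 2 synonym groups
```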

Default query parameters (can be overwritten by queries)

  • minWordSizefor1Typo: (integer) The minimum number of characters needed to accept one typo (default = 4).

  • minWordSizefor2Typos: (integer) The minimum number of characters needed to accept two typos (default = 8).

  • hitsPerPage: (integer) The number of hits per page (default = 10).

  • attributesToRetrieve: (array of strings) Default list of attributes to retrieve in objects. If set to null, all attributes are retrieved.

  • attributesToHighlight: (array of strings) Default list of attributes to highlight. If set to null, all indexed attributes are highlighted.

  • attributesToSnippet: (array of strings) Default list of attributes to snippet, alongside the number of words to return (syntax is ‘attributeName:nbWords’). By default (or if set to null), no snippet is computed.

  • highlightPreTag: (string) Specify the string that is inserted before the highlighted parts in the query result (defaults to “<em>”).

  • highlightPostTag: (string) Specify the string that is inserted after the highlighted parts in the query result (defaults to “</em>”).

  • optionalWords: (array of strings) Specify a list of words that should be considered optional when found in the query.

You can easily retrieve settings or update them:

settings = index.get_settings()
print(settings)
index.set_settings({"customRanking": ["desc(followers)"]})

List indices

You can list all your indices along with their associated information (number of entries, disk size, etc.) with the list_indexes method:

print(client.list_indexes())

Delete an index

You can delete an index using its name:

client.delete_index("contacts")

Clear an index

You can delete the index contents without removing settings and index-specific API keys by using the clear_index command:

index.clear_index()

Wait indexing

All write operations in Algolia are asynchronous by design.

This means that when you add or update an object in your index, our servers will reply to your request with a taskID as soon as they have understood the write operation.

The actual insert and indexing will be done after replying to your code.

You can wait for a task to complete using the wait_task method on the taskID returned by a write operation.

For example, to wait for indexing of a new object:

res = index.add_object({"firstname": "Jimmie",
                       "lastname": "Barninger"})
index.wait_task(res["taskID"])

If you want to ensure multiple objects have been indexed, you only need to check the biggest taskID.
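Since checking the biggest taskID is enough, a batch of writes can be awaited like this (the write answers are mocked; in real code each dict would come from add_object):

```python
# Mocked answers from three add_object calls; only the largest taskID needs
# to be awaited to know all three writes are indexed.
answers = [{"taskID": 101}, {"taskID": 103}, {"taskID": 102}]

last_task_id = max(answer["taskID"] for answer in answers)
# index.wait_task(last_task_id)  # would block until every write is applied
print(last_task_id)  # 103
```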

Batch writes

You may want to perform multiple operations with one API call to reduce latency. We expose four methods to perform batch operations:

  • add_objects: Add an array of objects using automatic objectID assignment.
  • save_objects: Add or update an array of objects that contain an objectID attribute.
  • delete_objects: Delete an array of objectIDs.
  • partial_update_objects: Partially update an array of objects that contain an objectID attribute (only specified attributes will be updated).

Example using automatic objectID assignment:

res = index.add_objects([{"firstname": "Jimmie",
                         "lastname": "Barninger"},
                        {"firstname": "Warren",
                         "lastname": "Speach"}])

Example with user defined objectID (add or update):

res = index.save_objects([{"firstname": "Jimmie",
                          "lastname": "Barninger",
                           "objectID": "myID1"},
                          {"firstname": "Warren",
                          "lastname": "Speach",
                           "objectID": "myID2"}])

Example that deletes a set of records:

res = index.delete_objects(["myID1", "myID2"])

Example that updates only the firstname attribute:

res = index.partial_update_objects([{"firstname": "Jimmie",
                                   "objectID": "myID1"},
                                  {"firstname": "Warren",
                                   "objectID": "myID2"}])

If you have one index per user, you may want to perform batch operations across several indices. We expose a method to perform this type of batch:

res = client.batch([
    {"action": "addObject", "indexName": "index1", "body": {"firstname": "Jimmie", "lastname": "Barninger"}},
    {"action": "addObject", "indexName": "index2", "body": {"firstname": "Warren", "lastname": "Speach"}}])

The attribute action can have these values:

  • addObject
  • updateObject
  • partialUpdateObject
  • partialUpdateObjectNoCreate
  • deleteObject
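As a sketch, the cross-index request list can be assembled with a small helper before a single batch call (record contents are made up; each entry is assumed to carry its record under a body key):

```python
# Build batch entries in the assumed {action, indexName, body} shape.
def batch_entry(action, index_name, body):
    return {"action": action, "indexName": index_name, "body": body}

requests = [
    batch_entry("addObject", "index1", {"firstname": "Jimmie", "lastname": "Barninger"}),
    batch_entry("updateObject", "index2", {"objectID": "myID2", "firstname": "Warren"}),
    batch_entry("deleteObject", "index2", {"objectID": "myID3"}),
]
# client.batch(requests)  # would send all three operations in one API call
print([r["action"] for r in requests])
```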

Security / User API Keys

The admin API key provides full control of all your indices. You can also generate user API keys to control security. These API keys can be restricted to a set of operations and/or to a given index.

To list existing keys, you can use the list_user_keys method:

# Lists global API Keys
client.list_user_keys()
# Lists API Keys that can only access this index
index.list_user_keys()

Each key is defined by a set of permissions that specify the authorized actions. The different permissions are:

  • search: Allowed to search.
  • browse: Allowed to retrieve all index contents via the browse API.
  • addObject: Allowed to add/update an object in the index.
  • deleteObject: Allowed to delete an existing object.
  • deleteIndex: Allowed to delete index content.
  • settings: Allowed to get index settings.
  • editSettings: Allowed to change index settings.
  • analytics: Allowed to retrieve analytics through the analytics API.
  • listIndexes: Allowed to list all accessible indexes.

Example of API Key creation:

# Creates a new global API key that can only perform search actions
res = client.add_user_key(["search"])
print(res["key"])
# Creates a new API key that can only perform search actions on this index
res = index.add_user_key(["search"])
print(res["key"])

You can also create an API Key with advanced settings:

  • validity: Add a validity period. The key will be valid for a specific period of time (in seconds).

  • maxQueriesPerIPPerHour: Specify the maximum number of API calls allowed from an IP address per hour. Each time an API call is performed with this key, a check is performed. If the IP at the source of the call did more than this number of calls in the last hour, a 403 code is returned. Defaults to 0 (no rate limit). This parameter can be used to protect you from attempts at retrieving your entire index contents by massively querying the index.

Note: If you are sending the query through your servers, you must use the enable_rate_limit_forward("TheAdminAPIKey", "EndUserIP", "APIKeyWithRateLimit") function to enable rate-limit.

  • maxHitsPerQuery: Specify the maximum number of hits this API key can retrieve in one call. Defaults to 0 (unlimited). This parameter can be used to protect you from attempts at retrieving your entire index contents by massively querying the index.

  • indexes: Specify the list of targeted indices. You can target all indices starting with a prefix or ending with a suffix using the ‘*’ character. For example, “dev_*” matches all indices starting with “dev_” and “*_dev” matches all indices ending with “_dev”. Defaults to all indices if empty or blank.

  • referers: Specify the list of referers. You can target all referers starting with a prefix or ending with a suffix using the ‘*’ character. For example, “algolia.com/*” matches all referers starting with “algolia.com/” and “*.algolia.com” matches all referers ending with “.algolia.com”. Defaults to all referers if empty or blank.

  • queryParameters: Specify the list of query parameters. You can force the query parameters for a query using the url string format (param1=X&param2=Y…).

  • description: Specify a description to describe where the key is used.

# Creates a new index-specific API key valid for 300 seconds, with a rate limit of 100 calls per hour per IP and a maximum of 20 hits

params = {
    'validity': 300,
    'maxQueriesPerIPPerHour': 100,
    'maxHitsPerQuery': 20,
    'indexes': ['dev_*'],
    'referers': ['algolia.com/*'],
    'queryParameters': 'typoTolerance=strict&ignorePlurals=false',
    'description': 'Limited search only API key for algolia.com'
}

res = client.add_user_key(params)
print(res["key"])

Update the permissions of an existing key:

# Update an existing global API key that is valid for 300 seconds
res = client.update_user_key("myAPIKey", ["search"], 300)
print(res["key"])
# Update an existing index-specific API key valid for 300 seconds, with a rate limit of 100 calls per hour per IP and a maximum of 20 hits
res = index.update_user_key("myAPIKey", ["search"], 300, 100, 20)
print(res["key"])

Get the permissions of a given key:

# Gets the rights of a global key
print(client.get_user_key_acl("f420238212c54dcfad07ea0aa6d5c45f"))
# Gets the rights of an index-specific key
print(index.get_user_key_acl("71671c38001bf3ac857bc82052485107"))

Delete an existing key:

# Deletes a global key
print(client.delete_user_key("f420238212c54dcfad07ea0aa6d5c45f"))
# Deletes an index-specific key
print(index.delete_user_key("71671c38001bf3ac857bc82052485107"))

You may have a single index containing per-user data. In that case, all records should be tagged with their associated user_id, so that a tagFilters=user_42 filter can be added at query time to retrieve only what a user has access to. If you’re using the JavaScript client, this would be a security breach, since users could modify the tagFilters you’ve set by modifying the code in the browser. To keep using the JavaScript client (recommended for optimal latency) while targeting secured records, you can generate a secured API key from your backend:

# generate a public API key for user 42. Here, records are tagged with:
#  - 'user_XXXX' if they are visible by user XXXX
public_key = client.generate_secured_api_key('YourSearchOnlyApiKey', 'tagFilters=user_42')
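For intuition, a secured key of this generation is essentially an HMAC of the restriction computed with your search-only key; here is a minimal sketch, assuming the HMAC-SHA256 scheme used by the 1.x clients (treat the exact recipe as an assumption and use generate_secured_api_key in practice rather than this helper):

```python
import hashlib
import hmac

def secured_key_sketch(search_only_key, query_parameters, user_token=""):
    # Assumed 1.x scheme: HMAC-SHA256 of the query parameters (plus the
    # optional user token), keyed with the search-only API key.
    message = query_parameters + user_token
    return hmac.new(search_only_key.encode("utf-8"),
                    message.encode("utf-8"),
                    hashlib.sha256).hexdigest()

public_key = secured_key_sketch("YourSearchOnlyApiKey", "tagFilters=user_42")
print(len(public_key))  # 64 hex characters
```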

This public API key can then be used in your JavaScript code as follows:

var client = algoliasearch('YourApplicationID', '<%= public_api_key %>');
client.setExtraHeader('X-Algolia-QueryParameters', 'tagFilters=user_42'); // must be the same as at generation time

var index = client.initIndex('indexName');

index.search('something', function(err, content) {
  if (err) {
    console.error(err);
    return;
  }

  console.log(content);
});

You can mix rate limits and secured API keys by setting an extra user_token attribute both at API key generation time and at query time. When set, a unique user is identified by their IP + user_token instead of only by their IP. This allows you to restrict a single user to a maximum of N API calls per hour, even if they share their IP with another user.

# generate a public API key for user 42. Here, records are tagged with:
#  - 'user_XXXX' if they are visible by user XXXX
public_key = client.generate_secured_api_key('YourRateLimitedApiKey', 'tagFilters=user_42', 'user_42')

This public API key can then be used in your JavaScript code as follows:

var client = algoliasearch('YourApplicationID', '<%= public_api_key %>');

// must be the same as the value used at generation time
client.setExtraHeader('X-Algolia-QueryParameters', 'tagFilters=user_42');

// must be the same as the token used at generation time
client.setUserToken('user_42');

var index = client.initIndex('indexName');

index.search('another query', function(err, content) {
  if (err) {
    console.error(err);
    return;
  }

  console.log(content);
});

You can also generate secured API keys that limit the usage of a key to a referer. Generation uses the same function as the per-user restriction. This public API key can be used in your JavaScript code as follows:

var client = algoliasearch('YourApplicationID', '<%= public_api_key %>');

// must be the same as the value used at generation time
client.setExtraHeader('X-Algolia-AllowedReferer', 'algolia.com/*');

var index = client.initIndex('indexName');

index.search('another query', function(err, content) {
  if (err) {
    console.error(err);
    return;
  }

  console.log(content);
});

Copy or rename an index

You can easily copy or rename an existing index using the copy and move commands. Note: the move and copy commands overwrite the destination index.

# Rename MyIndex to MyIndexNewName
print(client.move_index("MyIndex", "MyIndexNewName"))
# Copy MyIndex to MyIndexCopy
print(client.copy_index("MyIndex", "MyIndexCopy"))

The move command is particularly useful if you want to update a big index atomically from one version to another. For example, if you recreate your index MyIndex each night from a database by batch, you only need to:

  1. Import your database into a new index using batches. Let’s call this new index MyNewIndex.

  2. Rename MyNewIndex to MyIndex using the move command. This will automatically overwrite the old index, and new queries will be served on the new one.

# Rename MyNewIndex to MyIndex (and overwrite it)
print(client.move_index("MyNewIndex", "MyIndex"))
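The nightly rebuild described above can be sketched end to end; the client is mocked here so the two steps are visible (init_index, add_objects and move_index are the method names used in this guide, while the fake classes are purely illustrative):

```python
# Sketch of an atomic rebuild: fill a fresh index, then move it over the
# live one so queries switch in one step.
def rebuild(client, records, live_name="MyIndex", tmp_name="MyNewIndex"):
    tmp_index = client.init_index(tmp_name)
    tmp_index.add_objects(records)          # 1. import into the new index
    client.move_index(tmp_name, live_name)  # 2. atomically replace the old one

# Minimal fakes standing in for the Algolia client during this sketch.
class FakeIndex:
    def __init__(self):
        self.records = []
    def add_objects(self, objects):
        self.records.extend(objects)

class FakeClient:
    def __init__(self):
        self.indices = {}
        self.moves = []
    def init_index(self, name):
        return self.indices.setdefault(name, FakeIndex())
    def move_index(self, src, dst):
        self.moves.append((src, dst))

fake_client = FakeClient()
rebuild(fake_client, [{"objectID": "1", "name": "Apple Store"}])
print(fake_client.moves)  # [('MyNewIndex', 'MyIndex')]
```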

Backup / Retrieve all index content

You can retrieve all index content for backup purposes or for SEO using the browse method. This method can retrieve up to 1,000 objects per call and supports full-text search and filters, but the distinct feature is not available. Unlike the search method, the sort by typo, proximity, geo distance and matched words is not applied; hits are only sorted by the numeric attributes specified in the ranking and the custom ranking.

You can browse the index:

# Iterate with a filter over the index
res = index.browse_all({"query": "test", "numericFilters": "i<42"})
for hit in res:
    # Do something with the hit
    pass

# Retrieve the first page and its cursor from the browse method
res = index.browse_from({"query": "test", "numericFilters": "i<42"}, None)
print(res["cursor"])
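The cursor returned by browse_from is meant to be fed back in until no cursor comes back; here is a sketch of that paging loop, with the index mocked so the logic runs standalone (browse_from is the method name used above, while FakeBrowsableIndex and its string cursors are illustrative):

```python
# Paging sketch: keep calling browse_from with the last cursor until the
# answer no longer carries one. FakeBrowsableIndex stands in for a real index.
class FakeBrowsableIndex:
    def __init__(self, pages):
        self.pages = pages
    def browse_from(self, params, cursor):
        page = 0 if cursor is None else int(cursor)
        answer = {"hits": self.pages[page]}
        if page + 1 < len(self.pages):
            answer["cursor"] = str(page + 1)  # real cursors are opaque strings
        return answer

fake_index = FakeBrowsableIndex([
    [{"objectID": "1"}, {"objectID": "2"}],
    [{"objectID": "3"}],
])

all_hits, cursor = [], None
while True:
    res = fake_index.browse_from({"query": ""}, cursor)
    all_hits.extend(res["hits"])
    cursor = res.get("cursor")
    if cursor is None:
        break
print(len(all_hits))  # 3
```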

Logs

You can retrieve the latest logs via this API. Each log entry contains:

  • Timestamp in ISO-8601 format
  • Client IP
  • Request Headers (API Key is obfuscated)
  • Request URL
  • Request method
  • Request body
  • Answer HTTP code
  • Answer body
  • SHA1 ID of entry

You can retrieve the logs of your last 1,000 API calls and browse them using the offset/length parameters:

  • offset: Specify the first entry to retrieve (0-based, 0 is the most recent log entry). Defaults to 0.
  • length: Specify the maximum number of entries to retrieve starting at the offset. Defaults to 10. Maximum allowed value: 1,000.
  • onlyErrors: Retrieve only logs with an HTTP code different than 200 or 201. (deprecated)
  • type: Specify the type of logs to retrieve:
    • query: Retrieve only the queries.
    • build: Retrieve only the build operations.
    • error: Retrieve only the errors (same as the onlyErrors parameter).

# Get last 10 log entries
print(client.get_logs())
# Get last 100 log entries
print(client.get_logs(0, 100))

Project details



Source distribution: algoliasearch-1.6.1.tar.gz (194.8 kB)

Built distribution: algoliasearch-1.6.1-py2.py3-none-any.whl (181.8 kB, Python 2 and Python 3)
