Endpoints

How to get data from our API

You can retrieve results from predefined elements (datasets or scripts) through our API, which returns JSON data. You can also apply custom filters to each request. The API currently returns at most 1 million rows per request.

Prerequisites

  1. SF user account to generate an API access token

  2. SF platform backend URL of your environment

  3. ID of the predefined element (dataset or script)

First, you need an API access token, which can be generated by each user. This token is equivalent to your credentials, so requests made with it have the same permissions as the user who created it. For information on how to generate such a token, see Get your access token.

The second piece of information you need is the backend URL of your environment. You can see this URL on your login screen. You need the first part of the URL, which usually looks like exampleapi.senseforce.io

SF platform backend URL

The third and last thing you need for your API request is the ID of the element you want to request. For a dataset, for example, you can see this information when you open the desired dataset in the SF platform. The ID is the last part of the URL shown in the address bar (see example below).

Dataset ID

Double-check the headers before making a request. Request libraries (such as 'requests' in Python) and tools like Postman generate some headers automatically, so you do not have to set them manually, but this is not always the case, especially for 'Host' or 'Content-Type'.
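For reference, the two headers used throughout this guide can be defined explicitly; the token placeholder is intentionally left as-is and must be replaced with a real API access token:

```python
# Headers required by the Senseforce API endpoints described below.
# The token placeholder must be replaced with your own API access token.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your API access token>",
}
```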

How to construct the API request

Below you can find information about the endpoint structure and the required parameters.

Execute a dataset through Senseforce API

POST https://<your senseforce backend platform url>/api/dataset/execute/<id>

The URL you send your POST request to must have the structure shown above. This endpoint accepts two optional query parameters: limit and offset. With the parameters inserted, the request looks like https://exampleapi.senseforce.io/api/dataset/execute/d9d6gDg7fGmd3mI?offset=0&limit=100

For how to get the backend URL and/or dataset ID, see the section above. In the body of your request, you can add filter clauses, which are additional filters applied on top of the original dataset. For more information about additional filters, see the sections below. The body itself is optional: it can be an empty array or an array of filter objects, but each filter object must contain the parameters listed under "Request Body".
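The limit and offset parameters can be combined to page through large results. A minimal sketch of the URL construction (the backend URL, dataset ID, and page size below are placeholders; the request itself would be sent with requests.post as in the sample script further below):

```python
# Build paged dataset-execute URLs (placeholder backend URL and dataset ID).
BASE_URL = "https://exampleapi.senseforce.io"
DATASET_ID = "d9d6gDg7fGmd3mI"
PAGE_SIZE = 100

def execute_url(page: int) -> str:
    """URL for the given zero-based page of dataset rows."""
    offset = page * PAGE_SIZE
    return f"{BASE_URL}/api/dataset/execute/{DATASET_ID}?offset={offset}&limit={PAGE_SIZE}"
```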

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| id | string | The ID of the dataset to execute |

Query Parameters

| Name | Type | Description |
| --- | --- | --- |
| offset | string | Number of rows to skip |
| limit | string | Number of rows to return |

Headers

| Name | Type | Description |
| --- | --- | --- |
| Content-Type | string | Set to "application/json" |
| Authorization | string | Set to "Bearer <your API access token>" |

Request Body

| Name | Type | Description |
| --- | --- | --- |
| clause | object | Filter clause object. Each additional filter is defined by such a clause object |
| type | string | Can be "timestamp" or "string" (for all other column datatypes) |
| operator | integer | ID of the filter operator used to define the filter condition |
| parameters | array | Filter arguments, a list of { "value": ... } objects |
| columnName | string | Name of the dataset column the filter should be applied to |


Requesting original dataset

When you want to get all data from the dataset, send the request with an empty array as the body. Below you can find a sample Python script that does this.

```python
import requests
import json
from pandas import DataFrame

url = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer <your API access token>'
}
filters = []  # empty body: no additional filters

response = requests.post(url, headers=headers, json=filters)
data = response.text
parsed_data = json.loads(data)
df = DataFrame(parsed_data)
```

Requesting dataset with original labels

By default, all response element property names (dataset column labels) are converted to first letter lower case.

To avoid this behavior and keep the original label letter case, add the query parameter useOriginalLabels to the request and set it to true (/api/dataset/execute/<id>?useOriginalLabels=true).

To use converted column labels, you can explicitly set useOriginalLabels to false, or simply omit the parameter.
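For example, the query string can be built like this (placeholder backend URL and dataset ID from the earlier example):

```python
from urllib.parse import urlencode

# Placeholder backend URL and dataset ID:
url = "https://exampleapi.senseforce.io/api/dataset/execute/d9d6gDg7fGmd3mI"

# Keep the original column label casing; omit the parameter (or set it to
# false) to get the default first-letter-lower-case labels instead.
full_url = f"{url}?{urlencode({'useOriginalLabels': 'true'})}"
```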


Requesting dataset with additional filters

All filters defined in the dataset itself are always applied, but you can apply additional filters on top of them. To define a filter, create a structure like the one shown in the example below.

In the example below, a Like filter (operator id 7) is applied to the column named "device" so that only rows matching "vienna-prater-ferris-wheel-motor1" are returned.

```json
[
  {
    "clause": {
      "type": "string",
      "operator": 7,
      "parameters": [
        { "value": "vienna-prater-ferris-wheel-motor1" }
      ]
    },
    "columnName": "device"
  }
]
```

Within these clause objects (filter definitions), all operators available on the SF Platform can be used, with some restrictions: not every operator can be applied to every column, because the datatypes must match, and the operators take different numbers of parameters. Most take one, some take none (e.g. IsEmpty, IsNotEmpty), and some take two (e.g. Between).
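For illustration, a body combining a two-parameter Between filter (operator id 10) with a zero-parameter IsEmpty filter (operator id 12) could look as follows; the column names here are hypothetical:

```json
[
  {
    "clause": {
      "type": "timestamp",
      "operator": 10,
      "parameters": [
        { "value": 1704067200000 },
        { "value": 1706745600000 }
      ]
    },
    "columnName": "timestamp"
  },
  {
    "clause": {
      "type": "string",
      "operator": 12,
      "parameters": []
    },
    "columnName": "comment"
  }
]
```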

A summary of all available filter operators and their restrictions is given in the table below:

Filter Operators Table

| Operator | Operator id | # params | Required parameter datatype | Required column datatype |
| --- | --- | --- | --- | --- |
| LessThan | 1 | 1 | same as column | integer \| long \| double |
| GreaterThan | 2 | 1 | same as column | integer \| long \| double |
| LessThanOrEqualTo | 3 | 1 | same as column | integer \| long \| double |
| GreaterThanOrEqualTo | 4 | 1 | same as column | integer \| long \| double |
| Equal | 5 | 1 | same as column | integer \| long \| double \| string |
| NotEqual | 6 | 1 | same as column | integer \| long \| double \| string |
| Like | 7 | 1 | string | string |
| RegExpMatch | 8 | 1 | string | string |
| NotRegExpMatch | 9 | 1 | string | string |
| Between | 10 | 2 | same as column | integer \| long \| double \| timestamp |
| In | 11 | 1...n | same as column | each datatype allowed |
| IsEmpty | 12 | 0 | - | each datatype allowed |
| IsNotEmpty | 13 | 0 | - | each datatype allowed |
| CustomToday | 14 | 0 | - | timestamp |
| CustomThisWeek | 15 | 0 | - | timestamp |
| CustomThisMonth | 16 | 0 | - | timestamp |
| CustomLastThreeMonths | 17 | 0 | - | timestamp |
| CustomLastXMinutes | 18 | 1 | integer | timestamp |
| CustomDay | 19 | 1 | timestamp | timestamp |
| NotLike | 20 | 1 | string | string |
| CustomYesterday | 21 | 0 | - | timestamp |
| CustomLastXDays | 22 | 1 | integer | timestamp |
| CustomLastXWeeks | 23 | 1 | integer | timestamp |
| CustomLastXMonths | 24 | 1 | integer | timestamp |
| CustomLastWeek | 25 | 0 | - | timestamp |
| CustomLastMonth | 26 | 0 | - | timestamp |
| CustomRelativeBetween | 27 | 2 | integer | timestamp |
| NotIn | 28 | 1...n | same as column | each datatype allowed |

NOTE: "timestamp" values are integers representing the Unix timestamp in milliseconds (not seconds!).
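For example, converting dates to the expected millisecond timestamps in Python (the January 2024 range shown is illustrative, e.g. for a Between filter):

```python
from datetime import datetime, timezone

def to_unix_ms(dt: datetime) -> int:
    """Convert a timezone-aware datetime to a Unix timestamp in milliseconds."""
    return int(dt.timestamp() * 1000)

# Two parameters for a hypothetical Between filter covering January 2024:
start = to_unix_ms(datetime(2024, 1, 1, tzinfo=timezone.utc))
end = to_unix_ms(datetime(2024, 2, 1, tzinfo=timezone.utc))
```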


Below you can find an extended version of the example Python script, where additional filters are also applied.
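A sketch of such an extended script, reusing the Like filter from the JSON example above (the URL, token, and dataset ID placeholders are kept as-is; the POST call is wrapped in a function so the snippet stays self-contained):

```python
import json

URL = "https://<your senseforce backend platform url>/api/dataset/execute/<id>"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your API access token>",
}

# Additional Like filter (operator id 7) on the "device" column:
FILTERS = [
    {
        "clause": {
            "type": "string",
            "operator": 7,
            "parameters": [{"value": "vienna-prater-ferris-wheel-motor1"}],
        },
        "columnName": "device",
    }
]

def fetch_filtered_dataset():
    """POST the filters and return the parsed rows as a list of dicts."""
    import requests  # third-party HTTP client, as in the sample script above
    response = requests.post(URL, headers=HEADERS, json=FILTERS)
    return json.loads(response.text)
```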


Execute a script through Senseforce API

POST https://<your senseforce backend platform url>/api/script/execute/{scriptId}

The request URL must have the structure shown above. This endpoint accepts two optional query parameters: limit and offset. With the parameters inserted, the request may look like this: https://exampleapi.senseforce.io/api/script/execute/807a0c12-35e9-4b7f-b695-837c2cf5fb41?offset=0&limit=100 Within the body of the request, you can add script filters and dataset filters. Dataset filters are additional filters applied to the original dataset. For more information about script filters and dataset filters, see the sections below. The body can be an empty object or can contain filter objects.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| scriptId | string | The ID of the script to execute |

Query Parameters

| Name | Type | Description |
| --- | --- | --- |
| offset | string | Number of rows to skip |
| limit | string | Number of rows to return |

Headers

| Name | Type | Description |
| --- | --- | --- |
| Authorization | string | Set to "Bearer <your API access token>" |
| Content-Type | string | Set to "application/json" |

Request Body

| Name | Type | Description |
| --- | --- | --- |
| ScriptFilters | array | List of script filters |
| DatasetFilters | array | List of dataset filters |


The body parameters ("ScriptFilters" and "DatasetFilters") contain the same filter object structure in their lists; see the filter clause structure described for the dataset endpoint above. "DatasetFilters" can contain filter objects associated with a column of a dataset. These filters can target columns from multiple datasets as well (a script can work with one or multiple datasets).

Executing a script

Let's use a demo script and see how it can be executed via the Senseforce API. The following script has a "value" variable and outputs this variable as its result.

Let's execute this script via the Senseforce API. When you want to execute a script without any filters, you can send the request with an empty object as the body. Below you can find a request sample:
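For example, in Python (the URL, token, and script ID placeholders are kept as-is; the POST call is wrapped in a function so the snippet stays self-contained):

```python
import json

URL = "https://<your senseforce backend platform url>/api/script/execute/<scriptId>"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <your API access token>",
}

BODY = {}  # empty object: execute the script without any filters

def execute_script():
    """POST the empty body to the script endpoint and return the parsed result."""
    import requests  # third-party HTTP client
    response = requests.post(URL, headers=HEADERS, json=BODY)
    return json.loads(response.text)
```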

Executing a script with script filters

In the example below, a script filter is applied to the column named "value" so that only values equal to "hello from script" are returned. The request body looks like this:
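A plausible body for this case; the choice of the Equal operator (id 5) is an assumption based on the "equal to" wording:

```json
{
  "ScriptFilters": [
    {
      "clause": {
        "type": "string",
        "operator": 5,
        "parameters": [{ "value": "hello from script" }]
      },
      "columnName": "value"
    }
  ],
  "DatasetFilters": []
}
```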

A summary of all available filter operators and their restrictions is given in the "Filter Operators" table above.

Executing a script with dataset filters

Let's use a demo script that uses two datasets so you can see how the dataset filters can be applied.

In the example below, two dataset filters are set, which target different datasets. The request body looks like this:
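A sketch of such a body; the column names, the operator choices (Equal, id 5, and CustomToday, id 14), and the assumption that each filter targets a column from a different dataset are all hypothetical:

```json
{
  "ScriptFilters": [],
  "DatasetFilters": [
    {
      "clause": {
        "type": "string",
        "operator": 5,
        "parameters": [{ "value": "vienna-prater-ferris-wheel-motor1" }]
      },
      "columnName": "device"
    },
    {
      "clause": {
        "type": "timestamp",
        "operator": 14,
        "parameters": []
      },
      "columnName": "timestamp"
    }
  ]
}
```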

Executing a script with script and dataset filters

Let's use a demo script which uses two datasets so you can see how the dataset filters and script filters can be applied.

In the example below, one script filter and two dataset filters are set. The request body looks like this:
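A sketch of such a combined body, reusing the hypothetical filters from the two previous examples (column names and operator choices are assumptions):

```json
{
  "ScriptFilters": [
    {
      "clause": {
        "type": "string",
        "operator": 5,
        "parameters": [{ "value": "hello from script" }]
      },
      "columnName": "value"
    }
  ],
  "DatasetFilters": [
    {
      "clause": {
        "type": "string",
        "operator": 5,
        "parameters": [{ "value": "vienna-prater-ferris-wheel-motor1" }]
      },
      "columnName": "device"
    },
    {
      "clause": {
        "type": "timestamp",
        "operator": 14,
        "parameters": []
      },
      "columnName": "timestamp"
    }
  ]
}
```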

Dataset filters are applied first; the script filter is applied at the end.