Integrations Guide

The Integrations Guide provides an overview of integrations with various services and infrastructure.

1 - Database Guide

The Database Guide provides an overview of setting up databases and the specifics of each DB type.

Install Database Drivers

StreamZero DX requires that a Python DB-API database driver and a SQLAlchemy dialect be installed in the executor image for each datastore you want to connect to.

Configuring Database Connections

StreamZero can manage preset connection configurations. This enables a platform-wide setup for both confidential and general-access databases.

StreamZero uses the SQLAlchemy engine along with a URL-template-based approach to connection management. The connection configurations are maintained as secrets within the platform and are therefore not publicly accessible, i.e. access is restricted to administrators.

Retrieving DB Connections

The following sample shows how to retrieve a named connection. It assumes that the connection identifier key is uploaded to the package as a secrets.json.

from fx_ef import context
import sqlalchemy as db

db_url = context.secrets.get('my_connection')
engine = db.create_engine(db_url)

connection = engine.connect()
metadata = db.MetaData()

In the above example the db_url is set up as a secret with name 'my_connection'.
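
Once the engine is created you can run queries against it as with any SQLAlchemy engine. A minimal, self-contained sketch (the SELECT 1 query is purely illustrative):

from fx_ef import context
import sqlalchemy as db

# Retrieve the connection URL from the platform secrets and create an engine
engine = db.create_engine(context.secrets.get('my_connection'))

# Run an illustrative query; replace with SQL that matches your database
with engine.connect() as connection:
    result = connection.execute(db.text("SELECT 1"))
    print(result.fetchall())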

Depending on whether this is a service, project or platform level secret, there are different approaches to setting it up. For a service level secret, the following is a sample secrets.json file for the package.

{
  "my_connection" = "mysql://scott:tiger@localhost/test"
}
  • For Project scope use the 'secrets' tab of the Project Management UI.
  • For Platform scope secrets use the Vault UI in the DX Manager Application.

Database Drivers

The following table provides a guide to the Python libraries to be installed in the Executor docker image. For instructions on how to extend the Executor docker image please check this page: /docs/extending_executor_image

You can read more here about how to install new database drivers and libraries into your StreamZero FX executor image.

Note that many other databases are supported, the main criteria being the existence of a functional SQLAlchemy dialect and Python driver. Searching for the keyword “sqlalchemy + (database name)” should help get you to the right place.

If your database or data engine isn’t on the list but a SQL interface exists, please file an issue so we can work on documenting and supporting it.

A list of some of the recommended packages.

Database PyPI package
Amazon Athena pip install "PyAthenaJDBC>1.0.9" or pip install "PyAthena>1.2.0"
Amazon Redshift pip install sqlalchemy-redshift
Apache Drill pip install sqlalchemy-drill
Apache Druid pip install pydruid
Apache Hive pip install pyhive
Apache Impala pip install impyla
Apache Kylin pip install kylinpy
Apache Pinot pip install pinotdb
Apache Solr pip install sqlalchemy-solr
Apache Spark SQL pip install pyhive
Ascend.io pip install impyla
Azure MS SQL pip install pymssql
Big Query pip install pybigquery
ClickHouse pip install clickhouse-driver==0.2.0 && pip install clickhouse-sqlalchemy==0.1.6
CockroachDB pip install cockroachdb
Dremio pip install sqlalchemy_dremio
Elasticsearch pip install elasticsearch-dbapi
Exasol pip install sqlalchemy-exasol
Google Sheets pip install shillelagh[gsheetsapi]
Firebolt pip install firebolt-sqlalchemy
Hologres pip install psycopg2
IBM Db2 pip install ibm_db_sa
IBM Netezza Performance Server pip install nzalchemy
MySQL pip install mysqlclient
Oracle pip install cx_Oracle
PostgreSQL pip install psycopg2
Trino pip install sqlalchemy-trino
Presto pip install pyhive
SAP Hana pip install hdbcli sqlalchemy-hana
Snowflake pip install snowflake-sqlalchemy
SQLite No additional library needed
SQL Server pip install pymssql
Teradata pip install teradatasqlalchemy
Vertica pip install sqlalchemy-vertica-python
Yugabyte pip install psycopg2

1.1 - Supported Databases

The Database Guide provides an overview of setting up databases and the specifics of each DB type.

1.1.1 - Ascend.io

Ascend.io

The recommended connector library to Ascend.io is impyla.

The expected connection string is formatted as follows:

ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true

1.1.2 - Amazon Athena

AWS Athena

PyAthenaJDBC

PyAthenaJDBC is a Python DB API 2.0 compliant wrapper for the Amazon Athena JDBC driver.

The connection string for Amazon Athena is as follows:

awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...

Note that you’ll need to escape & encode when forming the connection string like so:

s3://... -> s3%3A//...

PyAthena

You can also use the PyAthena library (no Java required) with the following connection string:

awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...

1.1.3 - Amazon Redshift

AWS Redshift

The sqlalchemy-redshift library is the recommended way to connect to Redshift through SQLAlchemy.

You’ll need the following setting values to form the connection string:

  • User Name: userName
  • Password: DBPassword
  • Database Host: AWS Endpoint
  • Database Name: Database Name
  • Port: default 5439

Here’s what the connection string looks like:

redshift+psycopg2://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>

1.1.4 - Apache Drill

Apache Drill

SQLAlchemy

The recommended way to connect to Apache Drill is through SQLAlchemy. You can use the sqlalchemy-drill package.

Once that is done, you can connect to Drill in two ways, either via the REST interface or by JDBC. If you are connecting via JDBC, you must have the Drill JDBC Driver installed.

The basic connection string for Drill looks like this:

drill+sadrill://<username>:<password>@<host>:<port>/<storage_plugin>?use_ssl=True

To connect to Drill running on a local machine running in embedded mode you can use the following connection string:

drill+sadrill://localhost:8047/dfs?use_ssl=False

JDBC

Connecting to Drill through JDBC is more complicated and we recommend following this tutorial.

The connection string looks like:

drill+jdbc://<username>:<password>@<host>:<port>

ODBC

We recommend reading the Apache Drill documentation and the GitHub README to learn how to work with Drill through ODBC.

1.1.5 - Apache Druid

Apache Druid

Use the SQLAlchemy / DBAPI connector made available in the pydruid library.

The connection string looks like:

druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql

Customizing Druid Connection

When adding a connection to Druid, you can customize the connection a few different ways in the Add Database form.

Custom Certificate

You can add certificates in the Root Certificate field when configuring the new database connection to Druid.


When using a custom certificate, pydruid will automatically use the https scheme.

Disable SSL Verification

To disable SSL verification, add the following to the Extras field:

engine_params: {"connect_args": {"scheme": "https", "ssl_verify_cert": false}}

1.1.6 - Apache Hive

Apache Hive

The pyhive library is the recommended way to connect to Hive through SQLAlchemy.

The expected connection string is formatted as follows:

hive://hive@{hostname}:{port}/{database}

1.1.7 - Apache Impala

Apache Impala

The recommended connector library to Apache Impala is impyla.

The expected connection string is formatted as follows:

impala://{hostname}:{port}/{database}

1.1.8 - Apache Kylin

Apache Kylin

The recommended connector library for Apache Kylin is kylinpy.

The expected connection string is formatted as follows:

kylin://<username>:<password>@<hostname>:<port>/<project>?<param1>=<value1>&<param2>=<value2>

1.1.9 - Apache Pinot

Apache Pinot

The recommended connector library for Apache Pinot is pinotdb.

The expected connection string is formatted as follows:

pinot+http://<pinot-broker-host>:<pinot-broker-port>/query?controller=http://<pinot-controller-host>:<pinot-controller-port>/

1.1.10 - Apache Solr

Apache Solr

The sqlalchemy-solr library provides a Python / SQLAlchemy interface to Apache Solr.

The connection string for Solr looks like this:

solr://{username}:{password}@{host}:{port}/{server_path}/{collection}[/?use_ssl=true|false]

1.1.11 - Apache Spark SQL

Apache Spark SQL

The recommended connector library for Apache Spark SQL is pyhive.

The expected connection string is formatted as follows:

hive://hive@{hostname}:{port}/{database}

1.1.12 - Clickhouse

Clickhouse

To use Clickhouse with StreamZero you will need to add the following Python libraries:

clickhouse-driver==0.2.0
clickhouse-sqlalchemy==0.1.6

If running StreamZero using Docker Compose, add the following to your ./docker/requirements-local.txt file:

clickhouse-driver>=0.2.0
clickhouse-sqlalchemy>=0.1.6

The recommended connector library for Clickhouse is sqlalchemy-clickhouse.

The expected connection string is formatted as follows:

clickhouse+native://<user>:<password>@<host>:<port>/<database>[?options…]
or
clickhouse://{username}:{password}@{hostname}:{port}/{database}

Here’s a concrete example of a real connection string:

clickhouse+native://demo:demo@github.demo.trial.altinity.cloud/default?secure=true

If you’re using Clickhouse locally on your computer, you can get away with using a native protocol URL that uses the default user without a password (and doesn’t encrypt the connection):

clickhouse+native://localhost/default

1.1.13 - CockroachDB

CockroachDB

The recommended connector library for CockroachDB is sqlalchemy-cockroachdb.

The expected connection string is formatted as follows:

cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable

1.1.14 - CrateDB

CrateDB

The recommended connector library for CrateDB is crate. You need to install the extras as well for this library. We recommend adding something like the following text to your requirements file:

crate[sqlalchemy]==0.26.0

The expected connection string is formatted as follows:

crate://crate@127.0.0.1:4200

1.1.15 - Databricks

Databricks

To connect to Databricks, first install databricks-dbapi with the optional SQLAlchemy dependencies:

pip install databricks-dbapi[sqlalchemy]

There are two ways to connect to Databricks: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to SQL endpoints.

Hive

To use the Hive connector you need the following information from your cluster:

  • Server hostname
  • Port
  • HTTP path

These can be found under “Configuration” -> “Advanced Options” -> “JDBC/ODBC”.

You also need an access token from “Settings” -> “User Settings” -> “Access Tokens”.

Once you have all this information, add a database of type “Databricks (Hive)” in StreamZero, and use the following SQLAlchemy URI:

databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name}

You also need to add the following configuration to “Other” -> “Engine Parameters”, with your HTTP path:

{"connect_args": {"http_path": "sql/protocolv1/o/****"}}

ODBC

For ODBC you first need to install the ODBC drivers for your platform.

For a regular connection use this as the SQLAlchemy URI:

databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name}

And for the connection arguments:

{"connect_args": {"http_path": "sql/protocolv1/o/****", "driver_path": "/path/to/odbc/driver"}}

The driver path should be:

  • /Library/simba/spark/lib/libsparkodbc_sbu.dylib (Mac OS)
  • /opt/simba/spark/lib/64/libsparkodbc_sb64.so (Linux)

For a connection to a SQL endpoint you need to use the HTTP path from the endpoint:

{"connect_args": {"http_path": "/sql/1.0/endpoints/****", "driver_path": "/path/to/odbc/driver"}}

1.1.16 - Dremio

Dremio

The recommended connector library for Dremio is sqlalchemy_dremio.

The expected connection string for ODBC (Default port is 31010) is formatted as follows:

dremio://{username}:{password}@{host}:{port}/{database_name}/dremio?SSL=1

The expected connection string for Arrow Flight (Dremio 4.9.1+. Default port is 32010) is formatted as follows:

dremio+flight://{username}:{password}@{host}:{port}/dremio

This blog post by Dremio has some additional helpful instructions on connecting StreamZero to Dremio.

1.1.17 - Elasticsearch

Elasticsearch

The recommended connector library for Elasticsearch is elasticsearch-dbapi.

The connection string for Elasticsearch looks like this:

elasticsearch+http://{user}:{password}@{host}:9200/

Using HTTPS

elasticsearch+https://{user}:{password}@{host}:9200/

Elasticsearch has a default limit of 10000 rows, so you can increase this limit on your cluster or set StreamZero’s row limit in the config:

ROW_LIMIT = 10000

You can query multiple indices in SQL Lab, for example:

SELECT timestamp, agent FROM "logstash"

However, to use visualizations for multiple indices you need to create an alias index on your cluster:

POST /_aliases
{
    "actions" : [
        { "add" : { "index" : "logstash-**", "alias" : "logstash_all" } }
    ]
}

Then register your table with the alias name logstash_all.

Time zone

By default, StreamZero uses the UTC time zone for Elasticsearch queries. If you need to specify a time zone, please edit your Database and enter the settings of your specified time zone in Other > ENGINE PARAMETERS:

{
    "connect_args": {
        "time_zone": "Asia/Shanghai"
    }
}

Another time zone issue to note: before Elasticsearch 7.8, converting a string into a DATETIME object requires the CAST function, which does not support the time_zone setting. It is therefore recommended to upgrade to Elasticsearch 7.8 or later, where the DATETIME_PARSE function can be used instead. DATETIME_PARSE does support the time_zone setting; fill in your Elasticsearch version number in Other > VERSION and StreamZero will use the DATETIME_PARSE function for the conversion.

1.1.18 - Exasol

Exasol

The recommended connector library for Exasol is sqlalchemy-exasol.

The connection string for Exasol looks like this:

exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC

1.1.19 - Firebird

Firebird

The recommended connector library for Firebird is sqlalchemy-firebird. StreamZero has been tested on sqlalchemy-firebird>=0.7.0, <0.8.

The recommended connection string is:

firebird+fdb://{username}:{password}@{host}:{port}//{path_to_db_file}

Here’s a connection string example of StreamZero connecting to a local Firebird database:

firebird+fdb://SYSDBA:masterkey@192.168.86.38:3050//Library/Frameworks/Firebird.framework/Versions/A/Resources/examples/empbuild/employee.fdb

1.1.20 - Firebolt

Firebolt

The recommended connector library for Firebolt is firebolt-sqlalchemy. StreamZero has been tested on firebolt-sqlalchemy>=0.0.1.

The recommended connection string is:

firebolt://{username}:{password}@{database}
or
firebolt://{username}:{password}@{database}/{engine_name}

Here’s a connection string example of StreamZero connecting to a Firebolt database:

firebolt://email@domain:password@sample_database
or
firebolt://email@domain:password@sample_database/sample_engine

1.1.21 - Google BigQuery

Google BigQuery

The recommended connector library for BigQuery is pybigquery.

Install BigQuery Driver

Follow the steps here about how to install new database drivers when setting up StreamZero locally via docker-compose.

echo "pybigquery" >> ./docker/requirements-local.txt

Connecting to BigQuery

When adding a new BigQuery connection in StreamZero, you’ll need to add the GCP Service Account credentials file (as a JSON).

  1. Create your Service Account via the Google Cloud Platform control panel, provide it access to the appropriate BigQuery datasets, and download the JSON configuration file for the service account.
  2. In StreamZero you can either upload that JSON or add the JSON blob in the following format (this should be the content of your credential JSON file):
{
        "type": "service_account",
        "project_id": "...",
        "private_key_id": "...",
        "private_key": "...",
        "client_email": "...",
        "client_id": "...",
        "auth_uri": "...",
        "token_uri": "...",
        "auth_provider_x509_cert_url": "...",
        "client_x509_cert_url": "..."
    }
  3. Additionally, you can connect via a SQLAlchemy URI instead:

    The connection string for BigQuery looks like:

    bigquery://{project_id}
    

    Go to the Advanced tab and add a JSON blob to the Secure Extra field in the database configuration form, in the following format:

    {
    "credentials_info": <contents of credentials JSON file>
    }
    

    The resulting file should have this structure:

    {
     "credentials_info": {
         "type": "service_account",
         "project_id": "...",
         "private_key_id": "...",
         "private_key": "...",
         "client_email": "...",
         "client_id": "...",
         "auth_uri": "...",
         "token_uri": "...",
         "auth_provider_x509_cert_url": "...",
         "client_x509_cert_url": "..."
         }
     }
    

You should then be able to connect to your BigQuery datasets.

To be able to upload CSV or Excel files to BigQuery in StreamZero, you’ll need to also add the pandas_gbq library.

1.1.22 - Google Sheets

Google Sheets

Google Sheets has a very limited SQL API. The recommended connector library for Google Sheets is shillelagh.
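
With shillelagh installed, the expected connection string (per the shillelagh gsheets dialect) is simply:

gsheets://

Individual sheets can then be referenced in SQL by their full URL as the table name.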

1.1.23 - Hana

Hana

The recommended connector library is sqlalchemy-hana.

The connection string is formatted as follows:

hana://{username}:{password}@{host}:{port}

1.1.24 - Hologres

Hologres

Hologres is a real-time interactive analytics service developed by Alibaba Cloud. It is fully compatible with PostgreSQL 11 and integrates seamlessly with the big data ecosystem.

Hologres sample connection parameters:

  • User Name: The AccessKey ID of your Alibaba Cloud account.
  • Password: The AccessKey secret of your Alibaba Cloud account.
  • Database Host: The public endpoint of the Hologres instance.
  • Database Name: The name of the Hologres database.
  • Port: The port number of the Hologres instance.

The connection string looks like:

postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}

1.1.25 - IBM DB2

IBM DB2

The IBM_DB_SA library provides a Python / SQLAlchemy interface to IBM Data Servers.

Here’s the recommended connection string:

db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}

There are two DB2 dialect versions implemented in SQLAlchemy. If you are connecting to a DB2 version without LIMIT [n] syntax, the recommended connection string to be able to use the SQL Lab is:

ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}

1.1.26 - IBM Netezza Performance Server

IBM Netezza Performance Server

The nzalchemy library provides a Python / SQLAlchemy interface to IBM Netezza Performance Server (aka Netezza).

Here’s the recommended connection string:

netezza+nzpy://{username}:{password}@{hostname}:{port}/{database}

1.1.27 - Microsoft SQL Server

SQL Server

The recommended connector library for SQL Server is pymssql.

The connection string for SQL Server looks like this:

mssql+pymssql://<Username>:<Password>@<Host>:<Port-default:1433>/<Database Name>/?Encrypt=yes

1.1.28 - MySQL

MySQL

The recommended connector library for MySQL is mysqlclient.

Here’s the connection string:

mysql://{username}:{password}@{host}/{database}

Host:

  • For Localhost or Docker running Linux: localhost or 127.0.0.1
  • For On Prem: IP address or Host name
  • For Docker running in OSX: docker.for.mac.host.internal

Port: 3306 by default

One problem with mysqlclient is that it will fail to connect to newer MySQL databases that use caching_sha2_password for authentication, since the plugin is not included in the client. In this case, you should use mysql-connector-python (https://pypi.org/project/mysql-connector-python/) instead:

mysql+mysqlconnector://{username}:{password}@{host}/{database}

1.1.29 - Oracle

Oracle

The recommended connector library is cx_Oracle.

The connection string is formatted as follows:

oracle://<username>:<password>@<hostname>:<port>

1.1.30 - Postgres

Postgres

Note that, if you’re using docker-compose, the Postgres connector library psycopg2 comes out of the box with StreamZero.

Postgres sample connection parameters:

  • User Name: UserName
  • Password: DBPassword
  • Database Host:
    • For Localhost: localhost or 127.0.0.1
    • For On Prem: IP address or Host name
    • For AWS: Endpoint
  • Database Name: Database Name
  • Port: default 5432

The connection string looks like:

postgresql://{username}:{password}@{host}:{port}/{database}

You can require SSL by adding ?sslmode=require at the end:

postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=require

You can read about the other SSL modes that Postgres supports in Table 31-1 from this documentation.

More information about PostgreSQL connection options can be found in the SQLAlchemy docs and the PostgreSQL docs.

1.1.31 - Presto

Presto

The pyhive library is the recommended way to connect to Presto through SQLAlchemy.

The expected connection string is formatted as follows:

presto://{hostname}:{port}/{database}

You can pass in a username and password as well:

presto://{username}:{password}@{hostname}:{port}/{database}

Here is an example connection string with values:

presto://datascientist:securepassword@presto.example.com:8080/hive

By default StreamZero assumes the most recent version of Presto is being used when querying the datasource. If you’re using an older version of Presto, you can configure it in the extra parameter:

{
    "version": "0.123"
}

1.1.32 - Rockset

Rockset

The connection string for Rockset is:

rockset://apikey:{your-apikey}@api.rs2.usw2.rockset.com/

For more complete instructions, we recommend the Rockset documentation.

1.1.33 - Snowflake

Snowflake

The recommended connector library for Snowflake is snowflake-sqlalchemy<=1.2.4.

The connection string for Snowflake looks like this:

snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}

The schema is not necessary in the connection string, as it is defined per table/query. The role and warehouse can be omitted if defaults are defined for the user, i.e.

snowflake://{user}:{password}@{account}.{region}/{database}

Make sure the user has privileges to access and use all required databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does not test for user/role rights during engine creation by default. However, when pressing the “Test Connection” button in the Create or Edit Database dialog, user/role credentials are validated by passing “validate_default_parameters”: True to the connect() method during engine creation. If the user/role is not authorized to access the database, an error is recorded in the StreamZero logs.
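
If you want to perform the same eager validation when creating an engine yourself, a minimal sketch (the secret name 'snowflake_dwh' is hypothetical; validate_default_parameters is passed through to the Snowflake connector via connect_args):

from fx_ef import context
import sqlalchemy as db

# Validate user/role rights at connect time instead of at first query.
# 'snowflake_dwh' is a hypothetical secret holding a snowflake:// URL.
engine = db.create_engine(
    context.secrets.get('snowflake_dwh'),
    connect_args={"validate_default_parameters": True},
)
engine.connect()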

1.1.34 - Teradata

Teradata

The recommended connector library is teradatasqlalchemy.

The connection string for Teradata looks like this:

teradata://{user}:{password}@{host}

ODBC Driver

There’s also an older connector named sqlalchemy-teradata that requires the installation of ODBC drivers. The Teradata ODBC Drivers are available here: https://downloads.teradata.com/download/connectivity/odbc-driver/linux

Here are the required environment variables:

export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini

We recommend using the first library because it does not require ODBC drivers and because it is more regularly updated.

1.1.35 - Trino

Trino

Trino version 352 and higher is supported.

The sqlalchemy-trino library is the recommended way to connect to Trino through SQLAlchemy.

The expected connection string is formatted as follows:

trino://{username}:{password}@{hostname}:{port}/{catalog}

If you are running Trino with Docker on your local machine, please use the following connection URL:

trino://trino@host.docker.internal:8080


1.1.36 - Vertica

Vertica

The recommended connector library is sqlalchemy-vertica-python. The Vertica connection parameters are:

  • User Name: UserName
  • Password: DBPassword
  • Database Host:
    • For Localhost : localhost or 127.0.0.1
    • For On Prem : IP address or Host name
    • For Cloud: IP Address or Host Name
  • Database Name: Database Name
  • Port: default 5433

The connection string is formatted as follows:

vertica+vertica_python://{username}:{password}@{host}/{database}

Other parameters:

  • Load Balancer - Backup Host

1.1.37 - YugabyteDB

YugabyteDB

YugabyteDB is a distributed SQL database built on top of PostgreSQL.

Note that, if you’re using docker-compose, the Postgres connector library psycopg2 comes out of the box with StreamZero.

The connection string looks like:

postgresql://{username}:{password}@{host}:{port}/{database}

2 - Notifications and Messaging

How to integrate notifications with the StreamZero Platform.

StreamZero provides you access to over 40 notification services such as Slack, Email and Telegram.

StreamZero FX uses the Apprise Python libraries as the engine for notification dispatch. The power of Apprise gives you access to over 40 notification services. A complete list is provided in a table at the end of this document.

In order to send notifications from your package, all you need to do is create and emit a pre-defined event type.

How to send notifications from your package

In order to send notifications from your package you need to emit a ‘ferris.notifications.apprise.notification’ event.

You can do it like so.

from fx_ef import context

# Please note that the value for url_template is the name used within the
# configuration for a specific URL template.
# Please see the configuration sample below for how to configure one.

data = {
    "url_template": "slack_1",
    "body": "This is the content",
    "title": "This is the subject"
}

event_type = "ferris.notifications.apprise.notification"
context.events.send(event_type, data)

How does it Work?

There are 2 approaches to implementing the notifications support.

  • Implementation within a StreamZero Service
  • Implementation in an Exit Gateway

The 2nd option is used in platforms which are behind a firewall and therefore require the gateway to be outside the firewall for accessing external services. In these cases the adapter runs as a separate container.

Irrespective of the infrastructure implementation the service internal API (as illustrated above) does not change.

The following documentation refers to Option 1.

Pre-Requisites

In order to send notifications:

  • The Apprise Libs must be present in the Executor Image.

  • You must have the Apprise Notifications Package installed and running. You can find the code base further below in this document.

  • You must upload a secrets.json file for the Apprise Notifications Package. Please note that you should maintain a separate copy of the configuration, since it contains credentials and will therefore not be displayed in your configuration manager.

  • A sample configuration file is provided below. Please use the table, based on the Apprise documentation, to understand the URL template structure.

  • Once the Apprise Notifications Package is installed along with the configurations you must link the package to be triggered by the ‘ferris.notifications.apprise.notification’ event.

The StreamZero Apprise Package

The following is the code for a StreamZero executor package that sends Apprise based notifications.

To send a notification from within your Python application, just do the following:

import apprise
from fx_ef import context

# Get the incoming parameters
url_template_name = context.params.get('url_template')

# Create an Apprise instance
apobj = apprise.Apprise()

# Set up the Apprise object with URLs
# The URL is retrieved from the uploaded secrets.json
apobj.add(context.secrets.get(url_template_name))

try:
    apobj.notify(
        body=context.params.get('body'),
        title=context.params.get('title'),
    )
except Exception as ex:
    print(ex)

Configuration

The following is a sample configuration which is uploaded as a secrets.json file for the StreamZero Apprise Package.

The configuration consists of a set of named URL templates, each based on the Apprise URL schema shown in the sections further below.

While you are free to name URL templates as you wish, it is preferable to prefix them with an indication of the underlying service used to send notifications.

{
  "slack_1": "slack://TokenA/TokenB/TokenC/",
  "slack_2": "slack://TokenA/TokenB/TokenC/Channel",
  "slack_3":"slack://botname@TokenA/TokenB/TokenC/Channel",
  "slack_4": "slack://user@TokenA/TokenB/TokenC/Channel1/Channel2/ChannelN",
  "telegram_1": "tgram://bottoken/ChatID",
  "telegram_2": "tgram://bottoken/ChatID1/ChatID2/ChatIDN"
}

The configurations must be added to a secrets.json file and uploaded as part of the apprise_package.

The apprise package must be configured to be triggered by the ‘ferris.notifications.apprise.notification’ event.

The table below identifies the services this tool supports and some example service URLs you need to use in order to take advantage of them.

Click on any of the services listed below to get more details on how you can configure Apprise to access them.

Notification Service Service ID Default Port Example Syntax
Apprise API apprise:// or apprises:// (TCP) 80 or 443 apprise://hostname/Token
AWS SES ses:// (TCP) 443 ses://user@domain/AccessKeyID/AccessSecretKey/RegionName
ses://user@domain/AccessKeyID/AccessSecretKey/RegionName/email1/email2/emailN
Discord discord:// (TCP) 443 discord://webhook_id/webhook_token
discord://avatar@webhook_id/webhook_token
Emby emby:// or embys:// (TCP) 8096 emby://user@hostname/
emby://user:password@hostname
Enigma2 enigma2:// or enigma2s:// (TCP) 80 or 443 enigma2://hostname
Faast faast:// (TCP) 443 faast://authorizationtoken
FCM fcm:// (TCP) 443 fcm://project@apikey/DEVICE_ID
fcm://project@apikey/#TOPIC
fcm://project@apikey/DEVICE_ID1/#topic1/#topic2/DEVICE_ID2/
Flock flock:// (TCP) 443 flock://token
flock://botname@token
flock://app_token/u:userid
flock://app_token/g:channel_id
flock://app_token/u:userid/g:channel_id
Gitter gitter:// (TCP) 443 gitter://token/room
gitter://token/room1/room2/roomN
Google Chat gchat:// (TCP) 443 gchat://workspace/key/token
Gotify gotify:// or gotifys:// (TCP) 80 or 443 gotify://hostname/token
gotifys://hostname/token?priority=high
Growl growl:// (UDP) 23053 growl://hostname
growl://hostname:portno
growl://password@hostname
growl://password@hostname:port
Note: you can also use the get parameter version which can allow the growl request to behave using the older v1.x protocol. An example would look like: growl://hostname?version=1
Home Assistant hassio:// or hassios:// (TCP) 8123 or 443 hassio://hostname/accesstoken
hassio://user@hostname/accesstoken
hassio://user:password@hostname:port/accesstoken
hassio://hostname/optional/path/accesstoken
IFTTT ifttt:// (TCP) 443 ifttt://webhooksID/Event
ifttt://webhooksID/Event1/Event2/EventN
ifttt://webhooksID/Event1/?+Key=Value
ifttt://webhooksID/Event1/?-Key=value1
Join join:// (TCP) 443 join://apikey/device
join://apikey/device1/device2/deviceN/
join://apikey/group
join://apikey/groupA/groupB/groupN
join://apikey/DeviceA/groupA/groupN/DeviceN/
KODI kodi:// or kodis:// (TCP) 8080 or 443 kodi://hostname
kodi://user@hostname
kodi://user:password@hostname:port
Kumulos kumulos:// (TCP) 443 kumulos://apikey/serverkey
LaMetric Time lametric:// (TCP) 443 lametric://apikey@device_ipaddr
lametric://apikey@hostname:port
lametric://client_id@client_secret
Mailgun mailgun:// (TCP) 443 mailgun://user@hostname/apikey
mailgun://user@hostname/apikey/email
mailgun://user@hostname/apikey/email1/email2/emailN
mailgun://user@hostname/apikey/?name=“From%20User”
Matrix matrix:// or matrixs:// (TCP) 80 or 443 matrix://hostname
matrix://user@hostname
matrixs://user:pass@hostname:port/#room_alias
matrixs://user:pass@hostname:port/!room_id
matrixs://user:pass@hostname:port/#room_alias/!room_id/#room2
matrixs://token@hostname:port/?webhook=matrix
matrix://user:token@hostname/?webhook=slack&format=markdown
Mattermost mmost:// or mmosts:// (TCP) 8065 mmost://hostname/authkey
mmost://hostname:80/authkey
mmost://user@hostname:80/authkey
mmost://hostname/authkey?channel=channel
mmosts://hostname/authkey
mmosts://user@hostname/authkey
Microsoft Teams msteams:// (TCP) 443 msteams://TokenA/TokenB/TokenC/
MQTT mqtt:// or mqtts:// (TCP) 1883 or 8883 mqtt://hostname/topic
mqtt://user@hostname/topic
mqtts://user:pass@hostname:9883/topic
Nextcloud ncloud:// or nclouds:// (TCP) 80 or 443 ncloud://adminuser:pass@host/User
nclouds://adminuser:pass@host/User1/User2/UserN
NextcloudTalk nctalk:// or nctalks:// (TCP) 80 or 443 nctalk://user:pass@host/RoomId
nctalks://user:pass@host/RoomId1/RoomId2/RoomIdN
Notica notica:// (TCP) 443 notica://Token/
Notifico notifico:// (TCP) 443 notifico://ProjectID/MessageHook/
Office 365 o365:// (TCP) 443 o365://TenantID:AccountEmail/ClientID/ClientSecret
o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail
o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail1/TargetEmail2/TargetEmailN
OneSignal onesignal:// (TCP) 443 onesignal://AppID@APIKey/PlayerID
onesignal://TemplateID:AppID@APIKey/UserID
onesignal://AppID@APIKey/#IncludeSegment
onesignal://AppID@APIKey/Email
Opsgenie opsgenie:// (TCP) 443 opsgenie://APIKey
opsgenie://APIKey/UserID
opsgenie://APIKey/#Team
opsgenie://APIKey/*Schedule
opsgenie://APIKey/^Escalation
ParsePlatform parsep:// or parseps:// (TCP) 80 or 443 parsep://AppID:MasterKey@Hostname
parseps://AppID:MasterKey@Hostname
PopcornNotify popcorn:// (TCP) 443 popcorn://ApiKey/ToPhoneNo
popcorn://ApiKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
popcorn://ApiKey/ToEmail
popcorn://ApiKey/ToEmail1/ToEmail2/ToEmailN/
popcorn://ApiKey/ToPhoneNo1/ToEmail1/ToPhoneNoN/ToEmailN
Prowl prowl:// (TCP) 443 prowl://apikey
prowl://apikey/providerkey
PushBullet pbul:// (TCP) 443 pbul://accesstoken
pbul://accesstoken/#channel
pbul://accesstoken/A_DEVICE_ID
pbul://accesstoken/email@address.com
pbul://accesstoken/#channel/#channel2/email@address.net/DEVICE
Push (Techulus) push:// (TCP) 443 push://apikey/
Pushed pushed:// (TCP) 443 pushed://appkey/appsecret/
pushed://appkey/appsecret/#ChannelAlias
pushed://appkey/appsecret/#ChannelAlias1/#ChannelAlias2/#ChannelAliasN
pushed://appkey/appsecret/@UserPushedID
pushed://appkey/appsecret/@UserPushedID1/@UserPushedID2/@UserPushedIDN
Pushover pover:// (TCP) 443 pover://user@token
pover://user@token/DEVICE
pover://user@token/DEVICE1/DEVICE2/DEVICEN
Note: you must specify both your user_id and token
PushSafer psafer:// or psafers:// (TCP) 80 or 443 psafer://privatekey
psafers://privatekey/DEVICE
psafer://privatekey/DEVICE1/DEVICE2/DEVICEN
Reddit reddit:// (TCP) 443 reddit://user:password@app_id/app_secret/subreddit
reddit://user:password@app_id/app_secret/sub1/sub2/subN
Rocket.Chat rocket:// or rockets:// (TCP) 80 or 443 rocket://user:password@hostname/RoomID/Channel
rockets://user:password@hostname:443/#Channel1/#Channel1/RoomID
rocket://user:password@hostname/#Channel
rocket://webhook@hostname
rockets://webhook@hostname/@User/#Channel
Ryver ryver:// (TCP) 443 ryver://Organization/Token
ryver://botname@Organization/Token
SendGrid sendgrid:// (TCP) 443 sendgrid://APIToken:FromEmail/
sendgrid://APIToken:FromEmail/ToEmail
sendgrid://APIToken:FromEmail/ToEmail1/ToEmail2/ToEmailN/
ServerChan serverchan:// (TCP) 443 serverchan://token/
SimplePush spush:// (TCP) 443 spush://apikey
spush://salt:password@apikey
spush://apikey?event=Apprise
Slack slack:// (TCP) 443 slack://TokenA/TokenB/TokenC/
slack://TokenA/TokenB/TokenC/Channel
slack://botname@TokenA/TokenB/TokenC/Channel
slack://user@TokenA/TokenB/TokenC/Channel1/Channel2/ChannelN
SMTP2Go smtp2go:// (TCP) 443 smtp2go://user@hostname/apikey
smtp2go://user@hostname/apikey/email
smtp2go://user@hostname/apikey/email1/email2/emailN
smtp2go://user@hostname/apikey/?name=“From%20User”
Streamlabs strmlabs:// (TCP) 443 strmlabs://AccessToken/
strmlabs://AccessToken/?name=name&identifier=identifier&amount=0&currency=USD
SparkPost sparkpost:// (TCP) 443 sparkpost://user@hostname/apikey
sparkpost://user@hostname/apikey/email
sparkpost://user@hostname/apikey/email1/email2/emailN
sparkpost://user@hostname/apikey/?name=“From%20User”
Spontit spontit:// (TCP) 443 spontit://UserID@APIKey/
spontit://UserID@APIKey/Channel
spontit://UserID@APIKey/Channel1/Channel2/ChannelN
Syslog syslog:// (UDP) 514 (if hostname specified) syslog://
syslog://Facility
syslog://hostname
syslog://hostname/Facility
Telegram tgram:// (TCP) 443 tgram://bottoken/ChatID
tgram://bottoken/ChatID1/ChatID2/ChatIDN
Twitter twitter:// (TCP) 443 twitter://CKey/CSecret/AKey/ASecret
twitter://user@CKey/CSecret/AKey/ASecret
twitter://CKey/CSecret/AKey/ASecret/User1/User2/User2
twitter://CKey/CSecret/AKey/ASecret?mode=tweet
Twist twist:// (TCP) 443 twist://password:login
twist://password:login/#channel
twist://password:login/#team:channel
twist://password:login/#team:channel1/channel2/#team3:channel
XMPP xmpp:// or xmpps:// (TCP) 5222 or 5223 xmpp://user:password@hostname
xmpps://user:password@hostname:port?jid=user@hostname/resource
xmpps://user:password@hostname/target@myhost, target2@myhost/resource
Webex Teams (Cisco) wxteams:// (TCP) 443 wxteams://Token
Zulip Chat zulip:// (TCP) 443 zulip://botname@Organization/Token
zulip://botname@Organization/Token/Stream
zulip://botname@Organization/Token/Email

SMS Notification Support

Notification Service Service ID Default Port Example Syntax
AWS SNS sns:// (TCP) 443 sns://AccessKeyID/AccessSecretKey/RegionName/+PhoneNo
sns://AccessKeyID/AccessSecretKey/RegionName/+PhoneNo1/+PhoneNo2/+PhoneNoN
sns://AccessKeyID/AccessSecretKey/RegionName/Topic
sns://AccessKeyID/AccessSecretKey/RegionName/Topic1/Topic2/TopicN
ClickSend clicksend:// (TCP) 443 clicksend://user:pass@PhoneNo
clicksend://user:pass@ToPhoneNo1/ToPhoneNo2/ToPhoneNoN
DAPNET dapnet:// (TCP) 80 dapnet://user:pass@callsign
dapnet://user:pass@callsign1/callsign2/callsignN
D7 Networks d7sms:// (TCP) 443 d7sms://user:pass@PhoneNo
d7sms://user:pass@ToPhoneNo1/ToPhoneNo2/ToPhoneNoN
DingTalk dingtalk:// (TCP) 443 dingtalk://token/
dingtalk://token/ToPhoneNo
dingtalk://token/ToPhoneNo1/ToPhoneNo2/ToPhoneNo1/
Kavenegar kavenegar:// (TCP) 443 kavenegar://ApiKey/ToPhoneNo
kavenegar://FromPhoneNo@ApiKey/ToPhoneNo
kavenegar://ApiKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN
MessageBird msgbird:// (TCP) 443 msgbird://ApiKey/FromPhoneNo
msgbird://ApiKey/FromPhoneNo/ToPhoneNo
msgbird://ApiKey/FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
MSG91 msg91:// (TCP) 443 msg91://AuthKey/ToPhoneNo
msg91://SenderID@AuthKey/ToPhoneNo
msg91://AuthKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
Nexmo nexmo:// (TCP) 443 nexmo://ApiKey:ApiSecret@FromPhoneNo
nexmo://ApiKey:ApiSecret@FromPhoneNo/ToPhoneNo
nexmo://ApiKey:ApiSecret@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
Sinch sinch:// (TCP) 443 sinch://ServicePlanId:ApiToken@FromPhoneNo
sinch://ServicePlanId:ApiToken@FromPhoneNo/ToPhoneNo
sinch://ServicePlanId:ApiToken@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
sinch://ServicePlanId:ApiToken@ShortCode/ToPhoneNo
sinch://ServicePlanId:ApiToken@ShortCode/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
Twilio twilio:// (TCP) 443 twilio://AccountSid:AuthToken@FromPhoneNo
twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo
twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo?apikey=Key
twilio://AccountSid:AuthToken@ShortCode/ToPhoneNo
twilio://AccountSid:AuthToken@ShortCode/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/

Desktop Notification Support

Notification Service Service ID Default Port Example Syntax
Linux DBus Notifications dbus://, qt://, glib:// or kde:// n/a dbus://
qt://
glib://
kde://
Linux Gnome Notifications gnome:// n/a gnome://
MacOS X Notifications macosx:// n/a macosx://
Windows Notifications windows:// n/a windows://

Email Support

Service ID Default Port Example Syntax
mailto:// (TCP) 25 mailto://userid:pass@domain.com
mailto://domain.com?user=userid&pass=password
mailto://domain.com:2525?user=userid&pass=password
mailto://user@gmail.com?pass=password
mailto://mySendingUsername:mySendingPassword@example.com?to=receivingAddress@example.com
mailto://userid:password@example.com?smtp=mail.example.com&from=noreply@example.com&name=no%20reply
mailtos:// (TCP) 587 mailtos://userid:pass@domain.com
mailtos://domain.com?user=userid&pass=password
mailtos://domain.com:465?user=userid&pass=password
mailtos://user@hotmail.com?pass=password
mailtos://mySendingUsername:mySendingPassword@example.com?to=receivingAddress@example.com
mailtos://userid:password@example.com?smtp=mail.example.com&from=noreply@example.com&name=no%20reply

Apprise has some email services built right into it (such as Yahoo, Fastmail, Hotmail, Gmail, etc.) that greatly simplify the mailto:// service. See more details here.

Custom Notifications

Post Method Service ID Default Port Example Syntax
Form form:// or forms:// (TCP) 80 or 443 form://hostname
form://user@hostname
form://user:password@hostname:port
form://hostname/a/path/to/post/to
JSON json:// or jsons:// (TCP) 80 or 443 json://hostname
json://user@hostname
json://user:password@hostname:port
json://hostname/a/path/to/post/to
XML xml:// or xmls:// (TCP) 80 or 443 xml://hostname
xml://user@hostname
xml://user:password@hostname:port
xml://hostname/a/path/to/post/to