DataHub CLI
DataHub comes with a friendly CLI called datahub that allows you to perform a lot of common operations using just the command line. Acryl Data maintains the PyPI package for datahub.
Installation
Using pip
We recommend Python virtual environments (venvs) to namespace pip modules. Here's an example setup:
python3 -m venv venv # create the environment
source venv/bin/activate # activate the environment
NOTE: If you install datahub in a virtual environment, that same virtual environment must be re-activated each time a shell window or session is created.
Once inside the virtual environment, install datahub using the following commands:
# Requires Python 3.7+
python3 -m pip install --upgrade pip wheel setuptools
python3 -m pip install --upgrade acryl-datahub
# validate that the install was successful
datahub version
# If you see "command not found", try running this instead: python3 -m datahub version
If you run into an error, try checking the common setup issues.
Other installation options, such as installing from source and running the CLI inside a container, are covered further below in this guide.
User Guide
The datahub CLI allows you to do many things, such as quickstarting a DataHub Docker instance locally, ingesting metadata from your sources into a DataHub server or a DataHub Lite instance, as well as retrieving, modifying and exploring metadata.
Like most command line tools, --help is your best friend. Use it to discover the capabilities of the CLI and the different commands and sub-commands that are supported.
Usage: datahub [OPTIONS] COMMAND [ARGS]...
Options:
--debug / --no-debug
--version Show the version and exit.
--help Show this message and exit.
Commands:
check Helper commands for checking various aspects of DataHub.
delete Delete metadata from datahub using a single urn or a combination of filters
docker Helper commands for setting up and interacting with a local DataHub instance using Docker.
get Get metadata for an entity with an optional list of aspects to project.
ingest Ingest metadata into DataHub.
init Configure which datahub instance to connect to
put Update a single aspect of an entity
telemetry Toggle telemetry.
version Print version number and exit.
The top-level commands listed below are here mainly to give you a high-level picture of the kinds of things you can accomplish with the CLI.
We've ordered them roughly in the order we expect you to interact with them as you get deeper into the datahub-verse.
docker
The docker command allows you to start up a local DataHub instance using datahub docker quickstart. You can also check if the docker cluster is healthy using datahub docker check.
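For example, a typical local workflow is to bring up the quickstart instance and then verify that it is healthy:
datahub docker quickstart
# confirm that all containers came up and are healthy
datahub docker check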
ingest
The ingest command allows you to ingest metadata from your sources using ingestion configuration files, which we call recipes.
Source specific crawlers are provided by plugins and might sometimes need additional extras to be installed. See installing plugins for more information.
Removing Metadata from DataHub contains detailed instructions about how you can use the ingest command to perform operations like rolling back previously ingested metadata through the rollback sub-command and listing all runs that happened through the list-runs sub-command.
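As a rough sketch of that flow (the run id below is a placeholder; see the guide above and datahub ingest --help for the exact options your CLI version supports):
# list previous ingestion runs and note the id of the one you want to undo
datahub ingest list-runs
# roll back a specific run (hypothetical run id)
datahub ingest rollback --run-id <run-id>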
Usage: datahub [datahub-options] ingest [command-options]
Command Options:
-c / --config Config file in .toml or .yaml format
-n / --dry-run Perform a dry run of the ingestion, essentially skipping writing to sink
--preview Perform limited ingestion from the source to the sink to get a quick preview
--preview-workunits The number of workunits to produce for preview
--strict-warnings If enabled, ingestion runs with warnings will yield a non-zero error code
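As an example of putting these options together, a minimal recipe that reads metadata from a local file and writes it to a DataHub server over REST might look like the following sketch (the file path and server URL are placeholders; the exact config keys for each source and sink are documented with the corresponding plugin):
# recipe.yaml
source:
  type: file
  config:
    filename: ./sample_mces.json   # placeholder path to a local metadata file
sink:
  type: datahub-rest
  config:
    server: http://localhost:8080  # placeholder GMS URL
You would then perform a dry run first, dropping -n once the output looks right:
datahub ingest -c recipe.yaml -n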
init
The init command is used to tell datahub where your DataHub instance is located. The CLI will point to a localhost DataHub by default.
Running datahub init will allow you to customize the DataHub instance you are communicating with.
Note: Provide your GMS instance's host when the prompt asks you for the DataHub host.
Environment variables supported
The environment variables listed below take precedence over the DataHub CLI config created through the init command.
- DATAHUB_SKIP_CONFIG (default: false) - Set to true to skip creating the configuration file.
- DATAHUB_GMS_URL (default: http://localhost:8080) - Set to the URL of your GMS instance.
- DATAHUB_GMS_HOST (default: localhost) - Set to the host of your GMS instance. Prefer using DATAHUB_GMS_URL to set the URL.
- DATAHUB_GMS_PORT (default: 8080) - Set to the port of your GMS instance. Prefer using DATAHUB_GMS_URL to set the URL.
- DATAHUB_GMS_PROTOCOL (default: http) - Set to a protocol like http or https. Prefer using DATAHUB_GMS_URL to set the URL.
- DATAHUB_GMS_TOKEN (default: None) - Used for communicating with DataHub Cloud.
- DATAHUB_TELEMETRY_ENABLED (default: true) - Set to false to disable telemetry. If the CLI is being run in an environment with no access to the public internet, this should be disabled.
- DATAHUB_TELEMETRY_TIMEOUT (default: 10) - Set to a custom integer value to specify the timeout in seconds when sending telemetry.
- DATAHUB_DEBUG (default: false) - Set to true to enable debug logging for the CLI. This can also be achieved through the --debug option of the CLI.
- DATAHUB_VERSION (default: head) - Set to a specific version to run quickstart with that particular version of the Docker images.
- ACTIONS_VERSION (default: head) - Set to a specific version to run quickstart with that image tag of the datahub-actions container.
An example of these environment variables with their default values is shown below:
DATAHUB_SKIP_CONFIG=false
DATAHUB_GMS_URL=http://localhost:8080
DATAHUB_GMS_TOKEN=
DATAHUB_TELEMETRY_ENABLED=true
DATAHUB_TELEMETRY_TIMEOUT=10
DATAHUB_DEBUG=false
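These can also be exported in your shell for the current session, for example (a sketch; substitute your own GMS URL and token):
export DATAHUB_GMS_URL="http://localhost:8080"
export DATAHUB_GMS_TOKEN="<your-access-token>"   # only needed for token-protected instances such as DataHub Cloud
# subsequent datahub commands in this shell will talk to the instance configured above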
check
The datahub package is composed of different plugins that allow you to connect to different metadata sources and ingest metadata from them.
The check command allows you to check if all plugins are loaded correctly as well as validate an individual MCE file.
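A couple of example invocations (the MCE file path is a placeholder; run datahub check --help for the full list of sub-commands):
# confirm that the plugins you need are installed and importable
datahub check plugins
# validate a local MCE file before ingesting it
datahub check mce-file ./sample_mces.json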
lite (experimental)
The lite group of commands allows you to run an embedded, lightweight DataHub instance for command line exploration of your metadata. This is intended more for developer-tool-oriented usage than as a production server instance for DataHub. See DataHub Lite for more information about how you can ingest metadata into DataHub Lite and explore your metadata easily.
delete
The delete command allows you to delete metadata from DataHub. Read this guide to understand how you can delete metadata from DataHub.
Deleting metadata using DataHub's CLI and GraphQL API is a simple, systems-level action. If you attempt to delete an Entity with children, such as a Container, it will not automatically delete the children; instead, you will need to delete each child by URN in addition to deleting the parent.
datahub delete --urn "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)" --soft
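If you later want to remove the record entirely, a hard delete of the same URN might look like the following (a sketch; check datahub delete --help for the exact flags supported by your CLI version):
datahub delete --urn "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)" --hard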
get
The get command allows you to easily retrieve metadata from DataHub using the REST API. This works for both versioned aspects and timeseries aspects. For timeseries aspects, it fetches the latest value.
For example, the following command gets the ownership aspect from the dataset urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD):
$ datahub get --urn "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)" --aspect ownership
{
"value": {
"com.linkedin.metadata.snapshot.DatasetSnapshot": {
"aspects": [
{
"com.linkedin.metadata.key.DatasetKey": {
"name": "SampleHiveDataset",
"origin": "PROD",
"platform": "urn:li:dataPlatform:hive"
}
},
{
"com.linkedin.common.Ownership": {
"lastModified": {
"actor": "urn:li:corpuser:jdoe",
"time": 1581407189000
},
"owners": [
{
"owner": "urn:li:corpuser:jdoe",
"type": "DATAOWNER"
},
{
"owner": "urn:li:corpuser:datahub",
"type": "DATAOWNER"
}
]
}
}
],
"urn": "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)"
}
}
}
put
The put group of commands allows you to write metadata into DataHub. This is a flexible way for you to issue edits to metadata from the command line.
put aspect
The put aspect (also the default put) command instructs datahub to set a specific aspect for an entity to a specified value.
For example, the command shown below sets the ownership aspect of the dataset urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD) to the value in the file ownership.json.
The JSON in the ownership.json file needs to conform to the Ownership Aspect model, as shown below.
{
"owners": [
{
"owner": "urn:li:corpuser:jdoe",
"type": "DEVELOPER"
},
{
"owner": "urn:li:corpuser:jdub",
"type": "DATAOWNER"
}
]
}
datahub --debug put --urn "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)" --aspect ownership -d ownership.json
[DATE_TIMESTAMP] DEBUG {datahub.cli.cli_utils:340} - Attempting to emit to DataHub GMS; using curl equivalent to:
curl -X POST -H 'User-Agent: python-requests/2.26.0' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' -H 'X-RestLi-Protocol-Version: 2.0.0' -H 'Content-Type: application/json' --data '{"proposal": {"entityType": "dataset", "entityUrn": "urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)", "aspectName": "ownership", "changeType": "UPSERT", "aspect": {"contentType": "application/json", "value": "{\"owners\": [{\"owner\": \"urn:li:corpuser:jdoe\", \"type\": \"DEVELOPER\"}, {\"owner\": \"urn:li:corpuser:jdub\", \"type\": \"DATAOWNER\"}]}"}}}' 'http://localhost:8080/aspects/?action=ingestProposal'
Update succeeded with status 200
put platform
The put platform command (available in versions > 0.8.44.4) instructs datahub to create or update metadata about a data platform. This is very useful if you are using a custom data platform, to set up its logo and display name for a native UI experience.
datahub put platform --name longtail_schemas --display_name "Long Tail Schemas" --logo "https://flink.apache.org/img/logo/png/50/color_50.png"
✅ Successfully wrote data platform metadata for urn:li:dataPlatform:longtail_schemas to DataHub (DataHubRestEmitter: configured to talk to https://longtailcompanions.acryl.io/api/gms with token: eyJh**********Cics)
timeline
The timeline command allows you to view a version history for entities. It is currently only supported for Datasets. For example,
the following command will show you the modifications to tags for a dataset over the past week. The output includes a computed semantic version (currently relevant for schema changes only), the target of the modification, and a description of the change including a timestamp.
The default output is sanitized to be more readable, but the full API output can be obtained by passing the --verbose flag; to get the raw JSON difference in addition to the API output, add the --raw flag. For more details about the feature, please see the main feature page.
datahub timeline --urn "urn:li:dataset:(urn:li:dataPlatform:mysql,User.UserAccount,PROD)" --category TAG --start 7daysago
2022-02-17 14:03:42 - 0.0.0-computed
MODIFY TAG dataset:mysql:User.UserAccount : A change in aspect editableSchemaMetadata happened at time 2022-02-17 20:03:42.0
2022-02-17 14:17:30 - 0.0.0-computed
MODIFY TAG dataset:mysql:User.UserAccount : A change in aspect editableSchemaMetadata happened at time 2022-02-17 20:17:30.118
telemetry
To help us understand how people are using DataHub, we collect anonymous usage statistics on actions such as command invocations via Mixpanel. We do not collect private information such as IP addresses, contents of ingestions, or credentials. The code responsible for collecting and broadcasting these events is open-source and can be found within our GitHub.
Telemetry is enabled by default, and the telemetry command lets you toggle the sending of these statistics via telemetry enable/disable.
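For example, to opt out and then later re-enable collection:
datahub telemetry disable
datahub telemetry enable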
migrate
The migrate group of commands allows you to perform certain kinds of migrations.
dataplatform2instance
The dataplatform2instance migration command allows you to migrate your entities from an instance-agnostic platform identifier to an instance-specific platform identifier. If you have ingested metadata in the past for this platform and would like to transfer any important metadata over to the new instance-specific entities, then you should use this command. For example, if your users have added documentation or added tags or terms to your datasets, then you should run this command to transfer this metadata over to the new entities. For further context, read the Platform Instance Guide here.
A few important options worth calling out:
- --dry-run / -n : Use this to get a report for what will be migrated before running
- --force / -F : Use this if you know what you are doing and do not want to get a confirmation prompt before migration is started
- --keep : When enabled, will preserve the old entities and not delete them. Default behavior is to soft-delete old entities.
- --hard : When enabled, will hard-delete the old entities.
Note: Timeseries aspects such as Usage Statistics and Dataset Profiles are not migrated over to the new entity instances; new data points will be created when you re-run ingestion using the usage sources or sources with profiling turned on.
Dry Run
datahub migrate dataplatform2instance --platform elasticsearch --instance prod_index --dry-run
Starting migration: platform:elasticsearch, instance=prod_index, force=False, dry-run=True
100% (25 of 25) |####################################################################################################################################################################################| Elapsed Time: 0:00:00 Time: 0:00:00
[Dry Run] Migration Report:
--------------
[Dry Run] Migration Run Id: migrate-5710349c-1ec7-4b83-a7d3-47d71b7e972e
[Dry Run] Num entities created = 25
[Dry Run] Num entities affected = 0
[Dry Run] Num entities migrated = 25
[Dry Run] Details:
[Dry Run] New Entities Created: {'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datahubretentionindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.schemafieldindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.system_metadata_service_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.tagindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataset_datasetprofileaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.mlmodelindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.mlfeaturetableindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datajob_datahubingestioncheckpointaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datahub_usage_event,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataset_operationaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datajobindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataprocessindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.glossarytermindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataplatformindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.mlmodeldeploymentindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datajob_datahubingestionrunsummaryaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.graph_service_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.datahubpolicyindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataset_datasetusagestatisticsaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dashboardindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.glossarynodeindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.mlfeatureindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.dataflowindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.mlprimarykeyindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,prod_index.chartindex_v2,PROD)'}
[Dry Run] External Entities Affected: None
[Dry Run] Old Entities Migrated = {'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataset_datasetusagestatisticsaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,mlmodelindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,mlmodeldeploymentindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datajob_datahubingestionrunsummaryaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datahubretentionindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datahubpolicyindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataset_datasetprofileaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,glossarynodeindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataset_operationaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,graph_service_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datajobindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,mlprimarykeyindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dashboardindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datajob_datahubingestioncheckpointaspect_v1,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,tagindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,datahub_usage_event,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,schemafieldindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,mlfeatureindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataprocessindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataplatformindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,mlfeaturetableindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,glossarytermindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,dataflowindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,chartindex_v2,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:elasticsearch,system_metadata_service_v1,PROD)'}
Real Migration (with soft-delete)
datahub migrate dataplatform2instance --platform hive --instance warehouse
Starting migration: platform:hive, instance=warehouse, force=False, dry-run=False
Will migrate 4 urns such as ['urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,fct_users_deleted,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,logging_events,PROD)']
New urns will look like ['urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.logging_events,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.fct_users_created,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.SampleHiveDataset,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.fct_users_deleted,PROD)']
Ok to proceed? [y/N]:
...
Migration Report:
--------------
Migration Run Id: migrate-f5ae7201-4548-4bee-aed4-35758bb78c89
Num entities created = 4
Num entities affected = 0
Num entities migrated = 4
Details:
New Entities Created: {'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.SampleHiveDataset,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.fct_users_deleted,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.logging_events,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,warehouse.fct_users_created,PROD)'}
External Entities Affected: None
Old Entities Migrated = {'urn:li:dataset:(urn:li:dataPlatform:hive,logging_events,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,SampleHiveDataset,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,fct_users_deleted,PROD)', 'urn:li:dataset:(urn:li:dataPlatform:hive,fct_users_created,PROD)'}
user
The user command allows you to interact with the User entity.
It currently supports the upsert operation, which can be used to create a new user or update an existing one.
For detailed information, please refer to Creating Users and Groups with Datahub CLI.
datahub user upsert -f users.yaml
An example of users.yaml would look like the following. You can refer to the bar.user.dhub.yaml file for the complete code.
- id: bar@acryl.io
first_name: The
last_name: Bar
email: bar@acryl.io
slack: "@the_bar_raiser"
description: "I like raising the bar higher"
groups:
- foogroup@acryl.io
- id: datahub
slack: "@datahubproject"
phone: "1-800-GOT-META"
description: "The DataHub Project"
picture_link: "https://raw.githubusercontent.com/datahub-project/datahub/master/datahub-web-react/src/images/datahub-logo-color-stable.svg"
group
The group command allows you to interact with the Group entity.
It currently supports the upsert operation, which can be used to create a new group or update an existing one with embedded Users.
For more information, please refer to Creating Users and Groups with Datahub CLI.
datahub group upsert -f group.yaml
An example of group.yaml would look like the following. You can refer to the foo.group.dhub.yaml file for the complete code.
id: foogroup@acryl.io
display_name: Foo Group
admins:
- datahub
members:
- bar@acryl.io # refer to a user either by id or by urn
- id: joe@acryl.io # inline specification of user
slack: "@joe_shmoe"
display_name: "Joe's Hub"
Alternate Installation Options
Using docker
If you don't want to install locally, you can alternatively run metadata ingestion within a Docker container. We have prebuilt images available on Docker Hub. All plugins will be installed and enabled automatically.
You can use the datahub-ingestion Docker image as explained in Docker Images. If you are using Kubernetes, you can start a pod with the datahub-ingestion Docker image, log onto a shell on the pod, and you will have access to the datahub CLI in your Kubernetes cluster.
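A rough sketch of that flow with kubectl (the pod name datahub-ingestion is a placeholder for whatever you named your pod):
# open a shell on the ingestion pod
kubectl exec -it datahub-ingestion -- /bin/bash
# inside the pod, the CLI should be available
datahub version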
Limitation: the datahub_docker.sh convenience script assumes that the recipe and any input/output files are accessible in the current working directory or its subdirectories. Files outside the current working directory will not be found, and you'll need to invoke the Docker image directly.
# Assumes the DataHub repo is cloned locally.
./metadata-ingestion/scripts/datahub_docker.sh ingest -c ./examples/recipes/example_to_datahub_rest.yml
Install from source
If you'd like to install from source, see the developer guide.
Installing Plugins
We use a plugin architecture so that you can install only the dependencies you actually need. Click the plugin name to learn more about the specific source recipe and any FAQs!
Sources
Please see our Integrations page if you want to filter on the features offered by each source.
Plugin Name | Install Command | Provides |
---|---|---|
file | included by default | File source and sink |
athena | pip install 'acryl-datahub[athena]' | AWS Athena source |
bigquery | pip install 'acryl-datahub[bigquery]' | BigQuery source |
datahub-lineage-file | no additional dependencies | Lineage File source |
datahub-business-glossary | no additional dependencies | Business Glossary File source |
dbt | no additional dependencies | dbt source |
druid | pip install 'acryl-datahub[druid]' | Druid Source |
feast | pip install 'acryl-datahub[feast]' | Feast source (0.26.0) |
glue | pip install 'acryl-datahub[glue]' | AWS Glue source |
hana | pip install 'acryl-datahub[hana]' | SAP HANA source |
hive | pip install 'acryl-datahub[hive]' | Hive source |
kafka | pip install 'acryl-datahub[kafka]' | Kafka source |
kafka-connect | pip install 'acryl-datahub[kafka-connect]' | Kafka connect source |
ldap | pip install 'acryl-datahub[ldap]' (extra requirements) | LDAP source |
looker | pip install 'acryl-datahub[looker]' | Looker source |
lookml | pip install 'acryl-datahub[lookml]' | LookML source, requires Python 3.7+ |
metabase | pip install 'acryl-datahub[metabase]' | Metabase source |
mode | pip install 'acryl-datahub[mode]' | Mode Analytics source |
mongodb | pip install 'acryl-datahub[mongodb]' | MongoDB source |
mssql | pip install 'acryl-datahub[mssql]' | SQL Server source |
mysql | pip install 'acryl-datahub[mysql]' | MySQL source |
mariadb | pip install 'acryl-datahub[mariadb]' | MariaDB source |
openapi | pip install 'acryl-datahub[openapi]' | OpenApi Source |
oracle | pip install 'acryl-datahub[oracle]' | Oracle source |
postgres | pip install 'acryl-datahub[postgres]' | Postgres source |
redash | pip install 'acryl-datahub[redash]' | Redash source |
redshift | pip install 'acryl-datahub[redshift]' | Redshift source |
sagemaker | pip install 'acryl-datahub[sagemaker]' | AWS SageMaker source |
snowflake | pip install 'acryl-datahub[snowflake]' | Snowflake source |
sqlalchemy | pip install 'acryl-datahub[sqlalchemy]' | Generic SQLAlchemy source |
superset | pip install 'acryl-datahub[superset]' | Superset source |
tableau | pip install 'acryl-datahub[tableau]' | Tableau source |
trino | pip install 'acryl-datahub[trino]' | Trino source |
starburst-trino-usage | pip install 'acryl-datahub[starburst-trino-usage]' | Starburst Trino usage statistics source |
nifi | pip install 'acryl-datahub[nifi]' | NiFi source |
powerbi | pip install 'acryl-datahub[powerbi]' | Microsoft Power BI source |
powerbi-report-server | pip install 'acryl-datahub[powerbi-report-server]' | Microsoft Power BI Report Server source |
Sinks
Plugin Name | Install Command | Provides |
---|---|---|
file | included by default | File source and sink |
console | included by default | Console sink |
datahub-rest | pip install 'acryl-datahub[datahub-rest]' | DataHub sink over REST API |
datahub-kafka | pip install 'acryl-datahub[datahub-kafka]' | DataHub sink over Kafka |
These plugins can be mixed and matched as desired. For example:
pip install 'acryl-datahub[bigquery,datahub-rest]'
Check the active plugins
datahub check plugins
Release Notes and CLI versions
The server release notes can be found in GitHub releases. These releases are done approximately every week on a regular cadence unless a blocking issue or regression is discovered.
CLI releases are made through a different repository, and release notes can be found in acryldata releases. At least one release, tied to the server release, is always made along with the server release. Multiple other bugfix releases are made in between, depending on the number of fixes merged between server releases.
If a server with version 0.8.28 is being used, then the CLI used to connect to it should be 0.8.28.x. Tests of a new CLI are not run against older server versions, so it is not recommended to update the CLI if the server has not been updated.
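For example, if your server is running 0.8.28, you could pin the CLI to the matching release line (a sketch using pip's wildcard pinning; pick the latest patch release that matches your server):
python3 -m pip install --upgrade 'acryl-datahub==0.8.28.*'
datahub version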