The computer running the catalog collector should have internet connectivity or access to the source instance, a minimum of 2 GB of memory, and a 2 GHz processor.
The user defined to run DWCC must have read access to all resources being cataloged.
Request a download link for the catalog collector from your data.world representative. Once you receive the link, download the catalog collector Docker image, either manually or programmatically with curl.
Load the Docker image into the local computer's Docker environment:
docker load -i dwdbt-X.Y.tar.gz
where X.Y is the version number of the dbt collector image.
The previous command returns an <image id>, which needs to be tagged as dwdbt. Copy the <image id> and use it in the docker tag command:
docker tag <image id> dwdbt
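Because `docker load` prints the image ID rather than assigning a name, the load-and-tag steps above can be scripted. The helper below is a sketch: it extracts the ID from the `Loaded image ID:` line that `docker load` prints. Note that if the archive already carries a repository tag, `docker load` prints `Loaded image: <name>` instead and no tagging is needed.

```shell
# Extract the image ID from `docker load` output, which ends with a line
# like: "Loaded image ID: sha256:abc123..."
extract_image_id() {
  sed -n 's/^Loaded image ID: //p'
}

# Usage (requires Docker; dwdbt-X.Y.tar.gz is the downloaded archive):
#   docker tag "$(docker load -i dwdbt-X.Y.tar.gz | extract_image_id)" dwdbt
```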
The following parameters are used to run the dbt collector. Where available, either the short form (e.g., -a) or the long form (e.g., --account) can be used.
Do not forget to replace x.y in the command datadotworld/dwcc:x.y catalog with the version of DWCC you want to use.
For JDBC sources, DWCC harvests metadata for everything that the user specified for the connection has access to. To restrict what is cataloged, specify the database and schema as appropriate.
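As a hypothetical illustration only (the --database and --schema flag names and the placeholder values are assumptions; confirm the exact parameter names against the parameter table for your collector version), a restricted run might look like:

```
docker run -it --rm datadotworld/dwcc:x.y catalog \
  --database=<database-name> \
  --schema=<schema-name> \
  ...remaining required parameters...
```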
The catalog collector may take anywhere from several seconds to many minutes to run, depending on the size and complexity of the system being crawled.
Keep your metadata catalog up to date using cron, your Docker container, or your automation tool of choice to run the catalog collector on a regular basis. Considerations for how often to schedule include:
Frequency of changes to the schema
Business criticality of up-to-date data
For organizations with schemas that change often and where surfacing the latest data is business critical, a daily run may be appropriate. For those with schemas that change infrequently and where data currency is less critical, a weekly or even monthly run may make sense. Consult your data.world representative for more tailored recommendations on how best to optimize your catalog collector processes.
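For example, a daily run can be scheduled with cron (the image name, collector parameters, and log location below are assumptions to adapt to your setup):

```
# Run the catalog collector daily at 2:00 AM via the local crontab.
# Edit with `crontab -e`; adjust the five schedule fields for weekly
# or monthly runs as appropriate.
0 2 * * * docker run --rm dwdbt [collector parameters] >> /var/log/dwdbt.log 2>&1
```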