Logged in as kurzum
The databus artifact will be available under https://databus.dbpedia.org/kurzum/$group/$artifact/$version
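For illustration, the placeholders expand as follows (a minimal sketch; the group, artifact and version values here are invented, not real Databus identifiers):

```python
# Hypothetical example values -- $group, $artifact and $version are placeholders
group, artifact, version = "mammals", "mammal-labels", "2024.01.01"

url = f"https://databus.dbpedia.org/kurzum/{group}/{artifact}/{version}"
print(url)
# https://databus.dbpedia.org/kurzum/mammals/mammal-labels/2024.01.01
```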

CHOICE: 1. Modify existing, 2. Add new version of existing, or 3. Create new

Versioned Dataset

Databus follows software design principles to package data, so that any software you and others build on the data will work. By specifying a group and artifact id you create the following contract or expectations:

Group and Artifact

define the abstract identity of the dataset across versions.
Each version is therefore still considered the same dataset, just a different version. A good analogy is updating the OS version of your Android phone, or using a Web API or software library: both are versioned as well, and if they change too much, apps break.

* Datasets can contain any files, but:
  * A new version should roughly contain the same set of files with similarly structured content.
  * Don't put all files in one dataset; follow [Low Coupling and High Cohesion](https://en.wikipedia.org/wiki/Coupling_(computer_programming)), e.g. keep schematic information (ontologies, XSD, XML Schema) in a separate artifact that is versioned independently.
  * Think about the lifecycle of each individual file. Are they all updated together? If some files are updated daily and others yearly, you will probably end up creating new daily versions of the yearly-changing files as well.
  * Fluctuation and change are normal in the beginning; don't hesitate to use the Databus to gradually improve and stabilize your data, following an Agile Data Engineering methodology.


Software versions are driven by code edits, features and functionality and normally follow the MAJOR.MINOR.PATCH pattern of semver.org. Data, however, is more time-driven, e.g. a weekly database dump, a daily web crawl, or output produced by running extraction or AI software. Schemas and ontologies follow both patterns. To accommodate this, Databus enforces only one rule for versioning:

Version numbers are sorted alphanumerically, e.g.:

    2018.01.01 > 2017.01.01
    1.5.2 > 1.11.2 > 01.05.02
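The alphanumeric rule is just a plain string sort, which a minimal Python sketch can verify. Note the counter-intuitive consequence shown above: unpadded numeric components make `1.11.2` rank below `1.5.2`, so zero-padding (`01.05.02`-style) is advisable if you want numeric ordering.

```python
versions = ["2017.01.01", "2018.01.01", "1.5.2", "1.11.2", "01.05.02"]

# Plain lexicographic string sort -- this is how Databus orders versions
for v in sorted(versions):
    print(v)
# 01.05.02
# 1.11.2
# 1.5.2
# 2017.01.01
# 2018.01.01
```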


1. The system is intended to be flexible and you can also repurpose it.
2. Each new version requires us to store around 20 triples or records per file in the Databus API. Please don't update more than 5000 files per month for now.
3. The Databus Client handles compression, format conversion and mapping via the "download as" function. It is unnecessary to upload duplicate files that differ only in compression or format.
see above "[A-Za-z0-9_\\-.]+", locked if existing, editable if new
see above "[A-Za-z0-9_\\-.]+", locked if overwrite/update, editable if new, best to show the order in relation to other versions
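The identifier pattern above can be checked client-side before submission. A minimal sketch using Python's `re` module (the helper name and the use of `fullmatch` are assumptions here, not part of the Databus API):

```python
import re

# Allowed characters for group/artifact/version identifiers, per the pattern above
ID_PATTERN = re.compile(r"[A-Za-z0-9_\-.]+")

def is_valid_id(s: str) -> bool:
    """True if the whole string matches the allowed identifier characters."""
    return ID_PATTERN.fullmatch(s) is not None

print(is_valid_id("mammal-labels"))   # True
print(is_valid_id("mammal labels"))   # False (space not allowed)
```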


We require machine-readable, standardized licenses. We collect all valid license URLs in the DBpedia Knowledge Library in collaboration with the DALICC License Library (https://dalicc.net/license-library).
If you prefer customized, hard-to-understand, unclear or missing licenses, then the Databus is not the right platform for you.
see above one-of, autocomplete
I hereby confirm that I have cleared the license of the dataset and entered the URL to the best of my knowledge. It is good practice to document the license by copying a relevant snippet from the original website and including a timestamp. required



obvious markdown

File Urls

Tricky, discussion necessary


Publish the metadata on the bus and also host the metadata for me
I would like to receive the metadata dataid.ttl file to put it next to the data first (Decentralisation)
Shibboleet. I would like to receive the pom.xml to automate posting on the bus.