linked-data
Projects with this topic
This repository contains docker compose deployment files for the whole software stack behind the Helmholtz Knowledge Graph, developed as part of the unHIDE initiative.
The deployment includes containers for the harvesters and utilities, the API, SOLR, Virtuoso, and the web frontend, as well as for nginx and letsencrypt.
More information on the unHIDE initiative, which was launched by the Helmholtz Metadata Collaboration (HMC), can be found at https://docs.unhide.helmholtz-metadaten.de.
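A quick way to see whether the main services of such a deployment came up is a small health check. The sketch below is illustrative only: the hostnames and ports (localhost, SOLR's default 8983, Virtuoso's default 8890) are assumptions and are not taken from the repository's compose files.

```python
"""Minimal health-check sketch for a deployment like the one described above.
Hostnames and ports are assumptions (typical service defaults), not values
from the repository's docker compose files."""
import urllib.request

SERVICES = {
    "web frontend (nginx)": "http://localhost:80/",
    "SOLR system info": "http://localhost:8983/solr/admin/info/system",
    "Virtuoso SPARQL endpoint": "http://localhost:8890/sparql?query=ASK%20%7B%7D",
}

def check(name: str, url: str) -> None:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except Exception as exc:  # connection refused, timeout, HTTP error, ...
        print(f"{name}: unreachable ({exc})")

if __name__ == "__main__":
    for name, url in SERVICES.items():
        check(name, url)
```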
Set of tools to harvest, process, and uplift (meta)data from metadata providers within the Helmholtz Association for inclusion in the Helmholtz Knowledge Graph (Helmholtz-KG). The harvested linked data, in the form of schema.org JSON-LD, is aggregated and uplifted in data pipelines to be included in a single large knowledge graph (KG). The tool set and harvesters can be used as a Python library or via a command-line interface (CLI, hmc-unhide). Provenance of metadata changes is tracked rudimentarily by saving graph patches of changes on rdflib Graph data structures at the semantic triple level; a sketch of this idea follows below. Harvesters support extracting data via sitemaps, the GitLab API, the DataCite API, and OAI-PMH endpoints.
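As an illustration of triple-level change tracking, the sketch below parses two versions of a small schema.org JSON-LD record with rdflib and derives the removed and added triples as a patch. The example records and the use of rdflib.compare.graph_diff are assumptions for illustration, not the tool set's actual implementation.

```python
"""Illustrative sketch: represent the change between two versions of a
harvested schema.org JSON-LD record as a triple-level patch with rdflib
(requires rdflib >= 6 for built-in JSON-LD parsing)."""
from rdflib import Graph
from rdflib.compare import graph_diff, to_isomorphic

# Inline context so the example needs no network access; records are made up.
CONTEXT = '{"name": "http://schema.org/name", "license": "http://schema.org/license"}'
OLD = ('{"@context": ' + CONTEXT + ', "@id": "https://example.org/dataset/1", '
       '"name": "Old title"}')
NEW = ('{"@context": ' + CONTEXT + ', "@id": "https://example.org/dataset/1", '
       '"name": "New title", "license": "https://spdx.org/licenses/CC-BY-4.0"}')

old_g = Graph().parse(data=OLD, format="json-ld")
new_g = Graph().parse(data=NEW, format="json-ld")

# Canonicalise both graphs, then split the triples into shared / only-old /
# only-new; the latter two sets form the patch that could be kept as provenance.
_, removed, added = graph_diff(to_isomorphic(old_g), to_isomorphic(new_g))

for triple in removed:
    print("-", triple)
for triple in added:
    print("+", triple)
```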
Documentation of the unHIDE initiative, which is hosted at https://docs.unhide.helmholtz-metadaten.de.
The documentation includes detailed information on the scope of the initiative, information for stakeholders and users, as well as developer information. For data providers it offers basic information about metadata, how to connect to the graph, and what FAIR metadata can look like and how it should be exposed for high visibility on the internet. Users find documentation of the available endpoints, i.e. the web front end, the SPARQL API, and the API; a small query example follows below. Developers will find technical documentation on the software stack and its deployment.
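For instance, the SPARQL endpoint documented there can be queried programmatically. The sketch below uses the SPARQLWrapper library with a placeholder endpoint URL (the real URL is listed in the documentation) and counts schema.org Dataset records in the graph.

```python
"""Illustrative sketch: query a SPARQL endpoint of the knowledge graph with
SPARQLWrapper. The endpoint URL is a placeholder -- take the real one from
the unHIDE documentation linked above."""
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/sparql"  # placeholder, see the unHIDE docs

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery("""
    PREFIX schema: <http://schema.org/>
    SELECT (COUNT(?dataset) AS ?n)
    WHERE { ?dataset a schema:Dataset . }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
print("datasets in the graph:", results["results"]["bindings"][0]["n"]["value"])
```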