From faf06775d33b4690ee1c73c7e5e4f938fa7a8059 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Jens=20Br=C3=B6der?= <j.broeder@fz-juelich.de>
Date: Wed, 13 Dec 2023 16:10:06 +0100
Subject: [PATCH] Add mission statement, harvesting information and data pipeline information.

---
 docs/_toc.yml              |  2 +-
 docs/introduction/about.md | 14 ++++++++++++--
 docs/tech/datapipe.md      | 20 ++++++++++++++++----
 docs/tech/harvesting.md    | 31 +++++++++++++++++++++++++++++--
 4 files changed, 58 insertions(+), 9 deletions(-)

diff --git a/docs/_toc.yml b/docs/_toc.yml
index 8fde307f..8741fb2c 100644
--- a/docs/_toc.yml
+++ b/docs/_toc.yml
@@ -9,7 +9,7 @@ parts:
   numbered: False
   chapters:
   - file: introduction/about.md
-    title: "About"
+    title: "About & Mission"
   - file: introduction/implementation.md
     title: "Implementation overview"
   - file: introduction/data_sources.md
diff --git a/docs/introduction/about.md b/docs/introduction/about.md
index ff39d4af..0c5798d9 100644
--- a/docs/introduction/about.md
+++ b/docs/introduction/about.md
@@ -1,3 +1,13 @@
-# About UnHIDE
+# About UnHIDE and its mission
 
-
+## Mission
+
+The unHIDE initiative is one part of the efforts by the Helmholtz Metadata Collaboration (HMC) to improve the quality, knowledge management and preservation of the research output of the Helmholtz association through metadata. This is accomplished by making research output `FAIR` through better metadata, or, put differently, by creating to a certain extent a form of semantic web encompassing Helmholtz research.
+
+With the unHIDE initiative our goal is to improve metadata at the source and to make data providers as well as scientists more aware of what metadata they put out on the web, how, and with what quality.
+For this we create and expose the Helmholtz knowledge graph, which contains open high-level metadata exposed by different Helmholtz infrastructures. Such a graph also enables services that serve the needs of certain stakeholder groups and empower their work in different ways.
+
+Beyond the knowledge graph, in unHIDE we communicate and work together with Helmholtz infrastructures to improve metadata, or make it available in the first place, through consulting, help and by fostering networking between the infrastructures and the respective experts.
+
+
+
\ No newline at end of file
diff --git a/docs/tech/datapipe.md b/docs/tech/datapipe.md
index a1446d8c..e9025771 100644
--- a/docs/tech/datapipe.md
+++ b/docs/tech/datapipe.md
@@ -1,8 +1,8 @@
 # Data pipeline
 
-In UnHIDE data is harvested from connected providers and partners.
-Then data is 'uplifted', i.e semantically enriched and or completed,
-where possible from aggregated data or schema.org semantics.
+In UnHIDE, metadata about research outputs is harvested from connected providers and partners.
+The original metadata is then 'uplifted', i.e. semantically enriched and/or completed
+where possible, for example from aggregated data or schema.org semantics.
 
 ## Overview
 
@@ -36,4 +36,16 @@ The second direction is there to provide full text search on the data to end use
 For this an index of each uplifted data record is constructed and uploaded into a single SOLR index, which is exposed to a certain extend via a custom fastAPI.
 A web front end using the javascript library React provides a user interface for
 the full text search and supports special use cases as a service
-to certain stakeholder groups.
\ No newline at end of file
+to certain stakeholder groups.
+
+
+The technical implementation is currently a minimal running version: each
+component and functionality is exposed through the command line interface `hmc-unhide` and
+then run periodically via cron jobs. On the deployment instance this can be run monthly or
+weekly. In the longer term, the pipeline orchestration itself should become more sophisticated.
+For this, one could deploy a workflow manager with provenance tracking like AiiDA,
+or one with less overhead depending on the needs, especially if one wants to move to a more event-based system
+with more fault tolerance for failures of individual records or data sources. Currently,
+in the minimal implementation there is the risk that an uncaught failure in a subtask
+fails a larger part of the pipeline, which is then only logged and has to be resolved manually.
+
diff --git a/docs/tech/harvesting.md b/docs/tech/harvesting.md
index f5240436..5970eecf 100644
--- a/docs/tech/harvesting.md
+++ b/docs/tech/harvesting.md
@@ -1,3 +1,30 @@
-# Data harvesting
+# Data harvesting: extracting metadata from the web
 
-How does UnHIDE harvested data?
\ No newline at end of file
+How does UnHIDE harvest data?
+
+Data harvesting and mining for the knowledge graph is done by `Harvester classes`.
+For each interface a specific Harvester class should be implemented.
+All Harvester classes should inherit from existing Harvesters or the [`BaseHarvester`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/base_harvester.py?ref_type=heads), which currently specifies that each harvester:
+
+1. needs a `run` method
+2. can read from the [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads)
+3. reads the time it was last run from a `<harvesterclass>.last_run` file
+
+Implemented harvester classes include:
+
+| Name (CLI) | Class Name | Interface | Comment |
+|-------------|------------|-----------|---------|
+| sitemap | SitemapHarvester | sitemaps | Selecting record links from the sitemap requires expression matching. Relies on the advertools lib. |
+| oai | OAIHarvester | OAI-PMH | Relies on the oai lib. For the library providers, Dublin Core is converted to schema.org. |
+| git | GitHarvester | Git, GitLab/GitHub API | Relies on codemetapy and codemeta-harvester as well as the GitLab/GitHub APIs. |
+| datacite | DataciteHarvester | REST API & GraphQL endpoint | schema.org metadata is extracted through content negotiation. |
+| feed | FeedHarvester | RSS & Atom feeds | Relies on the atoma library and only works if schema.org metadata can be extracted from the landing pages. Can only get recent data; useful for event metadata. |
+| indico | IndicoHarvester | Indico REST API | Directly extracts schema.org metadata through the API; requires an access token. |
+
+JSON-LD metadata from landing pages of records is extracted via the `extruct` library if it cannot be retrieved directly through some standardized interface.
+
+All harvesters are exposed on the `hmc-unhide` command line interface.
+By default, they store the extracted metadata in the internal data model [`LinkedDataObject`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/data_model.py?ref_type=heads),
+which has a serialization with some provenance information, the original source data and the uplifted data, and provides methods for validation.
+
+A single central YAML configuration file called [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads) specifies for each harvester class the sources to harvest as well as harvester- or source-specific configuration.
\ No newline at end of file
-- 
GitLab
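The datapipe.md changes above describe running the `hmc-unhide` components periodically via cron. As a rough illustration, a crontab entry for a monthly run might look like the sketch below; the `harvest` subcommand and all paths are placeholders, since the actual `hmc-unhide` subcommands are not documented in this patch (check `hmc-unhide --help` on the deployment instance for the real names):

```
# Hypothetical crontab sketch: run the harvesting step at 03:00 on the
# first day of each month and append output to a log file.
# "harvest" is a placeholder subcommand, not a confirmed part of the CLI.
0 3 1 * * /usr/local/bin/hmc-unhide harvest >> /var/log/unhide/harvest.log 2>&1
```

As the patch notes, an uncaught failure in such a cron-driven subtask is only logged, which is one motivation for moving to a workflow manager with per-record fault tolerance.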