Commit faf06775 authored by Jens Bröder

Add mission statement, harvesting information and data pipeline information.

Table of contents configuration (excerpt):

```yaml
parts:
  numbered: False
  chapters:
    - file: introduction/about.md
      title: "About & Mission"
    - file: introduction/implementation.md
      title: "Implementation overview"
    - file: introduction/data_sources.md
```
# About UnHIDE and its mission
![unhide_overview](../images/unhide_overview.png)
## Mission
The unHIDE initiative is one part of the efforts by the Helmholtz Metadata Collaboration (HMC) to improve the quality, knowledge management, and conservation of the research output of the Helmholtz association with respect to, and through, metadata. This is accomplished by making research output `FAIR` through better metadata, or, put differently, by creating to a certain extent a form of semantic web encompassing Helmholtz research.
With the unHIDE initiative, our goal is to improve metadata at the source and to make data providers as well as scientists more aware of what metadata they put out on the web, how, and with what quality.
For this we create and expose the Helmholtz knowledge graph, which contains open high-level metadata exposed by different Helmholtz infrastructures. Such a graph also enables services that address the needs of certain stakeholder groups and empower their work in different ways.
Beyond the knowledge graph, within unHIDE we communicate and work together with Helmholtz infrastructures to improve metadata, or to make it available in the first place, through consulting and support and by fostering networking between the infrastructures and the respective experts.
# Data pipeline
In UnHIDE, metadata about research outputs is harvested from connected providers and partners. The original metadata is then 'uplifted', i.e. semantically enriched and/or completed where possible, for example from aggregated data or schema.org semantics.
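To make the uplifting step concrete, here is a minimal Python sketch; the enrichment rules and the default provider value are purely hypothetical examples, not the actual UnHIDE implementation:

```python
# Minimal sketch of the 'uplifting' step: enrich or complete a
# harvested schema.org record. The rules shown here (default context,
# default provider) are hypothetical, not the actual UnHIDE code.

def uplift(record: dict) -> dict:
    """Return a semantically enriched copy of a JSON-LD record."""
    uplifted = dict(record)
    # Complete the record with schema.org semantics where missing.
    uplifted.setdefault("@context", "https://schema.org/")
    # Complete missing fields from aggregated data, e.g. a provider
    # known from the harvested source (hypothetical example).
    uplifted.setdefault("provider", {"@type": "Organization",
                                     "name": "Helmholtz Association"})
    return uplifted


print(uplift({"@type": "Dataset", "name": "Example dataset"}))
```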
## Overview
The second direction is there to provide full-text search on the data to end users.
For this, an index of each uplifted data record is constructed and uploaded into a single SOLR index,
which is exposed to a certain extent via a custom FastAPI application. A web front end using the JavaScript library
React provides a user interface for the full-text search and supports special use cases as a service
to certain stakeholder groups.
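As an illustration of this search path, the following is a minimal sketch assuming a local SOLR core named `unhide` and the `pysolr` client; the endpoint path and parameters are illustrative, not the actual UnHIDE API:

```python
# Minimal sketch of a full-text search endpoint over a SOLR index.
# Core name, endpoint path, and parameters are illustrative only.
import pysolr
from fastapi import FastAPI

app = FastAPI()
solr = pysolr.Solr("http://localhost:8983/solr/unhide", timeout=10)


@app.get("/search")
def search(q: str, rows: int = 10):
    """Run a full-text query against the SOLR index."""
    results = solr.search(q, rows=rows)
    return {"num_found": results.hits, "docs": list(results)}
```

A React front end then calls such an endpoint, e.g. `GET /search?q=climate`, to render search results.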
The technical implementation is currently a minimal running version: each component and functionality is exposed through the command-line interface `hmc-unhide`, and cron jobs run them from time to time. On the deployment instance this can be run monthly or weekly. In the longer term, the pipeline orchestration itself should become more sophisticated. For this, one could deploy a workflow manager with provenance tracking like AiiDA, or one with less overhead, depending on the needs, and also depending on whether one wants to move to a more event-based system with more fault tolerance for errors in individual records or data sources. Currently, in the minimal implementation, there is the risk that an uncaught failure in a subtask fails a larger part of the pipeline; such a failure is only logged and has to be resolved manually.
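For example, a crontab entry along the following lines could drive a monthly run; the exact `hmc-unhide` subcommand is an assumption, so check the CLI help for the real invocation:

```
# Hypothetical crontab entry: run the pipeline at 03:00 on the first
# day of each month and append output to a log file.
0 3 1 * * hmc-unhide pipeline >> /var/log/unhide-pipeline.log 2>&1
```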
# Data harvesting: extracting metadata from the web
How does UnHIDE harvest data?
Data harvesting and mining for the knowledge graph is done by harvester classes; for each interface, a specific harvester class should be implemented.
All harvester classes should inherit from existing harvesters or from the [`BaseHarvester`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/base_harvester.py?ref_type=heads), which currently specifies that each harvester:
1. needs a `run` method;
2. can read from the [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads);
3. reads from a `<harvesterclass>.last_run` file the time the harvester was last run (see the sketch after this list).
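A new harvester could then look roughly like the following sketch; apart from the `run` method, all names and signatures are assumptions derived from the contract above, not the actual `BaseHarvester` API:

```python
# Sketch of a custom harvester following the BaseHarvester contract:
# a `run` method, access to config.yml, and a `<class>.last_run` file.
# Names other than `run` are assumptions, not the real API.
from datetime import datetime, timezone
from pathlib import Path

import yaml


class MyHarvester:  # would inherit from BaseHarvester in the real code
    last_run_file = Path("MyHarvester.last_run")

    def __init__(self, config_path: str = "config.yaml"):
        # Read the harvester-specific section of the central config.
        with open(config_path, encoding="utf-8") as fh:
            self.config = yaml.safe_load(fh).get("MyHarvester", {})

    def get_last_run(self) -> datetime | None:
        # Recover the time of the last run, if any.
        if self.last_run_file.exists():
            return datetime.fromisoformat(self.last_run_file.read_text().strip())
        return None

    def run(self) -> None:
        since = self.get_last_run()
        for source in self.config.get("sources", []):
            ...  # fetch records from `source` newer than `since`
        # Record this run for the next incremental harvest.
        self.last_run_file.write_text(datetime.now(timezone.utc).isoformat())
```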
Implemented harvester classes include:

| Name (CLI) | Class Name | Interface | Comment |
|------------|------------|-----------|---------|
| sitemap | SitemapHarvester | Sitemaps | Selecting record links from the sitemap requires expression matching. Relies on the advertools lib. |
| oai | OAIHarvester | OAI-PMH | Relies on the oai lib. For the library providers, Dublin Core is converted to schema.org. |
| git | GitHarvester | Git, GitLab/GitHub APIs | Relies on codemetapy and codemeta-harvester as well as the GitLab/GitHub APIs. |
| datacite | DataciteHarvester | REST API & GraphQL endpoint | schema.org extracted through content negotiation. |
| feed | FeedHarvester | RSS & Atom feeds | Relies on the atoma library; only works if schema.org metadata can be extracted from the landing pages. Can only get recent data; useful for event metadata. |
| indico | IndicoHarvester | Indico REST API | Directly extracts schema.org metadata through the API; requires an access token. |
JSON-LD metadata from the landing pages of records is extracted via the `extruct` library if it cannot be retrieved directly through some standardized interface.
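For example, extracting JSON-LD from a landing page with `extruct` could look like this (the URL is a placeholder):

```python
# Extract JSON-LD metadata from a record landing page with extruct.
import extruct
import requests
from w3lib.html import get_base_url

url = "https://example.org/record/123"  # placeholder landing page
response = requests.get(url, timeout=30)
base_url = get_base_url(response.text, response.url)
data = extruct.extract(response.text, base_url=base_url, syntaxes=["json-ld"])
for item in data["json-ld"]:
    print(item.get("@type"), item.get("name"))
```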
All harvesters are exposed on the `hmc-unhide` command-line interface.
By default, they store the extracted metadata in the internal data model [`LinkedDataObject`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/data_model.py?ref_type=heads), which has a serialization containing some provenance information, the original source data, and the uplifted data, and which provides methods for validation.
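Conceptually, a serialized `LinkedDataObject` therefore bundles these parts. The field names in the following sketch are hypothetical; see the linked `data_model.py` for the actual definition:

```python
# Conceptual sketch of the LinkedDataObject data model; field names
# are hypothetical, see data_model.py for the actual definition.
from pydantic import BaseModel, Field


class LinkedDataObject(BaseModel):
    metadata: dict = Field(default_factory=dict)   # provenance information
    original: dict = Field(default_factory=dict)   # data as harvested
    derived: dict = Field(default_factory=dict)    # uplifted data

    def validate_record(self) -> bool:
        """Check the uplifted data, e.g. against schema.org shapes."""
        return bool(self.derived)
```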
A single central YAML configuration file, [`config.yml`](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/main/data_harvesting/configs/config.yaml?ref_type=heads), specifies for each harvester class the sources to harvest as well as harvester- or source-specific configuration.
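Its structure could look roughly like the following; all keys and values here are illustrative, not the actual file content:

```yaml
# Illustrative sketch of a possible config.yml structure,
# not the actual UnHIDE configuration file.
SitemapHarvester:
  sources:
    example-center:
      url: "https://example.org/sitemap.xml"
      match_pattern: ".*/record/.*"
OAIHarvester:
  sources:
    example-library:
      url: "https://example.org/oai2"
      metadata_prefix: "oai_dc"
```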