Commit bc4531ab authored by FionaDmello

cleaned up documentation repo for docusaurus based website

parent 54098f7d
Merge request !9: Docusaurus based documentation
Pipeline #410497 failed
Showing 46 additions and 1154 deletions
# Created by https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks
# Edit at https://www.toptal.com/developers/gitignore?templates=python,jupyternotebooks
### JupyterNotebooks ###
# gitignore template for Jupyter Notebooks
# website: http://jupyter.org/
/docs/unhide-docs
/docs/.python-version
.ipynb_checkpoints
*/.ipynb_checkpoints/*
# IPython
profile_default/
ipython_config.py
# Remove previous ipynb_checkpoints
# git rm -r .ipynb_checkpoints/
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
# IPython
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# End of https://www.toptal.com/developers/gitignore/api/python,jupyternotebooks
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*
<img src="https://s3.desy.de/hackmd/uploads/upload_c3ba77674d5c58417c6df0f195b0c4ac.png" alt="unHIDE logo" height = "75">
# Website
## unHIDE - unified Helmholtz Information and Data Exchange
This website is built using [Docusaurus](https://docusaurus.io/), a modern static website generator.
### Authors
Pier Luigi Buttigieg [<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/ORCID_iD.svg/2048px-ORCID_iD.svg.png" alt="ORCID Logo" height ="20"> 0000-0002-4366-3088](https://orcid.org/0000-0002-4366-3088)
Volker Hofmann [<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/ORCID_iD.svg/2048px-ORCID_iD.svg.png" alt="ORCID Logo" height ="20"> 0000-0002-5149-603X](https://orcid.org/0000-0002-5149-603X)
### Installation

```
$ yarn
```

### Local Development

```
$ yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

### Build

```
$ yarn build
```

This command generates static content into the `build` directory and can be served using any static content hosting service.

### Deployment

Using SSH:

```
$ USE_SSH=true yarn deploy
```

Not using SSH:

```
$ GIT_USER=<Your GitHub username> yarn deploy
```

If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.

## Introduction & Scope

Research across the Helmholtz Association (HGF) depends and thrives on a complex network of inter- and multidisciplinary collaborations which spans its 18 Centres and beyond.

However, the (meta)data generated through the HGF's research and operations is typically siloed within institutional infrastructure and often within individual teams. The result is that the wealth of the HGF's (meta)data is stored and maintained in a scattered manner, and cannot be used to its full value by scientists, managers, strategists, and policy makers.

To address this challenge, the Helmholtz Metadata Collaboration (HMC) is launching the **unified Helmholtz Information and Data Exchange (unHIDE)**. This initiative seeks to create a lightweight and sustainable interoperability layer to interlink data infrastructures and provide greater, cross-organisational access to the HGF's (meta)data and information assets. Using proven and globally adopted knowledge graph technology (Box 1), unHIDE will develop a comprehensive, association-wide Knowledge Graph (KG), the "Helmholtz-KG": a solution to connect (meta)data, information, and knowledge.

> *Box 1*
>
> What is a Knowledge Graph?
> - A "graph", from graph theory, is a structure that models pairwise connections between objects using "nodes" connected by "edges".
> - A "knowledge graph" uses such a graph structure to capture knowledge about how a collection of things (represented as nodes) relate to one another (via edges). This helps organisations keep track of their collective knowledge, especially in complex and rapidly changing scenarios.
> - Social networks are perhaps the best-known graphs; they store knowledge about who knows whom and how, what their interests are, what groups they belong to, and what content they create and interact with.

With the implementation of the Helmholtz-KG, unHIDE will create substantial additional value for the Helmholtz digital ecosystem and its interconnectivity.

**With the development of the Helmholtz-KG, unHIDE will:**
- increase discoverability and actionability of HGF data across the whole Association
- motivate enhancement of (meta)data quality [1] and interoperability
- provide overviews and diagnostics of the HGF dataspace and digital assets
- allow for traceable and reproducible recovery of (meta)data to enhance research
- support connectivity of HGF data to interact with global infrastructures and projects
- act as a central access and distribution point for stakeholders within and beyond the HGF

## Architecture & Implementation

### Foundational architecture

The Helmholtz Knowledge Graph (Helmholtz-KG) aims to enhance the HGF's digital capacities, transparency, and productivity through the dissemination and implementation of Linked Data principles (Box 2). Thus, unHIDE will build the Helmholtz-KG on mature web architecture and state-of-the-art semantic web technologies. This will ensure reliability and compatibility with global systems, while also exploring innovative approaches to maximise the Helmholtz-KG's ability to accelerate research and operations.

> *Box 2*
>
> Graph data is:
> - Open-world, allowing resilient operations with novel or unexpected data flows
> - Often faster to query than SQL with its associated JOIN operations
> - Better suited to integrating data from heterogeneous sources
> - Better suited to situations where the data model is complex and (rapidly) evolving
>
> **[Learn more: https://www.w3.org/2013/data/](https://www.w3.org/2013/data/)**

To ensure ease of use, the Helmholtz-KG will be built on a lightweight and internationally adopted interoperability architecture based on schema.org semantics and JSON-LD serialisation [2]. This architecture is widely used by data producers - including public, private, and governmental data systems - to link and expose scattered, diverse digital assets. By reusing this architecture, unHIDE will ensure that the Helmholtz-KG is able to natively interoperate with global systems.
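To make this concrete, here is a minimal sketch of a schema.org record serialised as JSON-LD (all URLs and values are placeholders, not an actual unHIDE record; a complete dataset template appears later in this documentation):

```json
{
  "@context": {
    "@vocab": "https://schema.org/"
  },
  "@type": "Dataset",
  "@id": "https://example.org/dataset/42",
  "name": "Example dataset",
  "description": "A short description of the dataset.",
  "license": "https://creativecommons.org/licenses/by/4.0/"
}
```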
### Modular design & Extensibility
While the foundation of the Helmholtz-KG will reuse standard web architectural elements and proven, globally adopted conventions, the KG itself is modular by nature: graphs can be merged, split, independently managed, and readily interfaced with other digital resources without compromising core integrity and functionality. In this manner, Helmholtz data scientists and engineers will be able to propose and test extensions to the graph with minimal overhead, which will support extending into existing and well-established systems in the HGF.
This modularity (especially the ability to securely and independently manage parts of the overall graph) will also make it possible to realise different modes of access to digital assets (e.g. respecting sensitivity and confidentiality but also permitting full openness). The initial implementation of the Helmholtz-KG will not contain sensitive or confidential data, but such capacities (e.g. user management, license recognition across (meta)data holdings, and authentication) can be explored and implemented once the core technology and operational procedures have stabilised.
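As an informal illustration of why graphs merge so readily: in Linked Data, two independently managed records that reference the same node IRI describe the same node once the graphs are combined. A hedged sketch with placeholder IRIs:

```json
[
  {
    "@context": {"@vocab": "https://schema.org/"},
    "@id": "https://example.org/person/1",
    "@type": "Person",
    "name": "Jane Doe"
  },
  {
    "@context": {"@vocab": "https://schema.org/"},
    "@id": "https://example.org/person/1",
    "affiliation": {"@id": "https://example.org/organization/1"}
  }
]
```

When these two documents are loaded into the same graph, both sets of statements attach to the single node `https://example.org/person/1`, without either provider needing to coordinate with the other.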
The backbone architecture of the Helmholtz Knowledge Graph will be licensed under [CC0/CCBY](https://creativecommons.org/about/cclicenses/) to enable crosswalks to the outside world and gain visibility as e.g. a sub-cloud of the [Linked Open Data Cloud](https://lod-cloud.net/).
### Inspiration
The implementation of the Helmholtz-KG architecture is inspired by the federation of stakeholders in IOC-UNESCO's Ocean Data and Information System (ODIS), interconnected by the [ODIS Architecture](https://book.oceaninfohub.org/) [2], and rendered into a knowledge graph federating over 50 partners across the globe by the Ocean InfoHub Project (OIH). Personnel from the HMC's Earth and Environment Hub chair the ODIS federation and lead the technical implementation of OIH, offering direct alignment with unHIDE.
## Data Sources
#### Initial Scope
Initial efforts of the Helmholtz-KG implementation will focus on the representation of (meta)data describing the following digital core assets:
- Documents / Publications
- Published Datasets
- Software
- Institutions
- Infrastructure & Resources
- Researchers & Experts
- Projects
The representation of these instances will be semantically aligned with the [schema.org](https://schema.org/docs/full.html) vocabulary, a globally adopted standard offering a relaxed frame for the representation of heterogeneous data. Following the initial implementation the semantic expressiveness of the graph can be increased by integrating domain ontologies such as the HMC developed [Helmholtz Digitization Ontology](https://codebase.helmholtz.cloud/hmc/hmc-public/hob/hdo) (HDO), which provides precise and comprehensive semantics of the concepts and practices used to manage digital assets.
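As a sketch of what such enrichment could look like (the `additionalType` value below is a placeholder, not a real HDO term URI), a schema.org-typed node can carry an additional domain-ontology type:

```json
{
  "@context": {
    "@vocab": "https://schema.org/"
  },
  "@type": "Dataset",
  "@id": "https://example.org/dataset/42",
  "additionalType": "https://example.org/hdo/PLACEHOLDER_DigitalObjectClass"
}
```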
#### Data Ingestion Process
The Helmholtz-KG will offer multiple options for existing and emerging HGF infrastructures, data providers, and communities to declare their resources and digital assets in the graph for discoverability. We will prioritise the recommended publishing process for structured data on the web (as used by ODIS/OIH and many others): data providers would either 1) provide a sitemap or robots.txt file which will direct harvesting software to a collection of JSON-LD/schema.org documents or 2) expose JSON-LD snippets in the document head element of a web resource (i.e. HTML document). Both approaches are described in the publisher documentation of the Ocean InfoHub Project [3].
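For illustration, the second option could look as follows on a hypothetical landing page (all URLs and values are placeholders):

```html
<html>
  <head>
    <title>Example dataset landing page</title>
    <!-- JSON-LD snippet that harvesting software can pick up -->
    <script type="application/ld+json">
    {
      "@context": {"@vocab": "https://schema.org/"},
      "@type": "Dataset",
      "@id": "https://example.org/dataset/42",
      "name": "Example dataset",
      "description": "A short description of the dataset."
    }
    </script>
  </head>
  <body>...</body>
</html>
```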
Alternative publication patterns may include -- HTTP-accessible [RO-Crate](https://www.researchobject.org/ro-crate/) [4] metadata in `ro-crate-metadata.json`, the exposure of structured metadata records via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) [5] or properly documented RESTful APIs in general [6]. We will explore the need and feasibility of alternate publishing and harvesting modes during the course of unHIDE.
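As a sketch of the RO-Crate alternative, a minimal `ro-crate-metadata.json` following the RO-Crate 1.1 convention might look like this (names and dates are placeholders):

```json
{
  "@context": "https://w3id.org/ro/crate/1.1/context",
  "@graph": [
    {
      "@id": "ro-crate-metadata.json",
      "@type": "CreativeWork",
      "conformsTo": {"@id": "https://w3id.org/ro/crate/1.1"},
      "about": {"@id": "./"}
    },
    {
      "@id": "./",
      "@type": "Dataset",
      "name": "Example research object",
      "description": "A short description of the crate's contents.",
      "datePublished": "2023-01-01",
      "license": {"@id": "https://creativecommons.org/licenses/by/4.0/"}
    }
  ]
}
```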
HMC personnel will support the onboarding of data providers as well as the implementation of custom (meta)data pipelines / connectors and mapping to RDF / JSON-LD, if necessary and where appropriate.
#### Potential Data Providers
Within the HGF, a number of relevant web-based data architectures exist. unHIDE will approach these to collaborate on building interfaces to the Helmholtz-KG.
The initial implementation will focus on:
- HGF institutional (data) repositories
- Central Libraries of Helmholtz Research Centers
- Domain-specific (data) repositories relevant to HGF
- Helmholtz GitLab Instances
Subsequent efforts will include further resources such as:
- Helmholtz FAIR digital objects (via HMC)
- Helmholtz Ontology Base (HOB) (via HMC)
- The Helmholtz [software directory](https://helmholtz.software/) (centrally maintained by HIFIS)
- [Helmholtz Data Challenges Platform](https://helmholtz-data-challenges.de/)
- other resources of the Helmholtz Metadata Collaboration (HMC)
- Content management systems (CMS)
- Helmholtz Computing centers (e.g. JSC)
- Helmholtz Federated IT Services (HIFIS)
- Helmholtz instruments and sensor databases (e.g. @GFZ, DEPAS @AWI, RDMInfoPool, etc.)
- Helmholtz Scientific Project Workflow Platform (HELIPORT)
- [Helmholtz Imaging Modalities database](https://modalities.helmholtz-imaging.de/)
- Laboratory information management systems (LIMS)
- Helmholtz Open Science Office
## References
- [1] https://5stardata.info/en/
- [2] https://www.w3.org/TR/2014/REC-json-ld-20140116/
- [3] https://book.oceaninfohub.org/publishing/publishing.html
- [4] https://www.researchobject.org/ro-crate/
- [5] https://www.openarchives.org/pmh/
This folder contains all the files of the documentation pages.
They are created using jupyter-book and hosted with GitLab Pages (for an example, see https://gitlab.com/pages/jupyterbook).
You can therefore build the docs with:
```
jupyter-book build docs
```
Ideally, the terms should live somewhere else and be included here automatically.
The same goes for the code documentation of the pipelines and their usage.
#######################################################################################
# A default configuration that will be loaded for all jupyter books
# Users are expected to override these values in their own `_config.yml` file.
# This is also the "master list" of all allowed keys and values.
#######################################################################################
# Book settings
title : The unified Helmholtz Information and Data Exchange (UnHIDE) Project # The title of the book. Will be placed in the left navbar.
author : The Helmholtz Metadata Collaboration (HMC) # The author of the book
copyright : "2022" # Copyright year to be placed in the footer
logo : images/unhide_logo.png # A path to the book logo
# Patterns to skip when building the book. Can be glob-style (e.g. "*skip.ipynb")
exclude_patterns : [_build, Thumbs.db, .DS_Store, "**.ipynb_checkpoints"]
# Auto-exclude files not in the toc
only_build_toc_files : false
bibtex_bibfiles:
- references.bib
#######################################################################################
# Execution settings
execute:
execute_notebooks : auto # Whether to execute notebooks at build time. Must be one of ("auto", "force", "cache", "off")
cache : "" # A path to the jupyter cache that will be used to store execution artifacts. Defaults to `_build/.jupyter_cache/`
exclude_patterns : [] # A list of patterns to *skip* in execution (e.g. a notebook that takes a really long time)
timeout : 30 # The maximum time (in seconds) each notebook cell is allowed to run.
run_in_temp : false # If `True`, then a temporary directory will be created and used as the command working directory (cwd),
# otherwise the notebook's parent directory will be the cwd.
allow_errors : false # If `False`, when a code cell raises an error the execution is stopped, otherwise all cells are always run.
stderr_output : show # One of 'show', 'remove', 'remove-warn', 'warn', 'error', 'severe'
#######################################################################################
# Parse and render settings
parse:
myst_enable_extensions: # default extensions to enable in the myst parser. See https://myst-parser.readthedocs.io/en/latest/using/syntax-optional.html
# - amsmath
- colon_fence
# - deflist
- dollarmath
# - html_admonition
# - html_image
- linkify
# - replacements
# - smartquotes
- substitution
- tasklist
myst_url_schemes: [mailto, http, https] # URI schemes that will be recognised as external URLs in Markdown links
myst_dmath_double_inline: true # Allow display math ($$) within an inline context
#######################################################################################
# HTML-specific settings
html:
favicon : images/favicon.png # A path to a favicon image
use_edit_page_button : false # Whether to add an "edit this page" button to pages. If `true`, repository information in repository: must be filled in
use_repository_button : true # Whether to add a link to your repository button
use_issues_button : true # Whether to add an "open an issue" button
use_multitoc_numbering : true # Continuous numbering across parts/chapters
extra_navbar : Powered by <a href="https://jupyterbook.org">Jupyter Book</a> # Will be displayed underneath the left navbar.
extra_footer : "" # Will be displayed underneath the footer.
google_analytics_id : "" # A GA id that can be used to track book views.
home_page_in_navbar : true # Whether to include your home page in the left Navigation Bar
baseurl : "" # The base URL where your book will be hosted. Used for creating image previews and social links. e.g.: https://mypage.com/mybook/
comments:
hypothesis : false
utterances : false
announcement : "" # A banner announcement at the top of the site.
#######################################################################################
# LaTeX-specific settings
latex:
latex_engine : pdflatex # one of 'pdflatex', 'xelatex' (recommended for unicode), 'luatex', 'platex', 'uplatex'
use_jupyterbook_latex : true # use sphinx-jupyterbook-latex for pdf builds as default
#######################################################################################
# Launch button settings
launch_buttons:
notebook_interface : classic # The interface interactive links will activate ["classic", "jupyterlab"]
binderhub_url : https://mybinder.org # The URL of the BinderHub (e.g., https://mybinder.org)
jupyterhub_url : "" # The URL of the JupyterHub (e.g., https://datahub.berkeley.edu)
thebe : false # Add a thebe button to pages (requires the repository to run on Binder)
colab_url : "" # The URL of Google Colab (https://colab.research.google.com)
repository:
url : https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/documentation # The URL to your book's repository
path_to_book : docs # A path to your book's folder, relative to the repository root.
branch : main # Which branch of the repository should be used when creating links
provider : gitlab
#######################################################################################
# Advanced and power-user settings
sphinx:
extra_extensions :
- 'sphinx.ext.autodoc'
- 'sphinx.ext.autosummary'
- 'sphinx.ext.viewcode' # A list of extra extensions to load by Sphinx (added to those already used by JB).
local_extensions : # A list of local extensions to load by sphinx specified by "name: path" items
recursive_update : false # A boolean indicating whether to overwrite the Sphinx config (true) or recursively update (false)
config : # key-value pairs to directly over-ride the Sphinx configuration
autosummary_generate: true
add_module_names: false
html_theme_options:
repository_provider: custom
# Autodoc config reference
# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration
autodoc_default_options:
members: true
member-order: bysource
undoc-members: true
private-members: false
# Table of contents
# Learn more at https://jupyterbook.org/customize/toc.html
format: jb-book
root: intro.md
parts:
- caption: Introduction
numbered: False
chapters:
- file: introduction/about.md
title: "About & Mission"
- file: introduction/implementation.md
title: "Implementation overview"
- file: introduction/data_sources.md
- caption: Data in UnHIDE
numbered: True
chapters:
- file: data/overview.md
title: "Overview"
- file: data/dataset.md
title: "Dataset"
- file: data/documents.md
title: "Documents"
- file: data/experts.md
title: "Experts"
- file: data/Institutions.md
title: "Institution"
- file: data/Instruments.md
title: "Instruments"
- file: data/software.md
title: "Software"
- file: data/training.md
title: "Training"
- caption: Interacting with UnHIDE data
numbered: False
chapters:
- file: interfaces/usecases.md
title: "Use case examples"
- file: interfaces/web.md
title: "Web search"
- file: interfaces/sparql.md
title: "SPARQL endpoint"
- file: interfaces/api.md
title: "REST API"
- caption: Related Knowledge
numbered: False
chapters:
- file: knowledge/overview.md
title: "Structured data on the web"
- file: knowledge/tools.md
title: "Tools around Linked Data"
- file: knowledge/other_graphs.md
title: "Other Graphs"
- caption: Technical implementation
numbered: True
chapters:
- file: tech/datapipe.md
title: Data pipeline
sections:
- file: tech/harvesting.md
title: "Data Harvesting"
- file: tech/uplifting.md
title: "Data uplifting"
- file: tech/backend.md
title: "Architecture"
sections:
- file: dev_guide/architecture/01_introduction_and_goals.md
- file: dev_guide/architecture/02_architecture_constraints.md
- file: dev_guide/architecture/03_system_scope_and_context.md
- file: dev_guide/architecture/04_solution_strategy.md
- file: dev_guide/architecture/05_building_block_view.md
- file: dev_guide/architecture/06_runtime_view.md
- file: dev_guide/architecture/07_deployment_view.md
- file: dev_guide/architecture/08_concepts.md
- file: dev_guide/architecture/09_architecture_decisions.md
- file: dev_guide/architecture/10_quality_requirements.md
- file: dev_guide/architecture/11_technical_risks.md
- file: dev_guide/architecture/12_glossary.md
# - caption: Code Documentation
# numbered: false
# chapters:
# - file: dev_guide/code_documentation/index.rst
# title: "Data Harvesting"
# Institutions
Metadata template for institutions. These are simply schema.org Organizations.
UnHIDE extracts them from individual records and tries to uplift this data with metadata provided
by ROR as well as ORCID.
```{literalinclude} ./graphs/institutionTemplate.json
:linenos:
```
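As a hedged illustration of this uplifting (the identifiers below are placeholders, not real ROR or ORCID IDs), a harvested organization node might be enriched as follows:

```json
{
  "@context": {"@vocab": "https://schema.org/"},
  "@type": "Organization",
  "@id": "https://ror.org/PLACEHOLDER",
  "name": "Example Research Centre",
  "url": "https://example-centre.org/",
  "identifier": {
    "@type": "PropertyValue",
    "propertyID": "ROR",
    "value": "https://ror.org/PLACEHOLDER"
  }
}
```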
# Instruments
Metadata template for Instruments:
```{literalinclude} ./graphs/instrumentTemplate.json
:linenos:
```
# Dataset
```{literalinclude} ./graphs/datasetTemplate.json
:linenos:
```
# Documents
Metadata template for Documents, which is a combined category from the schema.org terms:
[CreativeWork](https://schema.org/CreativeWork), [DigitalDocument](https://schema.org/DigitalDocument), [Thesis](https://schema.org/Thesis), [Report](https://schema.org/Report), [Article](https://schema.org/Article), ...
All entries of these types will therefore end up in the 'Documents' search bucket.
Be aware that this category may also contain entries which are not actually documents but are typed in the metadata
with the high-level schema.org class 'CreativeWork'.
```{literalinclude} ./graphs/documentTemplate.json
:linenos:
```
# Experts
The expert category contains Persons and Institutions combined, which are extracted from individual records.
```{literalinclude} ./graphs/expertTemplate.json
:linenos:
```
{
"@context": {
"@vocab": "https://schema.org/"
},
"@type": "Dataset",
"@id": "https://registry.org/permanentUrlToThisJsonDoc",
"name": "A concise but descriptive name of the dataset",
"description": "An extended, free-text description of what's in the dataset, who created it, and other attributes",
"url": "https://urlToTheDatasetOrLandingPage.org/",
"sameAs": [
"http://alternativeUrlToTheDatasetOrLandingPage.org"
],
"license": "This work is licensed under a Creative Commons Attribution (CC-BY) 4.0 License",
"citation": [
"Citation to other work relevant to this dataset",
"Citation to other work relevant to this dataset",
"Citation to other work relevant to this dataset"
],
"version": "2021-04-24T06:34:56.000Z",
"keywords": [
"Keyword 1",
"Keyword 2",
"Keyword 3"
],
"measurementTechnique": "The URL to or text about the methods, technique or technology used to generate this Dataset",
"variableMeasured": [
{
"@type": "PropertyValue",
"name": "Name of a variable in the dataset",
"description": "Extended description of this variable"
},
{
"@type": "PropertyValue",
"name": "Name of a variable in the dataset",
"url": "http://ontology.org/uriToSemanticDescriptorOfThisVariable",
"description": "Extended description of this variable?"
},
{
"@type": "PropertyValue",
"name": "SamplingDeviceApertureSurfaceArea",
"url": "http://ontology.org/uriToSemanticDescriptorOfThisVariable",
"description": "Extended description of this variable"
}
],
"includedInDataCatalog": {
"@id": "https://registryOfCatalogs.org/permanentUrlIdentifiyingCatalog",
"@type": "DataCatalog",
"url": "https://urlOfDataCatalog.org"
},
"temporalCoverage": "2007/2007",
"distribution": {
"@type": "DataDownload",
"contentUrl": "http://urlToDirectDownloadOfThisDataset.org/",
"encodingFormat": "text/csv"
},
"spatialCoverage": {
"@type": "Place",
"geo": {
"@type": "GeoShape",
"polygon": "142.014 10.161667,142.014 18.033833,147.997833 18.033833,147.997833 10.161667,142.014 10.161667"
},
"additionalProperty": {
"@type": "PropertyValue",
"propertyID": "http://dbpedia.org/resource/Spatial_reference_system",
"value": "http://www.w3.org/2003/01/geo/wgs84_pos#lat_long"
}
},
"provider": [
{
"@type": "Organization",
"legalName": "Legal Name of Organisation which generated the dataset",
"name": "Other Name of Organisation which generated the dataset",
"url": "https://organisationWebsite.org/"
}
],
"subjectOf": {
"@type": "Event",
"description": "Describe the event which is the subject of this dataset. For example, a cruise ID.",
"name": "Concise and descriptive name of the Event",
"potentialAction": {
"@type": "Action",
"name": "Concise but descriptive name of action that was part of an Event. For example, the name of a CTD cast",
"agent": [
"Name or permanent ID of person or thing that performed this action",
"Name or permanent ID of person or thing that performed this action",
"Name or permanent ID of person or thing that performed this action"
],
"startTime": "2007-03-11T14:45UTC",
"endTime": "2007-03-11T15:42UTC",
"instrument": {
"@type": "Thing",
"name": "The name of the instrument used in the action. For example, the specific model of a CTD, a glider, a moored sensor",
"url": "http://ontology.org/uriToSemanticDescriptorOfThisInstrument",
"description": "Extended description of the sampling instrument"
}
}
}
}