Review FAIR assessment tools for APIs/interfaces, Docker, etc.
The goal of this issue is to review automated FAIR assessment tools and compile a table to support decision making.
We start with the list of tools extracted from fairassist.org:
PDF: FAIRassist_org_excerpt_automated_tools.pdf; DOCX: FAIRassist_org_excerpt_automated_tools.docx
To outline our future strategy, we want to analyze which tools may be appropriate to include in possible future versions of the toolbox and dashboard.
We therefore compile an overview table with the following information:
- For each tool listed in the excerpt above (see appended PDF):
  - tool name (full name and abbreviation)
  - url to the tool (see fairassist.org and beyond)
  - url to the code repository (see fairassist.org and beyond)
  - url to the documentation of the tool (see fairassist.org and beyond)
  - doi of a research article about the tool (see fairassist.org and beyond)
  - has an API (does it offer an API that we can use to retrieve scores for any publication DOI?)
  - whether a test of the API was successful (a sketch of such a test follows this list)
  - whether the metadata returned by the API provides FAIR scores in a format that could plausibly be integrated into a future version of the toolbox
  - comments on the test of the API
  - whether it is possible to deploy the code on our own server
  - under which license the code is published
  - other comments
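To make the API test concrete, here is a minimal sketch in Python of how such a check could look. The endpoint URL, request payload, and response field names are hypothetical placeholders, not the API of any specific tool from the list; each tool's actual API has to be taken from its own documentation.

```python
import requests

# Hypothetical assessment endpoint; replace with the actual API URL
# from the respective tool's documentation.
API_URL = "https://fair-tool.example.org/api/v1/evaluate"


def fetch_fair_scores(doi: str, timeout: int = 60) -> dict:
    """Request a FAIR assessment for a publication DOI and return the
    parsed JSON response. Raises on HTTP errors or timeouts."""
    response = requests.post(
        API_URL,
        json={"object_identifier": f"https://doi.org/{doi}"},
        headers={"Accept": "application/json"},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Any publication DOI could serve as the test input.
    result = fetch_fair_scores("10.5281/zenodo.1234567")
    # The key holding the scores ("summary" here) is an assumption
    # and will differ between tools.
    print(result.get("summary"))
```

A test would count as successful if such a request returns a parseable response containing per-principle or aggregate FAIR scores for the given DOI.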
A schema to be filled in for each tool could, for example, look like this:
```yaml
tool_name:
  full_name: string
  abbreviation: string
urls:
  main: string
  repository: string
  documentation: string
research_article:
  doi: string
api:
  available: boolean
  test_successful: boolean
  fair_scores_compatible: boolean
  test_comments: string
deployment:
  self_hostable: boolean
  license: string
additional_comments: string
```
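Once the entries exist as YAML, a short script can check each one for completeness before it feeds into the overview table. This is a minimal sketch assuming one YAML file per tool; the file layout is an assumption, while the field names mirror the schema above.

```python
import sys

import yaml  # pip install pyyaml

# Required keys per section, mirroring the schema above.
REQUIRED = {
    "tool_name": ["full_name", "abbreviation"],
    "urls": ["main", "repository", "documentation"],
    "research_article": ["doi"],
    "api": ["available", "test_successful", "fair_scores_compatible", "test_comments"],
    "deployment": ["self_hostable", "license"],
}


def validate(entry: dict) -> list[str]:
    """Return a list of missing fields for one tool entry."""
    missing = []
    for section, keys in REQUIRED.items():
        block = entry.get(section)
        if not isinstance(block, dict):
            missing.append(section)
            continue
        missing.extend(f"{section}.{key}" for key in keys if key not in block)
    if "additional_comments" not in entry:
        missing.append("additional_comments")
    return missing


if __name__ == "__main__":
    # Usage: python validate_entries.py tool1.yaml tool2.yaml ...
    for path in sys.argv[1:]:
        with open(path) as fh:
            entry = yaml.safe_load(fh)
        problems = validate(entry)
        status = "OK" if not problems else f"missing: {', '.join(problems)}"
        print(f"{path}: {status}")
```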