Commit 8fa74ccb authored by Uwe Jandt (DESY, HIFIS)

Merge branch 'update_cp_documentation' into 'master'

update Cloud Portal documentation

See merge request !283
parents 269d6452 c02976e9
Showing with 127 additions and 493 deletions
@@ -31,7 +31,7 @@ check_links:
# Allow the web server to successfully spawn
- sleep 10
script:
- blc -get http://localhost:8080 -ro --exclude 'https://b2access.eudat.eu/saml-idp/SLO-WEB' --exclude 'https://nubes.helmholtz-berlin.de/f/*' --exclude 'https://webfts.fedcloud-tf.fedcloud.eu/' --exclude 'https://hdf-cloud.fz-juelich.de' --exclude 'https://fts3-public.cern.ch' --exclude 'https://events.hifis.net' --exclude 'https://login.helmholtz.de/oauth2' --exclude 'https://login.helmholtz.de/saml-idp/' --exclude 'https://www.hzdr.de' --exclude 'https://kroki.hzdr.de/' --exclude 'https://geant3plus.archive.geant.net' --exclude 'https://rancher.desy.de' --exclude 'https://sonar.desy.de' --exclude 'https://portswigger.net'
- blc -get http://localhost:8080 -ro --exclude 'https://b2access.eudat.eu/saml-idp/SLO-WEB' --exclude 'https://nubes.helmholtz-berlin.de/f/*' --exclude 'https://webfts.fedcloud-tf.fedcloud.eu/' --exclude 'https://hdf-cloud.fz-juelich.de' --exclude 'https://fts3-public.cern.ch' --exclude 'https://events.hifis.net' --exclude 'https://login.helmholtz.de/oauth2' --exclude 'https://login.helmholtz.de/saml-idp/' --exclude 'https://www.hzdr.de' --exclude 'https://kroki.hzdr.de/' --exclude 'https://geant3plus.archive.geant.net' --exclude 'https://rancher.desy.de' --exclude 'https://sonar.desy.de' --exclude 'https://portswigger.net' --exclude 'https://gitlab.desy.de/thomas.beermann/hifiscp-deployment-triggers'
allow_failure: true
review:
# HIFIS Cloud Portal - Architecture
## Scope
This document describes the general architecture of the Cloud Portal (CP) for the Helmholtz Federated IT Services (HIFIS).
The CP is a central part of the Helmholtz Cloud. It is the main entry point for users to find and access HIFIS / Helmholtz Cloud Services. In this position, the CP has to interact with almost all components of the system. [Figure 1](#img-high-level-system-architecture) shows an overview of the current system and its interaction with other systems.
![Login](images/Cloud_Portal_Architecture.png)
<a name="img-high-level-system-architecture">**Figure 1:** High-level system architecture</a>
## Technology
### System Architecture HIFIS Cloud Portal
There are a couple of high-level components in the CP:
- Database
- Backend (Cerebrum)
- Frontend (Webapp)
- Helmholtz Cloud Agent (HCA) Service
- Availability Service
### Database
The CP uses a MongoDB server as its database. MongoDB is a document-oriented database and as such is well suited as a backend for the CP. The main purpose of the CP is to present HIFIS services and their metadata. Most interactions with the database only read data and use a variety of ad-hoc filters on flexible schemas, which is well supported by MongoDB.
Currently, there are 5 different collections:
- `marketService`: this is the main collection storing all service information, one document per service. It stores information like the service name, description, entrypoint, service provider and the used software.
- `image`: this collection stores base64 encoded PNG images that are linked from the marketService collection and show service and provider logos.
- `availability`: this collection stores availability information per service, which is regularly updated by the availability service.
- `hcaRequest`: this collection stores the requests that are sent to the Helmholtz Cloud Agents.
- `hcaResource`: this collection stores the resources that are created by HCA requests.
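As a rough illustration of the ad-hoc, read-mostly queries described above, the following pymongo sketch filters the `marketService` collection. The database name and the field names in the filter are assumptions for illustration, not the actual schema.
```python
# Minimal sketch of a read-only, ad-hoc query against the marketService
# collection. Database and field names ("cloudportal", "serviceProvider",
# "name", "description") are illustrative assumptions, not the real schema.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["cloudportal"]  # hypothetical database name

# Find all services of one provider, returning only name and description.
for service in db["marketService"].find(
    {"serviceProvider": "DESY"},
    {"name": 1, "description": 1, "_id": 0},
):
    print(service)
```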
### Backend
The backend, a.k.a. Cerebrum, provides access to the database via a REST API. It is written in Java and uses the Spring Boot framework.
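A hypothetical sketch of how a client could read service data from the Cerebrum REST API; the port and the `/api/services` path are assumptions, not the documented API surface.
```python
# Hypothetical client call against the Cerebrum REST API. Base URL and
# endpoint path are assumptions; consult the actual API definition.
import requests

response = requests.get("http://localhost:8090/api/services", timeout=10)
response.raise_for_status()
for service in response.json():  # assumes a JSON list of service objects
    print(service.get("name"))
```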
### Frontend
The frontend is split into two parts: a single-page web application written in HTML/JavaScript and a server written in Java using Spring Boot. The server takes care of the authentication with the Helmholtz AAI and forwards requests from the user to the backend. The webapp is the actual user interface with which the user interacts.
### HCA Service
The HCA service takes care of all interaction with the Helmholtz Cloud Agents. It is written in Python. It takes the requests sent by users via the frontend and forwards them to the corresponding RabbitMQ host. It handles the responses from the HCA and updates the resources in the database accordingly.
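A minimal pika sketch of sending a request to a RabbitMQ host, in the spirit of what the HCA service does; the host, queue name, and message shape are assumptions for illustration.
```python
# Sketch of publishing an HCA request to RabbitMQ with pika. The host,
# queue name and message shape are illustrative assumptions.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="hca-requests", durable=True)  # hypothetical queue
channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="hca-requests",
    body=json.dumps({"action": "deploy", "serviceId": "example"}),
)
connection.close()
```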
### Availability Service
The availability service regularly checks whether the services are still reachable and updates the corresponding DB collection. It is written in Python. If a service is not available, an error message is stored. The availability information is displayed in the frontend using color-coded dots on the service cards.
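A sketch of such a reachability check; the database name, document shape, and check logic are assumptions for illustration.
```python
# Sketch of a reachability check like the one the availability service
# performs; names and the update shape are illustrative assumptions.
import datetime
import requests
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["cloudportal"]  # hypothetical

def check_service(service_id: str, url: str) -> None:
    doc = {"serviceId": service_id, "checkedAt": datetime.datetime.utcnow()}
    try:
        response = requests.get(url, timeout=10)
        doc["available"] = response.ok
        if not response.ok:
            doc["error"] = f"HTTP {response.status_code}"
    except requests.RequestException as exc:
        doc["available"] = False
        doc["error"] = str(exc)  # store the error message as described above
    db["availability"].replace_one({"serviceId": service_id}, doc, upsert=True)
```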
# HIFIS Cloud Portal - Developer Manual
## Document History
Current Version: 1.0
Current Version Date: 2022-03-01
## Scope
This document contains all information necessary to participate in the development of the HIFIS Cloud Portal.
## Development Setup
All code related to the Cloud Portal can be found in GitLab at https://gitlab.hzdr.de/hifis/cloud/access-layer/portal.
The easiest way to participate in the development is to use the provided container-based environment, which you can find in the [tools](https://gitlab.hzdr.de/hifis/cloud/access-layer/portal/-/tree/main/tools) directory of the main repository. The environment can be set up using docker-compose or podman-compose. The environments always use a base `portal-dev` container that has all necessary tools available to build/test/run the different parts of the Cloud Portal, and a `mongodb` database container. Additionally, there can be other containers for Selenium testing or Helmholtz Cloud Agent (HCA) development. The different configurations are:
* `docker-compose.yaml`: default environment with one `portal-dev` and one `mongodb` container.
* `docker-compose-with-selenium.yaml`: same as default with an additional `chrome` container to run web application tests.
* `docker-compose-with-hca.yaml`: same as default with an additional `rabbitmq` and an `hca` container to test the HCA integration.
If you cannot or do not want to use the containers, you can also set up your own environment. You need at least the following software:
* OpenJDK 11
* Maven >= 3.6
* Node >= 12.x (LTS)
* MongoDB >= 4
* Python 3
You can find more details in the [Dockerfile](https://gitlab.hzdr.de/hifis/cloud/access-layer/portal/-/blob/main/tools/Dockerfile).
## Documentation
The general system architecture is described in the [architecture](CP_Architecture.md) part. The source code should be documented sufficiently without describing boilerplate code (e.g. getters and setters).
## Development Infrastructure
### VCS
We are using git as VCS in the project. All source code is maintained in the GitLab repository from HZDR. The repositories for Helmholtz Cloud access layer components are located in the group <https://gitlab.hzdr.de/hifis/cloud/access-layer>, and for the portal specifically in <https://gitlab.hzdr.de/hifis/cloud/access-layer/portal>.
You should fork the main repository and develop in branches. Code changes are accepted only as merge requests from your fork to the main branch.
### CI/CD
In order to always have a compilable and executable code base, we use the CI features of GitLab. For each MR a pipeline with a set of stages is run. The stages make sure that the code compiles, run the given unit tests, and package everything into containers, which are hosted in the integrated GitLab container registry. In the last step a deployment to a Kubernetes test cluster at DESY is triggered. The MR will be deployed as a fully functional Cloud Portal instance with its own database that can be used for further testing and reviews. Only after the pipeline has run successfully and the code has been reviewed may the MR be merged into the main branch.
#### CD Infrastructure
The main deployment of the Cloud Portal is split into two environments, integration and production, both hosted on a Kubernetes cluster at DESY and managed by FluxCD.
The integration testbed can be found at <https://hifis-cloud-portal-int.desy.de> and is automatically redeployed after each push to the main branch. In that case a pipeline runs that builds containers with the `latest` tag and pushes them to the GitLab registry. In the last step of the pipeline a script triggers a redeployment, which pulls the new containers.
The production deployment can be found at <https://helmholtz.cloud> or <https://hifis-cloud-portal.desy.de>. This deployment uses tagged releases which are created from the main branch. The deployment is not automatic and instead has to be adapted in Flux whenever a new tagged release is created.
#### Clusters and Configurations
The clusters for the MR deployments and for the integration testbed and production are Rancher-managed Kubernetes clusters. To get access to these clusters you first have to ask for access to Rancher (<https://rancher.desy.de>).
The two clusters are:
* `guest-k8s`: MR deployments
* `kube-cluster1`: Integration and Production
The MR deployments are manual Helm releases created by a pipeline running in a repository on the DESY GitLab (<https://gitlab.desy.de/thomas.beermann/hifiscp-deployment-triggers>). It is necessary to run this pipeline directly at DESY and not at HZDR, since the Kubernetes API is not accessible from outside DESY. The MR deployments use the dynamic `nip.io` DNS mapping to make them easily accessible for reviews; the deployments themselves are accessible from outside DESY.
The integration testbed and production deployments are managed using FluxCD. The configuration repository is available on the DESY GitLab (<https://gitlab.desy.de/it-paas/internal/gitops/apps/cloud-portal>). It is a private repository and access has to be granted manually. Redeployment of the integration testbed is again triggered from the HZDR repository via the `hifiscp-deployment-triggers` repository at DESY. The production deployment can only be changed from the Flux repository.
#### Data import
In the future the service catalogue information will be fetched regularly from [Plony](https://gitlab.hzdr.de/hifis/cloud/access-layer/plony), but at the moment it is still stored in the main repository (<https://gitlab.hzdr.de/hifis/cloud/access-layer/portal/-/tree/main/cerebrum/data>). MongoDB provides functionality to easily export all data in JSON format and import it again; this is used to store a backup of the data in GitLab. Currently, this is also the place to change, via MRs, any service-related information shown in the Cloud Portal. For the integration testbed this data is automatically imported whenever an MR is merged to main. To make the data available in production, a pipeline has to be started manually in `hifiscp-deployment-triggers`. This can be done directly from the repository (<https://gitlab.desy.de/thomas.beermann/hifiscp-deployment-triggers/-/pipelines/new>) by setting the variable `CP_BUILD_STAGE` to `production`.
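For illustration, the effect of such an import can be sketched in a few lines of pymongo; the dump file path and database name are hypothetical, and the sketch assumes the export is a JSON array (as produced by `mongoexport --jsonArray`).
```python
# Sketch of re-importing a JSON dump into the marketService collection.
# File path and database name are illustrative assumptions.
import json
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["cloudportal"]

with open("cerebrum/data/marketService.json") as f:  # hypothetical dump file
    documents = json.load(f)

db["marketService"].delete_many({})         # replace the collection wholesale
db["marketService"].insert_many(documents)
```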
docs/cloud-portal/images/Cloud_Portal_Architecture.png (43.6 KiB)
# HIFIS Technical Platform - Developer Manual
!!! warning
Note that this documentation is outdated.
[TOC levels=2-4]: # "## Table of Contents"
## Table of Contents
- [Document History](#document-history)
- [Scope](#scope)
- [Development Setup](#development-setup)
- [Conventions](#conventions)
- [Documentation](#documentation)
- [Development Infrastructure](#development-infrastructure)
- [VCS](#vcs)
- [CI/CD](#cicd)
- [CD Infrastructure](#cd-infrastructure)
- [Hosts/Virtual Machines in Use](#hostsvirtual-machines-in-use)
- [Quality](#quality)
- [Code Reviews](#code-reviews)
- [SonarQube](#sonarqube)
## Document History
Current Version: 0.1
Current Version Date: 2020-08-21
## Scope
This document contains all information necessary to participate in the development of the technical platform (TP) of the HIFIS project.
## Development Setup
In order to participate in the development, the following pre-requisites are necessary:
* OpenJDK 11
* Maven >= 3.6
* Node >= 12.x (LTS)
For the IDE we strongly recommend using IntelliJ IDEA 2020 in order to have the same code formatting as all other team members and in order to facilitate the setup of the development environment.
You will need to have access to the following resources:
* GitHub repositories in https://github.com/helmholtz-marketplace
* Documentation https://gitlab.hzdr.de/hifis/hifis-cloud-documentation
* Stories and Requirements https://gitlab.hzdr.de/hifis/cloud-technical-platform
## Conventions
In order to maintain code readability and to avoid merge conflicts caused by different formatting, the project makes use of the standard code formatter of IntelliJ for the languages used. If we have to change formatting rules, this change has to be documented in this developer manual. The developers decide together whether it is necessary to provide the formatter configuration as a file or if it is sufficient to describe the necessary configuration options in this manual.
Necessary changes in the IntelliJ preferences:
* disable package/wildcard imports
* https://www.jetbrains.com/help/idea/creating-and-optimizing-imports.html#disable-wildcard-imports
* Change `Class count to use import with '*':` to `999`
* Change `Names count to use static import with '*':` to `999`
All code is indented with 4 spaces. We do not use tabs.
## Documentation
The general system architecture is being described in the [whitepaper](TP_Whitepaper.md). The source code should be documented sufficiently without describing boilerplate code (e.g. getters and setters).
## Development Infrastructure
### VCS
We are using git as VCS in the project. All source code is maintained in the GitLab instance at HZDR. The repositories are located in the group https://gitlab.hzdr.de/hifis-technical-platform.
Development occurs strictly on branches in private repositories, and code is only integrated into master after a successful check during the merge request.
Since the GitLab instance from HZDR does not support personal namespaces, there are "personal" subgroups in the main group which provide the namespace for repositories for personal use. If a new developer joins the development, a new subgroup with their name should be created, and they should be made owner of that subgroup.
### CI/CD
In order to always have a compilable and executable code base, we use the CI features from GitHub. After every push and for all merge requests, a build pipeline is executed, which prevents erroneous code from being integrated into the code base.
Additionally, we have an autodeployment setup, which deploys the new artifacts after a successful build to our CD infrastructure. In order to have an easily maintainable and reproducible environment, we are using a Docker deployment infrastructure for this:
* Successful build triggers build of a Docker image
* Docker image is being pushed to Docker Hub (https://hub.docker.com/)
* Watchdog on CD server detects a new image and updates the container
#### CD Infrastructure
The CD uses a Docker setup on the DESY Openstack infrastructure. In order to minimize the administration effort, the build infrastructure is mostly set up automatically via a heat template (https://eosc-pan-git.desy.de/hifis/tp-dev-deployment-stack). The basic setup consists of the following containers:
* Docker-compose setup from https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
* `nginx-web` - reverse proxy
* `nginx-gen` - Handles the generation of the reverse proxy configuration
* `nginx-letsencrypt` - Handles the generation of Let's Encrypt certificates
* Watchtower (https://containrrr.dev/watchtower/) - Update running containers if their base image has been updated
* one container for every project that has to be deployed continuously
* Helmholtz Cerebrum
* Helmholtz Marketplace (server + web app)
#### Hosts/Virtual Machines in Use
The DNS names are either from DESY Auto-DNS (`os-234-tp-deployment.desy.de`), DESY QIP (`hifis-tp-dev.desy.de`), or from the Dynamic DNS service for the EGI Federated Cloud (https://nsupdate.fedcloud.eu/).
The DynDNS entries are continuously updated via crontab. Every DynDNS entry has its own line in root's crontab:
```
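# Runs at minute 24 of every second hour; the URL carries the DynDNS host
# name and update secret as basic-auth credentials, output goes to a log file.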
24 */2 * * * curl -s https://helmholtz-cerebrum-dev.test.fedcloud.eu:cQCtwmXYaf@nsupdate.fedcloud.eu/nic/update > /opt/var/log/dyndns-helmholtz-cerebrum-dev.test.fedcloud.eu.log 2>&1
```
- os-234-tp-deployment.desy.de
- Virtual Machine on Open Stack@DESY
- Login with SSH: ubuntu
- helmholtz-cerebrum-dev.test.fedcloud.eu
- Host - os-234-tp-deployment.desy.de
- Docker container
- Helmholtz Cerebrum
- helmholtz-marketplace.test.fedcloud.eu
- Host - os-234-tp-deployment.desy.de
- Not Configured
- helmholtz-cloud-cd.test.fedcloud.eu
- Host - os-234-tp-deployment.desy.de
- Not Configured
- helmholtz-marketplace-dev.test.fedcloud.eu
- Host - os-234-tp-deployment.desy.de
- Docker container
- Helmholtz Marketplace Server
- hifis-tp-dev.desy.de
- Virtual Machine on Open Stack@DESY
- Login with SSH: ubuntu
- Designated for deployment of the public test versions
    - Currently hosting the HighRes mockups
### Quality
#### Code Reviews
All code that gets merged into master has to be reviewed by another developer. For this purpose we use the code review system which comes with GitHub.
#### SonarQube
For the automatic monitoring of the code quality, a SonarQube instance is running automated checks on the code. The goal is to keep the count of problems at zero.
The following projects are configured in SonarQube:
* Helmholtz Cerebrum - `de.helmholtz.marketplace.cerebrum.helmholtz-cerebrum`
* Helmholtz Marketplace Server - `de.helmholtz.marketplace.helmholtz-marketplace-server`
* Helmholtz Marketplace Web App - `de.helmholtz.marketplace.hifis-marketplace`
The SonarQube instance is installed at https://sonar.desy.de. You can log in with the Helmholtz AAI. To use the SonarQube rules already during development, you can include them via the SonarLint plugin (https://www.sonarlint.org/intellij/) in IntelliJ.
1. Configure the Access Token in Sonar
* Login with Helmholtz AAI
![Choose OpenID login](./images/sonar-integration/screenshot-02.png)
* Choose your provider
![Choose provider](./images/sonar-integration/screenshot-03.png)
* Enter your credentials
![Enter your credentials](./images/sonar-integration/screenshot-04.png)
* Review information and consent
* Go to "My Account"
![Sonar Account Details](./images/sonar-integration/screenshot-07.png)
* Enter a name for a new token (e.g. `intellij`)
![Sonar Account Details](./images/sonar-integration/screenshot-09.png)
* Copy the token value (optionally save it for later reuse)
![Copy token value](./images/sonar-integration/screenshot-10.png)
2. Install the plugin from the marketplace
* Open the project in IntelliJ (this example uses `helmholtz-cerebrum`) and go to plugins (<kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>A</kbd>/<kbd>&#8984;</kbd>+<kbd>Shift</kbd>+<kbd>A</kbd> and enter "Plugins")
![Open plugin settings in IntelliJ](./images/sonar-integration/screenshot-11.png)
* Find the SonarLint plugin in the marketplace and install it
![Find and install SonarLint plugin](./images/sonar-integration/screenshot-12.png)
3. Configure this token in the plugin
* Open the SonarLint view (Step 1)
* Open the configuration dialog (Step 2)
![Open the SonarLint view](./images/sonar-integration/screenshot-14.png)
* Check "Bind project to SonarQube / SonarCloud" and click on configure the connection
![Configure the connection](./images/sonar-integration/screenshot-15.png)
* Add a new connection with the "+" sign
![Add a new connection](./images/sonar-integration/screenshot-17.png)
* Choose SonarQube and enter the URL of our server (`https://sonar.desy.de`)
![Choose SonarQube and enter the URL of our server](./images/sonar-integration/screenshot-18.png)
* Choose token authentication and enter the token you generated in the first step
![Choose token authentication and enter the token](./images/sonar-integration/screenshot-19.png)
4. Choose the project mapping(s) (see above)
* Choose the correct project mapping - 1
![Choose the correct project mapping - 1](./images/sonar-integration/screenshot-20.png)
* Choose the correct project mapping - 2
![Choose the correct project mapping - 2](./images/sonar-integration/screenshot-21.png)
5. You get instant feedback on recognized code problems in your currently opened files!
![Direct feedback in your opened files](./images/sonar-integration/screenshot-23.png)
# HIFIS Technical Platform - Mockups
!!! warning
Note that this documentation is outdated.
[TOC levels=2-4]: # "## Table of Contents"
## Table of Contents
- [Document History](#document-history)
- [Scope](#scope)
- [General](#general)
- [Mockups](#mockups)
- [Login](#login)
- [Initial List View](#initial-list-view)
- [Initial List View With Categories](#initial-list-view-with-categories)
- [Detail View](#detail-view)
## Document History
Current Version: 0.1
Current Version Date: 2020-05-04
## Scope
This document is intended to contain first UI mockups as a base for the design of the technical platform (TP) within HIFIS.
## General
The mockups provide a first idea of how the user interface could look. They are mainly a means to support further discussion about possibilities for developing the final design.
## Mockups
### Login
![Login](images/cloud-tp-mockups-login.png)
<a name="img-cloud-tp-mockups-login">**Figure 1:** Login</a>
### Initial List View
![Initial list view](images/cloud-tp-mockups-starting page (list).png)
<a name="img-cloud-tp-mockups-starting-page-list">**Figure 2:** Initial list view</a>
### Initial List View With Categories
![Initial list view with categories](images/cloud-tp-mockups-starting page (categories).png)
<a name="img-cloud-tp-mockups-starting-page-categories">**Figure 3:** Initial list view with categories</a>
### Detail View
![Detail view](images/cloud-tp-mockups-detail view.png)
<a name="img-cloud-tp-mockups-detail-view">**Figure 4:** Detail view</a>
# HIFIS Technical Platform - Whitepaper
!!! warning
Note that this documentation is outdated.
## Document History
Current Version: 0.3
Current Version Date: 2020-07-20
## Scope
This document is a basic description of the technical design for the technical platform (TP) within the Helmholtz Infrastructure for Federated ICT Services (HIFIS).
The TP is a central part of the Helmholtz Cloud. Its purpose is to be the main entry point for users looking for a service which could be used for their purpose. In this position, the TP has to interact with almost all components of the system.
The schema in [Figure 1](#img-high-level-system-architecture)<!-- @IGNORE PREVIOUS: anchor --> shows the designated architecture for the Helmholtz Cloud from the view of the TP. For all components outside of the TP, there are more detailed architecture schemas. For the sake of clarity about the architecture of the TP these parts are excluded here.
![High-level system architecture](images/high-level-system-architecture.png)
<a name="img-high-level-system-architecture">**Figure 1:** High-level system architecture</a>
## Evaluation of existing Solutions
In order to avoid duplicate work, existing solutions covering the desired functionality of the technical platform have been reviewed, in order to gain insight into the functionality they provide and to determine whether they are reusable for the purpose of HIFIS.
### EOSC Whitelabel Marketplace (WLMP)
#### General
The EOSC Whitelabel Marketplace is currently in development. There is already a deployed version available ([https://marketplace.eosc-portal.eu](https://marketplace.eosc-portal.eu)) which provides the existing functionality.
#### License
GPL v3.0
#### Technology Stack
- Ruby 2.6
- Database: PostgreSQL
- Elasticsearch
- Redis
#### Evaluation Result
The EOSC Whitelabel Marketplace solution offers a wide range of functionality also expected as part of the technical platform for HIFIS:
- Login via AAI (EGI AAI)
- Search services
- Hierarchical navigation to services
- Detail page with different flavors
- Request access to services
Although the WLMP seems to offer quite a lot of functionality which is also needed for HIFIS, there are some major disadvantages:
- tight integration with Jira
- tight integration with xGUS (a helpdesk system based upon the commercial BMC Remedy Action Request System)
- usage of Google ReCaptcha (mandatory)
- usage of Google Analytics (optional)
On top of that, the existing documentation is insufficient for setting up a running system in a production environment. First attempts to set up a test instance on Ubuntu 20.04 with nginx as reverse proxy have not resulted in a usable system.
### EUDAT Data Project Management Tool - EUDAT (DPMT)
#### General
DPMT is in production for the use of the EUDAT network. It serves as the administrative backend for all information related to the service portfolio management. The actual user-facing part of the service catalog is much simpler and seems to be another application (https://www.eudat.eu/catalogue).
#### License
Unknown, no license file in the GitHub repository
#### Technology Stack
- Current version
- Python 2
- Plone 4
- Upcoming version
- Python 3
- Plone 5
#### Evaluation Result
DPMT offers a very complex and detailed object model.
## Development Methodology
Although there are a lot of requirements formulated in the HIFIS proposal, very few of them are detailed enough to be implemented without further discussion and without further evaluation of possible implementation approaches. During the discussions in the various groups, new requirements are also being created, which have to be considered and can be added to the list of already existing requirements.
This situation is difficult in a classic development process, where all requirements should be listed, prioritized, specified and then implemented in a determined order. In order to address such uncertainties, software development is more and more often organized in a process following the Scrum model. Scrum defines how development teams can work in situations where not all requirements have been defined yet and where new requirements can come up during the development process.
Based on those properties, the development of the TP will follow the Scrum process, which allows for the rapid development of a first version and ensures that there will always be a working version at the end of a development cycle.
## Technology
### General
At the moment of writing, the possible technologies for the implementation of the TP are still being evaluated. This whitepaper is also intended to give an overview about the proposed technologies.
### System Architecture HIFIS TP
The system architecture from the chapter [General](#img-high-level-system-architecture)<!-- @IGNORE PREVIOUS: anchor --> has been broken down into further detail, so that the individual components and their behavior can be described better. This has been done in [Figure 2](#img-tp-system-architecture)<!-- @IGNORE PREVIOUS: anchor -->.
![System Architecture](images/tp-system-architecture.png)
<a name="img-tp-system-architecture">**Figure 2:** System architecture for the technical platform</a>
From a high level there are the following components which are an integral part of the TP:
- TP Frontend
- TP Backend
Aside from that, there are at a minimum the components
- Service Provider
- Service(s)
- Authentication and Authorization Infrastructure (AAI)
- Service Catalogue
The service catalogue has a special place since, at the time of writing, it is still to be decided how this part will be implemented. It is possible that HIFIS will use existing software for its implementation, so that the TP itself will only act as a proxy and keep a cached copy of the service metadata relevant for the TP.
One question which concerns the backend as well as the frontend is internationalization. At the time of writing, it has been agreed that the language for the user interface in the first public version will be English. At later stages it will probably be necessary to add more languages for the user interface.
Separate from the internationalization of the user interface is the question of the internationalization of the contents. At the time of writing, it is assumed that the content of the service catalogue will **not** be multilingual.
### TP Backend
The backend will contain at least the following components:
- ReST API for the user interface (UI) frontend and for use by 3rd-party applications which may at a later time want to use the TP functionality without the use of the web UI
- Monitoring
- Accounting
- Authentication/Authorization
- Service Catalogue Cache/Proxy
- User Settings
- Support for Service Orchestration/Meta-Services
While some components will be necessary, at least in a minimal version, already for the first version (ReST API, Service Catalogue, Monitoring, Authentication/Authorization), other components will be implemented at a later stage. Nonetheless, it is important to include them at least as components without further details about the implementation, so that their role and position in the HIFIS architecture can be determined.
The backend will be built on top of the Spring framework, an established dependency injection (DI) framework that is mature, stable, and widely used for building robust and maintainable server-side applications. On top of providing the foundation for the implementation of the TP, Spring also provides existing modules for different tasks. In our case, this means for example that the integration with the Helmholtz AAI can be done very easily with the use of the Spring Security framework.
For the persistence layer of the components in the backend a database system will be used. One possible database is Neo4J \[[Neo4J](#biblio_neo4j)<!-- @IGNORE PREVIOUS: anchor -->\].
The build management will be done with Apache Maven \[[Maven](#biblio_maven)<!-- @IGNORE PREVIOUS: anchor -->\].
As a testing framework, Robot \[[Robot](#biblio_robot)<!-- @IGNORE PREVIOUS: anchor -->\] will be used. Robot is a versatile framework which provides an easy-to-use API on top of different underlying frameworks. The test cases can be written in a syntax resembling natural language, which allows people without extensive developer skills to add test cases easily.
### TP Frontend
For the implementation of the UI frontend, a stack of modern technologies will be used. In order to build on a solid base for the generation of the UI, a framework for WebComponents, most likely LitElement \[[LitElement](#biblio_lit_element)<!-- @IGNORE PREVIOUS: anchor -->\] or Polymer 3 \[[PolymerLibrary](#biblio_polymer_library)<!-- @IGNORE PREVIOUS: anchor -->\] will be used.
From the perspective of the system architecture, it has yet to be decided if there will be a separate server component for the frontend which then communicates with the backend ReST API.
The build management for the frontend will also be done with Apache Maven \[[Maven](#biblio_maven)<!-- @IGNORE PREVIOUS: anchor -->\], combined with npm \[[npm](#biblio_node_package_manager)<!-- @IGNORE PREVIOUS: anchor -->\] for the JavaScript part.
As a testing framework, the aforementioned Robot \[[Robot](#biblio_robot)<!-- @IGNORE PREVIOUS: anchor -->\] will be used. This is especially useful for UI testing, as it allows for writing UI tests with a very simple syntax which is then translated to Selenium \[[Selenium](#biblio_selenium)<!-- @IGNORE PREVIOUS: anchor -->\].
In the implementation of the frontend, aspects like responsive design and accessibility will have to be taken into account.
### User Management
Although not a separate part of the TP, it is necessary to give some consideration to the management of users and user data on the TP. In HIFIS, the main component for the management of users is the AAI infrastructure. This means that there will be no local user management within the technical platform; all authentication/authorization is done via the external AAI. Nonetheless, there is a need to handle user data on the TP. This can be
- user preferences
- information about requests to use certain services
- in the future, information about the personal dashboard
From the view of the TP, the user-specific data handled on the platform should be kept at an absolute minimum. It should always be considered whether additional data could be better handled by means of the AAI.
## Source Code and Build system
### Source Code
For the management of the source code, git will be used. To support the workflows around version control, GitLab will be used to provide additional tooling.
### Build Management
As mentioned in the sections [TP Backend](#tp-backend) and [TP Frontend](#tp-frontend), Maven will be used as the primary build management tool. For the management of the UI libraries, the package manager npm from Node.js will be used. Both tools are very mature and are widely used for software development projects.
### CI/CD
CI/CD will be handled through Gitlab pipelines. This provides a very tight integration with the VCS, so that every commit automatically triggers a build and errors are visible immediately.
Successful builds will trigger an automated deployment, either to an existing server with a combination of reverse proxy and application server, or using a containerized infrastructure. The method of deployment is still being evaluated at the time of writing.
## Operational Concerns
The software for the TP is meant to be deployed on a central server for the whole Helmholtz community. Since it is impossible to determine the average/maximum system load at this time, there are no explicit developments planned for high availability of the platform. The platform is, however, designed so that it will be possible to operate it in an environment with load balancing and other means of ensuring HA.
Load balancing for the provided services is being considered, but is not targeted in the first versions of the TP. To allow some kind of load balancing for similar services, it will be necessary to determine which preconditions have to be fulfilled so that similar services can be considered for a load-balancing approach.
## Bibliography
<a name="biblio_lit_element"></a>
\[LitElement\] LitElement. A simple base class for creating fast, lightweight web components. URL: <https://lit-element.polymer-project.org/> (2020-04-24)
<a name="biblio_maven"></a>
\[Maven\] Apache Maven. URL: <https://maven.apache.org/> (2020-04-24)
<a name="biblio_node_package_manager"></a>
\[npm\] NPM. URL: <https://docs.npmjs.com/about-npm> (2022-06-10)
<a name="biblio_neo4j"></a>
\[Neo4J\] Neo4J. URL: <https://neo4j.com/product/> (2020-04-24)
<a name="biblio_npm"></a>
\[npm\] Node Packet Manager. URL: <https://www.npmjs.com/> (2020-04-24)
<a name="biblio_polymer_library"></a>
\[PolymerLibrary\] Polymer Library. URL: <https://polymer-library.polymer-project.org/> (2020-04-24)
<a name="biblio_robot"></a>
\[Robot\] Robot Framework. URL: <https://robotframework.org/> (2020-04-24)
<a name="biblio_selenium"></a>
\[Selenium\] Selenium UI Testing Framework. URL: <https://www.selenium.dev/> (2020-04-24)
## Glossary
| Term | Definition |
| ------------- |-------------------------------------------------------|
| AAI | Authentication and Authorization Infrastructure |
| API | Application Programming Interface |
| CI | Continuous Integration |
| CD | Continuous Deployment |
| HA | High Availability |
| HIFIS | Helmholtz Infrastructure for Federated ICT Services |
| TP | Technical Platform |
| UI | User Interface |
| VCS | Version Control System |
docs/technical-platform/images/cloud-tp-mockups-detail view.png (115 KiB)
docs/technical-platform/images/cloud-tp-mockups-login.png (58.4 KiB)
docs/technical-platform/images/cloud-tp-mockups-starting page (categories).png (356 KiB)
docs/technical-platform/images/cloud-tp-mockups-starting page (list).png (95 KiB)
docs/technical-platform/images/high-level-system-architecture.png (29.3 KiB)
docs/technical-platform/images/sonar-integration/screenshot-02.png (573 KiB)
docs/technical-platform/images/sonar-integration/screenshot-03.png (679 KiB)
docs/technical-platform/images/sonar-integration/screenshot-04.png (719 KiB)
docs/technical-platform/images/sonar-integration/screenshot-07.png (965 KiB)
docs/technical-platform/images/sonar-integration/screenshot-09.png (800 KiB)
docs/technical-platform/images/sonar-integration/screenshot-10.png (821 KiB)
docs/technical-platform/images/sonar-integration/screenshot-11.png (128 KiB)