Set up scheduling of requests with distributed
To make this framework more usable for production, we should think about an additional (optional) way to handle the web requests. Currently it's based on Python's built-in async library (which is nice, because the backend can handle multiple requests concurrently), but it is foreseeable that this will break in production if the backend module faces many requests at once.
One option might be that the functions in the backend module outsource everything to a scheduler, but actually we can provide this functionality directly from the framework, using the distributed Python library. We would start a cluster on the local machine (or connect to an existing one, which could be a SLURMCluster, for instance, generated with dask-jobqueue), and then, for every web request, use an async client to connect to this cluster.
We definitely need functionality like this to make it usable in production. What do you think @daniel-eggert?
PS: BTW, I am currently writing a project plan to use the de-messaging-python framework for our model analysis tool and will spend more time on its development within the next few months.