Our work in composing and executing the analysis has prompted several reflections on possible improvements and further work, which we summarise as the following recommendations:
Two parallel analysis tracks. The effects of service provider features and of end-user behaviour have been difficult to separate, and practical interoperability depends on both. Further study of end-user behaviour on the individual services is expected to clarify how practical interoperability could be achieved.
Removal of overlapping questions. A handful of our questions address overlapping issues. For instance, the availability of APIs is addressed through two questions, and a few questions concerning the conceptual model seem to receive copy/paste responses. This might indicate redundancy among the questions.
Improvements on unclear questions. For instance, we have a question concerning the choice of licensing models for datasets. This question could focus on whether the service requires end-users to purchase third-party software licenses, or involves future software license renewals that may affect the service provider or its end-users. There is also a need for a common meaning of the 'yes/no' answers across all questions: e.g. in the question about the risk of vendor lock-in, answering 'no' was actually a good thing, even though it was a 'yes' answer that counted towards compliance.
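One way to give 'yes/no' answers a common meaning is to record, per question, whether it is positively or negatively phrased and normalise accordingly. A minimal sketch, where the question identifiers and polarity flags are invented for illustration:

```python
# Normalising yes/no answers so that, after normalisation,
# True always means "counts towards compliance".
# polarity=True  -> a literal "yes" indicates compliance
# polarity=False -> the question is negatively phrased (e.g. vendor
#                   lock-in risk), so "no" indicates compliance
QUESTION_POLARITY = {
    "api_available": True,        # hypothetical question IDs
    "vendor_lock_in_risk": False,
}

def normalise(question_id: str, answer: str) -> bool:
    """Map a raw 'yes'/'no' answer to a uniform compliance flag."""
    is_yes = answer.strip().lower() == "yes"
    return is_yes if QUESTION_POLARITY[question_id] else not is_yes

assert normalise("api_available", "yes") is True
assert normalise("vendor_lock_in_risk", "no") is True
```

With such a table in place, compliance can be counted uniformly over all questions regardless of how each one is phrased.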
Enhanced focus on metadata. Compliance with the FAIR principles through interoperability could be a relevant future focus. This could be achieved by examining how metadata is generated and propagated through standardised communication protocols, service tools, repositories and compute services. Metadata should be either maintained or enriched during this process.
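The "maintained or enriched" condition can be stated as a simple check: after a dataset moves between services, the destination record must contain every source field with an unchanged value, and may add more. A minimal sketch, with illustrative field names not taken from any particular metadata standard:

```python
def maintained_or_enriched(source: dict, destination: dict) -> bool:
    """True if no source metadata field was dropped or altered;
    extra fields in the destination count as enrichment."""
    return all(destination.get(k) == v for k, v in source.items())

# Example: transfer preserves both fields and adds provenance.
before = {"title": "Survey data", "license": "CC-BY-4.0"}
after = {"title": "Survey data", "license": "CC-BY-4.0",
         "provenance": "transferred via compute service"}

assert maintained_or_enriched(before, after)        # enriched: OK
assert not maintained_or_enriched(before, {"title": "Survey data"})
```

A real check would of course have to account for schema mappings between services, where a field may be renamed rather than lost.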
Examination of more services. More responses would make the analysis less sensitive to individual services that may be atypical.
Compilation of relevant baselines or “golden standards” that cover interoperability for
repositories, compute services and/or service aggregators. This could be a tool to indicate what an
individual service could or should strive to achieve.
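Once such a baseline exists, it can serve as a simple scoring tool: the fraction of baseline criteria a service satisfies indicates how far it is from the standard. A minimal sketch, where the baseline entries are invented examples rather than an agreed standard:

```python
# Hypothetical baseline criteria per service type (illustrative only).
BASELINE = {
    "repository": {"api_available", "persistent_identifiers",
                   "standard_metadata_schema"},
}

def coverage(service_type: str, satisfied: set) -> float:
    """Fraction of the baseline criteria the service satisfies."""
    required = BASELINE[service_type]
    return len(required & satisfied) / len(required)

# A repository meeting two of the three criteria scores 2/3.
score = coverage("repository", {"api_available", "persistent_identifiers"})
```

The gap between a service's score and 1.0 then points directly at the criteria it could or should strive to achieve.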
Build a more coherent IF. Moving from general recommendations, produced by various authors with various intentions, to specific questions aimed at measuring compliance was naturally difficult. Further mapping between IF recommendations from different sources would be helpful and bring much-needed clarity.