Decentralised Data Infrastructure for Science
Momentum is growing among research funding agencies, data preservation advocates, and individual researchers to make the scientific process and record fully reproducible and transparent. In the context of data, this is most commonly framed as making research data (and code, models, etc.) findable, accessible, interoperable and re-usable/reproducible (FAIR).
Distributed web technologies are well suited to meeting some of the challenges of creating a network infrastructure that natively facilitates FAIR data. Unlike existing approaches, where data is “published” long after the research has been carried out, technologies like the InterPlanetary File System (IPFS) and Dat fit into the researcher's daily workflow throughout the research lifecycle, much as git fits into a software developer's daily flow throughout a project's lifecycle. However, while IPFS natively simplifies the “A” (accessible) aspect of FAIR, for example, there are gaps in how it currently addresses the findability and interoperability of data.
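The accessibility property rests on content addressing: data is identified by a digest of its bytes rather than by a server location. The sketch below illustrates the idea in a simplified form; real IPFS content identifiers (CIDs) use multihash and multibase encodings and chunked Merkle DAGs, so the plain SHA-256 hex digest here is an illustrative stand-in, not an actual CID.

```python
# Simplified illustration of content addressing, the mechanism underlying
# IPFS CIDs. NOT the real CID format: actual CIDs wrap the digest in
# multihash/multibase encodings and hash a chunked Merkle DAG, not raw bytes.
import hashlib

def content_id(data: bytes) -> str:
    """Derive an identifier from the content itself, not its location."""
    return hashlib.sha256(data).hexdigest()

dataset_v1 = b"sample,reading\nA,0.42\nB,0.37\n"
dataset_v2 = b"sample,reading\nA,0.42\nB,0.38\n"  # one value changed

# Identical bytes always yield the identical identifier...
assert content_id(dataset_v1) == content_id(dataset_v1)
# ...while any modification yields a different one, so citing data by
# its content identifier doubles as a built-in fixity check.
assert content_id(dataset_v1) != content_id(dataset_v2)
```

Because the identifier is derived from the data, anyone holding a copy can serve it and anyone fetching it can verify it, which is what decouples access from any single publisher's server.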
Our group is actively working on use cases that bridge the gap between IPFS (and other distributed technologies, e.g. ledgers and smart contracts) and existing scholarly communication infrastructures, from data repositories and linked data to DOIs and ORCIDs.
You are cordially invited to join our efforts to make scientific data verifiable and discoverable in the long term.
During two days of presentations, workshops and discussions, you will be able to learn about new opportunities to employ openly licensed distributed technologies for creating the tree of knowledge.
You will gain hands-on experience setting up distributed infrastructure.
You will be able to contribute your skills, talents and ideas to making research data FAIR universally.