As some of you might have heard, we are planning to revive the DDI Developers Group, which has been dormant since 2014. To initiate this revival we plan to host a two-day DDI Developers Hackathon from Friday, 24 March to Saturday, 25 March 2023 at the Swedish National Data Service (SND) in Gothenburg. The event directly follows the Research Data Alliance (RDA) plenary in the same week, so some participants may already be at the location, saving travel costs.
If you are a developer, software engineer, or programmer using or implementing tools around the DDI suite of metadata standards, this event is a chance to exchange ideas with like-minded people. During the two days we would also like to create prototypical software implementations addressing current pain points or needed features in DDI tools.
The event is sponsored by the DDI Alliance, the Swedish National Data Service (SND), and the University of Applied Sciences of the Grisons, which will provide catering and the venue for the whole event. For a limited number of people, travel sponsorships can also be provided if their member organizations cannot cover the costs.
Please treat this mail as a save-the-date and forward it to interested technical personnel in your organizations. We will follow up with another mail after EDDI containing a registration link as well as an online document where proposals for topics can be entered.
Thanks in advance.
The DDI Developers Hackathon Organizing Team
Johan Fihn Marberg
Ingo Barkow and Hilde Orten, Chair and Vice Chair of the DDI Scientific Board, distributed a newsletter about Scientific Board activities during the fall. See:
Part of the European DDI User Conference, the training workshop will show how DDI metadata can describe single and longitudinal data collections, as well as preview the upcoming DDI-Cross Domain Integration (DDI-CDI) version that helps share and reuse data across domain boundaries and within and between research infrastructures. Finally, DDI tools developers will demo tools and services for implementing DDI metadata.
To attend, please register by 22 November 2022 via the EDDI conference web site: https://eddi22.sciencesconf.org/resource/page/id/5
Looking for DDI promotional materials? Visit https://ddialliance.org/about/promotion to find digital and physical items, including logos, handouts, brochures, and presentation templates.
Need physical materials -- like brochures, buttons, and stickers -- shipped to you? Just let us know!
The DDI Technical Committee is asking for ideas and thoughts on the identification and use of implementation languages in the DDI suite of products.
The purpose of this consultation is to:
- Identify priority implementation languages for DDI products (e.g. RDF, JSON, UML, XML)
- Identify style options for implementation languages
- Identify mappings to produce syntax representations
- Describe how to move from conceptual models to serialization
- Determine which aspects of implementation should be consistent
- Document options, decisions, and reasoning
- Provide guidance for variation from the agreed model:
  - based on applied use of the product
  - what needs to be noted and how (e.g. a consistent expression of exceptions and reasons)
There will be several ways to get involved, including:
- Email ideas, proposals and thoughts to the DDI Technical Committee Chair, Wendy Thomas, at firstname.lastname@example.org
- Attend the consultation and requirements gathering meeting on 2 December in Paris (after the European DDI User Conference)
If you wish to attend the meeting on 2 December, registration is not mandatory, but letting us know in advance would help us plan for numbers.
A workshop will be held on 5-6 December 2022 in Paris, France to produce a workplan and recommendations arising from the consultation.
Panel Session at DCMI Conference: The Cross-Domain Interoperability Framework: Coordinating Standards for Scalable, Practical FAIR Sharing
Oct 4, 2022 1:15 PM Eastern Daylight Time
We are now witnessing the emergence of FAIR data-sharing mechanisms in many areas, with the focus having shifted from the "what" to the "how" in many organizations. In many domains, there are a number of common standards – some of which can apply equally across domains, and some specific to the data, processes, and practices within that domain. The challenge of FAIR data sharing – ubiquitous, automated reuse of data and metadata – is particularly acute across domain and infrastructure boundaries, demanding a change in how data are described.
To meet this challenge, it is important to first understand how the different standards and models used to describe data can be employed, so that they speak not only to traditional users, but also to users coming from other domains. One major development in this area is the idea of a FAIR Digital Object Framework (FDOF), where information - both data and metadata - of interest for the discovery and reuse of data can be identified and obtained. The FDOF represents an initial step, but does not address many of the practical issues of interoperability. We must look at the intersection of standards of different types and how they fit into this picture: the idea that every FAIR resource is implemented according to an entirely new set of technical standards is not realistic. The FDOF serves as an agreed way to obtain needed FAIR resources and to learn enough about them to understand some related resources (e.g., metadata schemas) at the level of a protocol. It is not sufficient on its own to produce interoperability, which will require an ability to actually understand the metadata schemas being used. When it comes to standards, some parts of FAIR are better supported than others.
Discovery of FAIR resources increasingly relies on standards and approaches which are widely adopted, and often much the same across domains and institutional boundaries. DCAT, Schema.org, and Dublin Core-based cataloguing metadata are commonly found in many areas. For other aspects of FAIR, however, this degree of domain-agnostic standardization does not exist. Semantics and vocabularies are often deeply domain-dependent, and other important types of metadata needed for effective reuse - structural metadata, provenance, etc. - are also seen in many different forms, reflecting domain practice. Within any given domain, the standards requiring support may be well-understood, and limited in number. The same cannot typically be said when data from other domains is the target of reuse. If we are to make use of the FDOF as intended, we need to have a second tier of domain-agnostic standards which makes this profusion of models, schemas, etc. tractable. Such a second tier should be developed as a mechanism for domain-specific standards to be more easily exchanged and transformed. Technical standards such as RDF, JSON, XML (etc.) may provide a useful foundation, but they are not themselves sufficient.
The standard vocabularies and models which are understandable across domains provide an additional needed layer of interoperability. One good example of this is SKOS: many domains use concept systems of different types. If they are described in SKOS, they can at least be exchanged and processed in a coherent way across domain boundaries, even if the specifics of the concepts themselves need further attention. The EOSC Interoperability Framework introduced this idea of a leveled hierarchy of standards, and it is a useful way to understand what a practical approach to interoperability looks like as we progress from the universal toward the domain- and community-specific. This session presents the requirements which lead us to a middle tier of domain-agnostic standards in support of the FDOF, and proposes some candidates for consideration based on implementations and explorations to date. Some examples of such standards are provided, showing how they can work together to provide the complete information set needed to reuse data in a FAIR data-sharing scenario across domain and institutional boundaries.
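To make the SKOS point concrete, here is a minimal sketch (plain Python, no RDF library) of emitting a small concept system as SKOS Turtle. The scheme URI, concept identifiers, and labels are invented for illustration; they are not taken from any real codelist.

```python
# Sketch: serialize a small, hypothetical concept system as SKOS Turtle.
# All URIs and labels below are invented for illustration only.
concepts = {
    "maritalStatus/single": "Single",
    "maritalStatus/married": "Married",
}
SCHEME = "http://example.org/codelist/maritalStatus"

lines = [
    "@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
    "",
    f"<{SCHEME}> a skos:ConceptScheme ;",
    '    skos:prefLabel "Marital status"@en .',
]
for path, label in concepts.items():
    uri = f"http://example.org/codelist/{path}"
    lines += [
        "",
        f"<{uri}> a skos:Concept ;",
        f'    skos:prefLabel "{label}"@en ;',
        f"    skos:inScheme <{SCHEME}> .",
    ]

turtle = "\n".join(lines)
print(turtle)
```

Once a domain's codelist is expressed this way, any SKOS-aware consumer in another domain can at least enumerate the concepts and their labels, which is the cross-domain exchange the paragraph above describes.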
The focus of the session is on the "interoperability" and "reuse" elements of FAIR, but the session will touch on all aspects of FAIR data sharing, and how it might practically be realized. In particular, we aim to present these ideas to the DCMI community, to get feedback and to understand how this approach may intersect with current activities and thinking in the DCMI community and with related initiatives.
Speakers: Arofan Gregory (DDI Alliance and CODATA), Flavio Rizzolo (Statistics Canada), Franck Cotton (INSEE), Simon Hodson (CODATA).
Wednesday, 12 October, 14:00-15:30 UTC
Controlled vocabularies are an important form of metadata, and are used for a broad range of purposes. This webinar looks at how controlled vocabularies and similar types of concept schemes are used in and supported by the DDI standards. An introduction is provided for their use as qualifiers for metadata fields, as codelists and classifications for representing variables and response domains, as terms for describing coverage and supporting search, and for their use in defining variables, units, and populations. The support provided by DDI standards – including the XKOS RDF vocabulary – is highlighted. Real-world examples are provided, to illustrate the purposes described.
Presenters: Franck Cotton (INSEE), Christophe Dzikowski (INSEE), Arofan Gregory (DDI Alliance and CODATA)
The DDI Alliance is happy to sponsor the 2nd IASSIST Africa Regional Workshop, which will be held October 4-7, 2022 at the University of Ibadan, Nigeria. Jared Lyle, Executive Director of the DDI Alliance, will give a virtual presentation on Thursday, October 6th. For more information about the 2nd IASSIST Africa Regional Workshop, please visit: http://iassistafrica.org/
The Technical Committee met in Minneapolis, Minnesota, U.S.A. August 1-5, 2022. The focus of this meeting was finalizing the work needed to move DDI-Lifecycle to the COGS production platform.
The COGS production platform, currently used by SDTL, captures the content of DDI-Lifecycle in a set of CSV files and related documentation text, which is then transformed into multiple implementation languages (XML, RDF, JSON, UML/XMI, C++, etc.) and a documentation file in Sphinx. The primary goal is to provide a platform that supports intermittent testing, greater access for developers, and generated documentation and implementation structures. This will move DDI-Lifecycle away from being a hand-crafted structure.
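The general idea of generating implementation structures from a tabular source can be pictured with a minimal sketch (Python standard library only). The CSV columns and XML element names here are invented for illustration and are not COGS's actual input or output formats.

```python
# Sketch: transform a CSV description of model items into an XML listing.
# The CSV columns ("name", "type", "description") and the XML element
# names are hypothetical, not the real COGS formats.
import csv
import io
import xml.etree.ElementTree as ET

csv_text = """name,type,description
Title,string,The title of the study
Version,string,Version of the metadata
"""

root = ET.Element("itemTypes")
for row in csv.DictReader(io.StringIO(csv_text)):
    item = ET.SubElement(root, "itemType", name=row["name"], type=row["type"])
    doc = ET.SubElement(item, "documentation")
    doc.text = row["description"]

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

The same tabular source could just as easily drive emitters for RDF, JSON, or class stubs, which is what makes a single-source pipeline attractive compared to hand-maintaining each representation separately.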
In a series of sessions, the members also discussed the focus of the Technical Committee over the next few years given the role of the new Scientific Board and the diversification of the DDI product line.
An executive summary of the meeting is posted on the Technical Committee section of the DDI wiki: https://ddi-alliance.atlassian.net/wiki/spaces/DDI4/pages/2847703041/Aug...