Jason Wrage asks:
Would you mind taking a moment to summarize the process of making a specification semantically compatible? I assume that this might entail development of a vocabulary and embedding RDF within the target specification?
That is an excellent question, and something I've spent a few years contemplating. To begin with, there is a huge difference between designing a specification semantically from scratch, and "semanticizing" it after the fact.
In general, it depends a lot on the specification at hand, and in particular on things like:
- Is the specification based on some form of vocabulary-independent abstract model?
- Is the specification expressed in some kind of modeling language (UML, etc.)?
- Are the entities in the specification explicit?
- How does the specification handle identity for the metadata terms?
and so on. I have experience with semanticizing IEEE LOM, and the answers to the above in the LOM case are:
- Not explicitly - but the LOM tree structure is almost an abstract model.
- No
- No - there are many entities in the model that are not explicit (The Educational category/entity is a major issue)
- Tree-based identification such as General.Title
Based on the above, one can start to see the issues:
- Tree-based and semantic models don't fit well. We will have to disassemble the tree to semanticize, and then reconstruct it afterwards
- No UML model means no alternative to the tree view, so we need to base our decisions on the tree directly.
- We will have major headaches trying to identify the entities.
- We will need to make sure that information about the position in the hierarchy is preserved when introducing new properties. Compare General.Description and Educational.Description: very different semantics.
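To make the last issue concrete, here is a minimal sketch of why the hierarchy position must survive the flattening. The namespace and helper are made up for illustration; the real LOM/RDF binding handles this differently, but the principle is the same: two tree paths sharing a leaf name must become two distinct RDF properties.

```python
# A flattened, LOM-style record keyed by tree path (illustrative data):
lom_record = {
    "General.Description": "An introduction to RDF.",
    "Educational.Description": "Suitable for self-paced study.",
}

# Hypothetical namespace for the sketch:
LOM = "http://example.org/lom-rdf/"

def path_to_property(path):
    """Mint a property URI that keeps the category in the name."""
    category, leaf = path.split(".", 1)
    return LOM + category.lower() + leaf  # e.g. .../generalDescription

triples = [
    ("http://example.org/resource/1", path_to_property(p), v)
    for p, v in lom_record.items()
]

for t in triples:
    print(t)
```

Dropping the category from the property name would collapse both descriptions into one predicate and lose exactly the semantics the tree position carried.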
I wrote at length about the process
here. The general method for LOM was:
- Isolate properties and objects. The first step involves extracting an object-oriented view of the LOM data model. Which LOM elements are objects, and which are relations between objects? This sounds relatively easy, but it is in effect the core of the semantic translation.
- Find related Dublin Core elements and encodings. For the LOM case, it was very important to try to reuse existing vocabulary. After having found the relevant Dublin Core elements, the precise relation to the Dublin Core element needed to be defined. There are essentially four ways in which a LOM element might be related to Dublin Core:
- By being identical to some Dublin Core element.
- By being a sub-property (= refinement) of a Dublin Core element.
- By being a super-property of a Dublin Core element.
- By using literal values that could be specified using a Dublin Core Syntax Encoding Scheme.
- Define an RDF vocabulary matching your model.
- Make the RDF namespaces available on the web, following vocabulary publishing guidelines.
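The mapping and vocabulary steps can be sketched in miniature. The namespace and the specific mappings below are illustrative assumptions, not the official LOM/RDF binding: the idea is that "identical" elements simply reuse the Dublin Core property, while "refinements" get a new property declared as rdfs:subPropertyOf the DC term.

```python
DC = "http://purl.org/dc/elements/1.1/"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
LOM = "http://example.org/lom-rdf/"  # made-up namespace for this sketch

# (LOM element, how it relates to Dublin Core, DC term) -- illustrative:
mappings = [
    ("General.Title", "identical", DC + "title"),
    ("Educational.Description", "refines", DC + "description"),
]

schema_triples = []
for path, relation, dc_term in mappings:
    if relation == "identical":
        # Reuse the DC property directly; no new term needs declaring.
        continue
    category, leaf = path.split(".", 1)
    new_prop = LOM + category.lower() + leaf
    # A refinement becomes an rdfs:subPropertyOf assertion in the vocabulary:
    schema_triples.append((new_prop, RDFS + "subPropertyOf", dc_term))

print(schema_triples)
```

The payoff of the sub-property declarations is that a plain Dublin Core consumer can still find the descriptions via RDFS inference, even though it knows nothing about LOM.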
Nowadays, there are a few additional steps that might be interesting.
- The Dublin Core Description Set Profile model allows for the construction of application profiles of RDF data, promising syntactic validation of Dublin Core metadata. This is otherwise something that many people miss when going from XML to RDF. A general RDF equivalent is something Alistair Miles has written about.
- GRDDL support in your XML formats will allow semantic web clients to extract RDF information from your XML data. With the above vocabularies, such data can be of high quality.
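GRDDL proper works by attaching an XSLT transform to the XML format; as a rough stand-in, here is a hand-rolled Python sketch of the idea, harvesting triples from a LOM-like XML instance. The element names, namespace, and mapping table are simplified assumptions for illustration.

```python
import xml.etree.ElementTree as ET

xml_doc = """
<lom>
  <general>
    <title>Semantic Web Primer</title>
  </general>
  <educational>
    <description>For self-paced study.</description>
  </educational>
</lom>
"""

DC = "http://purl.org/dc/elements/1.1/"
LOM = "http://example.org/lom-rdf/"  # made-up namespace
subject = "http://example.org/resource/1"

# Tree path -> RDF property; this table plays the role of the transform:
rules = {
    "general/title": DC + "title",
    "educational/description": LOM + "educationalDescription",
}

root = ET.fromstring(xml_doc)
triples = [
    (subject, prop, el.text)
    for path, prop in rules.items()
    for el in root.findall(path)
]

for t in triples:
    print(t)
```

With a published vocabulary behind those property URIs, a semantic web client extracting data this way gets triples it can actually reason over, not just re-serialized XML.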
I'm sure there are more things as well.
See also the articles linked from
this page.
Not sure this qualifies as a summary, but still...
1 comment:
Thank you for the follow up! I apologize for my own delayed response. I do a lot of work in and around the SIF specification. We are beginning to discuss semantics with respect to our data model. This is very interesting! Thanks again!