
This page describes the principles of creating core vocabularies and application profiles. It is important to understand these principles so that any ambiguities are avoided.

Unlike many UML class diagrams, a core vocabulary or application profile cannot be interpreted in a multitude of ways: both are based on formal knowledge representation languages and can therefore be validated unambiguously.

For this reason, it is important that the vocabularies and profiles you create are semantically equivalent to what you intend them to be. It is not enough to describe with human-readable annotations what your classes, attributes, associations and constraints mean; you must also specify that intent with logic. This document helps you achieve this goal.

1.1. The Linked Data Modeling paradigm

First you need to consider the type of model you are creating. When modeling on the FI-Platform, you should not think in the traditional terms of conceptual, logical and physical models, nor in the terms of typical UML profiles. Models created on the FI-Platform can be used for these purposes, but the RDF-based knowledge representation language used here has more expressivity than typical data description languages.

When modeling on the FI-Platform, you should keep in mind these essential principles:

You're working with a graph

The RDF data model is a very generalized graph which is able to describe many kinds of data structures. Both data models and instance data are described with the same structure: triples of two nodes and an edge connecting them. RDF graphs can be represented in a very simple three-column tabular form: <subject, predicate, object>. Each subject and object is an entity, or resource in linked data jargon (for example a class, an instance of a class, a literal value, etc.), and predicates are entities that link them together. For example, a subclass association between classes A and B would be represented as <B, subclass, A>, or visually as two nodes in a graph linked by a subclass edge. Attribute values are represented with the same structure, with the attribute entity acting as the edge and the literal attribute value acting as the object node: <A, someAttribute, "foobar">.
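
As a minimal sketch, the two triples above can be written in Turtle syntax as follows (the names ex:A, ex:B and ex:someAttribute are illustrative, not part of any FI-Platform model):

    @prefix ex:   <https://example.org/ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # <B, subclass, A>: class B is a subclass of class A
    ex:B rdfs:subClassOf ex:A .

    # <A, someAttribute, "foobar">: a literal attribute value
    ex:A ex:someAttribute "foobar" .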

Polyhierarchies are supported

In traditional data modeling, inheritance is typically the only way to represent hierarchical structures, and multiple inheritance is either not allowed or severely limited. Here, by contrast, building hierarchies with multiple superclasses is allowed and in some cases even necessary.
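
A minimal Turtle sketch of such a polyhierarchy (all class names are invented for illustration):

    @prefix ex:   <https://example.org/ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # ElectricCar participates in two hierarchies at once
    ex:ElectricCar rdfs:subClassOf ex:Car , ex:ElectricVehicle .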

All entities have identities

In most data modeling languages, some entities have no identity of their own, as they are inherently part of their defining entities. As an example, UML attributes are not entities that can be individually referenced; they exist only as part of the class that defines them. This means that a model might have multiple attributes with the same identifying name and meaning, but there is no straightforward technical way to identify these attributes as being "the same".

In RDF, every resource (entity) has a unique identifier, which allows any defined resource to be reused and thus reduces data duplication and overlapping definitions. Both data structures and instance data share the same URI-based naming principle, which on the FI-Platform means HTTPS IRIs.

Resource identifiers can generally be minted (declared) as anything that adheres to RFC 3986 (the URI specification), but on the FI-Platform minting is controlled by enforcing a namespace under https://iri.suomi.fi/. This ensures that model resources will not accidentally collide with resources elsewhere on the web.
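
A sketch of what such an identifier looks like in practice (the model path and resource name are invented for illustration; only the namespace prefix reflects the FI-Platform convention):

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .

    # A hypothetical class IRI minted under the controlled namespace
    <https://iri.suomi.fi/model/example-model/Person> a owl:Class ;
        rdfs:label "Person"@en .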

No strict separation between data and metadata

Due to the abovementioned identifiers, it is possible to add descriptive metadata to any entity, either alongside the entity itself or externally, by referring to the entity through its identifier.
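
A minimal sketch in Turtle, assuming an illustrative ex:Person class defined elsewhere:

    @prefix ex:   <https://example.org/ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # This triple can live in a completely separate dataset, yet it still
    # describes ex:Person, because the identifier is global
    ex:Person rdfs:comment "A natural person."@en .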

No strict separation between classes and instances

So-called punning means that classes can also act as instances; there is no hard line separating the two. This does not make the situation ambiguous, as there are clear logical rules for deducing which role is meant in each statement.
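
A minimal sketch of punning in Turtle (all names illustrative): ex:Eagle is used both as a class of individual birds and as an instance of the class ex:Species:

    @prefix ex:  <https://example.org/ns#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    ex:Eagle a owl:Class .    # Eagle used as a class...
    ex:Eagle a ex:Species .   # ...and as an instance of Species
    ex:sam a ex:Eagle .       # an individual eagle typed with that class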

No strict separation between conceptual and logical model

When properly annotated, the actual model itself acts as a machine-readable conceptual model with a rich layer of logical model features on top of it. This also applies to schemas (application profiles), where the schema itself can be directly annotated. The conceptual link can be achieved for example by describing the entities conceptually as a SKOS vocabulary (also RDF-based) and referring to the SKOS concepts from the data model (thus creating a machine-readable link between the terminological and logical models).
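
One possible way to make this terminological link concrete in Turtle (the names are illustrative, and dcterms:subject is one commonly used linking predicate, not a mandated FI-Platform mechanism):

    @prefix ex:      <https://example.org/ns#> .
    @prefix owl:     <http://www.w3.org/2002/07/owl#> .
    @prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .

    # The terminological definition as a SKOS concept
    ex:personConcept a skos:Concept ;
        skos:prefLabel "person"@en .

    # The logical class refers to the concept that defines its meaning
    ex:Person a owl:Class ;
        dcterms:subject ex:personConcept .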

1.2. Which one to create: a Core Vocabulary or an Application Profile?

Which model type should you start with? This naturally depends on your use-case. You might be defining a database schema, building a service that distributes information products adhering to a specific schema, trying to integrate two datasets... In general, all these and other use-cases start with the following workflow:

  1. In general, the expectation is that your data needs to be typed, structured and annotated with metadata at least on some level. This is where you would traditionally employ a conceptual model and terminological or controlled vocabularies as the semantic basis for your modeling. You might have already done this part, but if not, the FI-Platform can help you arrive at a logically consistent conceptual model.
  2. Your domain's conceptual model needs to be created in a formal semantic form as a core vocabulary. The advantage compared to a traditional separate conceptual and logical model is that here they are part of the same definition and the logical soundness of the definitions can be validated. Thus, there is no risk of ending up with a logical model that would be based on a conceptually ambiguous (and potentially internally inconsistent) definition.
  3. When you've defined and validated your core vocabulary, you have a sound basis to annotate your data with. Annotating in principle consists both of typing and describing the data (i.e. adding metadata). Annotating your data allows for inferencing (deducing indirect facts from the data), logical validation (checking if the data adheres to the definitions set in the core vocabulary), harmonizing datasets, etc.
  4. Finally, if you intend to create schemas e.g. for validating data passing through an API or ensuring that a specific message payload has a valid structure and contents, you need to create an application profile. Application profiles are created based on core vocabularies, i.e. an application profile looks for data annotated with some core vocabulary concepts and then applies validation rules.

Thus, the principal difference between these two is:

  • If you want to annotate data, check its logical soundness or infer new facts from it, you need a core vocabulary. With a core vocabulary you are essentially making a specification stating "individuals that fit these criteria belong to these classes".
  • If you want to validate the data structure or do anything you'd traditionally do with a schema, you need an application profile. With an application profile you are essentially making a specification stating "graph structures matching these patterns are valid".
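
A minimal Turtle sketch of this difference, with all names invented for illustration: the OWL axiom classifies individuals, while the SHACL shape validates structure:

    @prefix ex:   <https://example.org/ns#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix sh:   <http://www.w3.org/ns/shacl#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

    # Core vocabulary: any subject carrying ex:hetu is inferred to be a Person
    ex:hetu rdfs:domain ex:Person .

    # Application profile: in this context a Person must carry exactly one
    # ex:hetu value, and it must be a string
    ex:PersonShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [
            sh:path ex:hetu ;
            sh:datatype xsd:string ;
            sh:minCount 1 ;
            sh:maxCount 1 ;
        ] .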

Core Vocabularies in a Nutshell

As mentioned, the idea of a Core Vocabulary is to semantically describe the resources (entities) you will use to describe your data. In other words, what typically ends up as conceptual model documentation or a diagram is here described by a formal model.

Core vocabularies are technically speaking ontologies based on the Web Ontology Language (OWL), which is a knowledge representation language. OWL makes it possible to connect different data models (and thus data annotated with different models) and make logical inferences on which resources are equivalent, which have a subset relationship, which are complements, etc. As the name implies, there is a heavy emphasis on knowledge - we are not simply labeling data but describing what it means in a machine-interpretable manner.

A key feature in OWL is the capability to infer facts from the data that are not immediately apparent - especially in large and complex datasets. This makes the distinction between core and derived data more apparent than in traditional data modeling scenarios, and helps to avoid modeling practices that would increase redundancy or the potential for inconsistencies in the data. Additionally, connecting two core vocabularies allows for inferencing between them (and their data) and thus harmonizing them. This allows, for example, revealing parts of the datasets that represent the same data despite being named or structured differently.
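
A minimal sketch of such a harmonizing link in Turtle (both vocabularies are invented for illustration):

    @prefix v1:  <https://example.org/vocabulary-one#> .
    @prefix v2:  <https://example.org/vocabulary-two#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # A single equivalence axiom harmonizes the two vocabularies: a reasoner
    # can now treat data typed as v1:Municipality and v2:Kunta as the same
    v1:Municipality owl:equivalentClass v2:Kunta .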

The OWL language has multiple profiles for different kinds of inferencing. The one currently selected for the FI-Platform (OWL 2 EL) is computationally simple, but still logically expressive enough to fulfill most modeling needs. An important reminder when doing core vocabulary modeling is to constantly ask: is the feature I am after part of a specific use case (and thus an application profile), or is it essential to the definition of these concepts?

Application Profiles in a Nutshell

Application profiles fill the need not only to validate the meaning and semantic consistency of data and specifications, but also to enforce a specific syntactic structure and content for data.

Application profiles are based on the Shapes Constraint Language (SHACL), which does not deal with inferencing and should principally be considered a pattern matching validation language. A SHACL model can be used to find resources based on their type, name, relationships or other properties and check for various conditions. The SHACL model can also define the kind of validation messages that are produced for the checked patterns.

Following key Semantic Web principles, SHACL validation is not based on whitelisting (deny all, permit some) like traditional closed schema definitions. Instead, SHACL works by validating the patterns we are interested in and ignoring everything else. Due to the nature of RDF data, this doesn't cause problems, as triples that are not part of the validated patterns can simply be passed over. It is also possible to extend SHACL validation with SHACL-SPARQL or SHACL JavaScript extensions to perform a vast amount of pre/postprocessing and validation of the data, though this is neither currently supported by the FI-Platform nor within the scope of this document.
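
A minimal sketch of a shape that also carries its own validation message (all names invented for illustration):

    @prefix ex:  <https://example.org/ns#> .
    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

    ex:AgeShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [
            sh:path ex:age ;
            sh:datatype xsd:integer ;
            sh:minInclusive 0 ;
            sh:message "age must be a non-negative integer"@en ;
        ] .
    # Triples about ex:Person that use other predicates are simply ignored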

1.3. Core Vocabulary modeling

When modeling a core vocabulary, you are essentially creating three types of resources:

Attributes

Attributes are in principle very similar to attribute declarations in other data modeling languages. There are nevertheless some differences that you need to take into account:

  1. Attributes can be used without classes. For an attribute definition, one can specify rdfs:domain and/or rdfs:range. The domain refers to the subject in the <subject, attribute, literal value> triple, and the range refers to the literal value. In practice this means that when such a triple is found in the data, its subject is assumed to be of the type specified by rdfs:domain, and the literal's datatype is assumed to be the one specified by rdfs:range (see the sketch after this list).
  2. The attribute can be declared as functional, meaning that it will have at most one value. As an example, one could define a functional attribute called age with a domain of Person. This would then indicate that each instance of Person can have at most one literal value for their age attribute. On the other hand, if the functional declaration is not used, the same attribute (e.g. nickname) can point to multiple literal values.
  3. Attribute datatypes are by default XSD datatypes, which come with their own datatype hierarchy (see the XSD datatypes specification).
  4. In core vocabularies it is sometimes preferable to define attribute datatype on a very general level, for example as rdfs:Literal. This allows using the same attribute in a multitude of application profiles with the same intended semantic meaning but enforcing a context-specific precise datatype in each application profile.
  5. Attributes can have hierarchies. This is an often overlooked but useful feature for inferencing. As an example, you could create a generic attribute called Identifier that represents the group of all attributes that act as identifiers. You could then create sub-attributes, for example TIN (Tax Identification Number), HeTu (the Finnish personal identity code) and so on.
  6. Attributes can have explicit equivalence declarations (i.e. an attribute in this model is declared to be equivalent to some other attribute).
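
A minimal Turtle sketch combining points 1, 2, 4 and 5 above (all names are invented for illustration):

    @prefix ex:   <https://example.org/ns#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Point 1: domain and range declared on the attribute itself
    # Point 2: functional - each Person has at most one age value
    ex:age a owl:DatatypeProperty , owl:FunctionalProperty ;
        rdfs:domain ex:Person ;
        rdfs:range  rdfs:Literal .    # point 4: a deliberately general range

    # Point 5: an attribute hierarchy usable for inferencing
    ex:identifier a owl:DatatypeProperty .
    ex:heTu rdfs:subPropertyOf ex:identifier .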

Associations

Associations are similarly not drastically different compared to other languages. There are nevertheless some noteworthy things to consider:

  1. Associations can be used without classes as well. The rdfs:domain and rdfs:range options can here be used to define the source and target classes for the uses of a specific association (see the sketch after this list). As an example, the association hasParent might have Person as both its domain and range, meaning that all triples using this association are assumed to describe connections between instances of Person.
  2. Associations in RDF are binary, meaning that the triple <..., association, ...> will always connect two resources with the association acting as the predicate.
  3. Associations can have hierarchies similarly to attributes.
  4. Associations have flags for declaring them reflexive (meaning that every resource is assumed to be connected to itself by the association) or transitive (meaning that if A and B as well as B and C are connected by association X, then A is also connected to C by association X).
  5. Associations can have explicit equivalence declarations (i.e. an association in this model is declared to be equivalent to some other association).
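
A minimal Turtle sketch of points 1, 3 and 4 above (all names are invented for illustration):

    @prefix ex:   <https://example.org/ns#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Point 1: domain and range without attaching the association to a class
    ex:hasParent a owl:ObjectProperty ;
        rdfs:domain ex:Person ;
        rdfs:range  ex:Person .

    # Point 3: an association hierarchy
    ex:hasMother rdfs:subPropertyOf ex:hasParent .

    # Point 4: transitive - an ancestor's ancestor is also an ancestor
    ex:hasAncestor a owl:ObjectProperty , owl:TransitiveProperty .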

Classes

Classes form the most expressive backbone of OWL. Classes can simply utilize the rdfs:subClassOf association to create hierarchies, but typically classes contain property restrictions - in the current FI-Platform case, very simple ones. A class can state existential restrictions requiring that the members of the class must have specific attributes and/or associations. Further cardinality restrictions are not declared here, as the chosen OWL profile does not support them; cardinality can instead be explicitly defined in an application profile.
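
A minimal Turtle sketch of such a class definition (all names are invented for illustration; owl:someValuesFrom expresses the existential restriction, which OWL 2 EL supports):

    @prefix ex:   <https://example.org/ns#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Every Employee is a Person that works for at least one Organization
    ex:Employee a owl:Class ;
        rdfs:subClassOf ex:Person ;
        rdfs:subClassOf [
            a owl:Restriction ;
            owl:onProperty ex:worksFor ;
            owl:someValuesFrom ex:Organization ;
        ] .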


