Posts Tagged ‘standards’


15.02.2011

Ontology – 101

posted by Karsten

in Uncategorized

In my competency work I’ve used ontologies extensively, and for those who don’t know what they are, I’ve decided to make a small simple “101” blog about them.

Ontology stems from a rather old classical philosophical study which deals with the nature and organisation of reality. When I say old I do mean it – I even referenced Aristotle in my PhD thesis, a fact I found incredibly cool!

Within computer science, shortly put, an ontology is a logical theory that gives an explicit (partial) account of a conceptualization, i.e. the rules and constraints governing what exists in a reality. In other words, it is a description of the individual objects in a "world", their (actual and possible) relationships, and the constraints upon them. Take the simple "boxes on a table" world in the picture. The objects would be the table and the boxes = {a, b, c, d, e}. Several relations could then be defined, such as table (the boxes that rest directly on the table, so in this world c and e) and on (boxes resting on a specific other box, so in this world [a,b], [b,c], [d,e]), and more besides. All these relations are "intensional": they are specifications of what can be, not just what actually is, so they can be used to describe other worlds, such as the below world:
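The relations of this toy world can be sketched in a few lines of code. This is purely illustrative (the names `table`, `on` and `above` come from the description above; `above` is an extra derived relation I add for demonstration):

```python
# The "boxes on a table" world: objects plus two relations.
objects = {"a", "b", "c", "d", "e"}

table = {"c", "e"}                          # boxes resting directly on the table
on = {("a", "b"), ("b", "c"), ("d", "e")}   # (upper, lower) pairs

def above(x, y, on_rel):
    """x is above y if x is on y, directly or transitively."""
    return (x, y) in on_rel or any(
        (x, z) in on_rel and above(z, y, on_rel) for z in objects
    )

print(above("a", "c", on))  # True: a is on b, and b is on c
```

Because the relations are intensional, the same definitions can be re-used with different extensions of `table` and `on` to describe a different arrangement of the boxes.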

This gives the opportunity to share ontologies and use them to describe different worlds and compare those worlds with each other. This is what makes them so interesting in computing. Traditionally there are four different kinds of ontologies: top-level/upper, domain, task and application (Guarino).

  • Top-level ontology (also called upper ontology) should describe high level general concepts, which are independent of domain or specific problems. This would typically be concepts like matter, object, event, time, etc. These should allow for large communities of users, thus enabling tools that work across domains and applications of all the users.
  • Domain ontologies and task ontologies should define the vocabularies and intensional relations, respectively, of generic domains (e.g. medicine, competency or education) and generic tasks and activities (e.g. diagnosis, hiring or reflection). This is achieved by specialising the terms defined in the top-level ontology layer. In practice these two ontology types are often combined into a single domain-and-task ontology at this level, one per domain, as this is easier for the ontology engineer to work with.
  • Application ontologies describe specialised concepts that correspond to roles in the domain performing certain activities. These make use of both the domain and task ontologies.

When developing ontologies for use in the real world, there are five principles that one ought to follow (Gruber):

  1. Clarity – the formalism of ontological languages ensures this.
  2. Coherence – there should not be contradictions in the ontology.
  3. Extendibility – develop with other uses in mind.
  4. Minimal encoding bias – an encoding bias results when representation choices are made purely for the convenience of notation or implementation. Avoiding it enables different computer systems to talk to each other.
  5. Minimal ontological commitment – should only make the assumptions needed for agents to work together leaving the freedom of the agents to create the “worlds” they need internally.
I think this is enough ontology theory for one blog post; if you are interested, the first chapter of my PhD thesis goes into more depth.

There is then a large range of tools that use these theories to describe the world, and several ontologies that can be used to share knowledge. OWL is probably the most popular ontology language in use, as it is the W3C language for the Semantic Web; it is commonly serialised as RDF/XML, but there are many other syntaxes and ways of using it.
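Underneath OWL sits the RDF triple model: everything is a (subject, predicate, object) statement, and a reasoner derives new statements from them. A toy illustration in plain Python (the class names here are hypothetical examples, not from any published ontology):

```python
# Toy triple store: each statement is a (subject, predicate, object) tuple,
# as in RDF. rdfs:subClassOf is transitive, which a reasoner exploits.
triples = {
    ("Competency", "rdfs:subClassOf", "Concept"),
    ("ProgrammingSkill", "rdfs:subClassOf", "Competency"),
    ("JavaSkill", "rdfs:subClassOf", "ProgrammingSkill"),
}

def superclasses(cls):
    """All (transitive) superclasses of cls, as a reasoner would infer."""
    direct = {o for (s, p, o) in triples if s == cls and p == "rdfs:subClassOf"}
    result = set(direct)
    for d in direct:
        result |= superclasses(d)
    return result

print(superclasses("JavaSkill"))  # JavaSkill is (transitively) a Concept
```

A real OWL toolchain does far more than subclass closure, of course, but the triple model is the common foundation.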

A good starting point for using ontologies would be Protégé, an ontology editor from Stanford University. There is tons of help on their web site, and the tool really aids the understanding of ontologies; sometimes it just makes sense to see things graphically rather than read them. Having said that, I don't use it anymore (it isn't even installed on my computer), because once you've understood ontologies it becomes much faster to write them by hand.

If you want a more comprehensive "beginner's guide", then I personally found this paper by Natalya F. Noy and Deborah L. McGuinness useful when I started down this ontological road.

11.02.2011

Competency Standards / Representation

posted by Karsten

in Uncategorized

This blog post is the first in a series about competency maps / mapping. You will be able to find them through the tag "competency maps" on my site. The material is primarily taken from my thesis (work done through the TRACE project together with Prof Keith Baker and Prof Shirley Williams), and you are encouraged to find it in there. This is "merely" a taster…

Competencies are a "funny" concept – a made-up concept – to describe something about people, learning/course outcomes, job requirements, etc. There are many different definitions of them, and there is much confusion about what they really mean. I have sat through numerous meetings and conferences debating the subtle difference between a competency and a competence, and when each should be used. Regarding this I'm very pragmatic, i.e. I am agnostic to the definition and to the difference between 'y' and 'e' – I usually think of them as synonymous! What I have is a tool which can be used to describe them, no matter the definition, and to interrelate them, again, no matter the definition. Users of the tool can do what they like, and thereby follow their own "conviction".

What I have done, though, is to follow standards and set up a system of ontological inference that is very loose – contrary to "normal" ontology tools, which are rigid – and based on a closed-world assumption rather than an open-world assumption.
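The distinction between the two assumptions can be sketched in a few lines (the facts here are made up for illustration): under a closed-world assumption, anything not stated is taken to be false; under an open-world assumption, it is merely unknown.

```python
# One recorded fact about a hypothetical person.
facts = {("alice", "has_competency", "java")}

def holds_closed(fact):
    """Closed world: a fact that is not recorded is false."""
    return fact in facts

def holds_open(fact):
    """Open world: a fact that is not recorded is unknown (None)."""
    return True if fact in facts else None

query = ("alice", "has_competency", "python")
print(holds_closed(query))  # False
print(holds_open(query))    # None (unknown)
```

For competency data, where a record of what somebody can do is necessarily incomplete but decisions still have to be made from it, the closed-world reading is often the more practical one.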

RCD standard

The system is based on Reusable Competency Definitions (RCDs), a very simple standard that functions as a wrapper for definitions of competencies; what a definition contains is up to the user. It is an IEEE standard ("Learning Technology Data Model for Reusable Competency Definitions," New York, IEEE 1484.20.1, 2007) and has been used as the foundation of other standards, such as HR-XML.

The RCD for a specific competency must contain:
  • A unique identifier
  • A title
Optionally it can also have:
  • A description (natural language)
  • A definition (a reference to another repository or definition)
  • Metadata (further information about a particular competency; this is not limited and can be any size or format)

The main problem with this standard is that the main parts (title, description and definition) are in human-readable form, so if any semantic meaning is to be made available to computers there must be additional knowledge, e.g. attached in the metadata part, connections to other RCDs with metadata, or external bindings to other data structures such as ontologies. Furthermore, RCDs are only a partial representation of competencies, as they are only supposed to define competencies; evidence, context, dimensions, etc. are not included. Evidence is an especially important issue for many competency descriptions, and the RCD therefore needs to be backed up by some other material to be able to validate the competencies.
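The structure described above is small enough to sketch as a record type. The field names follow this post's summary of IEEE 1484.20.1; this is not an official binding, and the identifier value is a made-up example:

```python
# Minimal sketch of an RCD record: two required fields, three optional ones.
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RCD:
    identifier: str                     # required: unique identifier
    title: str                          # required: human-readable title
    description: Optional[str] = None   # optional natural-language description
    definition: Optional[str] = None    # optional reference to another source
    metadata: dict[str, Any] = field(default_factory=dict)  # unrestricted extras

rcd = RCD(identifier="urn:example:rcd:42", title="Java programming")
print(rcd.title)  # Java programming
```

Note how everything a machine could reason over beyond the identifier has to be smuggled into `metadata` or attached externally, which is exactly the problem described above.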

Competency mapping is a technique where different competencies, usually in RCD form, are related to each other with semantic links. The first real attempt at making a standard for these, as far as I know, was made by Claude Ostyn with his Simple Reusable Competency Mappings (SRCM). The biggest problem with this proposed standard is, in my opinion, that it doesn't have any way of describing a competency that somebody "has", only preferred or necessary competencies, which makes it rather difficult to use in real-world applications. In my competency suite I created a simplified version of this standard which includes a "has" relation. I called it the VSRCM.

I defined VSRCMs, like RCDs, as consisting of:

  • A unique identifier
  • A title

Optionally it can also have:

  • A description
  • Metadata (further information about a particular competency, this is not limited)

(Note that a VSRCM does not have a definition section as RCDs do; the graph provides an improved equivalent functionality.)

Additionally the VSRCM has a graph of nodes with attached competencies. The graph must have at least one entry node (the default entry node).

Each node has properties:

Competency

  • RCD

or

  • VSRCM (note this could be recursive)

Proficiency (levelling can be user-defined, with support for ontological definitions)

  • Required
  • Desired
  • Current (has)

Relationship to other nodes within the graph:

  • All – That is, all the proficiencies of the competencies of the “sub-nodes” need to be “fulfilled” for this relationship to be successful
  • Any – That is, one or more of the proficiencies of the competencies of the “sub-nodes” need to be “fulfilled” for this relationship to be successful.
  • If (either True or False). This is used to represent alternative proficiencies of competencies; for example, a taxi driver based in London is required to have specific knowledge of the area, while a taxi driver elsewhere may only require general map reading.
The “RCD and VSRCM” figure shows the different components of this “standard” and how they interrelate with RCD.

RCD and VSRCM
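The All / Any / If semantics described above can be sketched as a recursive evaluation over the node graph. The data model and names here are my own reading of the VSRCM description in this post, not a published schema, and the competency identifiers are hypothetical:

```python
# Evaluate a VSRCM-style node graph against a set of held competencies.
# Leaf nodes reference an RCD; inner nodes combine children with all/any/if.

def fulfilled(node, held):
    """Return True if the competencies this node requires are covered by held."""
    kind = node.get("relation")
    if kind is None:                    # leaf: a single RCD reference
        return node["rcd"] in held
    if kind == "all":                   # every sub-node must be fulfilled
        return all(fulfilled(c, held) for c in node["children"])
    if kind == "any":                   # at least one sub-node suffices
        return any(fulfilled(c, held) for c in node["children"])
    if kind == "if":                    # alternative branches on a condition
        cond, then, other = node["children"]
        branch = then if fulfilled(cond, held) else other
        return fulfilled(branch, held)
    raise ValueError(f"unknown relation: {kind}")

# The taxi-driver example: London drivers need the local knowledge,
# drivers elsewhere only need general map reading.
graph = {
    "relation": "if",
    "children": [
        {"rcd": "based-in-london"},      # condition
        {"rcd": "knowledge-of-london"},  # required if condition holds
        {"rcd": "map-reading"},          # required otherwise
    ],
}
print(fulfilled(graph, {"based-in-london", "knowledge-of-london"}))  # True
```

Because a node's competency may itself be another VSRCM, the same recursion handles nested maps; proficiency levels (required / desired / has) would be carried alongside each RCD reference in a fuller implementation.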
