*there are probably more than 4
**it's not all that easy
I've been preachy recently in complaining about how the ontology world doesn't apply enough software engineering practices in producing ontologies. I thought it was about time I explained some of the things I think they could do, by talking specifically about the things we do here to help us. There's an expanded version of this in a paper accepted for the 2013 OWLED workshop, for those attending.
1. Whatcha gonna do?
The first thing we steal from software engineering is our overall methodology. I have talked a
bit about this previously at ICBO 2012, where I presented on how we applied Agile Software Engineering Methods to the development of
the Software Ontology. There are a few things this gives us. It helps us prioritise development. Collecting requirements is not usually a problem - there are always bucket loads. As with most projects, there is always more work than people, and we need to focus on the things that are most important - which can change month to month.
[Image caption: The red stuff means we're doing it right (that is, we're catching the stuff we're doing wrong early).]
We use a few agile methods to help with this.
Priority poker and buy-a-feature have been of particular use when engaging with users, and are also reasonably fun to do. They also help keep our major stakeholders involved with the process of development, which is useful because it means there are no big surprises at the end of each sprint (i.e. cycle of development). This way everyone knows what we're gonna do, and so do we.
2. Building a house with bricks on your back
One of the primary ontologies I'm currently involved in developing is the
Experimental Factor Ontology. EFO is an
application ontology - that is to say it is built to serve application use cases, distinct from a
reference ontology, which is built as a de facto reference for a given domain. When building EFO we try to reuse as many reference ontologies as we deem suitable (I won't expand on what this means here). But needless to say, our reliance on external resources introduces a coupling - in the same way it does in software projects using libraries. I often refer to this as trying to build a house with the bricks strapped to your back; nice to know you have them close by, but they're heavy. We have some
importing code we use to help us manage these imports, based on
MIREOT. This still leaves us issues to look out for. For example, there is much variation in annotation property names, such as those used for 'synonyms', so we need to merge these so our applications know where to find them. Where imports are not possible or suitable, we mint new EFO classes. Since multiple developers from various sites can mint these, we have built some tooling for central URI management to avoid clashes that could otherwise easily occur.
URIGen is this tool - see
my previous blog for more on this.
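Going back to the synonym problem: as a flavour of what that merging step involves, here is a minimal sketch against the OWL API (version 4 style). The source property IRIs, the target property and the file name are assumptions for illustration rather than EFO's actual configuration; the point is just that synonym assertions made under various properties get re-asserted under the single property our applications query.

```java
import java.io.File;
import java.util.*;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class MergeSynonymProperties {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = man.getOWLDataFactory();
        // Hypothetical local copy of the editors' version of the ontology
        OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("efo-editors.owl"));

        // Synonym properties seen in imported ontologies (illustrative list only)
        Set<IRI> sourceSynonyms = new HashSet<>(Arrays.asList(
            IRI.create("http://www.geneontology.org/formats/oboInOwl#hasExactSynonym"),
            IRI.create("http://purl.obolibrary.org/obo/IAO_0000118"))); // IAO 'alternative term'

        // The single property our applications look for (assumed target IRI)
        OWLAnnotationProperty target = df.getOWLAnnotationProperty(
            IRI.create("http://www.ebi.ac.uk/efo/alternative_term"));

        // Collect the matching assertions first, then swap the property over
        List<OWLAnnotationAssertionAxiom> toMerge = new ArrayList<>();
        for (OWLAnnotationAssertionAxiom ax : ont.getAxioms(AxiomType.ANNOTATION_ASSERTION)) {
            if (sourceSynonyms.contains(ax.getProperty().getIRI())) {
                toMerge.add(ax);
            }
        }
        for (OWLAnnotationAssertionAxiom ax : toMerge) {
            man.removeAxiom(ont, ax);
            man.addAxiom(ont, df.getOWLAnnotationAssertionAxiom(target, ax.getSubject(), ax.getValue()));
        }
        man.saveOntology(ont);
    }
}
```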
To keep track of external changes and to produce our release notes we use our
Bubastis tool, which does a simple syntactic diff across two ontologies to tell you what has changed, been added and been deleted. Keeping track of what's going on externally is a complicated process and brings
baggage with it. There is a discussion to be had as to when keeping track introduces an unacceptable overhead, as you are effectively at the mercy of external developers. Examples of changes we've had to deal with include: upper ontology refactoring, mass URI refactoring, funding ending, general movement of classes, changes to design patterns (and the axiomatisation therein) and so on. For what it's worth, I think we're in a better place now than when we started building EFO five and a bit years ago, although my opinion on this will change if the new (non-backwards-compatible) BFO temporal relations are adopted.
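For those who like code, the kind of diff Bubastis reports can be sketched in a few lines with the OWL API. To be clear, this is not the Bubastis implementation, just the basic idea of comparing class signatures and per-class axioms between two versions (the file names are placeholders):

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class SimpleOntologyDiff {
    public static void main(String[] args) throws Exception {
        // Separate managers so two versions with the same ontology IRI don't clash
        OWLOntology oldOnt = OWLManager.createOWLOntologyManager()
                .loadOntologyFromOntologyDocument(new File("efo-previous.owl"));
        OWLOntology newOnt = OWLManager.createOWLOntologyManager()
                .loadOntologyFromOntologyDocument(new File("efo-current.owl"));

        Set<OWLClass> oldClasses = oldOnt.getClassesInSignature();
        Set<OWLClass> newClasses = newOnt.getClassesInSignature();

        // Classes added in the new version and deleted from the old one
        Set<OWLClass> added = new HashSet<>(newClasses);
        added.removeAll(oldClasses);
        Set<OWLClass> deleted = new HashSet<>(oldClasses);
        deleted.removeAll(newClasses);
        added.forEach(c -> System.out.println("ADDED   " + c.getIRI()));
        deleted.forEach(c -> System.out.println("DELETED " + c.getIRI()));

        // For classes present in both, report any change to their class axioms
        Set<OWLClass> common = new HashSet<>(oldClasses);
        common.retainAll(newClasses);
        for (OWLClass c : common) {
            if (!oldOnt.getAxioms(c).equals(newOnt.getAxioms(c))) {
                System.out.println("CHANGED " + c.getIRI());
            }
        }
    }
}
```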
3. Test Driven Development
Another agile process we adopt is test driven development. In a continuous integration framework, it is necessary to test each commit of code to ensure that it does not break previously working components or introduce new bugs, and we treat OWL with the same respect. We have developed a series of automated tests using
Bamboo that the ontology is run against after each commit. These perform checks such as: invalid namespaces; IRI fragments outside accepted conventions; duplicate labels between different classes; synonyms duplicated between classes; obsolete classes used in axiomatisation; and unit tests for expected class subsumption (e.g. cancer should be a subclass of disease).
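To make a couple of those checks concrete, here is a minimal sketch (OWL API plus HermiT, not our actual Bamboo test code; the file name and the EFO class IRIs are assumptions) of a duplicate-label check and a subsumption unit test of the 'cancer should be a subclass of disease' kind:

```java
import java.io.File;
import java.util.*;
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class EfoStyleChecks {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = man.getOWLDataFactory();
        OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("efo.owl"));

        // Check 1: no two classes should share the same rdfs:label
        Map<String, Set<IRI>> classesByLabel = new HashMap<>();
        for (OWLAnnotationAssertionAxiom ax : ont.getAxioms(AxiomType.ANNOTATION_ASSERTION)) {
            if (ax.getProperty().isLabel()
                    && ax.getValue() instanceof OWLLiteral
                    && ax.getSubject() instanceof IRI) {
                String label = ((OWLLiteral) ax.getValue()).getLiteral().trim().toLowerCase();
                classesByLabel.computeIfAbsent(label, k -> new HashSet<>()).add((IRI) ax.getSubject());
            }
        }
        classesByLabel.forEach((label, iris) -> {
            if (iris.size() > 1) {
                System.err.println("FAIL: duplicate label '" + label + "' on " + iris);
            }
        });

        // Check 2: expected subsumptions still hold after classification
        // (EFO_0000311 'cancer' and EFO_0000408 'disease' used as an assumed example)
        OWLReasoner reasoner = new ReasonerFactory().createReasoner(ont);
        OWLClass cancer = df.getOWLClass(IRI.create("http://www.ebi.ac.uk/efo/EFO_0000311"));
        OWLClass disease = df.getOWLClass(IRI.create("http://www.ebi.ac.uk/efo/EFO_0000408"));
        if (!reasoner.isEntailed(df.getOWLSubClassOfAxiom(cancer, disease))) {
            System.err.println("FAIL: expected 'cancer' to be a subclass of 'disease'");
        }
        reasoner.dispose();
    }
}
```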
4. Design Patterns
Another aspect is performance and the OWL DL profile we restrict ourselves to. In order to fully exploit the querying power of the ontology, we use reasoning to infer various hierarchies of interest, such as classifications of cell lines by disease and species, and we need this to happen in a time that feels responsive. There are several methods we use to ensure this remains the case. The first is the use of design patterns: we restrict axiomatisation to a set of patterns that we have developed to answer our priority competency questions. The second is to disallow the addition of new object properties and of characteristics on those properties. The third is to classify the ontology on every commit (and run the above test code).
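To illustrate what restricting axiomatisation to a pattern means in practice, here is a sketch of a cell-line-style pattern built with the OWL API. The IRIs, the property names and the exact shape of the restriction are hypothetical rather than EFO's actual axiomatisation; the idea is simply that every cell line class gets the same small set of existential restrictions, which is what lets the reasoner build the hierarchies mentioned above.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class CellLinePatternSketch {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = man.getOWLDataFactory();
        OWLOntology ont = man.createOntology(IRI.create("http://example.org/efo-pattern-sketch"));

        String ex = "http://example.org/"; // placeholder namespace, not an EFO IRI
        OWLClass helaCellLine = df.getOWLClass(IRI.create(ex + "HeLa_cell_line"));
        OWLClass cervicalCarcinoma = df.getOWLClass(IRI.create(ex + "cervical_carcinoma"));
        OWLClass homoSapiens = df.getOWLClass(IRI.create(ex + "Homo_sapiens"));
        OWLObjectProperty bearerOf = df.getOWLObjectProperty(IRI.create(ex + "bearer_of"));
        OWLObjectProperty derivesFromOrganism = df.getOWLObjectProperty(IRI.create(ex + "derives_from_organism"));

        // Pattern: a cell line is linked to its disease and its species via existential
        // restrictions, so cell-line-by-disease and cell-line-by-species views can be inferred.
        OWLClassExpression pattern = df.getOWLObjectIntersectionOf(
            df.getOWLObjectSomeValuesFrom(bearerOf, cervicalCarcinoma),
            df.getOWLObjectSomeValuesFrom(derivesFromOrganism, homoSapiens));
        man.addAxiom(ont, df.getOWLSubClassOfAxiom(helaCellLine, pattern));
        man.saveOntology(ont, new java.io.FileOutputStream("pattern-sketch.owl"));
    }
}
```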
For those interested, HermiT gives us the best performance for this reasoning, and has done for quite some time now.
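The classification step itself, in sketch form (HermiT via the OWL API; the 'cancer' class IRI is an assumption, used here only to show how an inferred view is pulled out of the classified ontology):

```java
import java.io.File;
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.*;

public class ClassifyAndQuery {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = man.getOWLDataFactory();
        OWLOntology ont = man.loadOntologyFromOntologyDocument(new File("efo.owl"));

        // Classify the ontology and time it, since responsiveness is the constraint
        OWLReasoner reasoner = new ReasonerFactory().createReasoner(ont);
        long start = System.currentTimeMillis();
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
        System.out.println("Classified in " + (System.currentTimeMillis() - start) + " ms");

        // Pull out everything inferred to sit under 'cancer' (assumed IRI), direct and indirect
        OWLClass cancer = df.getOWLClass(IRI.create("http://www.ebi.ac.uk/efo/EFO_0000311"));
        for (OWLClass sub : reasoner.getSubClasses(cancer, false).getFlattened()) {
            if (!sub.isOWLNothing()) {
                System.out.println(sub.getIRI());
            }
        }
        reasoner.dispose();
    }
}
```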
We also employ an automated release cycle to release a new version of EFO monthly, in order to best coordinate with our application needs. The release is programmatically performed using a Bamboo build plan which performs tasks such as creating the inferred version of the ontology, converting the ontology to
OBO format, publishing files to the web, building the EFO website and creating URLs for classes in the EFO namespace to ensure that concepts described in EFO fully dereference.
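Most of that is wired together in the Bamboo plan rather than hand-written, but the core 'create the inferred version' step looks roughly like the following OWL API 4 sketch. The file names, the choice of inferred axiom generators and the OBO conversion call are assumptions for illustration, not our actual release code:

```java
import java.io.File;
import java.util.*;
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.formats.OBODocumentFormat;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.util.*;

public class ReleaseBuildSketch {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager man = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = man.getOWLDataFactory();
        OWLOntology efo = man.loadOntologyFromOntologyDocument(new File("efo-editors.owl"));

        // Classify, then materialise the inferred class hierarchy into a release copy
        OWLReasoner reasoner = new ReasonerFactory().createReasoner(efo);
        List<InferredAxiomGenerator<? extends OWLAxiom>> gens = new ArrayList<>();
        gens.add(new InferredSubClassAxiomGenerator());
        gens.add(new InferredEquivalentClassAxiomGenerator());
        OWLOntology inferred = man.createOntology(IRI.create("http://www.ebi.ac.uk/efo/efo-inferred-sketch"));
        new InferredOntologyGenerator(reasoner, gens).fillOntology(df, inferred);
        man.saveOntology(inferred, IRI.create(new File("efo-inferred.owl").toURI()));

        // Lossy OBO conversion for consumers who want that format
        man.saveOntology(efo, new OBODocumentFormat(), IRI.create(new File("efo.obo").toURI()));
        reasoner.dispose();
    }
}
```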
Agility, reality, profanity
Our overall approach has improved the production quality immensely over the last few years. To quantify this with an example: over our last 3 months of work, 74% of the time our EFO continuous integration testing has passed on check-in. This means that 26% of the time it has not. Although this sounds like a bad thing, it's actually good to know we're catching this now, before it goes for release to applications. Much of this is actually relatively minor stuff, like white space in labels, which we are fairly strict on, but sometimes it's more serious stuff that we're glad we caught.
We've also become more dynamic in prioritising and sharing tickets, meaning the more important stuff gets done more quickly and by a number of people, with tickets being picked off the top of the priority pile as people become available.
We still struggle with a few things and these are challenges that hit most ontology consumers I think. The biggest is balancing correctness with 'doing something'. This is a tricky brew to get right as we don't want the ontology to be wrong, but we do want to get things out and working as quickly as possible. Thinking about the metaphysical meaning of a term over a period of months does not help when you have data to annotate covering 1,000 species and 250,000 unique annotations as your target; this is the reality we face. In the same breath though, getting things very wrong doesn't provide the sort of benefits you want from using an ontology - and using an ontology adds an overhead so there should be benefits.
There is a dirty word in the ontology world that most dare not utter, but we do so here: 'compromise'. We do stuff; if it's wrong, we fix it. We release early, release often and respond rapidly to required changes from users.
Sound familiar?