Saturday, 23 June 2012

Ontology Turing Test

Alan Turing. National Portrait Gallery, London; via Wikimedia, under fair use licence.

Today is Alan Turing's 100th birthday, and therefore an appropriate day to write about something AI inspired. I attended a meeting in the US a few months ago in which an opinion about computers was offered, as fact, which prodded at the AI researcher that lives in a deep, dark cave inside me. It was a statement which spoke to over half a century's worth of AI blood, sweat and tears - a lot of tears - and I confess one that I have also questioned over the years. It was this:

computers will never write good textual definitions in ontologies.

There are many ways one could interpret such a statement, as the language is, ironically, loose, but I took it to mean the following:

computers will never write English definitions in an ontology that are of the same quality as a human.

My interpretation is still a little loose. In the interest of being a good scientist then, let me recast this as a research question which speaks to the beating heart of AI:

Given two textual definitions, can a person determine which is written by machine and which by a human?

This line of thought is, of course, nothing new. For those familiar with AI, Alan Turing first proposed a similar question in what famously became the Turing Test: could a human player determine which of the two (hidden) opponents was human (and therefore which was machine) based on the imitation game?

In 2011 I undertook some work with Robert Stevens of the University of Manchester, and Richard Power, Sandra Williams and Allan Third of the Open University, to see if we could automate the generation of English definitions from the axiomatisation of ontology classes in EFO. The motivation is fairly straightforward - EFO had a lot of classes which were richly axiomatised but lacked textual definitions. Could we use one to inform the other? There is much to be gained from this. Writing textual definitions is laborious and time-consuming, and axiomatisation done by hand can be similarly so. If we could automate one, we might reduce the cost significantly.
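To give a flavour of the idea, here is a toy sketch of template-based verbalisation. This is an illustration only, not the system we actually used; the function names and the way axioms are encoded (a parent class plus a list of relation/filler pairs for existential restrictions) are invented for the example.

```python
VOWELS = ("a", "e", "i", "o", "u")

def article(noun):
    """Choose 'a' or 'an' by a naive first-letter rule."""
    return "an" if noun[0].lower() in VOWELS else "a"

def phrase(relation, filler):
    """Render one existential restriction as an English clause."""
    return f"something that is {relation} {article(filler)} {filler}"

def verbalise(term, parent, restrictions):
    """Turn a class (parent + restrictions) into an English definition."""
    head = f"{article(term).capitalize()} {term} is"
    if not restrictions:
        return f"{head} {article(parent)} {parent}."
    clauses = [phrase(r, f) for r, f in restrictions]
    if len(clauses) == 1:
        body = clauses[0]
    elif len(clauses) == 2:
        body = f"both {clauses[0]}, and {clauses[1]}"
    else:
        body = ("all of the following: "
                + ", ".join(clauses[:-1]) + ", and " + clauses[-1])
    return f"{head} {body}."

print(verbalise("Leydig cell", "cell",
                [("located in", "testis"),
                 ("part of", "endocrine system")]))
# -> A Leydig cell is both something that is located in a testis,
#    and something that is part of an endocrine system.
```

Even a crude templating scheme like this produces surprisingly readable prose for simply axiomatised classes; the hard cases are the richly nested expressions, where clause ordering and aggregation start to matter.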

So back to our Ontology Turing Test then. Simple question: can you tell the human from the machine? Here is a smattering of definitions, some machine derived, some hand written by humans, that I've hand picked (to ensure a fair comparison I have modified a few so they all start 'an x is a y...'). The answers are at the bottom of the page. When you finish, you should also question the original statement - that computers will never write good textual definitions in ontologies.

  1. A Leydig cell is both something that is located in a testis, and something that is part of an endocrine system.
  2. A planned process is a processual entity that realizes a plan which is the concretization of a plan specification.
  3. A Metabolic Encephalopathy (disorder) is a metabolic disease and is a disorder of the brain.
  4. A LY2 cell line is all of the following: something that is bearer of a breast carcinoma, something that derives from a Homo sapiens, something that derives from an epithelial cell, and something that derives from a mammary gland.
  5. A laboratory test is a measurement assay that has as input a patient-derived specimen, and as output a result that represents a quality of the specimen.
  6. A role is a realizable entity the manifestation of which brings about some result or end that is not essential to a continuant in virtue of the kind of thing that it is but that can be served or participated in by that kind of continuant in some kinds of natural, social or institutional contexts.

A wider question then to finish:

Can ontologies help to make machines think more like humans?

Alas, I cannot even begin to answer, as I barely have the questions with which to test this.

Spoiler alert!
Answers for the above: 1: machine, 2: human (OBI), 3: machine, 4: machine, 5: human (OGMS), 6: human (BFO), taken from the latest versions in BioPortal, correct as of 23rd June 2012.


  1. Oh James - it's almost as though you're trying to make the point that machines are BETTER at writing definitions than humans - at least when it comes to BFO, OBI, OGMS ;)

  2. All we are saying is give PCs a chance

    John Lennon, 1969

  3. I got 2 and 3 wrong, all the others right. I was not sure about 3, because it is too simple. For 2, I suggest finding the author and helping him seek help.

    I don't know the answer to your last question, but for "Can ontologies help to make humans think more like machines? ", the answer is yes, and I am very glad about it.
