Hyde & Rugg:

We solve hard problems


Much of our research is based on our Verifier approach, which is the subject of Gordon's book Blind Spot.
In brief, Verifier is designed to be a powerful, efficient way of detecting flaws in previous research into difficult problems. A key point is that people tend to make the same types of mistake, regardless of the field in which they are working and of how expert they are. If you know how to spot those errors, and how to check the accuracy of the information you are working with, then you should be able to detect flaws in research more swiftly and efficiently than experts in the relevant field who don't know about error types.
In our first test of concept, Gordon applied a "light" version of Verifier to the Voynich Manuscript, a problem which had defied the world's best codebreakers for almost a century. Within a few weeks he found a solution which previous researchers had missed. His findings were published in Cryptologia, the leading peer-reviewed journal of historical cryptography, and in Scientific American.
Sue Gerrard subsequently applied Verifier to autism research, resulting in a paper in the Journal of Autism and Developmental Disorders.
We had originally expected Verifier to be slow and systematic, but it turned out to be much swifter than anticipated. When we looked through our case studies, we realised that a significant factor in this speed was the use of expert human pattern-matching to spot key errors, rather than slow, systematic serial processing and formal logic.
This opens up the prospect of applying Verifier to a wide range of major unsolved problems, with a realistic chance of finding significant new insights, swiftly and efficiently.


What's inside Verifier:
The Verifier approach integrates concepts and methods from an extensive range of disciplines.
Elicitation is needed to find out what is really going on; for example, whether the informal "back version" of what happened in previous reported research differs significantly from the published "front version".
Representation is important because faulty choice of representation can lead to errors in reasoning.
The literatures on human expertise and on human error provide guidance on how experts actually behave, including potential sources of error arising from that behaviour. A related issue is that if experts are performing outside their core field, they are typically little or no better than non-experts, which can lead to unsuspected errors.
Specialist detail
We've drawn heavily on a range of work in the J/DM (Judgment/Decision Making) tradition, including Kahneman and Gigerenzer, to identify common errors and biases. We've usually found this more useful than traditional formal logics.
We've tried using software support for annotating and tabulating errors in published texts, but this produced a very high rate of false positives. The reason is that chains of reasoning and evidence in the published literature typically omit links where a reasoning step is taken for granted and therefore goes unmentioned. This ties back into our categorisation of different types of semi-tacit and tacit knowledge, which makes sense of many apparent gaps and flaws in expert reasoning and use of evidence.
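The false-positive problem above can be illustrated with a minimal sketch. This is not the authors' actual tooling; the `Step` structure, the example argument, and the `flag_gaps` check are all hypothetical, invented here to show why naive link-checking over-flags: any step whose justification is left tacit looks like a gap, even when a human expert would fill it in without noticing.

```python
# Hypothetical sketch: represent a published chain of reasoning as explicit
# steps, then mechanically flag steps with no stated support. Steps whose
# justification is taken for granted (and so never written down) get
# flagged, even though an expert reader would not see them as flaws.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    claim: str
    supports: list = field(default_factory=list)  # indices of earlier steps cited
    evidence: Optional[str] = None  # explicit justification, if any is stated

def flag_gaps(steps):
    """Return indices of steps with no stated evidence and no cited support."""
    return [i for i, s in enumerate(steps)
            if s.evidence is None and not s.supports]

# Invented three-step argument for illustration only.
argument = [
    Step("The text shows unusual character statistics",
         evidence="frequency counts"),
    Step("Such statistics are atypical of natural language"),  # tacit step: no link given
    Step("Therefore the text may not be natural language", supports=[0, 1]),
]

flagged = flag_gaps(argument)  # the tacit middle step is flagged as a "gap"
```

Here the middle step is flagged even though it is exactly the kind of taken-for-granted link that experts omit, which is why a mechanical check of this sort produces so many false positives on real published texts.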