Rules, as mined by systems such as AMIE or RUDIK, are useful for detecting and curating errors in knowledge bases. However, many knowledge bases are created automatically or semi-automatically and often contain incorrect entries. When logical rules are then derived automatically from such knowledge bases, the data quality of the underlying knowledge base also affects the quality of the generated rules. This raises the following question:
How can we be confident that a rule derived from an imperfect knowledge base is actually good?
Our COLT approach aims to answer this very question. COLT leverages deep kernel learning to estimate both the confidence of a rule and its quality in terms of its impact on the facts contained in a knowledge base. To estimate the true confidence of a rule, COLT requires only a few user interactions.
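To make the notion of rule confidence concrete, the following sketch computes the *standard confidence* of a mined Horn rule over a toy knowledge base, as AMIE-style systems do: the fraction of body instantiations for which the head also holds. This illustrates the quantity being estimated, not COLT's learned estimator; all entity and predicate names are invented for the example.

```python
# Toy knowledge base: a set of (subject, predicate, object) triples.
# All names here are illustrative, not from any real dataset.
kb = {
    ("alice", "worksAt", "acme"),
    ("acme", "locatedIn", "berlin"),
    ("alice", "livesIn", "berlin"),
    ("bob", "worksAt", "acme"),
    ("bob", "livesIn", "paris"),
    ("carol", "worksAt", "initech"),
    ("initech", "locatedIn", "austin"),
    ("carol", "livesIn", "austin"),
}

def standard_confidence(kb):
    """Standard confidence of the rule:
    worksAt(X, Z) AND locatedIn(Z, Y) => livesIn(X, Y)

    confidence = |body instantiations where the head holds|
                 / |all body instantiations|
    """
    # All (X, Y) pairs for which the rule body is satisfied.
    body = {
        (x, y)
        for (x, p1, z) in kb if p1 == "worksAt"
        for (z2, p2, y) in kb if p2 == "locatedIn" and z2 == z
    }
    # Of those, count the pairs where the head fact is in the KB.
    support = sum(1 for (x, y) in body if (x, "livesIn", y) in kb)
    return support / len(body) if body else 0.0

print(standard_confidence(kb))  # 2 of 3 body matches hold: ~0.667
```

In this toy KB the body matches for alice, bob, and carol, but only alice and carol actually live where their employer is located, giving a confidence of 2/3. If the KB itself contains wrong triples, this observed ratio can deviate arbitrarily from the rule's true confidence, which is exactly the gap COLT addresses.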