Since Jeopardy questions are not fashioned in SQL, and are in fact designed to obfuscate rather than elucidate, IBM’s Watson program used a number of indirect DeepQA processes to beat human competitors in January 2011 (Figure 1). A good deal of the program’s success may be attributable to algorithms that handle language ambiguity, for example, ranking potential answers by combining their features into evidence profiles. Some of the twenty or more features analyzed include data type (e.g., place or name), support in text passages, popularity, and source reliability.
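The feature-combining idea can be illustrated with a minimal sketch. This is not Watson's actual DeepQA code; the feature names, weights, and scores below are hypothetical, chosen only to show how per-feature evidence might be merged into a single confidence value used for ranking.

```python
# Hypothetical evidence-profile ranking (illustrative only, not DeepQA):
# each candidate answer carries scores for several features, which are
# combined into one weighted confidence value used to sort candidates.

# Assumed feature weights -- invented for this sketch.
WEIGHTS = {
    "type_match": 0.4,        # does the answer's data type fit the clue?
    "passage_support": 0.3,   # how well do text passages support it?
    "popularity": 0.2,
    "source_reliability": 0.1,
}

def rank_candidates(candidates):
    """Sort candidate answers by the weighted sum of their feature scores."""
    def confidence(candidate):
        return sum(WEIGHTS[f] * candidate["features"][f] for f in WEIGHTS)
    return sorted(candidates, key=confidence, reverse=True)

# Toy candidates with made-up feature scores.
candidates = [
    {"answer": "Chicago",
     "features": {"type_match": 0.9, "passage_support": 0.7,
                  "popularity": 0.8, "source_reliability": 0.9}},
    {"answer": "Toronto",
     "features": {"type_match": 0.4, "passage_support": 0.6,
                  "popularity": 0.7, "source_reliability": 0.9}},
]

ranked = rank_candidates(candidates)
print([c["answer"] for c in ranked])  # prints ['Chicago', 'Toronto']
```

The weighted-sum combiner is the simplest possible choice; the point is only that ranking emerges from aggregating many weak signals rather than from any single decisive lookup.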
Since managing ambiguity is critical to successful natural language processing, it might be easier to develop AI in some human languages than in others. Some languages are more precise than others: languages without verb conjugation or temporal markers are more ambiguous and depend more heavily on inferring meaning from context. While it might be easier to develop a Turing-test-passing AI in such languages, the result might be less useful for general-purpose problem solving, since context inference would be challenging to incorporate. Perhaps it would be most expedient to develop AI first in some of the more precise languages, German or French, for example, rather than English.