Einstein’s “relativity theory” is mentally challenging.
An interesting experiment would be to mix George Bush’s language patterns into this complexity. This process, by analogy, mirrors human-to-machine communication, where disparate datasets can be integrated and information “smoothed” for interpretation at many entry points.
Why is it still not possible for machines to “see” our human world and thus allow computers to converse and communicate with humans in natural language in an “everyday” manner?
With an abundance of unstructured data, and with internet traffic growing faster than the current network will be able to carry by 2013, we almost compulsively expect to extract useful information continuously to enhance personal and business decisions. Moreover, as data accumulates in proprietary databases and repositories, it becomes essential to find more efficient ways of making information retrieval and data usage truly accessible.
Consider a senior manager at a bank who wants to obtain information about clients’ aggregated personal circumstances and financial needs from a large repository of “unstructured” data. How does he or she know what to ask in order to identify the most relevant information? A new or extended bond could be offered if the manager knew of a client’s planned home move, for instance; study loans could be offered for children requiring further education; a larger insurance package could be suggested if the manager knew that the client’s existing cover was inadequate. The unstructured repository causes difficulties because the same question can be phrased in many different ways, with different grammatical structures and semantic word combinations, and, frustratingly, the retrieval results differ according to the options selected.
Current semantic technologies rely on extensive ontologies and categorisation systems, the mere design of which runs into a severe “permutation” problem: even a small, simply structured body of text can generate trillions upon trillions of grammar-semantic equivalents. Given the mathematics, it is not surprising that we still lack the concept-combination multiplication power needed to address the permutation problem adequately.
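A back-of-envelope calculation shows how quickly the numbers run away. The figures below (synonyms per word, valid reorderings per sentence) are assumed for illustration only:

```python
def equivalents(n_words, synonyms_per_word, reorderings):
    """Count grammar-semantic equivalents of one sentence:
    every word may be swapped independently, times word-order variants."""
    return (synonyms_per_word ** n_words) * reorderings

# One 12-word sentence, 8 acceptable synonyms per word, 5 reorderings:
per_sentence = equivalents(12, 8, 5)
print(per_sentence)  # 343597383680 -- about a third of a trillion

# A "small" text of 100 such sentences multiplies this again:
print(per_sentence * 100)
```

Because the synonym factor is exponential in sentence length, enumerating equivalents explicitly in an ontology is hopeless; the combinations must be generated, not stored.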
What is needed is a compact and powerful representational matrix that can act as an interpreter between human language and data, generating the trillions of concept permutations that adequately represent our real world. A machine able to “see” in this way could ensure that all of our human-world concept combinations and language patterns relate to everything else logically and realistically in a stored electronic format.
Coming back to George Bush and Einstein’s relativity theory: a semantic “engine” that can easily integrate language patterns between totally disparate textual sources could, in a corresponding manner, enable us to create multiple equivalent searches across the whole universe of search engines and proprietary database queries. Searching for a needle in a haystack with a thousand magnifying glasses is a handy analogy for the trillions of concept permutations that can in this way be provided to represent any mass of unstructured data meaningfully and adequately.
The application is definitely underpinned by a need: most search engines are still three-word, caveman-speak search-phrase solutions limited by keywords, at best interchangeable only with synonyms or related words. Effective multi-word to multi-word exchange technology is almost non-existent.
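The gap between single-word synonym expansion and genuine multi-word exchange can be made concrete. The synonym table and sentences below are hypothetical; the point is structural, not about any particular engine:

```python
# A tiny, hypothetical per-word synonym table.
synonyms = {"home": {"house", "residence"}, "move": {"relocation"}}

def expand(query):
    """Expand each query word with its single-word synonyms."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= synonyms.get(word, set())
    return terms

def matches(query, document):
    return bool(expand(query) & set(document.lower().split()))

query = "home move"
print(matches(query, "client planning a house relocation"))    # True
print(matches(query, "client is changing address next month")) # False
# "changing address" is a multi-word paraphrase of "home move" that no
# per-word synonym table can reach -- the multi-word to multi-word gap.
```

Word-by-word substitution catches “house relocation” but is blind to paraphrases whose words map many-to-many rather than one-to-one.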
If successful, we might soon be able to expand beyond, and finally say goodbye to, classic keyword search and information retrieval. The immediate goal is the “humanising” of cyber dataspace through the seamless application of a semantic technology that enables mankind to converse and communicate effectively with data across a wide information spectrum.
To see how George Bush subtly explains Einstein’s relativity, have a look at…