A map of notion space helps to create a universal, optimal language for communication between different associations, including communication with a machine association: a language that determines notions unambiguously, independently of cultural specifics.
Every language has its own filters and a non-optimal redundancy. Having developed in a concrete historical context, each language reflects semantic space in its own way (language lacunas, idioms). Moreover, new terms are, as a rule, borrowed from other languages, which brings additional uncontrolled redundancy and semantic muddle into the language.
For example, the popular word “computer” is semantically completely opaque in Russian, because its meaning and functionality cannot be understood from its spelling. This makes both its integration into the semantic space of the language and its perception by humans more difficult. Unfortunately, the more logical term “calculator” has not taken root in the language or in the technical literature.
One can, of course, debate the degree to which language influences mental activity. But it is a fact that a non-optimal language structure frequently complicates the transmission of information. The following way out is proposed, considered from the CSNT point of view.
It is necessary to work out a code with a given redundancy (to restrict entropy and to protect against interference), based on the mutual location of notions in notion space, i.e. the areas corresponding to objects (see “Compressing and transmitting of information from the CSNT point of view”). Then it is necessary to compare the widely used phonemes of the main alphabets (the Indo-European and Sino-Tibetan language families; see “Recognizing of speech and manuscripts”).
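As an illustration of that first step, here is a minimal sketch of deriving a fixed-length code from a notion's coordinates in notion space, with one extra parity symbol as the "given redundancy". All names (`ALPHABET`, `encode_notion`), the folding scheme and the ASCII stand-ins for the 62 phonemes are hypothetical, not part of the CSNT proposal:

```python
# Hypothetical sketch: 62 phoneme symbols, represented here by ASCII
# characters for convenience.
ALPHABET = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

def encode_notion(coords, length=3):
    """Fold notion-space coordinates into one index, write it in base 62,
    and append a parity symbol for simple error detection."""
    base = len(ALPHABET)          # 62
    index = 0
    for c in coords:              # row-major folding of the coordinates
        index = index * base + (c % base)
    symbols = []
    for _ in range(length):       # fixed-length base-62 representation
        index, digit = divmod(index, base)
        symbols.append(ALPHABET[digit])
    code = "".join(reversed(symbols))
    # Redundancy: one parity symbol (digit sum modulo 62).
    parity = ALPHABET[sum(ALPHABET.index(s) for s in code) % base]
    return code + parity
```

A single parity symbol only detects one corrupted symbol; a real design would tune the amount of redundancy to the communication situation, as the text suggests.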
This language will be fully flexible, because the coding rules will be open, and for each notion it will be possible to construct its code quickly, based on the location of that notion in notion space. That is to say, it becomes possible:
- To map notions explicitly onto subject areas and to abandon slang sub-languages.
- To choose, for each concrete communication situation, the code that is optimal in length and in protection against interference.
- To exclude the effects of language entropy: lacunas, paradoxes and vagueness introduced by the ordinary history of human languages.
As a result, the speed of communication can be increased several times over.
Such a language could be in demand for international communication and for building human-machine interfaces. That is, to accelerate and simplify the entry of commands, the new language would be used instead of natural languages, which accumulate language entropy. This would let technical devices recognize human commands without complicated mathematical algorithms.
Besides that, natural languages reflect the surrounding world through the prism of each nation's historical development, which influences perception of the world. The new language allows thinking free of this influence. Moreover, in accordance with the coordinate approach of the CSNT, notions that are far apart in meaning cannot have close phonetic forms in this language; they will be coded differently. Compare “ship” and “sheep” in English. Hence speech recognition by machines becomes easier, and there will be no problems with translation.
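The “ship”/“sheep” requirement can be stated as a minimum distance between codes. A small sketch, using Hamming distance as a stand-in for phonetic distance; `MIN_DISTANCE` and the sample codes are assumed, not taken from the text:

```python
def hamming(a, b):
    """Number of positions in which two equal-length codes differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Assumed design parameter: semantically distant notions must receive
# codes differing in at least this many positions.
MIN_DISTANCE = 2

def codes_acceptable(code_a, code_b):
    """Reject code pairs that are too close phonetically."""
    return hamming(code_a, code_b) >= MIN_DISTANCE
```

Under such a rule, a near-collision like “ship”/“sheep” (one vowel apart) could never be assigned to two unrelated notions.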
Existing approaches to creating artificial languages solve these tasks, as a rule, only partly. They do not take into account that the structure of information is, as a rule, weakly connected with the structure of existing languages; that is, the absence of order in notion space brings disorder into language space. This is explained by the fact that the authors of such languages are usually linguists, for whom language matters as an end in itself rather than as an instrument for describing the world around us; an exact mathematical approach is alien to them. Analogously, mathematicians working on linguistic systems build on the linguists' approach: they try to automate an existing language and obtain automated disorder.
That is why the approach to creating a universal language “from semantic ordering” appears more promising and better founded, and hence holds greater potential.
It is proposed to use 56 consonant and 6 vowel phonemes for coding.
The number of combinations of 4 symbols from such an alphabet is 62^4 = 14,776,336. This is enough to code 500,000 notions of the surrounding world with practically 200% redundancy.
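The arithmetic can be checked directly (Python used here only as a calculator):

```python
phonemes = 56 + 6                # consonants + vowels = 62
combinations = phonemes ** 4     # all 4-symbol codes
print(combinations)              # 14776336
print(combinations / 500_000)    # ~29.6 candidate codes per notion
```

So the 4-symbol code space exceeds the 500,000 notions by a wide margin, which is what leaves room for the redundancy discussed above.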
For comparison, the average word length in this text is 6-7 symbols. That is, with this approach the speed of information transmission between people would roughly double, purely from optimization of the transmitted code.
If we link these symbols to a new base-60 numeration (each symbol will denote not only a letter but also a digit), the result will be a universal system.
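A sketch of that dual letter/digit reading, assuming base 60 as stated (so two of the 62 symbols would be reserved for other uses); the ASCII digit set and function names here are placeholders, not part of the proposal:

```python
# Placeholder digit set: 60 of the 62 phoneme symbols serve as digits.
DIGITS = ("abcdefghijklmnopqrstuvwxyz"
          "ABCDEFGHIJKLMNOPQRSTUVWXYZ01234567")  # 60 symbols

def to_base60(n):
    """Write a non-negative integer with the phoneme-digits."""
    if n == 0:
        return DIGITS[0]
    out = []
    while n:
        n, d = divmod(n, 60)
        out.append(DIGITS[d])
    return "".join(reversed(out))

def from_base60(s):
    """Read a phoneme-digit string back as an integer."""
    n = 0
    for ch in s:
        n = n * 60 + DIGITS.index(ch)
    return n
```

In such a system every word is simultaneously a number, so the same symbol string can be read lexically or arithmetically without a separate numeral alphabet.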