How does it work?


Bridging the gap between Processing and Understanding Language

Brute-force Natural Language Processing and Machine Learning, even combined with advanced statistics, can only approximate meaning and therefore falls short of real text understanding. At best, these approaches produce keyword hits and educated guesswork. Knowledge workers across many companies and industries struggle to find the needle in the haystack: locating accurate results costs them a great deal of time, and there is a constant risk of missing the correct information.

Additionally, a massive amount of manpower is needed to manually tag and index data in order to get better results. With the volume of text data generated each day, the need for automatic classification and intelligent information retrieval is pressing.


Methodology

Nalantis’ solutions are based on proprietary semantic technology: a set of techniques for finding meaning in unstructured text, and for using that meaning to find other texts. Meaning is found by analyzing text linguistically, mapping words and expressions onto a ConceptNet (a semantic network of nodes representing words or short phrases of natural language, connected by labeled relationships), and using powerful semantic pattern matching to combine these concepts into meaningful entities.
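To make the idea concrete, here is a minimal sketch of a concept net as a data structure: nodes are concepts, edges carry relation labels, and a simple lookup stands in for semantic pattern matching. The class, relation labels, and example entries are illustrative assumptions, not Nalantis’ actual implementation (Python):

    # Minimal concept-net sketch: concepts linked by labeled relations.
    # All names and example entries below are hypothetical.
    from collections import defaultdict

    class ConceptNet:
        def __init__(self):
            # concept -> relation label -> set of related concepts
            self.edges = defaultdict(lambda: defaultdict(set))

        def add_relation(self, source, label, target):
            self.edges[source][label].add(target)

        def related(self, concept, label):
            return self.edges[concept][label]

    net = ConceptNet()
    net.add_relation("nurse", "is_a", "healthcare professional")
    net.add_relation("nurse", "works_in", "hospital")

    # Pattern matching over such relations is what combines individual
    # concepts ("nurse", "hospital") into larger meaningful entities.
    print(net.related("nurse", "is_a"))  # {'healthcare professional'}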

Machine/deep learning and human linguists are used to build these ConceptNets semi-automatically, and APIs are in place to scale the company’s Natural Language Understanding products rapidly. The basic semantic analysis and matching engine is language- and domain-independent. This means that whenever the engine has to handle a new domain, a new ConceptNet has to be built, which is notoriously expensive when done fully manually. The basic approach is to use open-source and domain-specific ontologies and taxonomies. When a new language is added to a domain-dependent application, a dictionary has to be created that maps lexical expressions to the concepts. The ConceptNet then functions as an interlingua, and matching between documents written in different languages becomes possible. The crux of Nalantis’ approach is to find ways to speed up the building of ConceptNets and the mapping of lexicons without needing many computational linguists. To do this, Nalantis uses Deep Learning algorithms to generate concept candidates and lexicons automatically.
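To illustrate the interlingua idea from the paragraph above, the sketch below maps language-specific lexical expressions onto shared concept identifiers, so that texts in different languages land on the same concepts. The lexicon entries, concept IDs, and function name are hypothetical examples, not Nalantis’ actual data or API:

    # Hypothetical per-language dictionaries mapping words to shared concept IDs.
    LEXICONS = {
        "en": {"nurse": "C_NURSE", "hospital": "C_HOSPITAL"},
        "nl": {"verpleegkundige": "C_NURSE", "ziekenhuis": "C_HOSPITAL"},
        "fr": {"infirmier": "C_NURSE", "hôpital": "C_HOSPITAL"},
    }

    def to_concepts(tokens, language):
        """Map tokens of one language onto language-independent concept IDs."""
        lexicon = LEXICONS[language]
        return {lexicon[t] for t in tokens if t in lexicon}

    # The same concept set is recovered regardless of the source language:
    print(to_concepts(["nurse", "hospital"], "en"))              # {'C_NURSE', 'C_HOSPITAL'}
    print(to_concepts(["verpleegkundige", "ziekenhuis"], "nl"))  # the same set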

[Figure: Nalantis methodology]

Deep Learning

The newer generation of methodologies such as Deep Learning (neural networks) and Cognitive Computing is breaking barriers in Big Data fields such as the Internet of Things, robotics and image/video recognition, but cannot be deployed successfully for text without huge amounts of training and (labeled) sample data. Nalantis therefore deploys a form of unsupervised machine learning for language understanding.
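As a rough illustration of how unsupervised learning can surface concept candidates without labeled training data, the sketch below clusters word vectors so that semantically related words group together. The toy vocabulary, the two-dimensional vectors, and the use of k-means are stand-in assumptions; in practice the embeddings would come from a model trained on large unlabeled corpora:

    # Illustrative only: cluster toy word vectors into concept candidates.
    import numpy as np
    from sklearn.cluster import KMeans

    words = ["nurse", "doctor", "hospital", "clinic", "invoice", "payment"]
    # Toy 2-D "embeddings"; real ones come from unsupervised training on text.
    vectors = np.array([
        [0.9, 0.1], [0.8, 0.2],  # profession-like words
        [0.7, 0.9], [0.6, 0.8],  # place-like words
        [0.1, 0.1], [0.2, 0.2],  # finance-like words
    ])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
    for cluster in sorted(set(labels)):
        candidates = [w for w, l in zip(words, labels) if l == cluster]
        print(f"concept candidate {cluster}: {candidates}")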


USP

1. language-agnostic
2. domain-independent
3. cross-language performance
4. self-learning
5. API-enabled

Nalantis’ language-independent ConceptNet, linked with language-dependent lexeme sets*, makes it possible to compare concepts at a language-independent meta level and to match documents written in different languages. Nalantis needs neither machine translation nor multilingual dictionaries to achieve this.

* which contain synonyms of the concepts in each specific language
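A minimal sketch of what this cross-language matching could look like: each document is reduced to a set of language-independent concept IDs (via the lexeme sets described above), and the sets are compared directly, with no translation step. The concept IDs and the Jaccard overlap measure are illustrative choices, not necessarily Nalantis’ own:

    # Compare documents at the concept level; concept IDs are hypothetical.
    def jaccard(a, b):
        """Overlap between two concept sets (1.0 = identical meaning profile)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Concept sets produced by language-specific lexeme lookup:
    doc_en = {"C_NURSE", "C_HOSPITAL"}  # from an English document
    doc_nl = {"C_NURSE", "C_HOSPITAL"}  # from a Dutch document

    print(jaccard(doc_en, doc_nl))  # 1.0: same concepts, no translation needed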

[Figure: cross-language badge]