
What is Natural Language Processing?

by Mansoor Ahmed
Posted: Dec 13, 2020

Description

Natural language processing (NLP) is a branch of AI that helps computers understand, interpret and manipulate human language. It draws from many disciplines, including computer science and linguistics, in its pursuit to fill the gap between human communication and computer understanding.

Evolution of language processing

While natural language processing isn't a new science, the technology is rapidly advancing thanks to an increased interest in human-to-machine communications, plus the availability of big data, powerful computing and enhanced algorithms. As humans, we speak and write in English, Spanish or Chinese. But a computer's native language – known as machine code or machine language – is largely incomprehensible to most people. At your device's lowest levels, communication occurs not with words but through millions of zeros and ones that produce logical actions.

Indeed, programmers used punch cards to communicate with the first computers 70 years ago. This manual and arduous process was understood by a relatively small number of people. Now we can say, "Alexa, I like this song," and a device playing music in our home will lower the volume and reply, "OK. Rating saved," in a humanlike voice. It then adapts its algorithm to play that song – and others like it – the next time we listen to that music station. Let's take a closer look at that interaction. Our device activated when it heard us speak, understood the unspoken intent in the comment, executed an action and provided feedback in a well-formed English sentence, all in the space of about five seconds. The whole communication was made possible by NLP, along with other AI elements such as machine learning and deep learning.

Why is NLP important?

Large volumes of textual data

Natural language processing helps computers communicate with humans in their own language and scales other language-related tasks. For instance, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment and determine which parts are important.

Machines today can analyze more language-based data than humans can, without fatigue and in a consistent, unbiased way. Considering the staggering amount of unstructured data that's generated every day, from medical records to social media, automation will be critical to fully analyze text and speech data efficiently.
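
To make this concrete, here is a minimal sketch of automated sentiment scoring using NLTK's VADER analyzer; the two sample sentences are made up for illustration.

```python
# A minimal sketch of machine-scale sentiment analysis with NLTK's VADER
# analyzer. The sample sentences are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

sia = SentimentIntensityAnalyzer()
reviews = [
    "The staff were friendly and the treatment worked well.",
    "Long wait times and nobody answered my questions.",
]
for text in reviews:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(f"{scores['compound']:+.2f}  {text}")
```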

Structuring a highly unstructured data source

Human language is astoundingly complex and diverse. We express ourselves in infinite ways, both verbally and in writing. Not only are there many languages and dialects, but within each language is a unique set of grammar and syntax rules, terms and slang. When we write, we often misspell or abbreviate words, or omit punctuation. When we speak, we have regional accents, and we mumble, stutter and borrow terms from other languages.

There's also a need for syntactic and semantic understanding and domain expertise that aren't necessarily present in these machine learning approaches. NLP is important because it helps resolve ambiguity in language and adds useful numeric structure to the data for many downstream applications, such as speech recognition or text analytics.
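
As a small illustration of adding numeric structure to raw text, here is a sketch that turns sentences into a TF-IDF matrix with scikit-learn; the sample texts are invented.

```python
# A minimal sketch: turning unstructured sentences into a numeric
# TF-IDF matrix with scikit-learn. The sample texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Patient reports mild headache, no fever.",
    "No fever, but the headache will not go away!",
]

vectorizer = TfidfVectorizer()          # tokenizes, lowercases, weights terms
X = vectorizer.fit_transform(docs)      # sparse matrix, shape (n_docs, n_terms)

print(vectorizer.get_feature_names_out())
print(X.toarray().round(2))             # numeric structure for downstream models
```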

Methods: Rules, statistics, neural networks

In the early days, many language-processing systems were designed by symbolic methods, i.e., the hand-coding of a set of rules, coupled with a dictionary lookup: for example, by writing grammars or devising heuristic rules for stemming. More recent systems based on machine-learning algorithms have many advantages over hand-produced rules:

The learning procedures used during machine learning automatically focus on the most common cases, whereas when writing rules by hand it is often not at all obvious where the effort should be directed.

Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to unfamiliar input (e.g. containing words or structures that have not been seen before) and to erroneous input (e.g. with misspelled words or words accidentally omitted). Generally, handling such input gracefully with handwritten rules, or, more broadly, creating systems of handwritten rules that make soft decisions, is extremely difficult and time-consuming.

Systems based on automatically learning the rules can be made more accurate simply by supplying more input data. However, systems based on handwritten rules can only be made more accurate by increasing the complexity of the rules, which is a much more difficult task. In particular, there is a limit to the complexity of systems based on handwritten rules, beyond which they become more and more unmanageable. By contrast, creating more data to feed machine-learning systems simply requires a corresponding increase in the number of man-hours worked, generally without significant increases in the complexity of the annotation process.

Despite the popularity of machine learning in NLP research, symbolic methods are still (as of 2020) commonly used when the amount of training data is insufficient to successfully apply machine learning methods.
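
To make the symbolic style concrete, here is a toy hand-coded stemmer in the spirit of the heuristic rules mentioned above. The rule list is hypothetical and deliberately incomplete; its mis-stems hint at why hand-written rule systems grow unmanageable.

```python
# A toy hand-coded suffix stemmer of the kind symbolic systems used:
# a small ordered rule list, deliberately incomplete, for illustration only.
RULES = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]

def stem(word: str) -> str:
    for suffix, replacement in RULES:          # first matching rule wins
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

print([stem(w) for w in ["caresses", "ponies", "running", "jumped", "cats"]])
# ['caress', 'poni', 'runn', 'jump', 'cat'] -- note the mis-stems
# ("running" -> "runn") that a real system needs ever more rules to patch
```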

Statistical methods

Since the so-called "statistical revolution" in the late 1980s and mid-1990s, much natural language processing research has relied heavily on machine learning. The machine-learning paradigm calls instead for using statistical inference to automatically learn such rules through the analysis of large corpora of typical real-world examples.

Numerous classes of machine-learning algorithms have been applied to natural-language-processing tasks. Some of the earliest-used machine learning algorithms, such as decision trees, produced systems of hard if-then rules similar to existing hand-written rules. The cache language models upon which many speech recognition systems now rely are examples of such statistical models. Since the neural turn, statistical methods in NLP research have been largely replaced by neural networks. However, they remain relevant in contexts where statistical interpretability and transparency are required.
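
As a sketch of the statistical style, the snippet below estimates a bigram language model by counting a toy corpus. Real speech-recognition models are trained on far larger corpora and add smoothing for unseen word pairs; this sketch omits both.

```python
# A minimal bigram language model in the statistical style: estimate
# P(word | previous word) by counting adjacent word pairs in a toy corpus.
from collections import Counter

corpus = "we like this song we like this station we hear this song".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of each left-hand word

def prob(prev: str, word: str) -> float:
    """Maximum-likelihood estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

print(prob("this", "song"))     # 0.67: "this" is followed by "song" 2 times out of 3
print(prob("this", "station"))  # 0.33
```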

Neural networks

A major drawback of statistical methods is that they require elaborate feature engineering. Since the early 2010s, the field has thus largely abandoned statistical methods and shifted to neural networks for machine learning. Popular techniques include the use of word embeddings to capture semantic properties of words, and an increase in end-to-end learning of a higher-level task (e.g. question answering) instead of relying on a pipeline of separate intermediate tasks (e.g. part-of-speech tagging and dependency parsing). In some areas this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing.
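
The word-embedding idea can be sketched in a few lines: each word maps to a dense vector, and cosine similarity between vectors approximates semantic relatedness. The 4-dimensional vectors below are made-up toy values; real embeddings such as word2vec or GloVe are learned from large corpora and have hundreds of dimensions.

```python
# A toy illustration of word embeddings: words as dense vectors whose
# cosine similarity reflects relatedness. These 4-d vectors are made-up
# values; real embeddings (e.g. word2vec, GloVe) are learned from corpora.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.7, 0.1, 0.2]),
    "queen": np.array([0.8, 0.6, 0.2, 0.9]),
    "apple": np.array([0.1, 0.0, 0.9, 0.3]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```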

About the Author

Mansoor Ahmed is a chemical engineer and web developer.
