<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2d1 20170631//EN" "JATS-journalpublishing1.dtd">
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>IJIRCSTJournal</PublisherName>
      <JournalTitle>International Journal of Innovative Research in Computer Science and Technology</JournalTitle>
      <PISSN>I</PISSN>
      <EISSN>S</EISSN>
      <Volume-Issue>Volume 9 Issue 6</Volume-Issue>
      <PartNumber/>
      <IssueTopic>Computer Science</IssueTopic>
      <IssueLanguage>English</IssueLanguage>
      <Season>November - December 2021</Season>
      <SpecialIssue>N</SpecialIssue>
      <SupplementaryIssue>N</SupplementaryIssue>
      <IssueOA>Y</IssueOA>
      <PubDate>
        <Year>2022</Year>
        <Month>01</Month>
        <Day>04</Day>
      </PubDate>
      <ArticleType>Computer Sciences</ArticleType>
      <ArticleTitle>NLP-based Sign Gesture Identification for Disabled People</ArticleTitle>
      <SubTitle/>
      <ArticleLanguage>English</ArticleLanguage>
      <ArticleOA>Y</ArticleOA>
      <FirstPage>41</FirstPage>
      <LastPage>45</LastPage>
      <AuthorList>
        <Author>
<FirstName>Pankaj Saraswat</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>Y</CorrespondingAuthor>
          <ORCID/>
        </Author>
      </AuthorList>
      <DOI>https://doi.org/10.55524/ijircst.2021.9.6.9</DOI>
      <Abstract>There are many methods for identifying signs, each of which generates a word for every gesture. This work focuses on converting sign language into a grammatical English sentence, applying NLP techniques in addition to sign recognition. The input is a video of sign language that is framed and split. The work also helps deaf and mute people learn sign language. It is difficult for hearing people to engage with deaf people because of communication barriers; to address this issue, the article proposes and describes an effective method. Natural language processing techniques such as POS tagging and an LALR parser are used to convert the identified sign words into English phrases. A number of applications are currently on the market that allow deaf people to interact with the world, but these technologies in combination still cannot address the problem of mobile sign language translation in daily activities. A video interpreter can assist deaf or hearing-impaired people in a variety of situations. As a consequence of this research, people with hearing impairments will be able to learn sign language and have videos translated into sign language. The present work may be used as a communication interface for both speech-impaired and non-speech-impaired individuals. It will help bridge the communication gap between speech-impaired people and the rest of the population by capturing and analyzing signs, and by recognizing and displaying output in the form of comprehensible phrases.</Abstract>
      <AbstractLanguage>English</AbstractLanguage>
      <Keywords>Communication, Hearing and speech, NLP, Parsing, Sign Language.</Keywords>
      <URLs>
        <Abstract>https://ijircst.org/abstract.php?article_id=622</Abstract>
      </URLs>      
    </Journal>
  </Article>
</ArticleSet>