<?xml version="1.0" encoding="utf-8"?>
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>IJIRCSTJournal</PublisherName>
      <JournalTitle>International Journal of Innovative Research in Computer Science and Technology</JournalTitle>
      <PISSN/>
      <EISSN/>
      <Volume-Issue>Volume 9 Issue 4</Volume-Issue>
      <PartNumber/>
      <IssueTopic>Computer Science and Engineering</IssueTopic>
      <IssueLanguage>English</IssueLanguage>
      <Season>July - August 2021</Season>
      <SpecialIssue>N</SpecialIssue>
      <SupplementaryIssue>N</SupplementaryIssue>
      <IssueOA>Y</IssueOA>
      <PubDate>
        <Year>2021</Year>
        <Month>07</Month>
        <Day>23</Day>
      </PubDate>
      <ArticleType>Computer Sciences</ArticleType>
      <ArticleTitle>Recognition of Indian Sign Languages</ArticleTitle>
      <SubTitle/>
      <ArticleLanguage>English</ArticleLanguage>
      <ArticleOA>Y</ArticleOA>
      <FirstPage>36</FirstPage>
      <LastPage>38</LastPage>
      <AuthorList>
        <Author>
          <FirstName>Sahara Shetty</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>Y</CorrespondingAuthor>
          <ORCID/>
        </Author>
        <Author>
          <FirstName>Yashaswi G. P</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>N</CorrespondingAuthor>
          <ORCID/>
        </Author>
      </AuthorList>
      <DOI>https://doi.org/10.21276/ijircst.2021.9.4.9</DOI>
      <Abstract>The focal point of this project is developing a user interface to aid people with hearing and speech disabilities. The application recognizes the different signs made by a single user. Real-time images are captured using a webcam and stored dynamically. Features are initialized and saved in a K-Nearest Neighbor (KNN) classifier, which is then loaded as the KNN model. This model predicts signs dynamically and is built using machine learning techniques, with MobileNet serving as the underlying machine learning model. The system was implemented in three phases. The first phase handles the user interface, where user images are captured using a webcam. In the second phase, the initialized features are fed to the KNN classifier and stored as the KNN model. The third phase performs the prediction of signs.</Abstract>
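      <!-- A minimal Python sketch of the three-phase pipeline the abstract describes,
           assuming OpenCV for webcam capture, a Keras MobileNet as the feature
           extractor, and scikit-learn's KNeighborsClassifier as the KNN model; the
           training vectors and labels below are hypothetical placeholders, not the
           authors' dataset or exact implementation.

      import cv2
      import numpy as np
      from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input
      from sklearn.neighbors import KNeighborsClassifier

      # Phase 2: MobileNet without its classification head acts as a feature extractor.
      extractor = MobileNet(weights="imagenet", include_top=False, pooling="avg")

      def features(frame):
          # Resize a BGR webcam frame to MobileNet's 224x224 input and embed it.
          img = cv2.resize(frame, (224, 224))
          img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
          return extractor.predict(preprocess_input(img[np.newaxis]), verbose=0)[0]

      # Hypothetical training set: in the real application these vectors come from
      # the labelled sign images captured and stored in phase 1.
      X_train = np.random.rand(10, 1024)   # avg-pooled MobileNet features are 1024-D
      y_train = ["hello", "thanks"] * 5
      knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

      # Phases 1 and 3: capture frames live and predict the sign in each one.
      cap = cv2.VideoCapture(0)
      while cap.isOpened():
          ok, frame = cap.read()
          if not ok:
              break
          sign = knn.predict([features(frame)])[0]
          cv2.putText(frame, str(sign), (10, 30),
                      cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
          cv2.imshow("Indian Sign Language Recognition", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()
      -->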
      <AbstractLanguage>English</AbstractLanguage>
      <Keywords>Indian sign language, TensorFlow, SqueezeNet</Keywords>
      <URLs>
        <Abstract>https://ijircst.org/abstract.php?article_id=600</Abstract>
      </URLs>      
    </Journal>
  </Article>
</ArticleSet>