<?xml version="1.0" encoding="utf-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2d1 20170631//EN" "JATS-journalpublishing1.dtd">
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>IJIRCSTJournal</PublisherName>
      <JournalTitle>International Journal of Innovative Research in Computer Science and Technology</JournalTitle>
      <PISSN/>
      <EISSN/>
      <Volume-Issue>Volume 14 Issue 2</Volume-Issue>
      <PartNumber/>
      <IssueTopic>Electrical &amp; Electronics Engineering</IssueTopic>
      <IssueLanguage>English</IssueLanguage>
      <Season>March - April 2026</Season>
      <SpecialIssue>N</SpecialIssue>
      <SupplementaryIssue>N</SupplementaryIssue>
      <IssueOA>Y</IssueOA>
      <PubDate>
        <Year>2026</Year>
        <Month>04</Month>
        <Day>13</Day>
      </PubDate>
      <ArticleType>Computer Sciences</ArticleType>
      <ArticleTitle>Hybrid Deep Feature Fusion for Facial Emotion Recognition Using VGG19 and ResNet152V2</ArticleTitle>
      <SubTitle/>
      <ArticleLanguage>English</ArticleLanguage>
      <ArticleOA>Y</ArticleOA>
      <FirstPage>74</FirstPage>
      <LastPage>80</LastPage>
      <AuthorList>
        <Author>
          <FirstName>Shivani Singh</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>Y</CorrespondingAuthor>
          <ORCID/>
        </Author>
        <Author>
          <FirstName>Jay Kumar Pandey</FirstName>
          <AuthorLanguage>English</AuthorLanguage>
          <Affiliation/>
          <CorrespondingAuthor>N</CorrespondingAuthor>
          <ORCID/>
        </Author>
      </AuthorList>
      <DOI>https://doi.org/10.55524/ijircst.2026.14.2.10</DOI>
      <Abstract>Facial Emotion Recognition (FER) has become an active area of research in human-computer interaction and affective computing. This study presents a deep learning-based FER framework that recognizes facial emotions in both controlled and real-world environments. The proposed method combines VGG19 and ResNet152V2 to exploit their complementary feature-learning strengths: VGG19 captures spatial and texture information, while ResNet152V2 extracts deeper semantic representations through residual learning. A full preprocessing pipeline, including normalization, resizing, and augmentation techniques such as rotation and flipping, improves robustness against changes in lighting, pose variations, and class imbalance. The framework is evaluated on two datasets, CK+ and FER2013. Experimental results show that the combined model consistently outperforms baseline architectures, achieving a classification accuracy of 98.0% on CK+ and 92.0% on FER2013, along with strong precision, recall, and F1-score. These results indicate that feature-level fusion substantially improves the framework's ability to generalize, making it suitable for a wide range of FER applications.</Abstract>
      <AbstractLanguage>English</AbstractLanguage>
      <Keywords>Deep Learning, Facial Emotion Recognition, Transfer Learning, Computer Vision, VGG-19, ResNet152V2.</Keywords>
      <URLs>
        <Abstract>https://ijircst.org/abstract.php?article_id=1457</Abstract>
      </URLs>      
    </Journal>
  </Article>
</ArticleSet>