The focal point of this project is the development of a user interface to aid people with hearing and speech disabilities. The application recognizes sign-language gestures trained for an individual user. Real-time images are captured with a webcam and stored dynamically. The captured examples are used to train a K-Nearest Neighbors (KNN) classifier, which is then loaded as the recognition model. This model predicts signs dynamically and is built using machine learning techniques, with MobileNet serving as the feature-extraction network. The system was implemented in three phases. The first phase covers the user interface, where user images are captured through the webcam. In the second phase, the extracted features are fed to the KNN classifier and stored as the KNN model. The third phase involves the prediction of signs.
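The three phases can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's actual code: the MobileNet embedding step is replaced by a placeholder `extract_features` function, and the KNN classifier is implemented directly with NumPy on a small synthetic dataset rather than live webcam frames.

```python
import numpy as np

def extract_features(image):
    # Placeholder for the feature-extraction step. In the real
    # pipeline, a webcam frame would be passed through MobileNet
    # and its embedding vector returned here.
    return image.astype(float).ravel()

class KNNClassifier:
    """Minimal K-Nearest Neighbors classifier (Phase 2)."""

    def __init__(self, k=3):
        self.k = k
        self.features = []
        self.labels = []

    def add_example(self, feature, label):
        # Store each captured example with its sign label.
        self.features.append(feature)
        self.labels.append(label)

    def predict(self, feature):
        # Phase 3: classify by majority vote among the k nearest
        # stored examples, using Euclidean distance.
        X = np.array(self.features)
        dists = np.linalg.norm(X - feature, axis=1)
        nearest = np.argsort(dists)[: self.k]
        votes = [self.labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

# Toy example: two "signs" represented by synthetic 2x2 images.
knn = KNNClassifier(k=3)
for _ in range(3):
    knn.add_example(extract_features(np.zeros((2, 2))), "hello")
    knn.add_example(extract_features(np.ones((2, 2)) * 9), "thanks")

# A query near the zero images is classified as "hello";
# one near the nine-valued images as "thanks".
print(knn.predict(extract_features(np.ones((2, 2)))))
print(knn.predict(extract_features(np.ones((2, 2)) * 8)))
```

In the full system, `add_example` would be called once per captured webcam frame during enrollment, and `predict` would run on each new frame to produce the dynamic sign prediction described above.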