Machine Learning, AI Application, Web Coding, Web Development

Interview Assistance

About The Project

Many job seekers struggle to evaluate their interview performance and often repeat mistakes due to a lack of feedback. To address this, I developed an AI-powered tool that analyzes voice, facial expressions, and language, providing visualized feedback to help users identify strengths and weaknesses. By designing and coding the webpage independently, I aimed to help users enhance their interview skills and excel in a competitive job market.

Final Outcome

Video

When users visit the website, they will see a large screen for camera capture, with a clean and intuitive interface to help them quickly get started.

The interface offers three toggle buttons for capture modes:

  1. Normal Camera Mode: Standard camera view.

  2. Facemesh Mode: Facial keypoint capture.

  3. PoseNet Mode: Body keypoint capture.

Users can start and stop interview practice recording with "Start" and "End" buttons, enabling real-time capture of body movements, facial expressions, and speech.
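The mode switching described above can be sketched as a small piece of UI state. This is a minimal illustration, not the project's actual code; the mode names and renderer names are assumptions.

```javascript
// Hypothetical sketch of cycling among the three capture modes.
const MODES = ["normal", "facemesh", "posenet"];

function nextMode(current) {
  const i = MODES.indexOf(current);
  return MODES[(i + 1) % MODES.length];
}

// Each mode draws a different overlay on the camera frame.
function rendererFor(mode) {
  switch (mode) {
    case "facemesh": return "drawFaceKeypoints";
    case "posenet":  return "drawBodyKeypoints";
    default:         return "drawPlainVideo";
  }
}
```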

Rating

After the recording ends, the system automatically generates an analysis report and a text transcription, displayed below the screen.

The report includes:

  1. Scores: Ratings (0-5) for body language, facial expressions, and communication.

  2. Evaluation: Summary of performance in each area.

  3. Strengths & Weaknesses: Key highlights and areas for improvement.

  4. Suggestions: Actionable tips to enhance interview skills.
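A report with this shape could be parsed and sanity-checked before rendering, for example by clamping every score to the 0-5 range. This is a hypothetical sketch; the JSON field names are assumptions, not the project's actual schema.

```javascript
// Hypothetical parser for the analysis report described above.
function parseReport(json) {
  const data = JSON.parse(json);
  // Keep every score inside the documented 0-5 range.
  const clamp = (n) => Math.max(0, Math.min(5, Number(n) || 0));
  return {
    scores: {
      bodyLanguage: clamp(data.scores?.bodyLanguage),
      facialExpression: clamp(data.scores?.facialExpression),
      communication: clamp(data.scores?.communication),
    },
    evaluation: data.evaluation ?? "",
    strengths: data.strengths ?? [],
    weaknesses: data.weaknesses ?? [],
    suggestions: data.suggestions ?? [],
  };
}
```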

Suggestion

Users can intuitively understand their performance through visualized scores and feedback, enabling them to reflect, practice, and improve body language, facial emotion control, and communication skills in a targeted manner.

Technology Application

Facemesh (ml5)

Identify facial landmark positions to analyze the connection between expressions and emotional or personality traits, interpreting the interviewee's emotional state.

PoseNet (ml5)

Capture and analyze body movements to help the AI identify which gestures convey different emotions.

Whisper

Convert spoken content from interviews into text to facilitate further analysis of language expression and communication quality.

ChatGPT

Combine all collected data for AI analysis to generate performance scores and personalized suggestions, helping users improve their interview skills effectively.

Technology Integration and Implementation Approach

Facemesh

Facemesh can be used to capture key facial points (eyes, mouth corners, forehead, etc.) and track expression changes in real time.

Use the ChatGPT API to perform preliminary analysis of expression patterns and associate them with emotion labels.
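Before an expression can be labeled, the raw keypoints have to be reduced to features. A minimal sketch, assuming keypoints arrive as `[x, y, z]` arrays (the format of ml5 Facemesh's `scaledMesh`); the choice of mouth landmarks as the feature is an illustrative assumption, not the project's actual mapping.

```javascript
// 2D distance between two [x, y, ...] keypoints.
function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// Hypothetical expression feature: ratio of vertical mouth opening to mouth
// width. Larger values suggest an open mouth (e.g. speaking or surprise).
function mouthOpenness(upperLip, lowerLip, leftCorner, rightCorner) {
  const width = dist(leftCorner, rightCorner);
  return width === 0 ? 0 : dist(upperLip, lowerLip) / width;
}
```

Features like this one can then be passed to the ChatGPT API as compact numbers instead of raw meshes.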

PoseNet

Use PoseNet to track the interviewee's key body points, such as the hands, head, and shoulders.

Define movement patterns and map them to emotion labels.
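One way to define such a pattern is a rule over PoseNet keypoints, which ml5 returns as `{ keypoints: [{ part, position: { x, y }, score }, ...] }`. The "hands above shoulders" heuristic below is an illustrative cue invented for this sketch, not the project's actual rule.

```javascript
// Look up a named keypoint in an ml5 PoseNet pose object.
function keypoint(pose, part) {
  return pose.keypoints.find((k) => k.part === part);
}

// Hypothetical movement pattern: both wrists raised above the shoulders,
// only trusted when every keypoint's confidence passes minScore.
function handsAboveShoulders(pose, minScore = 0.5) {
  const parts = ["leftWrist", "rightWrist", "leftShoulder", "rightShoulder"]
    .map((p) => keypoint(pose, p));
  if (parts.some((k) => !k || k.score < minScore)) return false;
  const [lw, rw, ls, rs] = parts;
  // Image y grows downward, so "above" means a smaller y value.
  return lw.position.y < ls.position.y && rw.position.y < rs.position.y;
}
```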

Whisper

Use Whisper to perform real-time recording and transcription of interview speech, converting spoken words into text for further analysis.
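In practice this means sending the recorded audio to OpenAI's transcription endpoint (model `whisper-1`). A minimal sketch, assuming the audio Blob comes from a browser `MediaRecorder`; the file name and error handling are illustrative.

```javascript
// Build the multipart form the transcription endpoint expects.
function buildTranscriptionRequest(audioBlob) {
  const form = new FormData();
  form.append("file", audioBlob, "interview.webm");
  form.append("model", "whisper-1");
  return form;
}

// Send the recording and return the transcribed text.
async function transcribe(audioBlob, apiKey) {
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: buildTranscriptionRequest(audioBlob),
  });
  if (!res.ok) throw new Error(`Whisper request failed: ${res.status}`);
  const data = await res.json();
  return data.text;
}
```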

ChatGPT

Use the OpenAI API to pass data to the model and set appropriate prompts.

Combine data from PoseNet, Facemesh, and Whisper to generate a personalized interview performance analysis report.

ChatGPT provides a detailed report, including feedback and improvement suggestions.
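Combining the three data streams comes down to assembling one prompt and sending it to the chat completions endpoint. The summary field names and the prompt wording below are assumptions made for illustration, not the project's actual prompt.

```javascript
// Fold the pose, face, and transcript data into a chat prompt.
function buildMessages({ poseSummary, faceSummary, transcript }) {
  return [
    {
      role: "system",
      content:
        "You are an interview coach. Score body language, facial expression, " +
        "and communication from 0 to 5, then list strengths, weaknesses, and suggestions.",
    },
    {
      role: "user",
      content:
        `Body movement data: ${poseSummary}\n` +
        `Facial expression data: ${faceSummary}\n` +
        `Transcript: ${transcript}`,
    },
  ];
}

// Request the analysis report from the chat completions endpoint.
async function analyzeInterview(data, apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-4", messages: buildMessages(data) }),
  });
  if (!res.ok) throw new Error(`ChatGPT request failed: ${res.status}`);
  const json = await res.json();
  return json.choices[0].message.content;
}
```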

Coding

HTML Code

JavaScript Code
