step 1: face detection
we use MediaPipe Face Mesh (Google's face-landmark model) to detect 478 landmarks on your face. we focus on:
- left cheek (10 points)
- right cheek (10 points)
- forehead (20 points)
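the region selection above can be sketched in plain Python. note: the landmark index lists below are illustrative placeholders, not MediaPipe's actual landmark numbering, and `region_points` is a hypothetical helper name.

```python
# Sketch: picking cheek/forehead points out of a 478-landmark face mesh.
# Index lists are stand-ins for illustration, not real Face Mesh indices.
REGIONS = {
    "left_cheek":  [50, 101, 118, 123, 137, 147, 177, 187, 205, 213],
    "right_cheek": [280, 330, 347, 352, 366, 376, 401, 411, 425, 433],
    "forehead":    list(range(9, 29)),  # 20 placeholder points
}

def region_points(landmarks, region):
    """Return the (x, y) points belonging to one facial region.

    landmarks: list of (x, y) tuples, one per detected landmark.
    """
    return [landmarks[i] for i in REGIONS[region]]

# usage with dummy landmarks (478 points laid out on a grid)
landmarks = [(i % 22, i // 22) for i in range(478)]
print(len(region_points(landmarks, "forehead")))  # 20
```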
step 2: colour extraction
we capture 15 frames over 1.5 seconds and extract RGB (red, green, blue) values from
those facial landmarks. averaging across frames smooths out momentary lighting changes and sensor noise!
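the multi-frame averaging can be sketched as a simple per-channel mean (`average_rgb` is a hypothetical helper name):

```python
def average_rgb(frames):
    """Average per-channel RGB over sampled frames to damp noise.

    frames: list of (r, g, b) tuples, one mean colour per captured frame.
    """
    n = len(frames)
    r = sum(f[0] for f in frames) / n
    g = sum(f[1] for f in frames) / n
    b = sum(f[2] for f in frames) / n
    return (r, g, b)

# two noisy samples of the same skin patch average out
frames = [(198, 148, 128), (202, 152, 132)]
print(average_rgb(frames))  # (200.0, 150.0, 130.0)
```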
step 3: colour space conversion
RGB → LAB colour space conversion. LAB is designed to match human perception:
- L*: lightness (0-100, dark to light)
- a*: green to red axis
- b*: blue to yellow axis
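a minimal sketch of the conversion, assuming 8-bit sRGB input and the D65 white point (the standard route: inverse sRGB gamma → XYZ → LAB):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 reference white)."""
    # 1. normalise and linearise (inverse sRGB gamma)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)

    # 2. linear RGB -> XYZ (standard sRGB matrix)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl

    # 3. XYZ -> LAB, scaled by the D65 white point
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    bb = 200 * (fy - fz)
    return (L, a, bb)

# pure white should land at the top of the lightness axis
L, a, b = srgb_to_lab(255, 255, 255)
print(round(L), round(a), round(b))  # 100 0 0
```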
step 4: undertone detection
using the a* and b* values, we determine if your skin has:
- warm undertone: b* > 0 (yellow/peachy tones)
- cool undertone: a* > 0, b* ≤ 0 (pink/red tones)
- neutral: balanced values
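the three rules above can be sketched as a small classifier. the `tol` dead-band (values this close to zero count as "balanced") is an assumption added for illustration; the source doesn't state a threshold.

```python
def undertone(a_star, b_star, tol=2.0):
    """Classify undertone from CIELAB a*/b*, per the rules above.

    tol is a hypothetical dead-band: a* and b* both within +/- tol
    are treated as balanced (neutral).
    """
    if abs(a_star) <= tol and abs(b_star) <= tol:
        return "neutral"
    if b_star > 0:
        return "warm"
    if a_star > 0 and b_star <= 0:
        return "cool"
    return "neutral"

print(undertone(8.0, 14.0))   # warm (yellow/peachy)
print(undertone(12.0, -3.0))  # cool (pink/red)
print(undertone(1.0, 0.5))    # neutral
```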
step 5: skin tone classification
we calculate the ITA° (Individual Typology Angle):
ITA° = arctan[(L* - 50) / b*] × 180/π
this maps to the Monk Skin Tone Scale (1-10), a modern inclusive scale
developed by Harvard sociologist Dr. Ellis Monk in partnership with Google.
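the ITA° formula and a binning step can be sketched as follows. the cut-off values in `THRESHOLDS` are hypothetical, chosen only to show the shape of the mapping (higher ITA° = lighter skin = lower Monk value); the app's actual thresholds aren't given in the text.

```python
import math

def ita_degrees(L_star, b_star):
    """ITA° = arctan((L* - 50) / b*) × 180/π"""
    return math.degrees(math.atan((L_star - 50) / b_star))

# hypothetical ITA° cut-offs for illustration: 9 cuts -> 10 Monk bins
THRESHOLDS = [55, 48, 41, 34, 28, 19, 10, -10, -30]

def ita_to_monk(ita):
    """Map an ITA° value to a Monk Skin Tone bin (1 lightest, 10 deepest)."""
    for monk, cut in enumerate(THRESHOLDS, start=1):
        if ita > cut:
            return monk
    return 10

print(round(ita_degrees(70, 15), 1))  # 53.1
print(ita_to_monk(ita_degrees(70, 15)))  # 2
```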
step 6: colour recommendations
based on your Monk scale value & undertone, we suggest colours using:
- complementary theory: opposite hues on the colour wheel
- analogous harmony: nearby hues that blend naturally
- contrast levels: lighter skin → richer, deeper colours; deeper skin →
vibrant shades or soft pastels
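the first two rules above reduce to simple arithmetic on the 360° colour wheel. the 30° analogous spread is a common convention, assumed here rather than taken from the source:

```python
def complementary(hue):
    """Opposite hue on the 360-degree colour wheel."""
    return (hue + 180) % 360

def analogous(hue, spread=30):
    """Neighbouring hues either side; a 30-degree spread is conventional."""
    return ((hue - spread) % 360, (hue + spread) % 360)

print(complementary(30))  # 210 (orange pairs with blue)
print(analogous(30))      # (0, 60)
```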
the draping feature
tap any colour to "drape" it below your face - this simulates how fabric colours look
against your skin. rate colours to build your personal palette!