Google has introduced SignGemma, an AI model designed to translate sign language into spoken or written text in real time.
The model aims to bridge communication gaps between the deaf community and hearing individuals, promoting inclusivity and accessibility.
Trained on video data, SignGemma can recognize complex hand gestures, facial expressions, and body movements to interpret sign language accurately.
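Google has not yet published a developer interface for SignGemma, so the sketch below is purely illustrative: it shows the general shape of a video-to-text translation loop, with OpenCV handling frame capture and a hypothetical `translate_signs` callable standing in for whatever inference call is eventually released.

```python
# Hypothetical sketch only: SignGemma's public API has not been released.
# `translate_signs` is a placeholder for a model that maps a frame sequence to text.
from typing import Callable, List, Union

import cv2  # OpenCV, used here for webcam/video frame capture
import numpy as np


def collect_frames(source: Union[int, str] = 0, max_frames: int = 64) -> List[np.ndarray]:
    """Grab up to `max_frames` RGB frames from a webcam index or video file path."""
    cap = cv2.VideoCapture(source)
    frames: List[np.ndarray] = []
    while len(frames) < max_frames:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # OpenCV returns BGR; convert to RGB, which most vision models expect.
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames


def translate_clip(translate_signs: Callable[[List[np.ndarray]], str],
                   source: Union[int, str] = 0) -> str:
    """Capture a short clip and hand it to a sign-translation model.

    `translate_signs` is a stand-in for the (not yet published) SignGemma
    inference call; any function that turns a frame sequence into text fits.
    """
    frames = collect_frames(source)
    if not frames:
        return ""
    return translate_signs(frames)
```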
Initially focused on American Sign Language (ASL), SignGemma holds promise for expansion to other global sign languages.
Google envisions applications in real-time conversations, education, customer support, and more — making daily interactions smoother for signers.
SignGemma is currently a research model, but it represents a significant step toward real-world AI solutions for the deaf and hard-of-hearing communities.