Google says it has made it possible for a smartphone to interpret and “read aloud” sign language.

The tech firm has not made an app of its own but has published algorithms which it hopes developers will use to make their own apps.

Until now, this type of software has only worked on PCs.

Campaigners from the hearing-impaired community have welcomed the move, but say the tech might struggle to fully grasp some conversations.

In an AI blog post, Google research engineers Valentin Bazarevsky and Fan Zhang said the intention of the freely published technology was to serve as “the basis for sign language understanding”. It was built with MediaPipe, Google’s open-source framework for processing images and video.

“We’re excited to see what people come up with. For our part, we will continue our research to make the technology more robust and to stabilise tracking, increasing the number of gestures we can reliably detect,” a spokeswoman for Google told the BBC.

Google acknowledges this is only a first step. Campaigners say an app that produced audio from hand signals alone would miss facial expressions and the speed of signing, both of which can change the meaning of what is being signed.

Nor would it pick up regional variations in sign language.