Popular live social discovery platform Yubo allows Gen Zers to meet new friends and connect with other people their age from all over the world. While the experience is generally positive, Yubo is still a digital platform and, as on other major social media networks, some people on the app choose not to respect the Community Guidelines in place. Their solution? On top of a large safety team and a broad range of safety features, including text and visual moderation, Yubo has recently added audio moderation to the mix.
What is Yubo’s new audio moderation?
Yubo’s audio moderation is intended to catch anything spoken in livestreams that text- or visual-based automated moderation cannot detect, such as language that falls into the categories of hate speech, bullying, self-harm, violence, or threats. To do this, Yubo has partnered with Hive, a leading provider of cloud-based AI solutions for understanding content. A trial phase of the technology was conducted in the US in May 2022, and Yubo recently expanded it to additional English-speaking regions, including the UK, Australia, and Canada.
With this launch, Yubo becomes the first social media app to introduce audio moderation. So how does it actually work?
The audio moderation tool records and transcribes 10-second audio snippets taken from livestreams with at least 10 participants, then scans the resulting text using AI designed by Hive. To keep accuracy high, the tool only flags transcripts that contain words or phrases violating Yubo’s Community Guidelines. With more than 600 livestreams scanned daily, the audio moderation tool will help Yubo increase safety across the app.
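The flow described here can be pictured as a simple filter: sample a snippet from a large-enough livestream, transcribe it, and flag the transcript only if it contains violating language. The sketch below is purely illustrative, assuming a keyword-style check and made-up data shapes; it is not Yubo’s or Hive’s actual API.

```python
# Illustrative sketch of the audio-moderation flow, not real Yubo/Hive code.

MIN_PARTICIPANTS = 10   # only livestreams with at least 10 participants are sampled
SNIPPET_SECONDS = 10    # each sampled audio snippet is 10 seconds long

def scan_livestream(participants, transcript, banned_phrases):
    """Return a flagged result for a violating transcript, or None."""
    if participants < MIN_PARTICIPANTS:
        return None  # stream too small to be sampled
    matches = [p for p in banned_phrases if p in transcript.lower()]
    if matches:      # only violating transcripts are flagged for human review
        return {"transcript": transcript, "matches": matches}
    return None      # clean transcripts are not flagged
```

In practice the transcription and scanning are done by Hive’s AI rather than a keyword list, but the shape of the decision, flag only what appears to violate the guidelines, is the same.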
Audio moderation is not left entirely to the technology. Yubo’s team of Safety Specialists, who specialize in online safety and moderation, receive the flagged transcripts. From there, they determine whether moderation actions should be applied to the offender’s account, or even whether law enforcement needs to be notified.
What happens to the transcripts that aren’t flagged? If a transcript contains no suspected violations, Yubo does not review or save it. This content is deleted after 24 hours to protect users’ privacy, while leaving enough time for moderators to check the transcripts if a user is reported by another user.
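The retention rule above can be sketched in a few lines: flagged transcripts are kept for review, while unflagged ones survive only within the 24-hour window. The field names and function below are illustrative assumptions, not Yubo’s actual storage logic.

```python
# Illustrative sketch of the 24-hour retention rule for unflagged transcripts.
from datetime import datetime, timedelta

RETENTION = timedelta(hours=24)  # unflagged transcripts live at most 24 hours

def purge_unflagged(transcripts, now):
    """Keep flagged transcripts, plus unflagged ones younger than 24 hours."""
    return [
        t for t in transcripts
        if t["flagged"] or now - t["created_at"] < RETENTION
    ]
```

The short grace period is the design point: privacy by default, but moderators can still look back at a recent transcript if another user files a report.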
What does this new audio moderation offer to the Yubo App?
Safety is at the core of Yubo’s values, and this new audio moderation method further advances its mission to create a digital safe space for Gen Z.
Why can Yubo trust this audio moderation as a moderation tool?
Machine learning allows the algorithm and AI behind the technology to process large amounts of data quickly and effectively, delivering results far faster than manual analysis could. Other benefits of machine learning for audio moderation are that it continuously works to improve its accuracy and that it can even help Yubo uncover negative behaviors and patterns that might otherwise never be detected.
Are there downsides to Yubo’s audio moderation?
The biggest challenge Yubo faces with its audio moderation is the possibility of false positives. Audio moderation is programmed to detect any use of a foul word, but that doesn’t always mean the word is being used in a way that actively violates the app’s Community Guidelines. For instance, the tool might detect a harmful word said by a user when, in reality, it merely picked up lyrics from a song the user was listening to.
This isn’t a major concern for the Yubo team: the moderation team still reviews every flagged transcript before applying any action to the offending user’s account, which allows them to determine the context in which the word was used.
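In other words, a flag is never the final word; a Safety Specialist’s contextual judgment decides the outcome. A minimal sketch of that human-review step, with verdict and outcome labels that are purely illustrative assumptions:

```python
# Illustrative sketch of the human-review step: flagged transcripts are never
# actioned automatically; a Safety Specialist judges context first.

def resolve_flag(verdict):
    """Map a Safety Specialist's contextual judgment to an outcome."""
    outcomes = {
        "false_positive": "no_action",           # e.g. song lyrics, not abuse
        "violation": "apply_moderation_action",  # guideline breach confirmed
        "severe": "notify_law_enforcement",      # credible threat or danger
    }
    return outcomes.get(verdict, "escalate_for_second_review")
```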
Yubo’s new audio moderation is a step in the right direction toward a safer online experience for all.