Unsafe Search - why Google's SafeSearch function is not fit for purpose

CST and Antisemitism Policy Trust (APT) have jointly produced a new report that finds Google’s current tools for filtering offensive images to be inadequate and not fit for purpose.

Analysis conducted on behalf of CST and APT by the University of Cambridge’s Woolf Institute shows that Google’s ‘SafeSearch’ function, which can be enabled to restrict the amount of explicit and harmful content returned in searches, has no effect on the volume of antisemitic images that appear in Google Images search results.

This research demonstrates that whether or not SafeSearch - the only safety tool that Google makes available for public use - is switched on, a high proportion of visual antisemitic material is returned when searches for "Jewish jokes" and "Jew jokes" are entered into the search engine.

The findings of this report also indicate that another tool designed by Google, used by web developers to identify violent, adult and inappropriate content, is also incapable of accurately identifying antisemitic images. The Google Cloud Vision API ('GCV API'), which is purpose-built for industry use, does not even have a category that specifically captures images of an antisemitic, racist or discriminatory nature. Pictures that this technology did adjudge to contain antisemitic material were usually tagged as "spoof". Of the 369 images collected for testing, 40% were wrongly classified by GCV API, either labelled as antisemitic when they were not, or vice versa.

As it stands, neither the SafeSearch nor the GCV API software is sufficiently intelligent to recognise and block antisemitic images that appear in Google Search results. They are respectively ineffective at protecting users from exposure to hateful content and at helping web developers to identify and exclude Jew-hating material for their online audiences. CST and APT argue that internet regulation is urgently needed.

Dr Dave Rich, Director of Policy at the Community Security Trust, said:

“Google is the world’s most popular search engine, yet our research shows that their own so-called ‘SafeSearch’ function is incapable of accurately identifying and blocking antisemitic images. This is yet another example of Internet companies simply not doing enough to proactively stop the spread of hateful material online. There is an urgent need for government regulation to force these companies to properly filter hateful content and protect users from abuse.”

Danny Stone MBE, Chief Executive of the Antisemitism Policy Trust, said:

“We expect the Online Safety Bill to be brought before parliament imminently. This report proves why we urgently require that bill to place a Duty of Care on companies like Google. Safety by Design, risk assessments and efforts to counter hate should be at the very centre of platform thinking from the outset, not an afterthought.”

The two organisations also urge Google to improve these tools and to use them alongside - not instead of - expert human annotators. Only with this specialist input will Google be able to filter antisemitic, racist and discriminatory content more actively and comprehensively. Until then, browsing the internet will always carry an element of risk.

Read CST and APT's report 'Unsafe Search'.

Source: CST