Two weeks after the UK government passed the Online Safety Act into law, the regulator responsible for enforcing the legislation has set out its draft guidance for companies that fall under its scope.

The UK's communications regulator, Ofcom, has published draft codes of practice and guidance for social media, gaming, pornography, search and sharing sites in the wake of the Online Safety Act coming into force last month. The act aims to keep websites and other internet-based services free of illegal and harmful material while defending freedom of expression. It applies to search engines; internet services that host user-generated content, such as social media platforms; online forums; some online games; and sites that publish or display pornographic content.

In an online post outlining the draft codes of practice, Ofcom said that companies will be required to assess the risk of users being harmed by illegal content on their platform, and take appropriate steps to minimize these potential harms. A core list of measures that services can adopt to mitigate the risk of all types of illegal harm includes:

- Having a named person accountable to their most senior governance body for compliance with content regulations.
- Making sure content and search moderation teams are well resourced and trained, and monitoring their performance against targets.
- Ensuring users can easily report potentially harmful content, make complaints, block other users and disable comments.
- Running safety tests for recommender algorithms.

When it comes to specific harms, such as protecting children online, larger and higher-risk services (defined by Ofcom as user-to-user services such as social media platforms) should exclude children's accounts from suggested friends and connections lists and hide their location information. Accounts outside a child's connection list must not be able to send them direct messages.
Ofcom is also proposing that larger and higher-risk services use "hash matching" technology to identify child sexual abuse material (CSAM) by matching uploaded content against a database of known illegal images. Platforms should also use automated tools to detect URLs that have been identified as hosting CSAM. All large general search services should provide crisis prevention information in response to search queries about suicide, including queries seeking information on suicide methods.

To tackle fraud and terrorism offenses online, large higher-risk services will be required to deploy keyword detection to find and remove posts linked to the sale of stolen credentials, such as credit card details, and to block accounts run by proscribed terrorist organizations.

Ofcom is now consulting with industry and other experts before publishing the final version of its codes of practice in autumn 2024. Services will then have three months to conduct their risk assessments, and Ofcom's final codes of practice will be subject to Parliamentary approval.

"Ofcom is not a censor. We won't have powers to take content down," said Ofcom's CEO, Dame Melanie Dawes, in a statement published alongside the announcement. "Instead, our job is to tackle the root causes of harm by setting new standards and requiring firms to design their services with safety in mind. We'll make sure our rules are practical and take full account of people's privacy – as well as free expression, the lifeblood of discussion online."
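To illustrate the general idea behind hash matching: in production, services use perceptual hashing systems (such as Microsoft's PhotoDNA or Meta's PDQ) that can match visually similar images, not just identical files. The minimal sketch below uses a plain cryptographic hash instead, which only catches byte-identical copies; the lookup database and its contents are hypothetical example data, not real hashes.

```python
import hashlib

# Hypothetical example set. In practice this would be populated from a
# vetted database maintained by a body such as the Internet Watch
# Foundation, and a perceptual hash (PhotoDNA, PDQ) would be used so
# that re-encoded or lightly edited copies still match.
KNOWN_ILLEGAL_HASHES = {
    # SHA-256 digest used purely as placeholder demo data
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_content(data: bytes) -> str:
    """Return a hex digest to compare against the known-image database."""
    return hashlib.sha256(data).hexdigest()

def is_known_illegal(data: bytes) -> bool:
    """Flag an upload whose digest appears in the database."""
    return hash_content(data) in KNOWN_ILLEGAL_HASHES
```

The design point is that the platform never needs to hold the illegal images themselves: it compares digests of uploads against a list of digests supplied by a trusted third party, which is what makes the approach workable at scale.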