YouTube Denies AI Involvement in Tech Tutorial Removals

by Sophie Williams

YouTube Creators Fear AI-Driven Content Moderation is Leading to Arbitrary Video Removals

YouTube creators are expressing concern that increased reliance on artificial intelligence for content moderation is resulting in videos being removed for reasons that are unclear and difficult to appeal, potentially impacting a significant segment of the platform’s tech tutorial community.

One creator, White, whose channel has roughly 330,000 subscribers, reported that videos demonstrating common tech procedures – such as installing Windows 11 on unsupported hardware – are again being flagged, even though similar videos of his were previously reinstated after human review.

The uncertainty stems from a lack of transparency from YouTube about its content moderation policies and processes. Creators worry that even ordinary tech content could be unexpectedly removed, leaving them unsure which topics are safe to cover. The situation mirrors broader concerns about AI's impact on content creation raised by the Electronic Frontier Foundation. Some also speculate that Microsoft's push toward mandatory Microsoft accounts, as detailed in reports about Windows 11 activation, could be influencing moderation decisions.

YouTube declined to comment on the specific concerns raised by creators. The platform currently recommends that creators produce content aligned with its guidelines, but this has not alleviated fears about arbitrary takedowns.

YouTube officials have not yet announced any changes to their content moderation policies, but have indicated they are continuing to evaluate the system.

Users nudged into creating Microsoft accounts could become loyal to Microsoft, White said. Eventually, some may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but recognized the risk of over-moderation and therefore wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s creator-support chatbot seemed “suspiciously AI-driven,” appearing to auto-respond even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that changes to automated content moderation could unexpectedly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube maintains that the recent removals were the result of human review, a claim that likely does little to ease creators’ fears about arbitrary takedowns.
