The number of AI-generated videos on YouTube (and other video sharing platforms) has absolutely exploded. For some reason (and perhaps unfortunately), YouTube itself is not only A-ok with this, but is actively encouraging, supporting and promoting it, essentially declaring it the best thing since sliced bread because it supposedly allows people to express themselves better, and yada yada.
In reality, at least 99% of AI-generated videos on YouTube are absolute meaningless trash, ie. so-called "AI-slop". It's essentially just people typing some prompt into one of those AI video generator websites and uploading the result, the video often looking uncanny and jarring, and its contents being completely meaningless, pointless, random "brainrot" material.
Or, when it's not completely random or pointless, it's misinformation, completely made-up stuff (of the clickbait kind), or outright scams.
One defining feature of "AI-slop" is that it's extremely low-effort content: the author put absolutely no effort or work into it, and shows no talent of his own.
Anyway, among all that, a rather curious form of "AI-slop" has also emerged: videos that are actual real-life footage of the channel author, who is eg. speaking directly to the camera, but who is reading an AI-generated script, most often without divulging that he is doing so. The only "talent" in such videos is speaking as fluently and naturally as possible, so as to hide the fact that the author is just mechanically reading a script.
It works like this: The author takes two or three online articles about a particular subject (usually a current event), feeds them to eg. ChatGPT, and asks it to write a script for eg. a 10-minute video summarizing and commenting on the content of those articles.
The contents of the articles are simply copy-pasted into ChatGPT, without telling it where they came from, or even that they are articles written by someone else, which pretty much guarantees that the AI won't add any mentions of or references to the source of the information.
Getting a full script for a 10-minute video in this way usually takes just a minute or two, which makes the scriptwriting itself extremely low-effort. Reading the script on camera is also typically very straightforward, often done without any edits, so even if the author has to start over because he fumbled some line, a full video can be done and uploaded in maybe 15 minutes to an hour.
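Just to illustrate how low-effort the scriptwriting step is, here's a minimal sketch of the same thing done programmatically rather than through the ChatGPT website, assuming the official openai Python client; the file names and the model name are hypothetical placeholders:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The article bodies are pasted in with no titles, authors or URLs,
    # which is exactly what keeps the model from citing its sources.
    articles = [open(path).read() for path in ("article1.txt", "article2.txt")]

    prompt = (
        "Write a script for a 10-minute YouTube video that summarizes and "
        "comments on the following articles:\n\n" + "\n\n---\n\n".join(articles)
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)  # the ready-to-read script

One request, a few seconds of waiting, and out comes the entire script, ready to be read to the camera.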
This sits right on that fuzzy border of acceptability.
In their defense, technically speaking this is neither copyright infringement nor plagiarism, because the script itself is original. Well, "original" in the sense that it doesn't directly copy anyone else's work word-for-word; every sentence is newly written. Essentially, the script has been "ghostwritten" by the AI. Commenting on articles written by someone else, using your own original words and commentary, falls under the Fair Use doctrine.
Ghostwriting is actually extremely common. In fact, for all intents and purposes, all news broadcasts, documentaries etc. are "ghostwritten". The only difference here is that the "ghostwriter" is an AI program rather than a person. Most productions that have been "ghostwritten" by someone, such as news broadcasts, don't usually even give credit to the ghostwriter (although some do, especially if there are end credits of some kind). Many public speakers also use ghostwriters to write their speeches, and those ghostwriters seldom get credit (at least not directly). Heck, even many videos by big-time YouTubers have been ghostwritten by someone else, although usually a human.
The problem with these videos, where the "ghostwriter" is ChatGPT, is that these YouTubers usually don't reveal this to their viewers, and instead imply that they either wrote the script themselves or are speaking spontaneously without a script. They hide this information on purpose: it would certainly mar their credibility and reputation to admit, "I'm just reading, word-for-word, a script written by ChatGPT. None of these are my own original words."
This is deceptive. Not only would the audience generally like to know about it, but it also makes the author appear smarter and more eloquent than he probably actually is.
It becomes particularly deceptive when the video is monetized (as it most usually is): the author is earning ad revenue from content where the audience has been misled into admiring him for his opinions, his eloquence and his intelligence, something they might well not do if they knew that the author is, in reality, just mechanically reading a fully chatbot-generated script.
Right now this may be extra deceptive, because many of those viewers may be exactly the sort of people who are tired of low-effort AI-slop and who appreciate genuine content made by real people expressing their actual opinions. The fact that the video is still just AI-generated content, even if the person reading it is real, is being hidden from them.
(There's also the problem that not citing the sources of the information, ie. those online articles written by other people, is at a minimum improper, academically speaking.)