Recently, an influx of horrifyingly realistic AI videos has flooded the timelines of social media users. This stems from the release of the app Sora AI, a text-to-video model whose outputs appear unparalleled compared with the flawed AI videos we have seen in the past. It is surreal to see social media filled with fabricated images so close to reality, ones generated from nothing more than a description.
In the field of journalism, the shift that Sora and future AI models will create seems unimaginable to me. Facts, stories, articles, and quotes can all easily be fabricated as text, but images, and especially videos, have always seemed to be proof of what is really real. I don't know what to think, knowing that soon AI will be able to construct video evidence indistinguishable from fact, and that any ounce of apparent truth will need to be taken with a grain of salt.
Beyond my life and what I consume, and even beyond our school and community, the implications are even sharper. A single AI-generated clip could spark public outrage, manipulate the results of elections, or become a catalyst for war, especially as AI content grows more and more prominent among the masses. As if we were living in Orwell's 1984, history can be rewritten, removed, and reworked with simple text prompts in the age of image and video generation models. Meanwhile, real destruction, violence, and oppression could be dismissed as if they were just more imagery derived from a text prompt.
These consequences of the fake just make truth all the more valuable. As journalists, we need to understand that AI does not undermine our mission but actually makes it more worthwhile. Without reliable sources that free the public from mass manipulation and confusion, we as a people will struggle. In this issue, and every issue, we hope to be a source our readers can lean on against that struggle, even if only from a localized student point of view.