Rosanna Pansino has been sharing her baking creations with the web for over 15 years, hoping to delight and inspire with fun creations that include a Star Wars Death Star cake and holographic chocolate bars. But in her latest series, she has a new goal: “Kick AI’s butt.”
Blame it on the AI slop overwhelming her social media feeds. Pansino used to see posts from real bakers and friends; now, they’re being crowded out by AI-generated clips. There’s a whole genre of slop videos that feature food, including a bizarre trend of unlikely objects being spread “satisfyingly” on toast.
She decided to do something about it. She would put her years of skill side by side with AI to recreate these slop videos in real life.
For instance: a pile of sour gummy Peach Rings, effortlessly smeared on toast. The AI video looked simple enough, but Pansino needed to create something entirely new. She used butter as her base, infused with peach-flavored oil. Yellow and orange food coloring gave it the right pastel hues. She carefully piped the butter into rings using a silicone mold. After they hardened in the freezer, she used uncolored butter to attach two rings together in the right 3D shape. The final touch was to dunk them in a mixture of sugar and citric acid for that sour candy look and taste.
It worked. The butter rings were perfect replicas of real candy rings, and Pansino’s video paralleled the AI version exactly, with the rings smoothly gliding across the toast. Most importantly, she had done what she set out to do.
“The internet is flooded with AI slop, and I wanted to find a way to fight back against it in a fun way,” Pansino tells me.
It’s a rare victory for humans as AI-generated slop inundates an online world that had, once upon a time, been built by humans for humans.
AI technology has been working behind the scenes on the internet for years, often in unnoticeable ways. Then, a few years ago, generative AI burst onto the scene, launching a transformation that has unfolded at breakneck speed. With it came a flood of AI slop, a term given to particularly low-quality AI-generated text, images and videos that are inescapable online, from search engines to publishing and social media.
“AI slop” is a shabby imitation of content, often a pointless, careless regurgitation of existing information. It’s error-prone, with summaries proudly proclaiming made-up facts and papers citing fake credentials. Images tend to have a slick, plastic veneer, while brainrot videos struggle to obey basic laws of physics. Think fake bunnies on trampolines and AI Overviews advising you to put glue on pizza.
The vast majority of US adults who use social media (94%) believe they see AI-generated content when scrolling, a new CNET survey found. Only 11% found it entertaining, helpful or informative.
Slop happens because AI makes it quicker, easier and cheaper than ever to create content at an incredible scale. OpenAI’s Sora, Google’s Nano Banana and Meta AI create videos, images and text with a few clicks of a button.
Experts have loudly voiced concerns about AI’s impact on the environment, the economy, the workforce, misinformation, children and other vulnerable folks. They’ve cited its ability to further bias, supercharge scams and harm human creativity, but nothing has slowed down the rapid adoption and scaling of AI. It’s overtaking the human creators, artists and writers whose work fuels the very existence of these models.
AI slop is an oil spill in our digital oceans, but there are plenty of people working to clean it up. Many are fighting for better ways to identify and label AI content, from memes to deepfakes. Creators are pushing for better media literacy and changing how we consume media. Publishers, scientists and researchers are testing new ways to keep bad information from gaining traction and credibility. Developers are building havens from slop with AI-free online spaces. Legislation and regulation, or the lack of it, play a role in every potential solution.
We won’t ever be completely rid of AI, but all these efforts are bringing some humanity back to the internet. Pansino’s recreations of AI videos highlight the painstakingly detailed hard work that goes into creation, far more than typing a prompt and clicking generate.
“Human creativity is one of the most important things we have in the world,” says Pansino. “And if AI drowns that out, what do we have left?”
Creators who push back: ‘AI could never’
The internet was built on videos such as Charlie Bit My Finger, Grumpy Cat and the Evolution of Dance. Now, we have videos of AI-generated cats forming a feline tower and “Feel the AGI” memes. These innocuous AI posts are why some people on social media see slop as entertainment or a new form of internet culture. Even when videos are very clearly AI, people don’t always mind if they’re perceived as harmless fun. But slop isn’t benign.
You see slop because it’s being forced upon you, not because you’ve indicated to the algorithms that you like it. If you were to sign up for a new YouTube account today, a third of the first 500 YouTube Shorts shown to you would be some form of AI slop content, according to a report from Kapwing, a maker of online video tools. There are over 1.3 billion videos labeled as AI-generated on TikTok as of February. Slop is baked into our scrolling the same way microplastics are a default ingredient in our food.
Pansino compares her experience recreating AI food slop videos to an episode of The Office. In it, Dwight is competing with the company’s new website to see if he can make more sales.
“Dwight, single-handedly, is outselling the website — he’s competing against the machine,” Pansino says. “That’s what I feel like when I’m baking against AI. It’s a nice rush.”
(The Office fans may recall that Dwight wins at the end of the episode, and later, due to massive errors and fraud, the site’s creator, Ryan, is fired.)
Her 21 million-plus followers across YouTube, Instagram and TikTok have cheered on her AI recreation series, which Pansino attributes to their own frustrations with seeing slop on their feeds. Plus, her creations are actually edible.
“We’re getting dimensions that AI could never,” she says.
Other creators have emerged as “reality checkers.” Jeremy Carrasco (@showtoolsai) uses his background as a technical video producer to debunk viral AI videos. His team would livestream events for businesses, working to avoid errors, which has helped him more easily spot when AI erroneously mimics video qualities such as lens flares. His educational videos help his more than 870,000 Instagram, YouTube and TikTok followers recognize these abnormalities.
Analyzing a video’s context, Carrasco points out telltale signs of generative AI such as weird jump cuts and continuity issues. He also finds the first time a video was shared by a real person or a slop account. Everyone can do this, but it’s hard when you’re being “emotionally baited” by slop, Carrasco says.
“Most people aren’t spending their time analyzing videos like I am. So if it hits their subconscious [signaling], ‘This looks real,’ their brain might shut off there,” Carrasco says.
Slop producers don’t want you to second-guess what you’re seeing. They want you to get emotional, whether that’s delighted by bunnies on a trampoline or outraged by political memes, and to argue in the comments and share the videos with your friends. The goal for many producers of AI slop is engagement and, therefore, monetization. The Kapwing report estimates the top slop accounts are pulling in millions of dollars of ad revenue per year. They’re just like the original engagement farmers and ragebaiters on Twitter. What’s old is now AI-powered.
Seeing isn’t believing. What now?
It can be difficult for the web platforms we rely on to identify AI images and videos. To weed out the worst offenders, the accounts that mass-produce sloppy spam, some platforms encourage their real users to add verifications to their accounts. LinkedIn has had some success here, with over 100 million of its members adding these new verifications. But AI makes it hard to keep up.
People are using AI-powered community automation tools to make AI-generated posts and leave comments across hundreds of random accounts in a fraction of the time it would take to do so manually. Groups of these users are called engagement pods, Oscar Rodriguez, vice president of trust products at LinkedIn, tells me. The company has removed “hundreds of LinkedIn groups” that display these engagement-farming behaviors in just the past few months, but identifying them is hard.
“There’s no one signal that I can tell you that definitively makes [an account] inauthentic or fake, but it’s a combination of different signals, the behavior of the accounts,” says Rodriguez.
Take AI-generated images, for example. Many people use AI to create new headshots to avoid paying for costly photo shoots, and it’s not against LinkedIn’s rules to use them as profile photos. So an AI headshot alone isn’t enough to warrant suspicion. But if an account has an AI profile photo along with other warning signs, like commenting more frequently than LinkedIn internally knows is typical for human users, that raises red flags, Rodriguez says.
To spot AI content, platforms rely on labeling and watermarking. Labeling requires people to disclose that their work was made with AI. If you don’t, detection systems can attempt to flag it themselves. One of the strongest signals these systems rely on is watermarks, which are invisible signatures applied during content creation and hidden in a piece of content’s metadata. They give you more information about how and when something was created.
Most watermarking efforts focus on two areas: hardware companies authenticating real content as it’s captured, and AI companies embedding signals into their synthetic, AI-generated media when it’s created. The Coalition for Content Provenance and Authenticity is a major advocacy group trying to standardize how synthetic media is watermarked with content credentials.
Many, but not all, AI models are compatible with the C2PA’s framework. That means its verification tool can’t flag every piece of AI-generated media, which creates inconsistency and confusion. Half of US social media users (51%) want better labeling, CNET found. That’s why other solutions are in the works to fill the gaps.
Abe Davis, a computer science professor at Cornell University, led a team that developed a way to embed watermarks in light. All that’s needed is to turn on a lamp that uses the required chip to run the code. This process is called noise-coded illumination. Any camera that captures video footage of an event where the light is shining will automatically add the watermark.
“Instead of applying the watermark to data that’s captured by a specific camera, [noise-coded illumination] applies it to the light environment. Any camera that’s recording that light is going to record the watermark,” Davis says.
The watermark is hidden in the light’s frequencies, spread across a video, undetectable to the human eye and difficult to remove. Those with the secret code can decode the watermark and see what parts of a video or image have been manipulated, down to the pixel level. This could be especially helpful for live events, like political rallies and press conferences, where the speakers are targets for deepfakes.
Though it’s not yet commercially available, the research shows the different opportunities to add an extra layer of protection from AI. Watermarking is a kind of collective action problem, Davis says. Everyone would benefit if we implemented all these approaches, but no one individual benefits enough. That’s why we have haphazard efforts spread across multiple industries that are highly competitive and rapidly changing.
Labeling and watermarking are important tools in the fight against slop, but they won’t be enough on their own. Simply having AI labeled doesn’t stop it from filling our lives. But it’s a necessary first step.
Publishing pains
If you think it’s easier to single out AI-generated text than images or videos, think again. Publishing is one of the biggest targets of AI slop after social media. Chatbots and Google’s AI Overviews gobble up articles from news sources and other digital publications and spit out wonky and potentially copyright-infringing results. AI-powered translation and record-keeping tools threaten the work of translators and historians, but the tech’s superficial understanding of cultures and nuances makes it a poor substitute.
Slop is especially pervasive in academic publishing. In a “publish or perish” culture like academia, some of it may be unintentionally or mistakenly created, especially by first-time researchers and writers. But it’s slipping into mainstream journals, like a now-retracted study that went viral for including an obviously incorrect, overly phallic AI-generated image of a rat’s reproductive system with many typos. That’s one example, albeit a hilarious and easily recognizable one, of how AI is turbocharging bad research, particularly for companies that sell fake research to academic publishers, known as paper mills.
The respected and widely used preprint database arXiv is one of the biggest targets for AI slop. Editorial director Ramin Zabih and scientific director Steinn Sigurdsson tell me that submissions typically increase about 20% each year; now, the growth is getting “worrisomely faster,” Zabih says. AI is to blame, they say.
ArXiv gets around 2,000 submissions a day, half of which are revisions. It has automated screening tools to weed out the most obviously fraudulent or AI-generated studies, but it relies heavily on hundreds of volunteers who review the remaining papers according to their areas of expertise. It has also had to tighten its submission guidelines, adopting an endorsement system to ensure only real people can share research. It’s not a perfect fix, Sigurdsson acknowledges, but it’s necessary to “stem the flood” of scientific slop.
“The corpus of science is getting diluted. A lot of the AI stuff is either actively wrong or it’s meaningless. It’s just noise,” says Sigurdsson. “It makes it harder to find what’s really happening, and it can misdirect people.”
There’s been so much slop that one research group used these fraudulent papers to build a machine learning tool that can recognize it. Adrian Barnett, a statistician and researcher at Queensland University of Technology, was part of the team that used retracted journal papers to train a language model to spot fake and potentially AI-generated studies, especially in cancer research, unfortunately a highly targeted area.
Paper mill-created articles “have the appearance of a paper,” Barnett says. “They know what a paper should look like, and then they spin the wheel. They might change the disease, they’ll change a protein, they’ll change a gene and presto, you’ve got a new paper.”
The tool acts as a kind of scientific spam filter. It identifies patterns, like commonly used phrases, in the templates that chatbots and human fabricators rely on to mimic academia’s style. It’s one example of how AI technology itself is being used to fight slop: AI versus AI, in many cases. But like other AI verification tools, it’s limited; it can only identify the templates it was trained on. That’s why human oversight is especially important.
Humans have gut instincts and subject matter expertise that AI doesn’t. For example, arXiv’s moderators flagged a fake series of submissions because the authors’ names stuck out to them as too stereotypically British, like characters from Jane Eyre. But the demand for human reviews risks a “death spiral,” Zabih said, where reviewers’ workloads grow larger and more unpleasant, which causes them to stop reviewing, adding pressure to the remaining reviewers.
“There’s a bit of an arms race between writing [AI] content and tools for automatically identifying it,” Zabih says. “But at this point in time, I hate to say this, it’s a battle we’re losing slowly.”
Can there be a safe haven from slop?
Part of the problem with slop, if not the entire problem, is that the handful of companies that run our online lives are also the ones building AI. Meta slammed its AI into Instagram and Facebook. Google integrated Gemini into every segment of its vast business, from search to smartphones. X is practically inseparable from Grok. It’s very difficult, and in some cases impossible, to turn off AI on certain devices and sites. Tech giants say they’re adding AI to improve our experience. But that means they have a pretty big conflict of interest when it comes to reining in slop.
They’re desperate to prove their AI models are needed and work well. We’re the guinea pigs used to inflate their usage stats for their quarterly investor meetings. While some companies have released tools to help deal with slop, it’s not nearly enough. They’re not overly interested in helping solve the problem they created.
“You cannot separate the platforms from the people making the AI,” Carrasco says. “Do I trust [tech companies] to have the right compass about AI? No, never.”
Meta and TikTok declined to comment on the record about efforts to rein in AI-generated content. YouTube spokesperson Boot Bullwinkle said, “AI is a tool for creativity, but it’s not a shortcut for quality,” and that to prioritize quality experiences, the company is “less likely to recommend low-quality or repetitive content.”
Other companies are swerving in the opposite direction. DiVine is one of a few AI-free social media apps, a reimagining of Vine, the short-lived short-form video service that predated TikTok. Created by Evan Henshaw-Plath, with funding from Twitter creator Jack Dorsey, the new video app will include an archive of over 10,000 Vines from the original app, so there’s no need to hunt down those Vine compilations on YouTube. It’s an appealing blend of nostalgia for a less-complicated internet and an alternate reality where slop hasn’t taken over.
“We’re not anti-AI,” DiVine chief marketing officer Alice Chan says. “We just think that people deserve a place they can come where there’s a high level of trust that the content they’re seeing is real and made by real people.”
To keep AI videos off the platform, the company is working with The Guardian Project to use its verification system called ProofMode, built on top of the C2PA framework, which verifies human-created content. It also plans to work with AI labs to “design checks … that look at the underlying structure of these videos,” Henshaw-Plath said in a podcast earlier this year. DiVine users will also be able to report AI videos if they see them, though the app won’t allow video uploads when it launches, which should help prevent slop from slipping through.
Authenticity matters now more than ever, and social media executives know it. On New Year’s Eve, Instagram chief Adam Mosseri wrote a lengthy post about needing to return to a “raw” and “imperfect” aesthetic, criticizing AI slop and defending AI use in the same paragraph. YouTube CEO Neal Mohan started 2026 with a letter explicitly stating slop is an issue and that platforms must be “reducing the spread of low-quality, repetitive content.”
But it’s hard to imagine platforms like Instagram and YouTube will be able to return to a truly people-centric, authentic and real culture as long as they rely on algorithmic curation of recommended content, push AI features and allow people to share entirely AI-generated posts. Apps like Vine, which never demanded perfection or developed AI, might have a fighting chance.
Slopaganda and the messy web of AI in politics
AI is a power player in politics, responsible for creating a powerful new aesthetic and influencing opinions, culminating in what’s called slopaganda: AI content specifically shared to manipulate beliefs to achieve political ends, as one early study puts it.
AI is already an effective tool for influencing our beliefs, according to a recent Stanford University study. Researchers wanted to know whether people could identify political messages written by AI and measure how effective those messages are at influencing beliefs. When reading an AI-created message, the vast majority of respondents (94%) couldn’t tell. These AI-generated political messages were also as persuasive as those written by humans.
“It’s quite difficult to craft these persuasive messages in a way that resonates with people,” says Jan Voelkel, one of the study’s authors. “We thought this was quite a high bar for large language models to achieve, and we were surprised by the fact that they were already doing so well.”
It’s not necessarily a bad thing that AI can craft influential political messages when done responsibly. But AI can also be used by bad actors to spread misinformation, Voelkel says. The risk is that one-person misinformation teams can use AI to sway people’s opinions while working more efficiently than before.
One way we see the influence and normalization of slop in politics is with imagery. AI memes are a new form of political commentary, as demonstrated by President Donald Trump and his administration: the White House’s AI image of a woman crying while being deported; Trump’s AI cartoon video of himself wearing a crown and flying a fighter jet after national “No Kings” protests; Defense Secretary Pete Hegseth’s parody book cover of Franklin the Turtle holding a machine gun shooting at foreign boats; an AI-edited photo that altered a woman’s face to appear as if she was crying after being arrested for protesting Immigration and Customs Enforcement.
Governments have the power to determine whether and how to regulate AI. But legislative efforts have been haphazard and scattered. Individual states have taken action, as in the case of California’s AI Transparency Act, Illinois’ limits on AI therapy, Colorado’s algorithmic discrimination rules and more. But these laws are caught in a battle between the states and the federal government.
Trump has said patchwork state regulation will prevent the US from “winning” the global AI race by slowing down innovation, which is why the Department of Justice formed a task force to crack down on state AI legislation. The administration’s AI Action Plan, meanwhile, calls for slashing regulations for AI data centers and proposes a new framework to ensure AI models are “free from top-down ideological bias,” though it’s unclear how that would play out.
Tech leaders like Apple’s Tim Cook, Amazon’s Jeff Bezos, OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Bill Gates and Alphabet’s Sundar Pichai have met with Trump multiple times since he took office. With an increasingly cozy relationship with the White House, Google and OpenAI have welcomed the push to cut legal red tape around AI development.
While governments dither on regulation, tech companies have free rein to proceed as they please, lightly constrained by a few AI-specific laws. Comprehensive, enforceable legislation could control the fire hose of dangerous slop, but as of now, the people responsible for it are either unable or unwilling to act. This has never been clearer than with the rise of AI deepfakes and AI-powered image-based abuse.
Deepfakes: Fake content, real harm
Deepfakes are the most insidious form of AI slop. They’re images and videos so realistic we can’t tell whether they’re real or AI-generated.
We had deepfakes before we had AI. But pre-AI deepfakes were expensive to create, required specialized skills and weren’t always believable. AI changes that, with newer models creating content that’s indistinguishable from reality. AI democratized deepfakes, and we’re all worse off for it.
AI’s ability to produce abusive or illegal content has long been a concern. It’s why nearly all AI companies have policies outlawing these uses. But we’ve already seen that their systems meant to prevent abuse aren’t perfect.
Take OpenAI’s Sora app, for example. The app exploded in popularity last fall, letting you make videos featuring your own face and voice and the likenesses of others. Celebrities and public figures quickly asked OpenAI to stop harmful depictions of them. Bryan Cranston, the actors’ union SAG-AFTRA and the estate of Martin Luther King Jr. all brought their concerns to the company, which promised to build stronger safeguards.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Sora requires your consent before letting other people use your likeness. Grok, the AI tool made by Elon Musk’s xAI, doesn’t. That’s how people were able to use Grok to make AI-generated nonconsensual intimate imagery.
From late December into early January, a rush of X users asked Grok to create images that undress or nudify people in photos shared by others, primarily women. Over a nine-day period, Grok created 4.4 million images, of which 1.8 million were sexual, according to a New York Times report. The Center for Countering Digital Hate conducted a similar study, which estimated that Grok made roughly 3 million sexualized images over 11 days, with 23,000 of those deepfake porn images including children.
That’s millions of incidents of harassment enabled and efficiently automated by AI. The dehumanizing trend highlighted how easy it is for AI to be weaponized for harassment.
“The perpetrator can be literally anyone, and the victim can be literally anyone. If you have a photo online, you can be a victim of this now,” says Dani Pinter, chief legal officer at the National Center on Sexual Exploitation.
X did not respond to multiple requests for comment.
Deepfakes and nonconsensual intimate imagery are illegal under the 2025 Take It Down Act, but the law also gave platforms a grace period (until May) to set up processes for taking down illicit images. The enforcement mechanisms in the law only allow the DOJ and the Federal Trade Commission to investigate the companies, Pinter says, not individuals to sue perpetrators or tech companies. Neither agency has opened an investigation yet.
Deepfakes hit on a core issue with AI slop: our lack of control. We know AI can be used for malicious purposes, but we don’t have many individual levers to pull to fight back. Even looking at the big picture, there’s so much turmoil around AI legislation that we’re largely forced to rely on the people building AI to ensure it’s safe. The current guardrails might work sometimes, but clearly not all the time.
Grok’s AI image-based sexual abuse was “so foreseeable and so preventable,” Pinter says.
“If you designed a car, and you didn’t even check if certain equipment would explode, you would be sued to oblivion,” Pinter says. “That is a basic bottom line: reasonable behavior by a corporate entity … It’s like [xAI] didn’t even do that basic thing.”
The story of AI slop, including deepfakes, is one of AI enabling the very worst of the internet: scams, spam and abuse. If there’s a positive side, it’s that we’re not yet at the end of the story. Many groups, advocates and researchers are committed to fighting AI-powered abuse, whether through new laws, new rules or better technology.
Fighting an uphill battle
Nearly every tech executive who’s building AI rationalizes that AI is just the latest tool that can make your life easier. There’s some truth to that; AI will probably lead to welcome progress in medicine and manufacturing, for example. But we’ve seen that it’s a frighteningly efficient instrument for fraud, misinformation and abuse. So where does that leave us, as slop gushes into our lives with no relief valve in sight?
We’re never getting the pre-AI internet back. The fight against AI slop is a fight to keep the internet human, one we need now more than ever. The internet is inextricably intertwined with our humanity, and we’re inundated with so much fake content that we’re starving for anything real. Trading the instant gratification and sycophancy of AI for online experiences rooted in reality, maybe with a little more friction but also a lot more authenticity: that’s how we get back to using the internet in ways that give to us rather than drain us.
If we don’t, we may be headed for a truly dead internet, where AI agents interact with one another to produce the illusion of activity and connection.
Substituting AI for humanity won’t work. We’ve already learned this lesson with social media. The AI slop ocean that was social media is driving us further from the tech’s original purpose: connecting people.
“AI slop is actively trying to destroy that. It’s actively trying to replace that part of your feed because your attention is limited, and it’s actively taking away the connections that you had,” Carrasco says. “I hope that AI video and AI slop make people wake up to how far we’ve drifted.”
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Video Presenter | Katelyn Chedraoui
Video Editor | JD Christison
Project Manager | Danielle Ramirez
Editors | Corinne Reichert and Jon Reed
Director of Content | Jonathan Skillings
