
AI Versus Human Interactions With YouTube Shorts

Author(s): Jeffrey Grant
Mentor(s): Meaghan McKasy
Institution(s): UVU

Artificial Intelligence (AI) refers to the simulation of human intelligence by machines, especially computer systems (Oxford University Press, 2023). AI excels in certain areas but has weaknesses that human-generated content can overcome (Arshad, 2023). AI can generate vast amounts of content quickly, but it often lacks authenticity. One study of a Japanese art-sharing website found that AI-generated works drastically outperformed average-performing content; among the top-performing pieces, however, human-generated content remained superior (Pixiv Case Study Group, 2024). This study aims to determine whether human-generated content will outperform AI-generated content across various engagement metrics on YouTube Shorts. This question matters because short-form content is growing rapidly, and its easy-to-digest nature allows both valuable ideas and misinformation to spread at an unprecedented pace.

To test this, I designed a twofold approach combining a survey with the collection of real-world engagement data from social media posts (YouTube Shorts), holding as many variables as possible constant across all posts. The field procedure involved creating 150 post ideas and producing two variations of each: one with AI-generated content and one with human-generated content (the AI-generated template was retained, and the creatives were replaced with human-made material). This yielded 300 posts in total, with each pair of variations posted simultaneously to ensure fairness.

While this field procedure offers strong external validity, it provides limited control over internal validity. The second component of this research is therefore an experimental survey, conducted to test the differences between AI-generated and human-generated content in a controlled environment. Four video conditions, as described above, will be randomly assigned to survey participants. After viewing the videos, participants will complete measures of engagement (e.g., views, comments, likes) and perceptions of the poster (e.g., trust, credibility). The final analysis will compare the survey responses with the actual engagement data to identify any discrepancies between perceived and actual performance. This experiment aims to shed light on the effectiveness of AI-generated content relative to human-generated content, contributing valuable information to the field of marketing and to our understanding of public perceptions of AI.
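Because every AI-generated Short is paired with a simultaneously posted human-generated counterpart, the real-world engagement data lend themselves to a paired comparison. The sketch below is one illustrative way such an analysis could be run; the file name, column names, and the choice of a Wilcoxon signed-rank test are assumptions for illustration, not the study's actual procedure.

```python
# Hypothetical sketch of a paired comparison of engagement metrics.
# Assumes a CSV ("shorts_engagement.csv") with one row per post pair and
# illustrative columns: ai_views, human_views, ai_likes, human_likes,
# ai_comments, human_comments. These names are assumptions, not the
# study's actual data layout.
import pandas as pd
from scipy.stats import wilcoxon

pairs = pd.read_csv("shorts_engagement.csv")

for metric in ["views", "likes", "comments"]:
    ai = pairs[f"ai_{metric}"]
    human = pairs[f"human_{metric}"]
    # Wilcoxon signed-rank test: a nonparametric paired test, chosen here
    # because engagement counts are typically skewed rather than normal.
    stat, p = wilcoxon(ai, human)
    print(f"{metric}: median AI={ai.median():.0f}, "
          f"median human={human.median():.0f}, p={p:.4f}")
```

A paired (within-pair) test of this kind keeps each comparison between two versions of the same post idea posted at the same time, so differences in topic or posting schedule do not confound the AI-versus-human contrast.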