ChatGPT is known for its ability to generate text that mirrors human conversation, making it a widely adopted tool for various industries, including digital marketing.
Yet, a recent study calls into question the model’s ability to generate and understand humor, a key component in engaging and connecting with audiences.
The research conducted by German researchers Sophie Jentzsch and Kristian Kersting suggests that while ChatGPT excels in some areas, it has notable limitations when generating original humor.
Recycled Laughter: The Question Of Originality
The study, published on arXiv (Cornell University’s preprint server), aims to answer the question: “How does an Artificial Intelligence model handle humor?”
Researchers examine the originality of AI-generated humor, ChatGPT’s ability to understand and explain jokes, and its prowess in detecting humor.
The research team states in their report:
“We discovered that more than 90% of the generated samples were the same 25 jokes. This recurrence suggests that these jokes are not originally generated but are explicitly learned and memorized from the model training.”
In other words, the jokes ChatGPT tells were most likely absorbed during training rather than invented on the fly, pointing to a clear limit on the model’s capacity for original humor.
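The 90% figure comes from counting how often identical jokes recur across many generations. A minimal sketch of that kind of duplicate counting is shown below; the function, sample data, and numbers are hypothetical illustrations, not the researchers’ actual code or dataset.

```python
from collections import Counter


def repetition_share(samples, top_n=25):
    """Return the fraction of samples covered by the top_n most
    frequent entries -- a toy stand-in for the study's duplicate count."""
    counts = Counter(samples)
    covered = sum(freq for _, freq in counts.most_common(top_n))
    return covered / len(samples)


# Hypothetical sample: 9 of 10 "generated" jokes are repeats of two
# memorized jokes, so the top-2 share is 0.9 (i.e., 90%).
samples = ["scarecrow joke"] * 5 + ["atoms joke"] * 4 + ["novel joke"]
print(repetition_share(samples, top_n=2))  # prints 0.9
```

Run against ChatGPT’s 25 most frequent jokes instead of a top-2 toy list, this is the style of measurement that yielded the study’s “more than 90%” result.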
The paper also lists the ten most frequently generated jokes, which include classics such as “Why did the scarecrow win an award? Because he was outstanding in his field.”
In addition to revealing the complex way AI handles humor, the study serves as a warning for people hoping to harness ChatGPT to create content with a humorous spin.
The implication for digital marketers relying on AI for content generation is clear – while models like ChatGPT can remix pre-learned patterns into jokes, genuine originality is lacking.
Despite the repetition, a small number of the generated responses were unique. Yet, these were largely created by combining elements from different known jokes and didn’t always make sense.
Explaining The Joke: Beyond Surface Humor
The study further examined ChatGPT’s capacity to explain humor, requiring a deeper understanding of the joke’s structure and implications.
While the model could deconstruct and explain stylistic elements like personifications and wordplay, it showed limitations when confronted with more unconventional jokes.
The team observed:
“ChatGPT struggles to explain sequences that do not fit into the learned patterns. It will not indicate when something is not funny or lacks a valid explanation.”
In instances when ChatGPT couldn’t identify unfunny jokes, it would make up plausible-sounding explanations.
For marketers looking to engage their audience through nuanced humor, relying solely on AI may not yield the desired results.
Joke Detection: Decoding The Punchline
Beyond generating and explaining jokes, the research team tested ChatGPT’s ability to detect humor.
They found that while the model can correctly identify jokes based on structure, wordplay, and topic, it failed to classify a sentence as a joke if it had only one of these characteristics.
This underscores the model’s reliance on learned patterns and the lack of a more comprehensive understanding of humor.
What Does This Mean for Marketers?
While ChatGPT has revolutionized the realm of AI-generated content, this research suggests caution when relying on the model for humor generation.
The study concludes:
“Although ChatGPT’s jokes are not newly generated, this does not necessarily take away from the system’s capabilities… However, whether an artificial agent is able to understand what it learned is an exceptionally tough question.”
As digital marketers look to AI to diversify and expand their content offerings, it’s essential to understand the model’s limitations and strengths. In the realm of humor, at least for now, human creativity is irreplaceable.
The research team plans to conduct similar research on newly released AI models such as LLaMA and GPT-NeoX, promising further insights into the world of computational humor.
Featured image generated by the author using Midjourney.