Yesterday, I explored the case for unfiltered AI. OpenAI, Anthropic, Google, and other popular model builders aggressively filter training data to exclude harmful content—adult entertainment, hate speech, extremism, and even controversial political perspectives. The result? Polished, sanitized models that align with corporate and legal safety standards.
But the real world is messy, complicated, and filled with morally grey areas, which raises the question: Do unfiltered AI models have a valid place in the AI landscape? Here's my argument. -s
P.S. If you're wondering how to sort out foundational model bias for your marketing needs, come join us at the MMA CMO AI Transformation Summit (March 18, 2025 | NYC). I'm facilitating and co-producing this half-day, invitation-only event, which will provide insights into the strategies, technologies, and leadership practices for CMOs who are driving successful AI transformations across the world’s best marketing organizations. Request your invitation.
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.