AI helps create content, but what about its bias?


Over the last few years, a heated public debate has erupted over artificial intelligence. Its use promises to advance society and make life easier in some ways, but it also has drawbacks.

For those in marketing and related creative fields, the pull to use AI technology is strong, as teams try to do more with fewer resources. What should professionals in these fields consider as they begin using AI in their day-to-day work?

“I think it helps us to create content faster. It’s like a thought partner, not a replacement,” Jamari Snipes, a writer and content creator, says. “When I run out of ideas, I use AI like ChatGPT to help me brainstorm.”

Juwayriyah Hussain, a product marketing leader, sees some value in AI tools as well, but she’s not ready to rely on them, noting that she has used AI mostly “to capture notes, summarize content or meetings and to draft marketing content. While there are a number of other tools, I don’t see much regular use of them yet primarily because of the bias and incorrectness of the results.”

According to Stephanie Dinkins, a Black artist based in Brooklyn, “The biases are embedded deep in these systems, so it becomes ingrained and automatic.” In effect, the inputs determine the output, and if the system isn’t built through the lens of a broad set of data, the result will have a higher level of bias.

For example, an image generator was found to be skewing its output for professions like doctor and journalist toward white and male individuals. If marketers or content creators were to use this image generator for visuals in their work, it would perpetuate the idea that only certain types of people belong in these well-respected fields, and it would likely discourage those from more diverse backgrounds from pursuing these lines of work.

This is one more key reason the human element must remain intertwined with the technology.

Phil Moore, a producer and director, calls AI his “creative ally,” adding that it “won't replace human creativity but enhance it…speeding up the creative process.”

“The efficiency gains are undeniable, helping me deliver quicker results without compromising the essence of creativity,” he emphasizes. Hussain echoes this sentiment, hoping that AI content creation will help her with personal productivity.

“Ideally, I’d like to see AI truly augment my work and life. I see myself using personal AI as assistants to reduce my mental load,” she said.

Moore adds that the government should have a role in regulating AI in the interest of “transparency, fairness, and accountability in AI systems.” He acknowledges the potential risks to society, including those stemming from job displacement or from bias when AI is used in business settings, law enforcement, or other social services. Is regulation the right solution to address the concerns about bias?

According to Moore, government regulation can prevent “misuse while safeguarding individual rights…Innovation is vital, but not at the cost of safety or ethics. AI-generated content must be labeled to curb the spread of 'deep fakes' and misinformation, preserving the integrity of information we consume.”

The business leaders’ perspective

“Even something like a [web] search autocomplete can use a little bit of what we call machine learning,” says Sandhya Simhan, a leader at a large tech company, “and that's been around since the 1990s.” She recommends that business and community leaders be vocal with politicians on this topic. 

For leaders looking to build advocacy within their company or the public for using AI, Simhan says transparency is a key value.

“AI exposes the inherent biases that our society is built on,” Simhan explains. “If we don't have enough explainability, we don't have enough credibility and regulation, we risk perpetuating those biases and stereotypes into the future world.”

Both Simhan and Moore express optimism about the opportunities for machine-learning technology to be a force for good. Each pointed to areas of social need that innovation efforts have largely ignored, such as homemaking and caregiving, or criminal justice, where AI could be applied.

“I would love to see more use cases where AI is being applied to things that lift the burden of caregiving,” Simhan said. Moore pointed to a friend’s work with an organization called Beneath the Surface, which aims to use AI to address gender-based violence by Chicago Police.

“You're gonna open up time for the human intellect to do what it does best,” Simhan said, “which is to be thoughtful and creative.”
