How to eliminate bias in AI without repeating the errors of Google Gemini
- Google's removal of Gemini's image-generation feature for further testing, prompted by bias concerns, has highlighted the potential risks of generative artificial intelligence.
- The AI model learns and reflects the data it is trained on.
- It is essential to be transparent about how AI systems operate and make decisions to build trust and address bias concerns.
Managing potential bias in AI requires clear processes and a commitment to responsible AI from the outset, according to Joe Atkinson, chief products and technology officer at consulting firm PwC.
Atkinson stated that the development of gen AI systems should prioritize transparency and explainability, allowing users to comprehend the decision-making process and trace the reasoning behind it.
It is essential to be transparent about how generative AI systems operate and make decisions in order to build trust and address bias concerns, according to Ritu Jyoti, group vice president, AI and automation, market research and advisory services at International Data Corp.
Jyoti suggested that organizations invest in explainable AI techniques that help users understand the reasoning behind AI-generated content. A healthcare chatbot built on generative AI, for instance, can explain its diagnoses and treatment recommendations, helping patients understand the underlying factors and minimizing potential bias in medical advice.
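One lightweight way to make generated recommendations traceable, as a minimal sketch rather than any vendor's actual API, is to return every answer alongside the input factors that produced it. The `recommend_treatment` function and its rule set below are hypothetical stand-ins for a real model:

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedResponse:
    """A model response paired with the evidence behind it."""
    answer: str
    factors: list = field(default_factory=list)  # inputs that drove the answer
    confidence: float = 0.0

def recommend_treatment(symptoms):
    """Toy rule-based stand-in for a generative model: every
    recommendation carries the factors that produced it, so a
    patient (or auditor) can see why it was made."""
    factors = [s for s in symptoms if s in {"fever", "cough"}]
    if factors:
        return ExplainedResponse(
            answer="Suggest a flu test",
            factors=factors,
            confidence=round(len(factors) / len(symptoms), 2),
        )
    return ExplainedResponse(answer="No recommendation")
```

Because the rationale travels with the answer, a reviewer can spot when a recommendation rests on an irrelevant or biased factor.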
Diversity in AI development teams, data
To ensure that AI systems are fair and inclusive, companies must create diverse and inclusive development teams, according to Atkinson. By including individuals with diverse backgrounds, perspectives, and experiences, biases can be identified and mitigated, leading to more equitable AI models.
Another good practice is to build robust data collection and evaluation processes.
Ganesan said companies are often eager to implement AI models without first addressing the underlying data. To mitigate biases, organizations should use diverse and representative data sets, and tracking how that data changes and is distributed is crucial for improving model development and ensuring explainability.
To minimize biases in outcomes, companies should ensure that their training data is diverse and representative of the population, Atkinson advised.
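Checking whether training data actually represents the population can start with something as simple as a group-share report. The sketch below, with an assumed illustrative threshold rather than an industry standard, flags under-represented groups for a given attribute:

```python
from collections import Counter

def representation_report(records, attribute, floor=0.10):
    """Compute each group's share of the data set for `attribute`
    and flag groups whose share falls below `floor` (an assumed,
    tunable threshold)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [group for group, share in shares.items() if share < floor]
    return shares, flagged
```

Running a report like this on each data refresh also gives teams the change-tracking Ganesan describes, since shifts in group shares between versions are immediately visible.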
Regularly assessing an AI system's performance is crucial to detect and correct any biases that may develop.
Organizations should establish evaluation frameworks and metrics to assess the fairness and ethical implications of generated content. For instance, a news organization using a generative AI model to produce articles can analyze them for biased language or perspectives and make adjustments to ensure balanced, unbiased reporting.
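A crude first cut at such a metric, sketched here with an assumed term list rather than a real bias lexicon, is to scan generated copy for loaded language and surface hits for editorial review:

```python
# Assumed illustrative term list; a production system would use a
# curated lexicon or a trained classifier instead.
LOADED_TERMS = {"obviously", "everyone knows", "so-called"}

def audit_article(text):
    """Flag loaded terms in AI-generated copy so a human editor can
    review them; a crude proxy for a fuller bias metric."""
    lowered = text.lower()
    hits = sorted(term for term in LOADED_TERMS if term in lowered)
    return {"flagged": bool(hits), "terms": hits}
```

Aggregating these flags over time gives the evaluation metric a trend line, so a newsroom can tell whether adjustments to prompts or fine-tuning are actually reducing slanted language.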
Keeping humans in the loop
Leaders should prioritize training and awareness for responsible AI use, which involves cultivating a culture of responsible AI use by educating people on potential risks, promoting cautious usage of AI-generated content, and emphasizing the importance of human review and verification.
"Human intervention can help mitigate risks by providing a checks-and-balance system to prevent the propagation of biased or harmful content," Jyoti said.
To ensure a safer and more inclusive online environment, social media platforms that use generative AI for content recommendation can employ human moderators to review and filter out potentially biased or inappropriate content.
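The checks-and-balances workflow Jyoti describes can be sketched as a simple gate: generated items above a risk threshold wait for a moderator, the rest publish automatically. The class, threshold, and risk scores below are all illustrative assumptions, not any platform's real pipeline:

```python
from collections import deque

class ModerationPipeline:
    """Human-in-the-loop gate: generated items at or above a risk
    threshold (assumed, tunable) wait for a moderator instead of
    going live immediately."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.review_queue = deque()   # pending human review, oldest first
        self.published = []           # items that have gone live

    def submit(self, item, risk_score):
        """Route an item based on its (externally computed) risk score."""
        if risk_score >= self.threshold:
            self.review_queue.append(item)
            return "queued_for_review"
        self.published.append(item)
        return "published"

    def approve_next(self):
        """Moderator approves the oldest pending item, publishing it."""
        item = self.review_queue.popleft()
        self.published.append(item)
        return item
```

The design choice worth noting is that the human queue is the default for anything uncertain: lowering the threshold trades moderator workload for safety, which is exactly the balance content platforms have to tune.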
It is essential to establish systems for collecting user feedback and identifying inconsistencies or biases in order to promote knowledge sharing and prevent widespread issues, according to Atkinson.
Jyoti emphasized the importance of collaborative efforts and industry standards for achieving overall maturity in the industry.
"By sharing knowledge, experiences, and tools, we can accelerate progress in addressing bias and improving the ethical use of generative AI," she said. "For example, AI conferences and industry associations can facilitate discussions and knowledge exchange on bias mitigation techniques and ethical considerations in generative AI applications."
Jyoti said the stakes are high in the gen AI market, which is still in its infancy and changing rapidly. Some of the issues are intricate, however, and more attention should be paid to how models are trained and fine-tuned.
Companies that handle bumps in the road swiftly, and do the heavy lifting required behind the scenes, can improve their outcomes and get things right, Ganesan said.