From harmful stereotypes about race and sex to assumptions about what people eat for breakfast, bias in AI is a pernicious scourge that permeates digital landscapes, shaping everything from search results to predictive algorithms.
Its ramifications are profound, exacerbating societal inequalities and hindering progress toward fairness and inclusivity.
For Alexandru Costin, VP, generative AI at Adobe, battling bias is an urgent necessity.
“Recognising the importance of combating bias in AI-generated content, Adobe has developed Firefly, a groundbreaking AI model that prioritises diversity and inclusivity,” says Costin.
“By training Firefly on diverse datasets and implementing sophisticated de-biasing methods, Adobe ensures that its outputs are fair, accurate, and representative of diverse perspectives.”
Adobe Firefly is Adobe’s family of generative AI models, designed to be safe for commercial use, that generates creative content such as images and text effects to enhance digital media production. Firefly is integrated directly into Adobe workflows including Photoshop, Acrobat, Express and more.
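Costin doesn’t specify which de-biasing methods Firefly uses, but one widely used technique on the “diverse datasets” side of the equation is inverse-frequency reweighting, where samples from under-represented groups count for more during training. The Python sketch below is illustrative only; the group labels and counts are hypothetical, not Adobe’s data.

```python
from collections import Counter

# Hypothetical training metadata: each sample tagged with a demographic
# or cultural group label. Labels and counts are illustrative only.
sample_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50

counts = Counter(sample_groups)
n_samples = len(sample_groups)
n_groups = len(counts)

# Inverse-frequency weights: every group receives equal total weight,
# so samples from under-represented groups count for more in the loss.
group_weight = {g: n_samples / (n_groups * c) for g, c in counts.items()}
sample_weights = [group_weight[g] for g in sample_groups]

for group in sorted(group_weight):
    print(f"{group}: {counts[group]} samples, weight {group_weight[group]:.2f}")
```

Real pipelines would apply such weights inside the training loss; the point here is only the rebalancing arithmetic.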
Mitigating AI partiality in Firefly and industry collaboration
Adobe’s multifaceted approach to combating AI bias encompasses a range of initiatives and frameworks designed to uphold ethical standards and promote transparency.
From internal review processes and councils to robust frameworks for bias mitigation and diversity evaluation, Adobe prioritises responsible AI development.
Content Credentials are a free technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA) that anyone can incorporate into their own products and platforms via the Content Authenticity Initiative’s (CAI) open-source toolkit.
Content Credentials have already gained widespread industry support: the CAI counts more than 3,000 members, including Microsoft, TikTok, the Associated Press, The New York Times, The Wall Street Journal, Nvidia, Nikon, Spawning.ai and Universal Music Group, while the C2PA open standard has attracted giants such as Google, Microsoft, OpenAI and Sony among its key adopters and implementers.
These initiatives and cross-industry collaboration collectively signify a commitment to fostering fairness, inclusivity, and accountability in the digital landscape, setting a gold standard for digital content transparency and ethical AI development across industries.
“We’ve been setting up these internal processes and councils to help us reduce the risk or increase the chances that no AI behaves irresponsibly,” says Costin.
Bias detection and correction
Adobe employs multiple layers of bias detection and correction to ensure the inclusivity of its AI models. “We have a DEI group that assesses the models to ensure they are inclusive and diverse in their outputs,” Costin says. “The design team spends a lot of time reviewing these models and their outputs, giving feedback, and reporting issues that need fixing.”
Adobe’s scientific evaluation team then conducts more formal checks using third-party crowdsourced workers. “My team, the scientific evaluation team, uses third-party crowdsourced workers to identify biases in generated images,” Costin says. “We conduct a thorough and scientific bias evaluation in terms of skin tone, age distribution, gender distribution.”
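Costin doesn’t detail the metrics behind these evaluations, but a common way to quantify this kind of demographic skew is to compare the empirical distribution of annotated attributes in a sample of generated images against a chosen reference distribution. The sketch below is illustrative only: the labels, target distributions and review threshold are assumptions, not Adobe’s methodology.

```python
from collections import Counter

# Hypothetical crowdsourced annotations: one perceived-attribute label
# per generated image. Labels, data and targets are illustrative only.
annotations = {
    "skin_tone": ["light", "light", "medium", "light", "dark", "medium"],
    "gender": ["woman", "man", "man", "man", "man", "man"],
}
targets = {
    "skin_tone": {"light": 1 / 3, "medium": 1 / 3, "dark": 1 / 3},
    "gender": {"woman": 0.5, "man": 0.5},
}

def total_variation(labels, target):
    """Total variation distance between the empirical distribution of
    labels and the target distribution (0 = perfect match, 1 = disjoint)."""
    n = len(labels)
    counts = Counter(labels)
    support = set(counts) | set(target)
    return 0.5 * sum(abs(counts.get(k, 0) / n - target.get(k, 0.0)) for k in support)

for attribute, labels in annotations.items():
    gap = total_variation(labels, targets[attribute])
    flag = "REVIEW" if gap > 0.2 else "ok"  # 0.2 threshold is arbitrary
    print(f"{attribute}: TV distance {gap:.2f} [{flag}]")
```

Here the skewed gender sample would be flagged for review while skin tone passes; in practice the target distributions themselves are a significant design decision.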
Content Credentials for transparency
Beyond mitigating bias, Adobe has also focused on enhancing transparency and accountability in digital content creation through the introduction of Content Credentials, which serve as a “nutrition label” for digital content online.
Content Credentials provide users with essential information about the origins and edits of digital content, including the creator’s name, the date an image was created, the tools used to create it, and any edits made along the way, including whether or not generative AI was used.
By enabling creators to add this level of transparency across its creative applications, such as Firefly, Photoshop, Lightroom, Express, Stock and Behance, Adobe aims to mitigate misinformation and establish a digital chain of trust and authenticity across the content ecosystem.
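The C2PA specification defines the actual format, a cryptographically signed manifest attached to the asset. Purely to illustrate the kind of information a Content Credential carries, here is a simplified, hypothetical record; the field names are ours, not the C2PA schema.

```python
import json

# A simplified, hypothetical provenance record showing the kind of
# information a Content Credential carries. Field names are illustrative;
# the real C2PA manifest is a cryptographically signed structure with
# its own schema, embedded in or referenced from the asset itself.
content_credential = {
    "creator": "Jane Example",
    "created": "2024-05-01T10:15:00Z",
    "tool": "Adobe Photoshop",
    "generative_ai_used": True,
    "edit_history": [
        {"action": "generated", "tool": "Adobe Firefly"},
        {"action": "cropped", "tool": "Adobe Photoshop"},
    ],
}

# In practice the record is signed, so viewers can verify it was not
# tampered with after the content left the creator's hands.
print(json.dumps(content_credential, indent=2))
```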
“Through our ongoing industry collaboration and leadership in the CAI and C2PA, there is a shared mission to restore trust and transparency online. Ongoing cross-company efforts will be key to help drive continued adoption of Content Credentials, while also empowering powerful creativity and digital content editing,” says Costin.
Future efforts
Adobe’s future efforts focus on further reducing bias in its training data and expanding the diversity of its datasets.
“For example, if you’re in Japan, and you put in breakfast, and if your data is biased for Western cuisine, you’ll get ham and eggs. This is another type of cultural bias that we think can be potentially harmful,” he says.
“We are working with the globalisation team to reduce Western bias in our training data.”
This effort involves collecting culturally rich data about local customs, food, and traditions.
“Our missions system in Adobe Stock allows us to request specific types of images from contributors,” he says. These initiatives aim to create AI models that accurately reflect the diversity of global cultures and perspectives.
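Adobe doesn’t publish how it measures this coverage, but one simple way to surface gaps like the breakfast example is to audit how training captions for a concept are distributed across regions, then target collection, for instance via a Stock mission, at the thin spots. A hypothetical sketch, with illustrative data and an arbitrary coverage floor:

```python
from collections import Counter

# Hypothetical caption metadata: (concept, region) pairs drawn from a
# training set. The data, schema and threshold are illustrative only.
captions = [
    ("breakfast", "US"), ("breakfast", "US"), ("breakfast", "US"),
    ("breakfast", "US"), ("breakfast", "UK"), ("breakfast", "Japan"),
]
regions_of_interest = ["US", "UK", "Japan", "India"]
MIN_SHARE = 0.15  # arbitrary coverage floor per region

counts = Counter(region for concept, region in captions if concept == "breakfast")
total = sum(counts.values())

# Regions below the floor become candidates for targeted collection,
# e.g. a Stock mission requesting local breakfast imagery.
for region in regions_of_interest:
    share = counts.get(region, 0) / total
    status = "collect more" if share < MIN_SHARE else "ok"
    print(f"breakfast/{region}: {share:.0%} [{status}]")
```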
Madeleine Houghton, acting head of policy at the Tech Council of Australia, praises the pioneering work Adobe is doing in this area.
For Houghton, the spectre of bias looms larger in high-risk areas such as health, finance, law enforcement, employment, and access to government services.
“Bias in AI systems is absolutely a well-acknowledged and recognised area of potential harm that needs to be addressed,” she says.
For Houghton, the crux of mitigating AI bias lies in the specific use case, not just the broad sector. “It’s important to not only focus on broad sectors but also on the specific use case,” she says.
For instance, using AI in financial services to detect fraud is vastly different from using it to determine credit scores or loan applications. Each scenario presents unique risks and potential for bias, necessitating precise and contextual regulation and governance.
Government and industry collaboration
Addressing AI bias requires a multi-faceted approach involving hard laws, standards, and industry governance.
“When we think about how to consider action on addressing the harms of AI, we think it will involve a mix of hard laws or regulation; soft law, which we’re talking about standards there; and also, industry-level governance practices,” says Houghton.
“The government is taking a risk-based approach, considering mandatory guardrails for high-risk settings and developing a voluntary AI safety standard for other settings.”
This approach aims to balance protection against bias with the flexibility required for innovation.
But, she says, the law doesn’t stop at the border of AI: the existing legal framework already provides a foundation for regulating it.
“We have laws for consumer protection, privacy, anti-discrimination, and sector-specific laws in areas like health care and financial services – these laws all apply to AI,” Houghton says.
“AI is not unregulated. Companies must be aware of what regulations apply to different contexts.”