SHE’s Comments on the Proposed Model Governance Framework for Generative AI

The Infocomm Media Development Authority (IMDA) and AI Verify Foundation (AIVF) have developed a draft Model AI Governance Framework for Generative AI.

SHE has offered its comments to help develop consistent principles and practical approaches that can, we hope, maximise both trust and innovation.

 

Introduction

1.   SHE is an organisation that focuses on the empowerment of girls and women and advocates for gender equality and equity. Having its roots in the Sunlight Alliance for Action to tackle online harms, a key emphasis of SHE’s work lies in combatting online harms. Fundamentally, SHE believes that girls and women should enjoy safety and equality not just in the physical world, but also online.

2.   SHE’s recent research[1] on online harms has revealed that whilst Singapore youths are open to the use of generative AI, they want clear rules of engagement, including clarity on what constitutes harm. SHE is therefore heartened by the timely development of the Model AI Governance Framework (the “Framework”).

3.   SHE notes that the proposed Framework addresses some risks highlighted in the Discussion Paper on Generative AI: Implications for Trust and Governance[2], and has provided a useful reference to AI Verify Foundation and IMDA’s initial set of standardised model safety evaluations for Large Language Models (LLMs), which covers propensities such as bias and toxicity generation.

4.   SHE’s focus broadly covers four of the six key risks highlighted in the aforementioned Discussion Paper – (i) privacy and confidentiality, (ii) disinformation, toxicity, and cyber threats, (iii) embedded bias and (iv) values and alignment.

 

Key Comments

A.  We must acknowledge the impact that AI can have on gender equality, and proactively address content that perpetuates negative gender stereotypes and/or sexually objectifies women.

 

Risk: AI can amplify abuse/violence

5.   Generative AI can facilitate and amplify gender bias and gender-based abuse/violence. AI tools allow attackers to scale up abuse and harassment even if they possess little technical or programming skill: ease of access to sophisticated AI tools lowers the barriers to entry for potential abusers targeting women.

6.   The Framework should thus include measures to ensure that the use of generative AI tools incorporates robust safeguards and monitoring systems to detect and prevent their misuse for gender bias, abuse, and harassment.

 

Risk: AI can promote the sexual objectification of women

7.   More than 95% of AI-generated deepfake pornography targets women. In recent years, AI has also been misused to produce child sexual abuse material[3] and to doctor images of girls[4]. Such content can have a serious impact on the lives of the girls and women involved, by compromising their privacy, damaging their reputation, inflicting psychological trauma, and potentially leading to harassment and physical danger. It can also undermine gender equality by facilitating and perpetuating the sexual objectification of girls and women.

8.   The hyper-sexualisation of women is a common result of AI trained on Internet data[5]. For example, an AI-based avatar-generating app (Lensa) generated female avatars that were predominantly nude or skimpily dressed. Conversely, men’s avatars were fully dressed and depicted as astronauts and explorers[6].

9.   SHE’s research[7] shows that many youths (54%) already recognise the sexualisation/objectification of women as a negative effect of generative AI.

10.    The Framework should thus include measures to ensure AI tools are not misused to generate pornographic material or deepfakes, or to sexually objectify women.

 

Risk: AI can promote body image issues

11.    The use of AI tools for image alteration raises significant social concerns, including negative body image or body dysmorphia. AI-altered images can contribute to the misrepresentation of faces and bodies, often exaggerating beauty standards. This can disproportionately affect women, who may be misled into aspiring to unattainable physical ideals[8].

12.    In fact, SHE’s recent research[9] shows that many Singapore youths (55%) already recognise the misrepresentation of persons as a negative effect of AI-generated or altered images.

13.    The Framework should thus include measures to ensure that the development of AI tools prioritises inclusiveness and diversity in its training data. AI ethics guidelines should also be developed and enforced to prevent the exaggeration of beauty standards and the misrepresentation of individuals.

 

Risk: Non-consensual image use

14.    The use of AI tools for image alteration also raises the question of how images of a subject individual can be created or altered without that individual’s consent. Could a bad actor argue that, having created or altered an image, the copyright in that image belongs to them, carrying with it all attendant legal and moral rights, including the right to sell and distribute it?

15.    The Framework should thus include measures to ensure that victims who have had their images misused or altered by AI without their consent retain the right or legal ability to have those images removed from the Internet, and to prevent them from being used or circulated by others.

 

Risk: AI can perpetuate unhealthy gender stereotypes and biases

16.    When AI products and tools are trained on datasets that contain biased information, the end product can perpetuate and exacerbate such biases.

17.    The Framework should thus include measures to ensure that biases (including gender bias) are mitigated and not incorporated into AI-produced output.

18.    This includes steps to safeguard the integrity and quality of datasets used for AI training, so that they are inclusive and free of bias. It may also include subjecting AI tools to review by a diverse panel of developers, who can check that the product concerned does not entrench bias. Lastly, gender diversity and inclusion within product development teams would also help ensure that product output is bias-free[10].

 

Risk: Insufficient reporting channels

19.    SHE’s research[11] has revealed that Internet users who encounter online harms want swift and permanent recourse.

20.    The Framework should thus include measures to ensure that AI tools and platforms allow online harms and misuse to be identified, reported, and dealt with quickly and effectively. For a start, this could mirror the existing in-app reporting processes offered by social media applications.

 

Conclusion

21.    SHE hopes to see more efforts, such as the proposed Framework, leading to the development of AI tools, platforms, and systems that not only avoid perpetuating gender stereotypes and abuse/violence but actively promote gender equality. We must adopt a holistic approach to AI regulation that prioritises ethical considerations and proactively addresses embedded risks that may negatively impact gender equality.

 

References:

[1] SHE’s 2024 Safeguarding Online Spaces Study

[2] https://aiverifyfoundation.sg/downloads/Proposed_MGF_Gen_AI_2024.pdf

[3] https://edition.cnn.com/2023/09/27/asia/south-korea-child-abuse-ai-sentenced-intl-hnk/index.html

[4] Sensity AI https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/

[5] https://www.cigionline.org/articles/generative-ai-tools-are-perpetuating-harmful-gender-stereotypes/

[6] https://www.technologyreview.com/2022/12/13/1064810/how-it-feels-to-be-sexually-objectified-by-an-ai/

[7] SHE’s 2024 Safeguarding Online Spaces Study

[8] https://www.childrenssociety.org.uk/what-we-do/blogs/artificial-intelligence-body-image-and-toxic-expectations

[9] SHE’s 2024 Safeguarding Online Spaces Study

[10] https://www.xl8.ai/blog/ai-and-the-diversity-predicament

[11] SHE’s 2023 Online Harms Study