Artificial intelligence is being integrated into our lives, yet the common person is not fully aware of these advances, and those who build these systems may not fully realize the immense social responsibility embedded in their daily decisions. Have you wondered how your choice of one term over another affects the ordinary person? What impact does code have on people's everyday lives? How relevant is ethics to the field of AI? What about the bias already embedded in the system? Could it be that, as humans, we are flawed and biased? AI can only learn from what we feed it. For example, code still contains terms like master/slave and whitelist/blacklist. We share how this can be changed in open-source code. When we see such words, it is everyone's responsibility to change them, to increase fairness and decrease bias.
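As a minimal sketch of this kind of change (the function and variable names here are hypothetical, chosen only for illustration), renaming whitelist/blacklist to allowlist/blocklist leaves a program's behavior identical while making both the intent and the language more neutral:

```python
# Hypothetical example: a domain filter that once used the names
# "whitelist" and "blacklist", renamed to "allowlist" and "blocklist".
# The logic is unchanged; only the identifiers are more inclusive.
def is_permitted(domain, allowlist, blocklist):
    """Return True if domain is on the allowlist and not on the blocklist."""
    return domain in allowlist and domain not in blocklist

allow = {"example.org", "example.com"}
block = {"example.com"}

print(is_permitted("example.org", allow, block))  # True
print(is_permitted("example.com", allow, block))  # False
```

Many open-source projects have made exactly this kind of rename; because the change is purely in naming, it can usually be done with a search-and-replace plus a deprecation note for any public API.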
With a PowerPoint presentation & conversation throughout, we will:
Begin with introductions: you share your why, whether in chat or in person (your choice), and you hear my why for delving into this topic.
1. I briefly define AI (bear with me if AI is your expertise; I am not technical and include this so as not to assume everyone understands).
We proceed to talk about bias and privacy concerns with:
2. Social Media
a. Our data is being mined & sold without our permission.
b. Emotional contagion is evident on social media when our feeds are manipulated. We give an example from a published experiment by Facebook.
3. Facial Recognition
a. Clearview.ai is mining our data for law enforcement.
We talk about the privacy concerns and the worldwide lack of laws allowing us to opt out.
b. Worldwide use of cameras to monitor citizens.
c. Facial recognition bias across demographic groups
4. Bias & Word Embedding: a technique researchers use to teach AI to process speech and text. We explore bias found in Google News through word embeddings.
a. Bias in AI hiring tools trained on historical data, with examples of companies pulling the programs. The systems are biased in favor of white, middle-aged men for leadership roles because this is who has historically held the majority of leadership positions in corporations.
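To make the word-embedding idea concrete, here is a toy sketch of how researchers measure a gender-association gap using cosine similarity. The three-dimensional vectors below are invented purely for illustration; real embeddings, such as those trained on Google News, have hundreds of dimensions and are learned from text rather than written by hand:

```python
# Toy illustration of measuring gender association in word embeddings.
# The vectors are fabricated for demonstration; only the method
# (comparing cosine similarities) reflects real research practice.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy vectors: "nurse" is placed closer to "she" than to "he",
# mimicking the kind of bias found in embeddings trained on news text.
vectors = {
    "he":    [1.0, 0.1, 0.2],
    "she":   [0.1, 1.0, 0.2],
    "nurse": [0.2, 0.9, 0.3],
}

gap = cosine(vectors["nurse"], vectors["she"]) - cosine(vectors["nurse"], vectors["he"])
print(f"gender-association gap for 'nurse': {gap:.3f}")  # positive => skews toward "she"
```

Because the AI only reflects statistical patterns in the text we feed it, an occupation word ending up closer to one gendered word than another is a direct trace of bias in the training data.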
5. Social Credit Scoring: Case Study China
We share how surveillance is gathering data to score citizens in China.
a. What criteria are used?
b. What are the consequences of a low score?
Example: the Uighurs in Xinjiang Province & the camps used to "re-educate" them.
6. Future of Work
a. Which jobs will be replaced by AI?
b. What are the projections for the percentage of jobs lost and replaced by 2030? (in 9 short years!)
7. Microsoft's Responsible AI Principles as the gold standard to follow:
a. Fairness
AI systems should treat all people fairly
b. Reliability & Safety
AI systems should perform reliably and safely
c. Privacy & Security
AI systems should be secure and respect privacy
d. Inclusiveness
AI systems should empower everyone and engage people
e. Transparency
AI systems should be understandable
f. Accountability
People should be accountable for AI systems
We will give examples of the above principles from Microsoft.
8. Change Management Guide:
Our goals are to:
a. Innovate Responsibly
b. Empower Others
c. Foster Positive Impact
We will use small-group breakouts to apply Microsoft's responsible AI principles to real-time scenarios, with the end goal of addressing fairness while ensuring citizens' privacy.
With Microsoft's example as the gold standard, we will learn how to put responsible AI into action: I will ask you what real-life case scenarios you would like to change and have input into.
Our method in the workshop for creating a plan of action for change will be Dragon Dreaming:
A method that builds bridges between consciousness work and project management. It helps us express our authentic selves and support a values-based community while serving higher principles through developing innovative and meaningful projects.
We will identify change in teams:
We storm before we norm as a group, a natural group process according to the psychology of team development. Only then can we transform what we see we want to change, performing it as a norm.
Change occurs by establishing social norms. Be the voice in your organization that increases consciousness of the ethics of AI implementation.
Everyone will return from breakout groups to share their plan for change.
Bonus: With extra time, we will explore 2 projects using AI from a human angle: an op-ed written by AI & Google's PoemPortraits.
Participants will receive a reference sheet with links to articles cited in the talk.
Alexia Georghiou brings 25 years of experience & expertise in medical ethics through her work in community mental health & social work. Alexia has served as a HIPAA Privacy Officer and has specific training in bias. She recently completed the course 'AI for Everyone,' gaining an understanding of artificial intelligence. Her course, My Story, The Story AI Tells: Bias & Privacy, combines the ethics of her professions as a counselor and social worker, challenging the student to think about ethics in AI. All the frailty of humanity, with our conflict & bias, has fed artificial intelligence data throughout our history. This has resulted in an already flawed system.