“C-3PO: Sir, it’s quite possible this asteroid is not entirely stable.
Han Solo: Not entirely stable? I’m glad you’re here to tell us these things. Chewie, take the ‘Professor’ in the back and plug him into the hyperdrive.
C-3PO: Sometimes I just don’t understand human behavior. After all, I’m only trying to do my job.”
If you were born in the ’70s or ’80s, your teens and early adulthood may have been immersed in the Star Wars saga, created by George Lucas in 1977. The saga had all the ingredients of success: a first generation of visual effects, the battle between the light side of the Force and the dark side, a love story and a myriad of eccentric characters that enchanted generations. Among them was the famous android couple C-3PO and R2-D2. These androids became known to us for their unforgettable sense of humor, honesty, devotion, mischief and compassion (at times more human than the humans). Lucas, we understand today, was writing about the future. Artificial Intelligence, or A.I., is here to stay.
A.I. presents itself in both grand and mundane ways. ChatGPT is the talk of the town, and much is still to be written about this algorithm that composes and thinks for us. ChatGPT even has the capacity to admit its mistakes, as OpenAI, the chatbot’s creator, explains in its introductory note.
Philanthropy has, surprisingly, been absent from the debates about A.I., despite the role it can play in realizing A.I.’s full potential as a force for good. Philanthropy occupies a privileged position in the finance industry, where it is a driving force for moral responsibility and a source of public leadership.
Philanthropic organizations (POs) must use their internal and external influence to build a future in which A.I. works ethically and effectively to help solve humanity’s greatest challenges.
It’s time to define what role philanthropy plays in protecting the most vulnerable and ensuring that A.I. works for social improvement.
This is the purpose of the newly created “Global AI Action Alliance,” a platform for philanthropic and technology leaders to engage in the development of A.I. best practices. It is led by 20 senior philanthropic leaders and POs, and it proposes a four-point action plan with a strong commitment to learning and action. But it remains to be seen how this can be achieved, and through which means or legitimate channels.
As Dan Huttenlocher, Dean of the MIT Stephen A. Schwarzman College of Computing and Board Chair of the MacArthur Foundation, observed in the recently held Davos Forum, “A.I. can help us leapfrog some of the societal challenges we face, but we have to design it to do so. There’s no such thing as a ‘good technology’ in and of itself — we have to make it work for us.”
Action can take many forms, as A.I. will impact a broad range of issues, but some advocates say that, to start, organizations should consider appointing a Chief Tech Ethics Officer to take responsibility and marshal the necessary resources. Some even suggest an A.I. Ombudsman.
After all, A.I. programs such as ChatGPT are only programmed, like C-3PO, to do “their job.” It falls to philanthropy, and society at large, to ensure that the extraordinary potential of A.I. works with ethics and for the common good.
*President, Associação Internacional
de Filantropia (Macau)
Macau Daily Times is the official media partner of the Associação Internacional de Filantropia (Macau).