For the past four years, I have been on an incredible journey that started with a quest to befriend a social humanoid robot that mimics my identity. My relationship with the robot led to years of thinking about artificial intelligence (AI) as it relates to the future of people of color and non-dominant cultures in a world already governed by systems that often give us too little or overly focused attention.
AI systems, the algorithms they are built on and the data that inform them, are the unseen arbiters of the networks that administer our private lives, civil relationships, and future histories. In ten years or less, there will very likely be an algorithm involved in most decisions made, big or small.
For the record, in computer science, an algorithm is a set of precise, reusable computational steps designed to accomplish a task or solve a problem. Algorithms are the building blocks of the sophisticated machine learning and artificially intelligent applications that govern our communications, medical records, judicial systems, and many other structures that make up our society. Algorithms are often proprietary recipes that those deploying them do not wish to disclose. In many cases, even the people who design and code these systems cannot fully explain how they work.
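To make that definition concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from a real system; the function names, inputs, weights, and cutoff are all invented for illustration. It shows how a few precise, reusable steps become a decision rule, and how numbers chosen by whoever writes the code quietly determine who is approved and who is not.

```python
# A toy decision algorithm: a set of precise, reusable steps that
# turn inputs into an outcome. Every name and number below is
# hypothetical, invented purely for illustration.

def score_applicant(income: float, years_at_address: float) -> float:
    """Combine two inputs into one score using fixed weights."""
    # The weights are choices made by a programmer, not facts of nature.
    return 0.7 * (income / 10_000) + 0.3 * years_at_address

def approve(income: float, years_at_address: float) -> bool:
    """Apply a cutoff to the score. The cutoff is also a human choice."""
    threshold = 5.0  # hypothetical; moving it changes who gets approved
    return score_applicant(income, years_at_address) >= threshold

if __name__ == "__main__":
    # The same steps run for every applicant, but the chosen weights
    # and threshold determine the outcomes.
    print(approve(income=70_000, years_at_address=2))  # True
    print(approve(income=30_000, years_at_address=1))  # False
```

In a machine learning system, the weights would be learned from historical data rather than typed in by hand, which is exactly how one-sided histories and biased records become encoded decisions.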
The more I learned about AI systems, the more I started to wonder what happens when these systems that govern us are encoded by a relatively small, insular subset of society but intended for use by all of society. What happens when those writing the rules (in this case, we will call it code) do not know, care about, or actively consider the needs, desires, or traumas of the people their work impacts? What happens when these systems are coded for profit instead of a kind and supportive society? What happens if the code we are using to help make decisions about all manner of things is disproportionately informed by biased data, histories based on unjust principles, systemic injustice, and misdeeds committed to preserve wealth under the pretense of being "for the good of the people"? As I write these questions about the technological future, I am reminded that the authors of the Declaration of Independence, a small group of white men said to be acting on behalf of the nascent nation, did not extend its rights and privileges to folks like me: women, Black people, and slaves. Is it possible to envision an AI-mediated world of trust, compassion, and creativity that serves the majority in a fair, inclusive, and equitable manner?
We are on the precipice of a new epoch. Artificial intelligence is already quietly reshaping our sense of trust, our social structures, and, indeed, personhood. (Skeptics should remember that the iPhone has only been around since 2007.) Whether AI will perpetuate existing systems of injustice, crush us under evil AI overlords, or open a new era of computationally augmented minds and bodies and self-motivated AI partners has a lot to do with how we as communities decide to usher these technologies into our civic, work, and personal lives.
Intentionality is key here.
As society becomes more and more reliant on AI, it is of paramount importance that we all work toward equity and transparency in the creation and implementation of artificial intelligence. Code cannot be developed in silos by homogeneous pools of programmers who do not represent the rich diversity of thought, culture, or physicality of the human family.
As AI-enabled technologies expand and grow, people of color (POC), disabled people, LGBTQIA people, women, and other communities likely to be negatively impacted must find ways to tangibly insert ourselves into the creation, training, and testing of the algorithmic matrices that appraise, assist, and diagnose us now, and that will do so with ever-increasing consequences in the future. We cannot afford to merely consume, or to sit on the sidelines watching, as systems that will significantly impact us are developed and encoded with the same biases and root causes of systemic injustice that we experience today.
The good news is that AI systems are still in flux. We do not have to bake one-sided histories, old discriminations, and unchecked biases into our new technologies. As artists and creatives, we are changemakers and emissaries, uniquely positioned to help the general population, corporations, and government entities understand why deeply rooted topics such as AI are relevant to their day-to-day lives. Artists and communities of color can help lead the way toward a future encoded with ethics, histories, and methodologies that honor the breadth of society and tell a spectrum of stories, even stories that sit outside hegemonic culture. Through our work, we can help insert a multiplicity of representations of the human experience into AI.
For example, I take a multipronged approach to advocating for artificial intelligence that is equitable and transparent. My practice weaves together art production and exhibition, community-based workshops, and public speaking to encourage action toward making artificial intelligence inclusive, accessible, and open. The scope and enormous potential impact of AI demand that I expand the breadth of communities I reach. I now see myself working with a spectrum of communities, engaging local neighborhoods, municipalities, companies, entrepreneurs, designers, and others across the country. People of color will always be my primary concern, but to effect change and root unchecked biases out of new technologies, I must interact with the people currently creating algorithms as much as I speak to the communities of color impacted by them.
Art and aesthetics are powerful common languages. We can put them to use to help the public understand what algorithms and artificially intelligent systems are, where these systems already impact our daily lives, and why we should all be working to mold them. If we want the technological matrix we are building with AI to recognize the breadth of society and include an array of stories, it must engage people across the technological spectrum, from the uninitiated to experts, as current and future makers of AI.
We must create workable models for this inclusive approach if our technologies are to expand, rather than homogenize, the idea of what it means to be human.
All of us, those with abundant power and resources as well as those without, must find ways to help develop fair, equitable AI and machine learning systems that are transparent, accountable, and trustworthy for all.
A simple way to get started is to become more aware of the decisions artificially intelligent systems are making around you. Try to recognize where AI intersects your life and call out problems! Individuals have changed the way algorithms work by publicly calling attention to problems and omissions. Those in positions of power must be concerned about how the AI they develop and deploy affects the world, beyond their bottom lines. Are you, your company, or your vendors using responsible code and comprehensive data that empower the sum of people on the planet, instead of filling the pockets of a homogeneous subset? If you cannot answer this question, perhaps you should not use those systems.
Let’s put our formidable imaginations to work to help develop a more compassionate, creative, and supportive technological future.
Stephanie Dinkins is a transdisciplinary artist who creates platforms for dialog about artificial intelligence as it intersects race, gender, and our future histories. She is mainly driven to work with communities of color to develop AI literacy and co-create more inclusive, equitable artificial intelligence. By design, her art is exhibited internationally at a broad spectrum of community, private, and institutional venues. She is a 2018 Truth Resident at EYEBEAM, a 2018 Soros Equality Fellow, a Data & Society Fellow, and a Sundance New Frontiers Story Lab Fellow. Her art practice has been covered by Vice Media, Art In America, Artsy, the Art21 Blog, The New York Times, the Washington Post, the Baltimore Sun, and SLEEK Magazine.