If you ask the people in my life, who don’t quite know what to make of my research in digital cultures and AI systems, they will vaguely describe me as ‘doing something with computers and AI’. As a result, through several inter-generational networks, I often receive anxious questions from people who find themselves lagging behind the fast-paced, almost breathless development of digital technologies.
With the unleashing of Generative AI technologies (like ChatGPT, Midjourney, Bing, and Bard), questions like the following have multiplied in the last few months. “Is my phone hacked?” “How do I know if somebody is spying on me?” “How do I trust a message from a bank or my phone company?” “Can somebody leak my pictures from my personal account?” “Is this person I met on a dating site real?” “My friend sent me this information with a link but it doesn’t seem right.” “I just got locked out of my laptop because my security app is not recognizing my face!” “I am worried that my partner is being radicalized into extremist behavior without him knowing it.” “My company is restructuring. Am I going to be replaced by an algorithm?” “Is it safe for me to voice my dissent to a populist movement or will I be targeted and prosecuted for it?” “I saw a picture of me being shared that is fake. How do I stop that?”
Of course, AI is driving innovation: advancing cancer research, solving previously intractable mathematical problems, building models to understand the climate crisis, streamlining the distribution of resources to people in crisis, and creating art and incredible works of fantasy. AI is wonderful, but in our everyday practice, AI is just creepy. It seems to be everywhere and in everything. It drives our phones, our cars, and our friendships. It seems to know us better, faster, and more intimately than anybody else, and it exploits our weaknesses to push us into actions that can go against our own safety. AI is invisible, and we do not know its intentions as it slowly shapes every part of our lives, changing how we think of the three most basic building blocks of living: I, you, and us.
So alarming is the pace of this transformation that earlier this year, influential figures in the big tech industry, many of them responsible for making these very AI systems, called for a pause on developing and deploying Generative AI, warning that we are losing the race to preserve fundamental values and civil liberties as these applications run wild without regulation, accountability, or predictable outcomes. The wildfire spread of AI systems has ushered in the most unregulated experiment in social, political, and economic engineering, and there seems to be very little that we, as individual users, can do except cross our fingers and hope for the best.
While it is true that AI systems can be technological black boxes, difficult to understand or control, that cannot be a reason to give up on aligning them with fundamental values and principles of being human. The idea that AI is too large for us to do anything about produces a repetition of doom, gloom, and despair, convincing us that our AI futures rest in the hands of a select few who know and can control these Franken-monsters. As a researcher working with computers and algorithms, I am not trained to predict what AI futures will look like, but I can perhaps offer three ways to put the ‘I’ back into AI: an approach that calls for Human-Centered AI.
Know the I
Alan Turing famously prophesied that one day a machine would fool us into believing it is human. Turing’s test was not really about a technology fooling us; it was about a technology that makes us question what it means to be human. As Generative AI proliferates, we will confront this question ever more often: Who am I? In the face of sapient technologies and self-learning algorithms, if we do not have a strong sense of what makes us human, we surrender to these systems the authority to determine the measure of the human. The ‘I’ needs to be the measure of AI, not something AI defines through its predictive, data-driven, probability-based indices.
Know the You
Algorithmic systems thrive on binaries and polarization. The more different we are, the easier it is for those who control these systems to separate us. At the back end of AI systems are databases, and these databases reflect and amplify systemic biases and prejudices. The data is so singular and granular that new modes of discrimination emerge, which continue to produce us as the ‘other’. Big data produces small harms that pile up. Each small harm feels like it is happening to somebody else, but cumulatively they affect us all. We need to refuse the ‘othering’ that AI systems produce and insist on transparent, explainable frameworks committed to affinity and commonality.
Know the Us
AI systems are organizing systems. They put us into virtual networked neighborhoods based on indicators and metrics that go beyond traditional identity markers. These groups lead to the production of filter bubbles, echo chambers, and communities of panic where misinformation is used as a weapon to generate anxiety and harmful actions. The current deployment of AI systems produces isolation and separation. We are produced as so irreconcilably different that we no longer think of a collective us. This creates apathy. Apathy leads to inaction. When we feel powerless as individuals, we give the responsibility of decision-making and judgment to AI systems. We will have to find new ways of imagining us because our AI futures are collective.
Conclusion
AI systems often present themselves as constantly new. This newness makes us believe that all the solutions to counter the harms of AI lie in the future. It is important to historicize the emergence of AI and realize that while it might reconfigure problems, these are still older problems, and the answers will be found not only in regulation and oversight of AI but in reimagining what it means to be human in these changing times. The challenge before us is not just to control AI but to protect the human that is being challenged by AI.