The publication will provide world leaders, policy-makers and civil society with perspectives critical to the immense task before them: ensuring that the development of AI reaches its full potential in accordance with democratic values and fundamental rights and freedoms. The magnitude of this challenge requires a collaborative effort that transcends disciplinary barriers and geographical borders. This publication brings together visionary academics, civil society representatives, artists and innovators to shift the conversation from what we already know to what we have yet to render visible, so that AI technologies leave no one behind.
In 1950, Alan Turing – the pioneering mathematician whose wartime codebreaking helped bring World War II to an end and whose work laid the foundations of artificial intelligence (“AI”) – stated that “[w]e can only see a short distance ahead, but we can see plenty there that needs to be done”. In the decades since, AI technologies have developed exponentially, creating profound and dynamic changes in our societies, ecosystems and lives, making Turing’s words ring as true today as they did then. Indeed, plenty still needs to be done.
If AI technologies are developed and used for the benefit of all, they have the potential to be of great service to humanity. They can accelerate medical discoveries, offer unprecedented access to education, amplify the resonance and resilience of cultures, knowledge and the arts, and help us face the climate crisis by increasing our capacity to adapt to and mitigate the effects of climate change. However, as deep and far-reaching as these benefits may be, so too are the challenges these technologies pose. From disrupting job markets to reproducing or amplifying discrimination, AI technologies could erode the foundations of our democracies, threaten our cultural, social and ecological diversity, and deepen inequalities within and between countries.
In this context, the questions that must urgently be asked are not whether we welcome AI technologies (they are already here, whether we welcome them or not), but rather how trustworthy and equitable they are or should be, whose interests they serve, and how we can ensure they create more benefit than harm. Answering these questions requires working collectively on technical, ethical, cultural, social and legal initiatives that can ensure the design, development and deployment of rights-based and inclusive AI technologies. At the same time, these initiatives must remain flexible enough to embrace the many forms that innovation may take in the years to come. All contributors are invited to answer the same question: what are the blind spots on which we must shed light in order for AI to benefit all?
Contributions can address 1) blind spots in the development of AI as a technology, 2) blind spots in the development of AI as a sector, and 3) blind spots in the development of public policies, global governance and regulation for AI. There are no limits to the subjects that may be addressed. These blind spots could include issues such as science fiction and the future of AI; creative deepfakes and the future of misinformation; AI and the future of data-driven humanitarian aid; Indigenous knowledge and AI; and gender-based violence and sex robots. The call for proposals is open to individuals from all academic backgrounds and sectors. Proposals from all stakeholder groups are encouraged, particularly from marginalized and underrepresented groups and from authors in the Global South, as are proposals in creative and innovative formats (artwork, cartoons, etc.).