If we want to address society’s most pressing and persistent challenges, from climate change to inequality, then technology will have a leading role to play. Scientific breakthroughs facilitated by artificial intelligence could make the crucial difference, by helping to discover new knowledge, ideas and strategies in the areas that matter most to us all.
But increasing public concern about some elements of the technology industry should serve as a wake-up call. Tech is too important, and its effects are too wide-ranging, not to form part of the public debate. Beneath the individual issues raised, there are at least three asymmetries between the world of tech and the real world.
First, there is a disconnect between the people who develop technologies and the communities who use them.
Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. Technology isn’t value neutral, and it needs to be built and shaped by diverse communities if we are to minimise the risk of unintended harms.
This is an urgent problem. Women and minority groups remain badly under-represented, and leaders need to be proactive in breaking the mould.
Second, there’s an asymmetry of information regarding how technology actually works.
Solving this has to be a collaborative effort, and requires new types of organisations that facilitate deep understanding of how complex algorithms operate and of their impact on society. It takes courage, trust and a willingness to prioritise real debate and engagement over the comfort of our institutional roles; too often, activists, governments and technologists are more likely to criticise each other than to work together.
It also requires more visibility into how data are used. Efforts are under way within companies, and academics and non-profit organisations are developing ways to make the impacts of algorithms easier to understand.
MIT researcher Joy Buolamwini and the Algorithmic Justice League have created museum exhibits to increase awareness of the disturbing ways facial recognition technologies often fail for individuals with darker skin tones.
Third — and this is by no means unique to tech — there is a structural imbalance of incentives.
The standard measures of business achievement, from fundraising valuations to active users, do not capture the social responsibility that comes with trying to change the world.
This disconnect starts early. There might be a lot of money in tech, but the vast majority of entrepreneurs still fail. Any founder hoping to get a new business off the ground has to convince investors and staff of future growth, and then deliver that relentlessly. Doing this takes single-minded focus on the metrics that appear to matter, with little room to consider complex societal externalities or listen to naysayers.
That’s partly why some of the world’s brightest minds gravitate towards the safest and most proven ideas and business models. They end up creating new services to personalise soda drinks when half a billion people don’t have access to clean water, or new ways to order food by phone when more than 800m people are malnourished. We need new incentive structures to encourage more founders to take on real-world problems, and to do so with ethics at their heart.
None of this is easy. But a fairer world won’t emerge by accident. Positive ethical outcomes depend on far more than algorithms and data: they depend on the quality of societal debate and accountability, too. The prize is enormous. If we get this right collectively, we can look forward to incredible scientific and social progress over the next few decades.
All of us who believe in the power of technology must do everything we can to ensure these systems reflect humanity’s highest collective selves.

The writer is co-founder of DeepMind Technologies, an artificial intelligence company

Source: Financial Times