Artificial Intelligence (AI) – How concerned should we be?

The rapid expansion of Artificial Intelligence (AI) has triggered a mixture of scepticism and excitement. The potential is astounding: machine learning derives patterns from vast quantities of data, and the technology built on top of it produces what appear to be expert opinions and reasoning, offering mankind the chance to take massive leaps forward. Few sceptics doubt the ability of Silicon Valley and its counterparts to create this technology; many instead ask whether it is good for mankind to remove privacy at such a chillingly fundamental level.

Machine learning allows for the advanced profiling of humans. At a basic level, this means that a retail website can gather information about you: which sites you’ve recently visited, where you buy products and even your age and gender. This data is sent to the site owner in a generic report that may not breach existing privacy laws because it’s non-specific to an individual. However, it enables retail sites to develop an extraordinarily specific profile of which visitors are most likely to purchase their product and why. This allows retailers to target advertising efforts at sites they know their customers frequent, thereby offering the consumer what they want, when they want it.
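
To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of propensity model a retailer might train. The feature names, the data and the use of scikit-learn's LogisticRegression are illustrative assumptions, not a description of any real retailer's system.

```python
# A minimal sketch of a retail "likely buyer" propensity model.
# All features and data points are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [visited_competitor_site, pages_viewed, age, is_returning_visitor]
X = [
    [1, 12, 34, 1],
    [0,  2, 51, 0],
    [1,  8, 29, 1],
    [0,  1, 45, 0],
    [1, 15, 31, 1],
    [0,  3, 60, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = made a purchase, 0 = did not

model = LogisticRegression().fit(X, y)

# Score a new visitor: the output is a purchase probability that a
# retailer could use to decide whether, and where, to show an advert.
new_visitor = [[1, 10, 30, 1]]
print(model.predict_proba(new_visitor)[0][1])
```

The output is only a probability; the decision about which features to collect, and what counts as a 'likely buyer', is made by the retailer.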

But this information is only private insofar as it is not available for mass consumption. When an individual visits a website, tracking technology records each virtual location and movement across internet sites, and from this, assumptions can be made about the reasons for the visit. Combined with other website visits, machine learning builds an internet persona for that customer. This virtual footprint can then be combined with data taken from mobile phones and virtual assistants, such as Alexa or Siri, including the physical locations users frequent, based on where their phone is located. AI systems can thus build a full psychometric profile of anyone connected online.
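
As a rough illustration of how such streams might be stitched together, the following sketch merges hypothetical web-browsing events and phone-location pings into a single per-user profile. All field names, and the crude 'inference' step at the end, are invented assumptions for illustration only.

```python
# A sketch of merging separate data streams into one per-user profile.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    sites_visited: list = field(default_factory=list)
    physical_places: list = field(default_factory=list)
    inferred_interests: set = field(default_factory=set)

def merge_streams(web_events, location_pings):
    """web_events and location_pings are lists of (user_id, value) pairs."""
    profiles = {}
    for user_id, site in web_events:
        profiles.setdefault(user_id, Profile(user_id)).sites_visited.append(site)
    for user_id, place in location_pings:
        profiles.setdefault(user_id, Profile(user_id)).physical_places.append(place)
    # Crude inference step: every data point becomes an "interest" signal.
    for p in profiles.values():
        p.inferred_interests = set(p.sites_visited + p.physical_places)
    return profiles

web = [("u1", "news_site"), ("u1", "fitness_forum")]
pings = [("u1", "gym"), ("u1", "pharmacy")]
print(merge_streams(web, pings)["u1"].inferred_interests)
```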

Businesses are not the only players testing AI profiling. Governments around the world are also rapidly collecting data. Thanks to AI and machine learning, profiles can be used to try to predict which individuals might commit crimes or terrorist attacks. Profiles might also be used to track political leanings or unsavoury associations with people or organisations deemed 'undesirable'.

These measures are worrying, not only because of the extraordinary amount of data now available about the billions of humans connected to the internet, but because of the assumptions underlying how machine learning works. The assumption is the human element of machine learning and artificial intelligence: it is the conduit between pure data (Jenny visits YouTube and a gun store website, and has just received $3,000 into her bank account) and a perceived outcome (Jenny is likely to be a terrorist). Without assumptions, which are created, defined and re-defined by humans, AI and machine learning do not work. Computers only understand things insofar as programmers explain why those things matter. 'Learning' in this sense is ripe for inherent bias originating from the person implementing the computer code.
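
The Jenny example can be made concrete with a deliberately crude sketch. Every rule, weight and threshold below is invented, which is precisely the point: each one is a human-authored assumption of the kind described above.

```python
# A deliberately crude sketch of the human "assumption" layer.
# The rules, weights and threshold are all invented judgements.
def risk_score(events):
    score = 0
    if "gun_store_website" in events["sites_visited"]:
        score += 50          # a programmer decided this is suspicious
    if events.get("recent_deposit", 0) >= 3000:
        score += 30          # and that this sum of money is suspicious
    if "youtube.com" in events["sites_visited"]:
        score += 5           # even ordinary behaviour can be weighted
    return score

jenny = {
    "sites_visited": ["youtube.com", "gun_store_website"],
    "recent_deposit": 3000,
}
# The threshold (here, 60) is itself an assumption chosen by a human.
print("flagged" if risk_score(jenny) >= 60 else "not flagged")
```

Change any weight, or the threshold, and Jenny's label flips. The 'intelligence' lies entirely in choices a person made when writing the code.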

Consider the example of Cambridge Analytica, the now defunct business that used private data to build profiles of voters, their behaviour and their likely psychological outcomes. It is comforting to imagine Cambridge Analytica as some kind of advanced technology hub run by computer scientists with the help of advanced computers, but that simply wasn't the case. The business was a 'data mining' company, one of thousands that now exist around the world. These entities take private and public data to build profiles of people, creating a targeted and scientific approach to political campaigning. It is exactly the same model used by retail websites, applied instead to the way people think at a very fundamental level: 'who would I vote for and why?'

All of this leads to a deep concern about individual privacy and, perhaps more worryingly, about what the information means as the efficiency gains of machine learning and AI move from the commercial sector to the political and government sectors. If assumptions are human-made, then what is 'right' or 'wrong' is defined by people with naturally biased opinions. In retail, website owners make assumptions about the most likely buyer, and competition is increasingly dominated by the companies that most effectively use machine learning to understand consumer preferences, adapt their offerings to be irresistibly appealing, and perhaps even shape those preferences through effective messaging. For political operators, the target is the most likely voter and the messaging that activates that voter.

Today's political campaigner is tomorrow's government leader. Given the power of governments to collect data, and of law enforcement and government agencies to act upon it, questions need to be asked about who is making the assumptions, and for what insights, as these technologies cross the boundary from business into politics and government. Ultimately, we must challenge the purposes behind piercing the veil of traditional personal privacy in pursuit of new efficiencies or competitive advantages.

This story is featured in the 9 November 2018 edition of The Warren Centre's Prototype newsletter.