Can Latin America Ride the Wave of Artificial Intelligence?


This article was previously published in the Ideas Matters blog on May 23, 2018.

Artificial intelligence (AI), with its self-teaching smart machines, will transform Latin America and the Caribbean. It will not happen immediately. But it is inevitable, and the region needs to ensure that legislation and regulations are in place so that AI increases productivity without harming privacy or abetting discrimination.

There is no time to lose. AI in the region, though still in its infancy, is already having an impact. Automation and robots are transforming industrial processes in sectors like vehicle production and the chemical and plastic industries. People throughout the region are engaging with machine-based interlocutors, or chatbots, when they deal with banks, retailers and airlines. Farms are using AI to improve planting and crop management; and a Chilean company has developed a machine-based system that can read and rank resumes, conduct exams, and even interview applicants by video, according to a recent report and article by Accenture, a global consultancy.

Latin America and the Caribbean is still considerably behind advanced economies in this regard. AI is being widely used in those countries to analyze financial data, evaluate loans, organize packaging and delivery, and identify criminals. It will likely, in the not-too-distant future, drive people’s cars.

But AI is almost certain to be revolutionary when it penetrates deeper into our region, as a recent IDB report on the future of work emphasizes.

Privacy Issues

In the meantime, serious legal and ethical challenges loom. These relate both to AI itself and to the information ecosystem that surrounds it, including the explosion of data (Big Data) that enables algorithms in interconnected computers and machines to perform tasks, make decisions, and collect even more data through the machine learning that is an essential part of AI.

Privacy issues surrounding Big Data lie behind the recent scandals in the United States in which personal information from Facebook accounts was harvested without account holders’ authorization for the purposes of electoral profiling and targeted political messaging. Those scandals have raised concerns that our private data could also be misused when it comes to applications for loans, jobs, and housing.

Moreover, once this information is fed into the algorithms that allow AI to work, we may be powerless to correct errors: It may be too difficult to pinpoint them within an algorithm’s complexity. As governments and companies increasingly use AI to make their decisions for them, we may thus find ourselves being denied benefits on the basis of inaccurate information and have no way to appeal.

Automated systems and bias

Then there is the strong possibility of bias and unfairness creeping into such automated systems. In the United States, for example, police departments are increasingly using not only mug shots from arrests but also state driver’s license and ID photo databases to hunt for criminals, combining surveillance cameras with software allegedly adept at facial recognition. Today, according to a report by Georgetown Law School’s Center on Privacy and Technology, nearly 50% of adults in the United States have their photo enrolled in a criminal face recognition network. But the disproportionate use of African-American mug shots in such systems makes false arrests more likely, as does the widely reported inability of facial recognition technology to accurately distinguish black faces. A recent study that analyzed different facial recognition applications found that the error rate for darker-skinned women on such systems ranges from roughly 21% to 35%, compared to 1% for light-skinned males.

Such injustices could multiply across large areas of daily life. What is to stop companies from excluding gay or transgender people from employment based on publicly available information about them and how easily they can be identified via facial recognition? Or from using software to detect lying in prospective clients and then denying them services based on an incorrect assessment?

Accountability is key

Latin America and the Caribbean will need to establish measures to foreclose those possibilities. The emphasis has to be on accountability. That means laws and regulations that protect privacy and set limits on how companies and government agencies can use AI. It means designing codes of corporate ethics specifically related to AI. And it probably entails involving people from minority populations and a wide range of disciplines, including psychology, sociology, and law, to audit algorithms and make sure that their design doesn’t end up incorporating discriminatory practices into smart machines.

Ultimately, those protections will be important not only domestically but also in international trade. On May 25, the European Union put into force new regulations giving citizens more control over their personal data and over the export of that data outside Europe. As governments in advanced economies move to address those and other challenges related to Big Data and AI, Latin America and the Caribbean may find it difficult to harmonize its regulations in the interests of commerce if it doesn’t take steps early on to guarantee fair and accountable systems. That could mean being cut off from promising markets.

AI has enormous potential to increase growth in Latin America and the Caribbean, provided governments educate their citizens well enough to take advantage of it. But governments also have to ensure that AI protects privacy and promotes the long-term goals of inclusion and fairness. There is a window of time to work on that. It is not infinite.
