Artificial intelligence, privacy and ethics
As we discussed earlier, the COVID-19 pandemic is changing the world in many ways.
The most notable change so far is the adoption and acceleration of digital transformation across many aspects of life and business. The world is going through a great Info-Big-Bang, with billions of bytes of data generated every day.
It is not only about digital identification, KYC or AML services anymore. The urgency with which new technologies are deployed raises many risks and ethical issues.
More and more people worry that the use of AI may compromise their privacy and civil liberties. This article explores one of the most popular yet controversial topics in information technology — AI ethics.
Building a safe “AI house”: RegTech Asia 2020
Last September RegTech Asia 2020 brought together industry experts in an online forum to provide a comprehensive overview of the key regulatory issues impacting the Asia-Pacific region and the technology solutions that offer support. One of the fireside chats was devoted to the matter of privacy and ethics.
Nick Wakefield, Co-founder of Regulation Asia, and Maria Francesca Montes, UnionBank’s Head of Artificial Intelligence & Data Policy, discussed the growing concerns that inevitably accompany the enormous growth of artificial intelligence and machine learning in banking, and the move to enhance regulatory oversight.
When Nick asked why it is so vitally important to focus on privacy and ethics in machine learning and AI development, Maria compared AI to a human child who needs to be taught and educated.
“We want to prevent any unfair bias,” she said. “When someone is denied a credit card transaction because a rule says it’s fraud, we can still provide a better customer experience and prevent them from being discriminated against.
So it’s very important for AI data policy to lay down the proper conditions, the proper cement, for building the AI house.”
Essentially, she echoed the main idea of a June 2020 article in Nature called “Artificial intelligence in a crisis needs ethics with urgency”. It warns that we should be more careful about relying on systems with “less human oversight and potential for override due to staff shortages and time pressures”. The authors noted that “this must be carefully balanced against the risk of failing to notice or override crucial failures”.
What will be the new “Robot Ethics Charter”?
Of course, RegTech experts and Nature magazine editors are not the first to raise the questions about privacy and ethics in AI development.
In 2007 the South Korean government even proposed a Robot Ethics Charter in which, surprisingly, Asimov’s laws were still cited as a template for guiding the evolution and development of robots.
Similarly, the “Top 10 Principles for Ethical Artificial Intelligence” presented by Sanae Takaichi during the G7 ICT Ministers’ Meeting in Japan in April 2016 reflect the philosophical ideas of the 20th-century science fiction master.
But given how much robotics has changed and will continue to grow in the future, we need to ask how these rules could be updated for a 21st-century version of artificial intelligence.
Asimov’s Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Can AI solutions discriminate?
The answer is blowing in the wind. A good example is the scandalous story of Microsoft’s millennial-mimicking chatbot a few years ago.
Tay, a chatbot designed by Microsoft to learn human conversation, went full “neo-nazi” within 24 hours of her free ride on Twitter. The AI started tweeting abuse at people, insulted women, denied the Holocaust and claimed that Hitler was right. Microsoft took her offline the same day and deleted almost all of the abusive tweets straight after that.
“AI doesn’t generate these responses. Rather, if you tell her ‘repeat after me’ she will parrot back whatever you say, allowing you to put words into her mouth,” commented the confused Microsoft research team.
Well, thank god, we are not living in the times of Skynet yet!
As Akim Arhipov, CEO at BASIS ID, puts it: “We decide the rules for how the data is managed and then let the technology implement it. We’re leveraging the technology, but at the end of the day, it’s humans telling it what to do.”
What is waiting for us in the future?
It is quite predictable that further development of artificial intelligence will boost the modern trend of autonomous decision-making by machines even more than we see now. Already today most privacy-sensitive data analysis is driven by machine learning, and the majority of decisions are taken by algorithms. ID verification software, recommendation engines and search algorithms, ad tech networks — all these are just the tip of the iceberg.
As artificial intelligence evolves, it elevates the analysis of personal information to new levels of power and speed. Consequently, it expands the ability to use personal information in ways that can intrude on privacy interests.
This complicates matters a lot as current privacy and security standards might not account for AI capabilities.
AMA Journal of Ethics explains that current methods for de-identifying data are ineffective “in the context of large, complex data sets when machine learning algorithms can re-identify a record from as few as 3 data points.”
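To make the re-identification risk concrete, here is a minimal sketch in Python with entirely invented records: a “de-identified” health dataset (names removed) is re-linked to named individuals by matching just three quasi-identifiers against a hypothetical public dataset. All names, ZIP codes and diagnoses are made up for illustration.

```python
# Toy illustration: a "de-identified" dataset can often be re-linked to
# individuals using only a few quasi-identifiers. All records are invented.

deidentified_health = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

# A hypothetical publicly available dataset containing names alongside
# the same three quasi-identifiers.
public_records = [
    {"name": "Alice Example", "zip": "02139", "birth_year": 1984, "sex": "F"},
    {"name": "Bob Example", "zip": "10001", "birth_year": 1990, "sex": "M"},
]

def reidentify(anonymous, auxiliary, keys=("zip", "birth_year", "sex")):
    """Match 'anonymous' records to named people via shared quasi-identifiers."""
    matches = []
    for record in anonymous:
        for person in auxiliary:
            if all(record[k] == person[k] for k in keys):
                matches.append((person["name"], record["diagnosis"]))
    return matches

print(reidentify(deidentified_health, public_records))
```

On these toy records the join recovers both names, despite the health data containing no direct identifiers — exactly the failure mode the AMA Journal of Ethics describes.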
O’Reilly’s 2019 AI Adoption in the Enterprise survey shows that security is the most serious blind spot within an average organisation.
“73% of respondents indicated they don’t check for security vulnerabilities during machine learning development. Similarly, more than half (59%) of organisations do not consider fairness, bias or ethical issues in their processes either. Even privacy is neglected, with only 35% checking for these critical issues.”
The report found that instead of prioritising those issues, the majority of developmental resources are focused on ensuring that AI projects are accurate and successful.
Inserting humans “into the loop”
The complexity of AI systems and emerging phenomena they encounter indicate that keeping humans “in the loop” is still required for constant monitoring and supervising.
Many debates are focused on algorithmic bias and the potential for algorithms to produce unlawful or undesired discrimination in the decisions to which the algorithms relate.
Indeed, these are major concerns for civil rights and consumer organisations in the USA, EU and Asia today.
Although AI solutions are not typically the domain of lawyers, they increasingly fall within the scope of interest of legal institutions.
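One simple way such algorithmic bias is often quantified is the gap in favourable-outcome rates between groups, sometimes called the demographic parity difference. The sketch below, with invented lending decisions, shows the basic arithmetic; real fairness audits use richer metrics and real outcome data.

```python
# Minimal sketch of one common bias check: the gap in approval rates
# between two groups of applicants. All decisions are invented.

def approval_rate(decisions, group):
    """Fraction of applicants in `group` whose application was approved."""
    in_group = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in in_group) / len(in_group)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates; 0 means parity on this metric."""
    return abs(approval_rate(decisions, group_a)
               - approval_rate(decisions, group_b))

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Group A is approved 75% of the time, group B only 25%.
print(f"Approval-rate gap: {demographic_parity_gap(decisions, 'A', 'B'):.2f}")
```

A large gap does not by itself prove unlawful discrimination, but it is the kind of red flag that triggers the closer legal and ethical scrutiny discussed here.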
If we consider legal responsibility a subset of moral responsibility, businesses may soon be obliged to take both ethical considerations and legal factors into account for AI to gain acceptance and be trusted in their specific sector.
This could include:
- Data stewardship requirements, such as duties of fairness or loyalty, could militate against uses of personal information that are adverse or unfair to the individuals the data relates to.
- Data transparency or disclosure rules, as well as the rights of individuals to access information relating to them, could illuminate uses of algorithmic decision-making.
- Data governance rules that prescribe the appointment of privacy officers, the conduct of privacy impact assessments, or product planning through “privacy by design” may surface issues concerning the use of algorithms.
- Rules on data collection and sharing could reduce the aggregation of data that enables inferences and predictions, but may involve some trade-offs with the benefits of large and diverse datasets.
AI vs LGBTQ+! Is it even possible?
Meanwhile, the legal response to AI evolution targets discrimination directly.
A couple of years ago a group of 26 US civil rights and consumer organisations wrote a joint letter demanding to prohibit or better monitor the use of personal information with discriminatory impacts on “people of colour, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, persons living on low incomes, immigrants, and other vulnerable populations.”
Later that year the Lawyers’ Committee for Civil Rights incorporated this principle into model legislation that was substantially reflected in the Consumer Online Privacy Rights Act, introduced by members of the Senate Commerce Committee in 2019.
It also includes a similar provision restricting the processing of personal information that discriminates against or classifies individuals based on protected attributes such as race, gender, or sexual orientation.
An interesting fact: while in the U.S. data privacy is most often regarded as an extension of consumer rights, in Europe data privacy laws are seen as an elaboration of human rights.
The world is changing before our eyes. Hello, the new world!
The tremendous development of AI raises fundamental ethical and moral issues for society, and these complex issues are of vital importance to our near future. Even where AI models are, strictly speaking, accurate, they may have disparate impacts across different subpopulations, with harmful consequences that are difficult to predict in advance.
We require new approaches to ethics to ensure AI can be used safely and beneficially in the times of COVID-19 and the brave new post-pandemic world.
Although ethical data management obliges organisations to establish many policies and extra processes, its purpose is simple — to provide a safeguard in the decision-making process which guarantees that the vital question is asked: is our data usage legal, fair, proportionate and just?