People, markets, scientists are all heralding data. Financial markets reward data-empowered companies, yet as a…
While you might not consider ethics in AI a primary concern for your business, consider this: A whopping 50% of business processes will be fully automated by the end of 2022, compared to around 30% in 2019.
Most of the advantages of digital transformation are enabled by task augmentation through artificial intelligence (AI) or through AI-powered Robotic Process Automation (RPA).
Using such vast quantities of data in automated processes is already having a massive impact on business and society. By the end of 2020, around 70% of the data a company used was not internal transactional data but externally generated data: syndicated data, smart devices and Internet of Things (IoT)-enabled devices, social media analyses, and external data streams – in other words, data from external, non-transactional sources.
The Double-Edged Sword of AI and Predictive Analytics
This rising impact can be both a blessing and a concern. It is a blessing, for example, when AI and predictive analytics use big data to monitor growing conditions, helping an individual farmer make the everyday decisions that can determine whether they will be able to feed their family.
Yet it can also be a real concern when biased information is applied at the outset, leading machines to make biased decisions, amplifying our human prejudices in a manner that is inherently unfair. Or, as Joaquim Bretcha, president of ESOMAR, put it in his opening speech at ESOMAR’s Congress:
technology is the reflection of the values, principles, interests and biases of its creators.
Who is Supplying Your Data: Companies Must Ask Critical Questions About Ethics in AI
Companies should ask themselves: “Is data the core asset that I monetize?” or “Is data the glue that connects the processes that have made my products or services successful?”
This is especially urgent as companies start to use third-party data sources to train their algorithms — data about which they know relatively little. Companies also need to ask themselves:
- What is the quality of the internal and external data we use to train, and feed into, our algorithms?
- What unknown and unintended biases could our data train into algorithms? How will machines know under which biases they operate if we don’t share how algorithms arrive at their answers?
- What will the impact of this automation be on our business, people, and society? How can we detect and quickly mitigate unanticipated impacts?
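The bias question above can be made concrete with a simple check on training data. The sketch below computes the rate of positive outcomes per demographic group and flags large gaps (a demographic-parity check). The data set, group labels, and the 20-point threshold are all hypothetical illustrations, not a prescribed standard:

```python
# Minimal sketch: screening training data for one common form of bias,
# unequal "positive outcome" rates across demographic groups.
# The records and the threshold below are hypothetical.
from collections import defaultdict

def selection_rates(records, group_key, label_key):
    """Return the share of positive labels per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += 1 if r[label_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical mortgage decisions used as training data.
training_data = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = selection_rates(training_data, "group", "approved")
# Flag the data set for human review if any two groups differ by more
# than an (arbitrary, illustrative) 20-percentage-point gap.
gap = max(rates.values()) - min(rates.values())
if gap > 0.20:
    print(f"Potential bias: selection-rate gap of {gap:.0%} across groups")
```

A check like this does not prove an algorithm is fair, but it surfaces the kind of unintended bias described above before it is trained into a model.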
In terms of accountability and ownership, this raises the question of algorithms created in a ‘black box’: how does artificial intelligence arrive at its decisions and recommendations?
This question becomes more and more important when artificial intelligence is applied in fully automated processes that judge eligibility for a clinical trial, perform resume selection, or evaluate mortgage applications. This raises the issue:
what happens when transparency and data quality, ownership, and governance are insufficient?
In the end, the answer around ethics in AI seems to boil down to transparency: applications must be able to demonstrate what data was used, who trained the AI, and how the AI came to its answers.
In many cases, this will simply mean providing an ‘explain button’ or a ‘best of five’ answer with confidence percentages and links to the source data. This approach is already being used in medical and legal AI-powered applications.
In other cases, this could vary between a simple “what’s the logic behind this mortgage application approval or rejection?” and a more prominent “show me what happened under the hood in providing this answer.”
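The ‘best of five’ idea can be sketched in a few lines: alongside its top recommendation, the system returns ranked alternatives with confidence percentages and links back to the source data. All names, scores, and source links below are hypothetical illustrations, not a specific product’s API:

```python
# Minimal sketch of a transparent "best of five" answer: ranked
# alternatives with confidence percentages and source-data links.
# Data and link schemes are invented for illustration.
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    answer: str
    confidence: float   # 0.0 - 1.0
    sources: list       # links to the data behind the answer

def best_of_five(candidates):
    """Return up to five candidates, highest confidence first."""
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    return ranked[:5]

candidates = [
    ScoredAnswer("Approve mortgage", 0.62, ["crm://case/123", "bureau://report/9"]),
    ScoredAnswer("Refer to underwriter", 0.27, ["policy://rule/45"]),
    ScoredAnswer("Reject mortgage", 0.11, ["bureau://report/9"]),
]

for a in best_of_five(candidates):
    print(f"{a.answer}: {a.confidence:.0%} (sources: {', '.join(a.sources)})")
```

The design choice here is that the explanation travels with the answer: a user (or auditor) never sees a recommendation without its confidence and its data provenance.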
For AI Transparency, Human Supervision is Required
Yet, that’s not enough. It will take some human supervision, aka governance, to achieve true ethics in AI. Consider the following questions when mapping out your AI: Who within our organization owns the training and supervision process? When things go haywire with unintended outcomes, who is then accountable, and who mitigates?
An interesting fact: making an AI appear more human by adding voice, emotional concepts, or even a visual avatar or face doesn’t seem to increase the trust that people have in an AI’s recommendations. However, knowing who trained the AI does! Hence, at the risk of sounding like a broken record, transparency is one of the key guiding concepts for successful ethics in AI.
Data Is an Asset, and It Must Have Values
By 2020, 22% of U.S. companies had attributed part of their profits to AI and advanced cases of (AI-infused) predictive analytics.
According to the Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or aren’t using them well.
One of their secrets: they treat data as an asset, the same way organizations treat inventory, fleet, and manufacturing assets. They start with clear data governance, with executive ownership and accountability. Treat data as an asset, because no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.
Just ask yourself: “Is data the core asset that I monetize, or is data the glue that connects the processes that have made my products or services successful?”
To Govern in The Future, We Must Act Today
What’s the takeaway from this? We need to apply and own governance principles that focus on providing transparency into how Artificial Intelligence and Predictive Analytics arrive at their answers.
I will close by asking one question for you to ponder when thinking about how to treat data as an asset in your organization:
How will machines know what we value if we don’t articulate (and own) what we value ourselves? *
*Liberally borrowed with permission from John C. Havens’ ‘Heartificial Intelligence’
Variations of this blog have been published in other media.
Dr. Marc Teerlink, MBA/MBI, is a seasoned corporate entrepreneur, strategy advisor, and board member with a track record spanning both tech corporations and start-ups. Marc is also the Chief Technology Officer and a Board Member at Data to Dollars.
Following his market successes with IBM Watson and SAP’s Intelligent Enterprise Solutions, Marc’s current endeavours are focused on creating the next wave of data monetization and technology valuation around ESG and sustainable AI: specifically, green artificial intelligence and data sets that assess, appraise, and materialize sustainability quotients, and data- and HPC-centric solutions that help decarbonize the cloud and its data centres.