Risk in the Age of AI
We wrote about the discussions at Accreditation Matters a couple of weeks ago. A large number of the presentations focussed on AI. This technology has been billed as revolutionary and world-changing.
But beyond the hype, it’s timely to drill down to what the risks in the Age of AI might be – and we don’t mean risks from Artificial Insemination.
1. Data Integrity and Quality
It’s attractive to just plug a couple of prompts into an App and receive the outputs. The problem is that, when we rely on these systems, we may not understand the quality of the datasets on which the AI has been trained.
Data Bias and Quality are real issues: AI systems rely on large datasets, and any bias or errors in that data can lead to incorrect conclusions or flawed research outcomes, with those effects magnified as the system is used. Ensuring high-quality, representative data is crucial.
In fact, just as we can ask a person what they base their conclusions on, we should be seeking to discover the source of the dataset to confirm the integrity and quality of the AI system. A good way of dealing with this is to add data integrity and quality criteria to your evaluation of software as “equipment”.
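To make that less abstract, here is a minimal sketch, in Python using the pandas library, of the kind of automated checks such criteria could translate into when evaluating a candidate dataset. The column names, thresholds and limits are purely illustrative assumptions, not prescribed values – your own acceptance criteria would set them.

```python
import pandas as pd

# Hypothetical acceptance criteria for a candidate dataset - set these
# against your own data integrity and quality requirements.
MAX_MISSING_FRACTION = 0.02    # no more than 2% missing values per column
MAX_DUPLICATE_FRACTION = 0.01  # no more than 1% duplicate records
MIN_CLASS_FRACTION = 0.10      # each class should be at least 10% of records

def check_dataset_quality(df: pd.DataFrame, label_column: str) -> list[str]:
    """Return a list of findings where the dataset fails the criteria."""
    findings = []

    # Completeness: how much of each column is missing?
    for column, fraction in df.isna().mean().items():
        if fraction > MAX_MISSING_FRACTION:
            findings.append(f"{column}: {fraction:.1%} missing values")

    # Integrity: duplicated records silently inflate apparent data volume.
    duplicate_fraction = df.duplicated().mean()
    if duplicate_fraction > MAX_DUPLICATE_FRACTION:
        findings.append(f"{duplicate_fraction:.1%} of records are duplicates")

    # Representativeness: a badly skewed label distribution is one sign of bias.
    for label, fraction in df[label_column].value_counts(normalize=True).items():
        if fraction < MIN_CLASS_FRACTION:
            findings.append(f"class '{label}' is under-represented ({fraction:.1%})")

    return findings
```

An empty list of findings is not proof the data is fit for purpose, but checks like these give you documented, repeatable evidence to support the evaluation.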
Data Security is another attribute of quality that should be investigated. Protecting sensitive organisation or research data from cyber threats and ensuring data integrity are vital to maintaining trust in outputs and in the important results or scientific findings flowing from lab processes.
2. Ethical Considerations
Responsible AI use is all the buzz in AI circles: ensuring that AI is used ethically, particularly in research, is a major issue. This is also code for trustworthiness – how much trust can we place in AI outputs and decisions? Transparency and accountability, especially in sensitive areas like medical research, engineering and environmental applications, are critical to the responsible use of AI.
Other ethical considerations include whether informed consent has been given by the owners of data used in training and refining AI.
When OpenAI released GPT-3 in July 2020, it offered a glimpse of the data used to train the large language model. Millions of pages had been scraped from the web, Reddit posts, books, and more. This data was used to create the generative text system, according to a technical paper produced by OpenAI. The clash of privacy concerns with the need to access data sets for training the machine learning system is one that has not yet been fully resolved. It’s why our affiliate company, Pericles Software, will always ask permission to use an organisation’s data to help build its AI models.
3. Technical Risks
Ensuring that AI algorithms are robust, validated and reliable is an area where practice can vary, but it covers concepts that many people in labs will already understand. The same principles that apply to validation of your test or calibration method are equally relevant to AI. This includes verifying that AI models perform accurately under various conditions.
In terms of what this means for your lab or organisation, dust off those trusty parameters you already know from method validation, and use them as the basis for developing specifications for any AI you intend to use.
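As a rough illustration, the sketch below (in Python, with hypothetical acceptance limits) treats two familiar validation parameters, trueness expressed as agreement with reference results and repeatability expressed as the spread across repeated runs, as a specification an AI tool must meet before release into routine use.

```python
from statistics import mean, stdev

# Hypothetical specification, expressed like method-validation acceptance criteria.
SPEC = {
    "min_accuracy": 0.95,         # agreement with reference results (trueness)
    "max_repeatability_sd": 0.02  # spread across repeated runs (precision)
}

def validate_model(run_accuracies: list[float]) -> bool:
    """Check repeated verification runs of an AI tool against the specification."""
    accuracy = mean(run_accuracies)
    repeatability_sd = stdev(run_accuracies)

    passed = (accuracy >= SPEC["min_accuracy"]
              and repeatability_sd <= SPEC["max_repeatability_sd"])

    print(f"Mean accuracy: {accuracy:.3f} (limit >= {SPEC['min_accuracy']})")
    print(f"Repeatability SD: {repeatability_sd:.3f} (limit <= {SPEC['max_repeatability_sd']})")
    print("PASS" if passed else "FAIL - do not release for routine use")
    return passed

# Example: accuracies from five verification runs against reference materials.
validate_model([0.96, 0.97, 0.95, 0.96, 0.97])
```

The real specification for your AI tool may include other parameters (selectivity, robustness to unusual inputs, limits of applicability), but the structure is the same: defined criteria, documented evidence, and a clear pass/fail decision.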
4. Regulatory and Compliance Issues
Compliance with relevant regulations and standards, such as GDPR for data privacy and TGA and FDA regulations for medical AI, is essential to avoid legal and ethical pitfalls. The good news is that much work is being done to develop internationally agreed standards for AI.
Just as we audit processes in our organisation to monitor compliance with standards and regulations, regular auditing of AI systems and their outputs is important. This will ensure ongoing compliance and the ability to identify and address any emerging issues promptly.
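One simple building block for such audits is a traceable record of each AI interaction. The sketch below is a minimal, hypothetical example in Python; the file name and fields are assumptions to be adapted to your own quality system.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical file name; one JSON record per line

def record_ai_use(model_name: str, model_version: str,
                  prompt: str, output: str, operator: str) -> None:
    """Append a traceable record of one AI interaction to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,  # essential for tracing behaviour changes
        "operator": operator,
        "prompt": prompt,
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log one interaction so a later audit can sample and review it.
record_ai_use("example-llm", "2024-06", "Summarise batch 42 QC results",
              "All QC results within limits.", operator="j.smith")
```

With records like these in place, a periodic audit can sample outputs, compare them against expectations, and detect drift when a model or its version changes.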
5. AI will take away all decision-making by people
This will definitely not happen!
There are many decisions made by people every day. Granted, not all decisions are earth-shattering, and some will be very low-risk and mundane.
We need to balance the role of AI in decision-making processes within the lab to ensure human oversight and intervention when necessary.
That means lab people need to be trained to understand and effectively collaborate with AI systems, including interpreting AI outputs and knowing the limitations of AI tools.
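One simple pattern for keeping that human oversight in place is to let the AI act automatically only on routine, high-confidence calls and refer everything else to a person. The Python sketch below illustrates the idea; the confidence threshold is a hypothetical value that should be set according to the risk of the decision.

```python
CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off; set according to decision risk

def route_decision(ai_result: str, ai_confidence: float) -> str:
    """Accept confident, low-risk AI results; refer everything else to a person."""
    if ai_confidence >= CONFIDENCE_THRESHOLD:
        return f"Auto-accepted: {ai_result}"
    return f"Referred for human review (confidence {ai_confidence:.0%}): {ai_result}"

print(route_decision("Sample within specification", 0.97))
print(route_decision("Possible out-of-spec result", 0.62))
```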
A related concern is that AI will progress in intelligence so rapidly that it will become sentient and act beyond humans’ control. We have heard reports of such claims, one popular account being from a former Google engineer who stated that the AI chatbot LaMDA was sentient and speaking to him just as a person would.
A Quality Approach
The solution to these risks again lies in the application of fundamental principles of Quality.
At an organisational level, develop processes for monitoring algorithms, compiling high-quality data, and explaining the findings of AI algorithms. Establishing standards to determine acceptable AI technologies will be a crucial aspect as well. Discussions on the planned implementation and use of AI are also required to support a culture that sensibly manages these risks.