Ethics-backed artificial intelligence

The Australian AI Ethics Framework was developed for businesses and organisations that design, develop, integrate or use artificial intelligence (AI). The framework consists of 8 voluntary principles that highlight important considerations when using AI.

At Seer, we believe that an ethical framework underpinning our AI is not only necessary for responsible implementation, but also core to building a successful and scalable AI platform.

The research, development, and deployment of our AI is anchored by uncompromising ethical standards. Following the Australian AI Ethics Framework helps ensure that the insights from our service are dependable for patients, medical professionals, and customers.

Seer Assist is the event-labelling algorithm that powers Seer Cloud, our web-based workspace for the analysis, review, and management of medical data. Seer Cloud is a certified medical device in Australia and the EU. The platform upholds global standards for privacy, safety, and transparency.

Principles

Australia’s 8 AI Ethics Principles are designed to ensure AI is safe, secure and reliable.

Human, societal and environmental wellbeing

AI systems should benefit individuals, society and the environment.

Our AI exists to improve the speed and accuracy of medical data review. Objective scanning of data gives experts a much higher likelihood of making an accurate diagnosis, leading to better health outcomes. Improved diagnostic accuracy also helps address the roughly 30% misdiagnosis rate in epilepsy, reducing the burden on individuals as well as the wider community.

Human-centred values

AI systems should respect human rights, diversity, and the autonomy of individuals.

All of our AI research is overseen by a human medical research ethics committee. All data is studied with consent and in a de-identified form. AI is used as a guidance tool that can be turned on or off, and decisions are still made by trained professionals.

Fairness

AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

Our algorithm has been trained on data sourced from Australians of varying ages, locations, and backgrounds. It has also been trained across a broad disease phenotype, drawing on data sets spanning several types of epilepsy.
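To make this concrete, here is a minimal sketch of the kind of representation audit that can be run over training-set metadata. The column names, groupings, and minimum-share threshold are hypothetical illustrations, not Seer's actual schema or policy.

```python
# Hypothetical representation audit over training-set metadata.
# Column names, groups, and the minimum-share threshold are all
# illustrative -- not Seer's actual schema or policy.
import pandas as pd

meta = pd.DataFrame({
    "age_band": ["0-17", "18-39", "40-64", "65+", "18-39", "40-64"] * 50,
    "location": ["metro", "regional", "metro", "remote", "regional", "metro"] * 50,
})

MIN_SHARE = 0.05  # flag any group below 5% of the training data

for column in ["age_band", "location"]:
    shares = meta[column].value_counts(normalize=True)
    under = shares[shares < MIN_SHARE]
    if under.empty:
        print(f"{column}: all groups above {MIN_SHARE:.0%}")
    else:
        print(f"{column}: under-represented groups:\n{under}")
```

An audit like this only flags imbalance; deciding what counts as fair representation for a given patient population remains a human judgement.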

Privacy protection and security

AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

Seer enforces strict data privacy and security protections. Our technology is approved by the Therapeutic Goods Administration (TGA), and compliant with the General Data Protection Regulation (GDPR) and the NHS Data Security & Protection Toolkit.

Reliability and safety

AI systems should reliably operate in accordance with their intended purpose.

Our machine learning algorithm has been trained using review scores of highly skilled neurologists, epileptologists, cardiologists, and clinical scientists. Continual training and optimisation by a dedicated team of developers ensures the algorithm performs accurately and as intended on every new set of data.
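As a rough sketch of what training against expert review looks like in principle, consider the following. The features, labels, and model are synthetic stand-ins, not Seer's actual algorithm; the pattern is that expert labels define the ground truth the model is optimised and scored against.

```python
# Minimal sketch of supervised training on expert-reviewed labels.
# Features, labels, and the model are synthetic stand-ins, not
# Seer's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in for per-segment EEG features (e.g. band power, line length).
X = rng.normal(size=(5000, 16))
# Stand-in for expert consensus labels: 1 = epileptiform event, 0 = not.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Agreement with the expert labels on held-out data, measured with
# Cohen's kappa (chance-corrected, common in inter-rater studies).
kappa = cohen_kappa_score(y_test, model.predict(X_test))
print(f"Agreement with expert labels (Cohen's kappa): {kappa:.2f}")
```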

Transparency and explainability

There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.

Clinical reviewers and Seer staff receive EEG reports that have been labelled by AI to assist in reporting. The AI remains at the discretion of autonomous individuals, who can choose to use it when they believe it will benefit diagnostic accuracy.

Contestability

When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

AI is always used in tandem with a clinically trained human reviewer. Clinical scientists and reporting doctors review every study scanned by our algorithm and ultimately decide whether to use the AI's outcomes in their reporting.

Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

As part of our regulatory framework, we have implemented an additional software system for change control. Changes to our algorithms are tracked and benchmarked against fixed tests to ensure there is no performance degradation, as sketched below.
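The following is one plausible shape for such a fixed benchmark gate. The baseline numbers, tolerance, and toy model are hypothetical; the pattern is a frozen, version-controlled test set plus metric floors that every algorithm change must meet before release.

```python
# Hypothetical regression gate for algorithm changes. The baseline
# metrics, tolerance, and toy model are illustrative only; the pattern
# is a frozen test set plus floors that any candidate model must meet.
import numpy as np
from sklearn.metrics import precision_score, recall_score

BASELINE = {"sensitivity": 0.90, "precision": 0.85}  # frozen at last release
TOLERANCE = 0.01  # allowed slack before a change is rejected

def benchmark(model, X_fixed, y_fixed):
    """Score a candidate model on the fixed, version-controlled test set."""
    preds = model.predict(X_fixed)
    return {
        "sensitivity": recall_score(y_fixed, preds),
        "precision": precision_score(y_fixed, preds),
    }

def gate(metrics):
    """Reject the change if any metric regresses past tolerance."""
    for name, floor in BASELINE.items():
        assert metrics[name] >= floor - TOLERANCE, (
            f"{name} regressed: {metrics[name]:.3f} < {floor:.3f}"
        )
    print("Benchmark passed:", metrics)

# Toy candidate: thresholds the synthetic feature that defines the
# labels, so it trivially clears the gate in this demo.
class ThresholdModel:
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

rng = np.random.default_rng(0)
X_fixed = rng.normal(size=(200, 8))
y_fixed = (X_fixed[:, 0] > 0).astype(int)
gate(benchmark(ThresholdModel(), X_fixed, y_fixed))
```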

Living best practice as part of our daily work

We are proud to uphold the highest ethical standards in both AI and medical research frameworks.

As well as following the Australian AI Ethics Framework, all of our research activities are vetted by a National Health and Medical Research Council approved human research ethics committee. This means that a committee of experts and everyday people has considered our research studies, the ethical considerations, and any potential harm or negative impacts on research participants.

We’re dedicated to lifting the bar for MedTech research ethics. Prof Mark Cook, Seer Chief Medical Officer, and Dr Ewan Nurse, Seer Research Lead, are both members of the Human Research Ethics Committee.

Commitment to transparency, open-access data, and diversity

Research is at the heart of what we do at Seer. Many Seer projects culminate in peer-reviewed journal articles so the whole world can learn from our experience.

Data is the fuel for AI systems. We’re committed to open-access data through our support of the Epilepsy Ecosystem, a place for sharing high-quality data to generate insights into epilepsy management.

We’re always looking for new collaborations that can take our technology to the next level, and working with world-class researchers means great science always comes first.

Seer’s research is built around diverse, multidisciplinary teams with broad skill sets. Team development focuses on making individuals multi-skilled, rather than assembling teams of narrowly specialised individuals. This empowers individuals to solve problems from both technical and medical perspectives, and reduces bias in our AI.

A lot of unintended bias can creep into AI models, and there are complex, systemic problems to overcome when building AI systems. Data needs to come from as broad a sample as possible, and teams need to be diverse enough to pick up on these biases and call them out.

The future of medicine is data-driven insights

Addressing bias and diversity so AI can continue providing access to gold-standard healthcare

The number of clinical decisions based on data-driven insights will continue to grow. This means that the sheer quantity of data generated by modern systems will become too much for human review.

Utilising assistive-AI technology can help ensure a consistent standard of care is delivered. However, these same technologies must continue to be held to an ethical framework in order for healthcare to be delivered safely and reliably.

Unintended human bias can make its way into AI models, so datasets need to be collected from broad samples to overcome it. By extension, the research teams behind the models need to be diverse and multidisciplinary, with broad skill sets.
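One concrete way assistive AI copes with that volume is triage: ranking incoming recordings by the model's confidence that they contain events, so limited human review time goes to the highest-priority studies first. A minimal sketch, with hypothetical study IDs, scores, and threshold:

```python
# Hypothetical triage queue: rank recordings by the model's event
# probability so human reviewers see the most likely events first.
# Study IDs, scores, and the priority threshold are illustrative.
import numpy as np

rng = np.random.default_rng(7)
recording_ids = [f"study-{i:03d}" for i in range(10)]
event_probability = rng.uniform(size=10)  # stand-in for model output

REVIEW_FIRST = 0.8  # hypothetical priority threshold

queue = sorted(zip(recording_ids, event_probability),
               key=lambda item: item[1], reverse=True)

for study, prob in queue:
    priority = "URGENT" if prob >= REVIEW_FIRST else "routine"
    print(f"{study}: p(event)={prob:.2f} [{priority}]")
```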

AI allows for world-standard healthcare no matter where you are. An ethics framework makes sure this healthcare is delivered safely, securely and reliably.