The Australian AI ethics framework was created for businesses and organisations that design, develop, integrate or use artificial intelligence (AI). The framework consists of 8 voluntary principles that set out important considerations for the responsible use of AI.
Be empowered by AI insights, not worried about security
At Seer, we believe that an ethical framework underpinning our AI is not only necessary for responsible implementation, but also a core part of building a successful and scalable AI platform.
The research, development, and deployment of our AI is anchored by uncompromising ethical standards. The Australian ethics framework ensures that the insights from our service can be dependable for patients, medical professionals, and customers.
Seer Assist is the event-labelling algorithm that powers Seer Cloud, our web-based workspace for the analysis, review, and management of medical data. Seer Cloud is a certified medical device in Australia and the EU. The platform upholds global standards for privacy, safety, and transparency.
Principles
Australia’s 8 AI Ethics Principles are designed to ensure AI is safe, secure and reliable.
AI is about making software that mimics human behaviour and intelligence.
Living best practice as part of our daily work
We are proud to uphold the highest ethical standards in both AI and medical research frameworks.
As well as following the Australian AI Ethics Framework, all of our research activities are vetted by a National Health and Medical Research Council approved human research ethics committee. This means that a committee of experts and everyday people has considered our research studies, their ethical implications, and any potential harm or negative impacts on research participants.
We’re dedicated to lifting the bar for MedTech research ethics. Prof Mark Cook, Seer Chief Medical Officer, and Dr Ewan Nurse, Seer Research Lead, are both members of the Human Research Ethics Committee.
Commitment to transparency, open-access data, and diversity
This new study shows that long-term seizure risk cycles can be tracked using a smartwatch and is an important step forward in developing a seizure risk app for people with epilepsy. However, there’s more to uncover.
Research is at the heart of what we do at Seer. Many Seer projects culminate in peer-reviewed journal articles so the whole world can learn from our experience.
Data is the fuel for AI systems. We’re committed to open-access data through our support of the Epilepsy Ecosystem, a place for sharing high-quality data to generate insights into epilepsy management.
We’re always looking for new collaborations that can take our technology to the next level, and working with world-class researchers means great science always comes first.
Seer’s research is built around diverse, multidisciplinary teams with broad skill sets. Team development focuses on making sure each individual is multi-skilled, rather than on assembling teams of narrowly specialised individuals. This empowers individuals to solve problems from both technical and medical perspectives, and reduces bias in our AI.
A lot of unintended bias can go into AI models, and there are complex, systemic problems to overcome in building AI systems. Data needs to come from as broad a sample as possible, and diverse teams are needed to pick up on these biases and call them out.
The future of medicine is data-driven insights
Addressing bias and diversity so AI can continue providing access to gold-standard healthcare
The number of clinical decisions based on data-driven insights will continue to grow, and the sheer quantity of data generated by modern systems will become too much for human review alone.
Utilising assistive AI technology can help ensure a consistent standard of care. However, these same technologies must continue to be held to an ethical framework so that healthcare is delivered safely and reliably.
Unintended human bias can go into AI models, so datasets need to be collected from broad samples to overcome these biases. By extension, the research teams behind the models need to be diverse and multidisciplinary, with broad skill sets.