Many of us will have had scans with I-MED: it’s a private company with more than 200 radiology clinics in Australia. These clinics provide medical imaging, such as X-rays and CT scans, to help diagnose disease and guide treatment.
In 2019, I-MED partnered with the AI startup Harrison.ai to form Annalise.ai, a joint venture developing AI for radiology. I-MED clinics were early adopters of Annalise.ai systems.
AI companies want your X-rays and CT scans because they need to “train” their models on lots of data.
It’s likely the I-MED images were “sensitive information” under the Australian Privacy Act, because they are health information that can identify an individual.
The law limits the situations in which organisations can disclose such information beyond the purpose for which it was originally collected (in this case, providing you with a health service).
There are lots more layers governing health-related data in Australia. We’ll consider just two.
Some large public institutions have very mature data governance frameworks, but this isn’t the case everywhere. In 2023, researchers argued Australia urgently needed a national system to make governance more consistent.
One of these layers is human research ethics committees. These committees apply the National Statement on Ethical Conduct in Human Research to assess research applications for quality, potential benefits and harms, fairness, and respect towards participants.
Among other things, they determine what kind of consent a study requires.
What does this mean?
We are at a crossroads in AI research ethics. Policymakers and Australians alike agree we need to use high-quality Australian data to build sovereign health AI capability, and to build health AI systems that work for all Australians.
But the I-MED case demonstrates two things. It’s vital to engage with Australian communities about when and how health data should be used to build AI. And Australia must rapidly strengthen and support our existing infrastructure to better govern AI research in ways that Australians can trust.
Q: What is the issue with I-MED’s data sharing?
A: I-MED reportedly provided patient data, described as de-identified, to an AI company without explicit patient consent, raising concerns that patient privacy was breached.
Q: Why did I-MED share patient data without consent?
A: I-MED claimed the data were de-identified and therefore outside the scope of the Privacy Act. However, experts argue that de-identification is complex and context-dependent, and that the data were not sufficiently de-identified to take them outside the protection of the law (the sketch after this Q&A illustrates why stripping identifiers alone may not be enough).
Q: What are the implications for patients?
A: Patients are reportedly avoiding I-MED due to concerns about data sharing, and there are questions about how to ensure patients can choose how their medical data is used in the future.
Q: What is being done to address the issue?
A: The Office of the Australian Information Commissioner is investigating the matter, and there are calls for I-MED to be more transparent about its data sharing practices and to obtain explicit consent from patients before sharing their data.
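To make the de-identification point concrete, here is a minimal, hypothetical sketch using the open-source pydicom library. The file name and the exact tags removed are illustrative assumptions, not a description of I-MED’s or Annalise.ai’s actual pipeline; the point is that scrubbing header fields does not, by itself, guarantee a scan can no longer identify someone.

```python
# Hypothetical illustration only: scrubbing DICOM header tags is the easy
# part of de-identification, and by itself may not be enough.
from pydicom import dcmread

ds = dcmread("scan.dcm")  # "scan.dcm" is a placeholder file name

# Blank the obvious identifying header fields, if present.
for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                "ReferringPhysicianName", "InstitutionName"):
    if keyword in ds:
        setattr(ds, keyword, "")

# Vendor-specific private tags can also carry identifiers.
ds.remove_private_tags()

ds.save_as("scan_scrubbed.dcm")

# Caveat: the pixel data itself can still identify a person. Examples
# include text burned into the image, a face reconstructable from a head
# CT, or a rare condition linkable to other records. This is why experts
# say "de-identified" is context-dependent, not a simple on/off switch.
```

This context-dependence is also why the legal analysis turns on whether someone could reasonably re-identify the data, not simply on whether names were removed.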