The Australian Medical Association (AMA) has called for stronger governance and AI regulation after five hospitals in Perth advised staff not to use generative AI tools, such as ChatGPT, to write medical notes. 

The AMA has stated that the healthcare sector requires its own approach to regulating AI and has described the current AI regulatory landscape in Australia as “largely unregulated with a lack of transparency.” 

President of the AMA, Professor Steve Robson, believes that AI needs to be regulated properly before it can be used appropriately in healthcare. 

Speaking on the topic, he said that Australia “[needs] to address the AI regulation gap”, especially in healthcare, “where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and increased risk to patient privacy.” 

The association concludes that successful AI regulation in the healthcare sector must consider data privacy, quality of patient care, appropriate application of medical ethics, and equity of access. 

Robson also pointed to the work already done by countries like the UK and Canada to develop AI regulation that he says Australia can “learn from and adapt to”. 


While many question whether AI can really be regulated properly, the technology is already seeping into industries worldwide. 

Laura Petrone, principal analyst at GlobalData, explains why healthcare might need its own separate set of AI regulations. 

Patients face a greater risk that their confidentiality will be violated when doctors use generative AI to write medical notes, and Petrone noted that in many cases medical staff using generative AI “don’t ask permission from the patient to do so.” 

This means that not only could a patient’s notes be generated without their knowledge, but the patient also has no opportunity to give informed consent over how their data is handled. 

Petrone also recognises the potential risk should AI, rather than human doctors, be used to make important decisions regarding the health of a patient. 

Because of this, in Petrone’s opinion, it is “no surprise that the AMA recommended the Australian government follow a regulation similar to the proposed EU AI Act.” 

She describes the proposed act as the “most comprehensive” set of regulations so far, and believes it could be especially beneficial to the healthcare industry. 

Explaining this, she stated that the act would have “different rules for different risk in order to regulate AI without harming innovation.” 

AI’s use cases in healthcare are likely to be classed as high risk, meaning that any AI software developed for the industry would require a “full assessment before being put on the market.” 

A GlobalData survey found that 58% of respondents believed AI would cause significant disruption to their industry in 2023, and GlobalData’s thematic research predicts AI will reach more than 99% accuracy within the next five to 10 years. 

As AI becomes more advanced and ubiquitous in healthcare, proper regulation and governance must be in place to ensure patients are protected.