Cultural safety in the arts


Robyn Higgins and I wrote a chapter about cultural safety in the arts for The Relationship is the Project, an exciting new book about community-engaged arts practice edited by Jade Lillie with Kate Larsen, Cara Kirkwood and Jax Jacki Brown.

Image from the cover of the book The Relationship is the Project. The book is on a wooden table.

It is exciting to be in such a fabulous line-up, with folks like Genevieve Grieves writing about working in First Nations contexts; Caroline Bowditch on access and disability; Dianne Jones, Odette Kelada and Lilly Brown on racial literacy; and other contributors including Esther Anatolitis, Adolfo Aranjuez, Paschal Berry, Lenine Bourke, Tania Cañas, Rosie Dennis, Alia Gabres, Eleanor Jackson, Samuel Kanaan-Oringo, Fotis Kapetopoulos, Kate Larsen, Lia Pa’apa’a, Anna Reece, Daniel Santangeli, and Jade Lillie.

Table of contents from the book The Relationship is the Project

Here’s a tiny excerpt from our chapter to whet your appetite.

Why do we need cultural safety?
Australia is a white settler colony in which British invasion and colonisation have institutionalised whiteness. As in other sectors, this history is strongly reflected in the arts, including in the ways our practitioners, organisations and institutions develop and deliver projects in collaboration with artists and communities.
Arts organisations often prioritise and centre whiteness. For people and communities who are not white, these organisations may not be seen as appropriate, accessible or acceptable, which can prevent participation and engagement.

Since I wrote this post, the chapter has been edited and reprinted twice:

In ArtsHub, as "Taking action for cultural safety", and by The Arts Wellbeing Collective in their publication Spotlight: The Arts Wellbeing Collective magazine, Edition 2.




The potential and pitfalls of AI


I wrote a piece for the Australian College of Nursing’s (ACN) quarterly publication. Cite as: DeSouza, R. (Summer 2019/20 edition). The potential and pitfalls of AI. The Hive (Australian College of Nursing), 28(10-11).

Many thanks to Gemma Lea Saravanos for the photo.

The biggest opportunity that Artificial Intelligence (AI) presents is not the elimination of errors or the streamlining of workloads but, paradoxically, a return to caring in health. As machines become better at diagnosis and other technical aspects of care, eliminating the need for health professionals to be brilliant at those tasks, the need for emotional intelligence will become more pressing.

In his book Deep Medicine, cardiologist Eric Topol recounts growing up with osteochondritis dissecans (OCD), a disabling chronic condition. At 62, a knee replacement went badly wrong, and the intense physical therapy protocol that followed left him in devastating pain and distress, screaming in agony. Topol tried everything to get relief; his orthopaedic surgeon advised him to take antidepressants. Luckily his wife found a book called Arthrofibrosis, which explained his suffering: a rare inflammatory complication affecting 2–3% of people after a knee replacement. His surgeon could only offer further surgery, but a physiotherapist with experience working with people with OCD offered a gentler approach that helped him recover. AI could have helped by creating a bespoke protocol that took his history into account, which his doctor did not. The problems of health care won't be fixed by technology alone; the paradox is that AI could help reanimate care, in contrast to the robotic health professionals Topol had to deal with in his quest for recovery.

The three D’s

Nursing practice is being radically transformed by new ways of knowing, including Artificial Intelligence (AI), algorithms, big data, genomics and more, all of which bring moral and clinical implications (Peirce et al., 2019). On one hand, these developments have massive benefits for people; on the other, they raise important ethical questions for nurses, whose remit is to care for patients (Peirce et al., 2019). In order for nurses to align themselves with their values and remain patient-centred, they need to understand the implications of what Topol calls the three D's:

Digitisation – human beings are being digitised through technological developments such as sensors and sequencing, which are transforming health care.
Democratisation – medicine is being democratised as patients' knowledge of themselves becomes their own possession rather than the health system's.
Deep learning – pattern recognition and machine learning applied to this growing body of data.

Data is fundamental to AI

The massive amounts of data being collected (from apps, wearable devices, medical-grade devices, electronic health records, high-resolution images and whole-genome sequences), combined with increased computing capability, allow such data to be analysed and interpreted effectively, and therefore used to make predictions.

Artificial Intelligence (AI) includes a range of technologies that work on data to find patterns and make predictions from them. Alan Turing, widely regarded as the founding father of AI, defined it as the science of making computers intelligent; in health, AI uses algorithms and software to help computers analyse data (Loh, 2018).
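
As a concrete (and entirely hypothetical) illustration of "predictions out of patterns", the sketch below fits a simple classifier to a few invented vital-sign observations and then estimates a new patient's risk of deterioration. Every number and label here is made up; the point is only the shape of the workflow: labelled examples in, learned pattern, prediction out.

```python
# Toy sketch of "predictions from patterns": the model learns from
# labelled examples, then estimates risk for a new observation.
# All data below is invented for demonstration purposes only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one (invented) observation: [heart rate, respiratory rate, systolic BP]
X = np.array([
    [72, 14, 125],
    [80, 16, 118],
    [118, 26, 92],
    [125, 30, 85],
    [76, 15, 130],
    [110, 24, 95],
])
# 1 = deteriorated within 24 hours (labels are invented)
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimate risk for a new (invented) patient
new_patient = np.array([[104, 22, 99]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated risk of deterioration: {risk:.2f}")
```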

Applications of AI
Data are transforming health in two key ways:

Enhancing patient care – from improving decision-making and making diagnosis more accurate and effective, to recommending treatment.
Systemising onerous tasks to make systems more efficient for health-care professionals and administrators.

Applications are emerging across the field: automated diagnosis from medical imaging (Liu et al., 2019); surgical robots (Hodson, 2019); predicting intensive care unit (ICU) mortality and 30-day psychiatric readmission from unstructured clinical and psychiatric notes (Chen, Szolovits, & Ghassemi, 2019); diagnosing skin cancer; detecting heart rhythm abnormalities; interpreting medical scans and pathology slides; and predicting suicide. These systems rely on pattern recognition, having been trained on millions of examples.
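
To give a sense of how one of these applications might work mechanically, here is a deliberately tiny sketch, not drawn from any of the cited studies: a bag-of-words classifier trained on a handful of invented discharge notes to estimate readmission risk, loosely in the spirit of the unstructured-notes work of Chen et al. (2019). Real systems are trained and validated on millions of de-identified records, not six made-up sentences.

```python
# Toy sketch: "predicting" readmission from unstructured notes.
# Notes and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient stable, discharged with follow-up in two weeks",
    "no acute distress, medications reconciled, family support in place",
    "discharged against medical advice, poor medication adherence",
    "frequent presentations, unstable housing, missed appointments",
    "routine recovery, mobilising independently",
    "ongoing confusion, lives alone, no community supports arranged",
]
readmitted = [0, 0, 1, 1, 0, 1]  # 1 = readmitted within 30 days

# TF-IDF turns each note into a word-frequency vector; the classifier
# learns which words co-occur with readmission in the training data
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, readmitted)

# Estimated readmission risk for a new (invented) note
print(model.predict_proba(["discharged home, strong family support"])[:, 1])
```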

These systems overcome the disadvantages of being human, such as getting tired or distracted. And from a knowledge-translation point of view, rather than waiting decades for knowledge to trickle down from research into practice, steps could be automated and made more personalised (Chen et al., 2019).

AI can also be used to better serve populations who are marginalised. For example, we know that not everyone is included in the gold standard of evidence, the randomised controlled trial. Because trial samples are not representative of entire populations, therapies and treatments may not be well tailored to marginalised groups (Chen et al., 2019; Perez, 2019).

Potential for algorithmic bias in health
However, the large annotated data sets on which machine learning models are trained aren't necessarily inclusive. For example, image classification with deep neural networks may be trained on ImageNet, which has 14 million labelled images. Natural language processing requires algorithms to be trained on data sets scraped from websites, usually annotated by graduate students or via crowdsourcing, and this can unintentionally produce data that embeds gender, ethnic and cultural biases (Zou & Schiebinger, 2018).

This is because the workforce that designs, codes, engineers and programs AI is often not from diverse backgrounds, and the future workforce is also a concern, as gender and ethnic minorities are poorly represented in schools and universities (Dillon & Collett, 2019).

Zou and Schiebinger (2018) cite three examples of AI applications that systematically discriminate against specific populations: gender biases in the way Google Translate converts Spanish-language items into English; blink-detection software in Nikon cameras that identifies "Asians" as always blinking; and word embedding, an algorithm for processing and analysing natural-language data, which identifies European American names as "pleasant" and African American ones as "unpleasant".
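
To make "word embedding" bias concrete: embeddings represent words as points in a vector space, and an association score can be computed as the difference in cosine similarity between a name and "pleasant" versus "unpleasant" words. The sketch below uses invented three-dimensional vectors purely to show the mechanics; real audits of this kind use pretrained embeddings such as word2vec or GloVe.

```python
# Toy illustration of measuring association bias in word embeddings.
# These 3-dimensional vectors are invented; real audits use
# pretrained embeddings with hundreds of dimensions.
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

vectors = {
    "emily":      np.array([0.9, 0.1, 0.2]),
    "lakisha":    np.array([0.1, 0.9, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.2]),
}

# Association score: positive values lean "pleasant", negative "unpleasant"
for name in ("emily", "lakisha"):
    score = (cosine(vectors[name], vectors["pleasant"])
             - cosine(vectors[name], vectors["unpleasant"]))
    print(f"{name}: {score:+.2f}")
```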

Similar biases have been documented in crime and policing technologies and in financial-sector technologies (Eubanks, 2018; Noble, 2018; O'Neil, 2016; see also Buolamwini & Gebru, 2018). But how does one counter these biases? As Kate Crawford (2016) points out: "Regardless, algorithmic flaws aren't easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it were being overpoliced by software?"

Systematically biased decision-making can occur with individual clinicians too, but clinicians also draw on clinical judgement, reflection, past experience and evidence.

Digital literacies for an ageing workforce
We have a crisis in healthcare, and in nursing. Technocratic business models, with changes imposed from above, are contributing to "callous indifference" (Francis, 2013). Calls to reinstate empathy and compassion in health care, and to ensure care is patient-centred, reflect how often these features are absent from care.

In the meantime, we have had Royal Commissions into aged care, disability and mental health. For AI to be useful, it is important that nurses understand how technology is going to change practice. Nurses already experience high demands and complexity in their work, so technological innovations driven from the top down risk alienating them and burning them out further (Jedwab et al., 2019). We are also going to have to develop new models of care that are patient-centred, and co-designing these innovations with diverse populations is going to become increasingly important.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91).
Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), E167–E179.
Crawford, K. (2016, June 25). Artificial intelligence's white guy problem. The New York Times.
Dillon, S., & Collett, C. (2019). AI and gender: Four proposals for future research.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor.
Hodson, R. (2019). Digital health. Nature, 573(7775), S97.
Jedwab, R. M., Chalmers, C., Dobroff, N., & Redley, B. (2019). Measuring nursing benefits of an electronic medical record system: A scoping review. Collegian, 26(5), 562–582.
Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., … Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. The Lancet Digital Health, 1(6), e271–e297.
Loh, E. (2018). Medicine and the rise of the robots: A qualitative review of recent advances of artificial intelligence in health. BMJ Leader, 2(2), 59–63.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY: Crown Publishing Group.
Peirce, A. G., Elie, S., George, A., Gold, M., O'Hara, K., & Rose-Facey, W. (2019). Knowledge development, technology and questions of nursing ethics. Nursing Ethics. Advance online publication.
Perez, C. C. (2019). Invisible women: Exposing data bias in a world designed for men.
Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it's time to make it fair. Nature, 559(7714), 324–326.