By Georgia Summerhill
How can the Rule of Law be maintained in light of the evolving dynamic between technology and personal freedom?
Co-editor Georgia Summerhill investigates this highly topical issue.
Technology and its rapid development have had a massive social impact. Our mobile phones and devices have become extensions of our identity, with most of us having a presence on Facebook, TikTok, Instagram or Twitter. As technology evolves, so we must, by necessity, become adept at navigating, engaging with and understanding such apps. The current era of AI will reshape our existence further still, playing an even greater role in our daily lives. On the one hand, AI offers financial, social and educational opportunity; on the other, there is little regulation of this technology, which may pose a risk to our personal freedom.
As many of you will know from your Public and Constitutional Law classes, Australia is
governed by its Constitution but also by ethical and philosophical doctrines such as the Rule
of Law. The Australian Constitution Centre defines the Rule of Law as “the idea that you
cannot be punished or have your rights affected other than in accordance with a law, and only after a breach of the law has been established in a court of law.” 
Bingham argues that the Rule of Law stands for the principle that “the law must afford adequate protection of fundamental human rights.” We know that the phrase ‘Rule of Law’ is not mentioned in the Constitution but, as the case law tells us, it does form an “assumption” on which the Australian Constitution is framed. It follows that Artificial Intelligence should be regulated in compliance with the Rule of Law; however, recent examples suggest this is not the case. Consider the Cambridge Analytica scandal, which saw the company “acquire private Facebook data of tens of millions of users” and sell these profiles to political campaigns during the 2016 US election; the company was also implicated in the UK’s Brexit campaign. Cambridge Analytica joined forces with researcher Aleksandr Kogan to create a quiz which, owing to Facebook’s policies at the time, allowed third parties to access the data of participants. It is estimated that 300,000 users took the quiz; however, it has since been revealed that, from this data, Cambridge Analytica was able to gain access to the profiles of “87 million Facebook users”. By using this data to target and manipulate users’ political viewpoints, the company invaded their personal privacy and undermined their democratic rights.
A further threat to the Rule of Law and our personal freedoms is the constant surveillance
which results from AI being deployed in our towns and cities. For example, during a recent
trip to London, I was walking along Carnaby Street and noticed a large camera in the middle
of the thoroughfare; a small sign indicated that facial recognition, which uses AI, was
monitoring the area to prevent crime. It struck me that the police were collecting my
personal, physical data - my likeness, gait and mannerisms. This data could be used to
identify me even though I was not in breach of any law, which seems to me to be an infringement of my personal liberty. Having said that, we would all recognise the need to
deter criminals and to ensure the protection of those who are innocent of any wrongdoing; AI could be used to exonerate as much as to prosecute. This issue has also arisen in Australia
with Wesfarmers’ retail chains, including Bunnings and Kmart, pausing their use of facial recognition technology as a result of an investigation being conducted by the Office of the
Australian Information Commissioner regarding privacy concerns. There have been
suggestions that such surveillance is in contravention of the Privacy Act. 
Consumer data advocate Kate Bower stated that “collecting information this way was akin to taking shoppers’ fingerprints and DNA every time they shop.” In addition, the stores were operating this technology without customers’ knowledge, denying them any opportunity to consent to surveillance or to choose not to visit the store. In both cases there is no legislation to protect privacy or to offer guidance to businesses and
institutions on how to store and manage such sensitive material.
Australia is a signatory of the International Covenant on Civil and Political Rights which
states that “the law shall prohibit any discrimination and guarantee to all persons equal and
effective protection against discrimination on any ground such as race, colour, sex, language,
religion, political or other opinion, national or social origin, property, birth or other status.” Australia’s signing of this treaty reflects its desire to uphold Bingham’s interpretation of the Rule of Law. However, in the context of AI, the Commonwealth appears to have fallen short of the Covenant’s requirements. AI systems carry inherent biases that deepen divisions and fuel discrimination. A recent investigation
into Goldman Sachs’ credit card practices showed that the algorithm used to determine creditworthiness for the Apple Card “discriminated on the basis of gender, if not also on race and age.” There was particular discrimination against women including, ironically, Janet Hill, the wife of Apple co-founder Steve Wozniak, who was given a lower credit limit than her husband despite having a better credit rating.
AI has streamlined the way we work and, in some cases, taken on our work for us; as we have recently seen, tools like ChatGPT have become a real issue for schools and universities. There has been much debate, especially within the legal profession, as to whether AI will render the lawyer obsolete. ChatGPT, a large language model, has been hailed as a “game changer… that some believe could threaten the billion-dollar legal profession”. Another AI system, DoNotPay, has been suggested as a replacement for the role of a legal assistant. Once again, this emerging technology raises an issue regarding our personal freedoms and right to work, but it also poses a greater philosophical question: is AI stripping us of the very things which give us purpose and meaning? As a party to the
International Covenant on Economic, Social and Cultural Rights, Australia subscribes to
Article 6(1) which “recognise(s) the right to work, which includes the right of everyone to the
opportunity to gain his living by work which he freely chooses or accepts and will take
appropriate steps to safeguard this right.”  The steps currently being taken in Australia, as well as globally, to safeguard this right appear inadequate. In 2018, the IMF reported that the integration of AI will lead to “income inequality and mass unemployment”. However, the
modelling set out in this report works on the assumption that technology will become the
“perfect substitute for humans” so that the two are “indistinguishable”. AI often replaces human roles by making processes more efficient, and could therefore result in job losses and income disparity, even among highly skilled workers such as those in the legal profession. There is, however, a positive side: the implementation of technology may have an overall beneficial effect, enabling economic growth and reducing the cost of living. The University of Adelaide has stated in a recent paper that Australia must invest in education and research, as employers will need to transition their workforces to new jobs in different areas that draw on interpersonal skills and emotional intelligence.
As we can see from the examples above, AI can bring about great advancements but also poses challenges to our laws and fundamental liberties. Who should be held responsible for protecting these laws, and what are the Australian Government’s plans to ensure our safety? So far, little has been done, in terms of law, to mitigate the risks of AI. There is currently no legal definition of AI in Australia; however, the government has endorsed a ‘working definition’ from the CSIRO. Currently, the law surrounding AI is
described as ‘soft law’: a document called ‘Australia’s AI Ethics Principles’ sets out the pillars by which all AI should be designed and used, including benefiting individuals, society and the environment; respecting human rights, diversity and autonomy; and ensuring AI is inclusive and accessible to all whilst not unfairly discriminating against anyone. The principles also address safety, transparency, accountability and privacy, but they are guidelines rather than enforceable legislation. The current legislation pertaining to technology, with some regulation of AI, is found in state Privacy Acts; however, there is no legislation whose primary purpose is to regulate AI. The government has placed greater emphasis on existing instruments, such as reviewing the Privacy Act 1988 (Cth) and the Data Availability and Transparency Bill 2020 (Cth). There has been some encouraging progress: Australia has been an active member of the international community in creating a uniform response to the monitoring of AI, and papers are being written within the Australian Human Rights Commission regarding the maintenance of personal liberties alongside AI.
As Dr Paul Burgess suggests in his recent paper, our perception of AI is based on the science fiction of popular culture, which limits our understanding and plays on our fear of a dark, dystopian surveillance state akin to ‘Big Brother’ in Orwell’s 1984.
Burgess also points out that the notion of the Rule of Law needs to adapt to the progression of AI; he states, “The control that the concept of the Rule of Law has previously held over an
entity wielding power may not cover these—artificial—entities.”  We all, as users of
technology, should resist the temptation to believe that AI is infallible, even though it is
convenient for us to believe this as we continue to use apps with AI algorithms. We all have a
responsibility, beginning with government, to regulate AI. In the EU, data regulation has been created to bring more transparency and to give users more authority over what data they share and how platforms process it. The General Data Protection Regulation (GDPR), for example, gives EU citizens greater control over their personal data and over how companies store and process it. This type of legislation ensures that companies such as Google, Facebook, Apple and Twitter are held accountable for misuse of data, whether intentional or not, and protects users’ privacy into the future.
The AI community itself – those who invent and promote the technology – has a responsibility to ensure the ethical use of personal data and to educate the public in order to prevent negative social impacts. This responsibility should begin in the development stages, where the ethics of personal freedom, as discussed above, should be considered, and should extend to providing guidance alongside the release of new AI technology. In addition to government regulation and guidance from AI companies, responsibility also
lies with us as individuals. We should be careful about the data we share on social platforms
and mobile apps and always attempt to discern what permissions we are giving, rather than
blindly accepting anything in ‘terms and conditions’ which we might otherwise find too tedious to read.
There are many concerns around the ethics of AI and the potential for the infringement of
personal freedom and possible compromise of the Rule of Law, owing to the biases and
prejudices which are inherent to a technology being trained by humans who possess similar
flaws. Processing personal data without consent raises these concerns further and requires a
uniform and global response to a growing phenomenon in order to protect human rights and maintain the Rule of Law.
 ‘The Rule of Law’, Australian Constitution Centre (Web Page).
 Tom Bingham, The Rule of Law (Penguin Books, 2010).
 Plaintiff S157/2002 v Commonwealth (2003) 211 CLR 476.
 Mike Walsh, ‘Does Your AI Have Users’ Best Interests at Heart?’, Harvard Business Review (online, 2019).
 Privacy Act 1988 (Cth).
 Carrie LaFrenz, ‘Wesfarmers pauses using facial recognition at Bunnings, Kmart’,
Financial Review (online, 25 July 2022)
 International Covenant on Civil and Political Rights, signed 18 December 1972, 999 UNTS 171 (entered into force 23 March 1976) art 26.
 Bernadette Mendoza, Miklos Szollosi and Tania Leiman, ‘Automated decision making and Australian discrimination law’ ANZCompuLawJI 93, 14.
 Marlene Satter, ‘Apple Credit Algorithm Accused of Discriminating Against Women, Including Cofounder Wozniak’s Wife’, BenefitsPro (online, 11 November 2019) <https://www.benefitspro.com/2019/11/11/apple-algorithm-discriminates-against-cofounder-wozniaks-wife/>.
 International Covenant on Economic, Social and Cultural Rights, signed 10 December
1975, 993 UNTS 3, (entered into force 3 January 1976) art 6(1).
 Senate Select Committee on the Future of Work and Workers, The Impact of AI on the
Future of Work and Workers (Discussion Paper 2018).
 ‘Artificial Intelligence Work Group Project Australia’, Gilbert + Tobin (Web Page, 5
 ‘Australia’s AI Ethics Principles’, Australian Government Department of Industry,
Science and Resources (Web Page, 7 November 2019)
 Australian Human Rights Commission, Human Rights and Technology (Final Report, March 2021).
 Paul Burgess, ‘The Rule of Law, science fiction, and fears of artificial intelligence’ (2022) 4(2) Law, Technology and Humans 1, 10.
 ‘The impact of the General Data Protection Regulation (GDPR) on artificial
intelligence’, Think Tank – European Parliament (Web page, 26 June 2020)
Georgia Summerhill is a third year student studying Law and Global Studies. She is a member of the Reasonable Observer subcommittee collating articles for the Monash legal community. Georgia has a passion for Social Justice and enjoys looking at how the law affects our culture, society and personal interactions.