InnoEthics Week 4
What’s been happening recently in the world of ethical AI and technology ethics?
This week we cover AI anthropomorphism and what it means to be human, Australia’s landmark social media ban and its impact on Big Tech, and my experience at UK Internet Governance Forum 2025.
AI anthropomorphism: how our understanding of AI is redefining what it means to be human
Last month I attended the UCL x DeepMind Workshop: Can AI Understand Us, hosted at UCL. The aim of the workshop was to look at the question of AI understanding and thinking processes from both a technical and a psychological perspective.
The speakers were Prof. Tim Rocktäschel, Professor of Artificial Intelligence at UCL and a Director at Google DeepMind, and Prof. Lasana Harris, Professor of Social Neuroscience at UCL. They each presented their views on the extent to which AI understanding resembles human thinking, and what findings in this area would mean for questions of AI personhood and technology ethics.
Hearing from both professors highlighted how important it is to pay attention to the many facets of any question about AI. Prof. Rocktäschel's focus on how LLMs' performance can vary hugely with the language used in prompts, along with his explanation of how AI can be used to defend against jailbreaking attacks, raised questions of responsibility and accountability for companies developing and releasing LLMs and other AI systems.
Prof. Harris, by contrast, focused on the anthropomorphism of AI, a hugely prevalent topic in AI ethics when it comes to understanding people's relationships with AI systems (especially LLMs). He explained how the brain functions during interactions with AI and why anthropomorphism of AI happens. This raised questions about the extent to which one can empathise with AI, and how anthropomorphism affects our interactions with it.
A key idea I took away from the workshop and want to discuss in future posts is the following: as AI becomes more integrated into our lives, do we have a moral obligation to increase education and digital literacy in order to raise awareness of the potentially harmful consequences of AI anthropomorphism? Moreover, what will the impacts be on people's relationship with and usage of AI, and how does this anthropomorphism of machines affect what it means to be human?
To read a previous article I wrote about AI consciousness, machine sentience, and the topic of AI anthropomorphism, check out the story below!
All eyes point towards Australia as landmark social media policy comes into effect
Turning to Big Tech, technology regulation, and policy more broadly (rather than AI specifically, though it will undoubtedly affect the wider technology and AI ethics and governance landscape): Australia's under-16s social media ban came into effect this week.
The legislation covers the following platforms: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and the streaming platforms Kick and Twitch. The government has said these firms must take reasonable steps to keep children off their platforms, for example through government IDs, face or voice recognition, or 'age-inference' technology.
The Australian government has been keen to stress that children and parents will not be punished for breaching the ban; instead, social media companies will face fines for serious or repeated breaches.
Combined with recent regulation limiting under-18s' access to AI companion sites such as Character.ai, this shows governments shifting their focus to getting children off potentially harmful technology platforms and raising awareness of the consequences of unregulated, unethical technology, with the aim of protecting young people and those most vulnerable in society.
This legislation has been a year in the making, and it will be interesting to see how its success is measured in a year's time, and which countries follow in a similar direction…
BBC article: Australia has banned social media for kids under 16. How will it work?
UK Internet Governance Forum 2025: How do we define the ethics of AI?
Earlier this week I attended the UK Internet Governance Forum 2025, where this year's sessions covered topics such as Digital Fragmentation, Power Asymmetries in Tech, UK Digital IDs, and protecting children's rights online, alongside an interactive workshop on the Ethics of AI.
Here I'm going to focus on some of the thoughts I had following the workshop on the Ethics of AI, led by Stacie Chan and Sal Mohammed. The presentation started with an analysis of how almost every aspect of our professional, social, and private lives has been and will continue to be transformed by AI.
As a result, it is imperative that we consider the ways in which AI is built and trained, where ethics enters the system in real-life applications, and how this translates into choices in policy, advocacy, and governance of emerging technologies.
However, the discussion began even before the speakers had reached slide 3: how do we create a singular definition of AI ethics? A definition that is specific enough to get to the root of the topic, yet broad enough to encompass its myriad aspects.
Having an audience of lawyers, business professionals, journalists, computer scientists, government representatives, researchers, online safety advocates, civil society representatives, philosophers, educators, students, and all other kinds of stakeholders in the AI debate meant discussions touched upon a broad range of subjects.
It was a great atmosphere to be in, debating and discussing topics such as AI's impact on labour, privacy, education, healthcare, the arts, and other aspects of society, and I left the workshop with a range of thoughts about the next steps for AI ethics.
To find out more about UK IGF, check out their website and the preparation they are doing for the UN Global IGF: https://ukigf.org.uk


