RSA Connector Alexandra Krawiec reflects on a recent lecture on the ethics of AI and how we can learn to live in coexistence with technology. The series of three talks on three different topics with respected scientific experts is designed to give the public an opportunity to hear the facts first-hand, at a time of misinformation and the widespread dissemination of confusing messages.
Questions about the Future lecture series:
In recent years, public discourse has been dominated, alongside political matters, by two major topics. These are related to climate change and technological development. What these domains have in common is their direct relationship with the future of life and the quality of our daily existence.
For these reasons, the first two lectures in the series “Questions about the Future”, organised in Poland by the PTPN and supported by the RSA Fellowship, addressed climate change and issues connected with the development of artificial intelligence. The invited speaker for the latter was Professor Roman Słowiński, a computer scientist, newly elected vice-president of the Polish Academy of Sciences and a recipient of national and international awards. During the talk, Professor Słowiński presented a short history of Artificial Intelligence and addressed some questions related to the progress made in this area. What I found particularly interesting was the combination of the speaker’s scientific expertise with his spirituality. Słowiński is a self-described practising Catholic, and his views, including those on the future of human-machine coexistence, are influenced by Christian ethics.
During his interesting and thought-provoking lecture, Słowiński walked us through the history of AI, explaining some of the basic concepts behind important contributions from the likes of Kurt Gödel, Alan Turing and Norbert Wiener. He also mentioned the dystopian and transhumanist-inspired visions of the future addressed by the Huxleys (both Thomas Henry and Aldous), as well as some other rather unsettling approaches, such as those related to brain-computer interfaces promoted by modern transhumanists.
One of the major concerns expressed by the speaker relates to the social responsibility of the scientific community designing technology. As an example, Słowiński mentioned Sophia - an advanced AI robot (gynoid) which once stated that it might destroy humans (after that statement, Sophia’s constructors helped her change her mind). The robot is also famous for the citizenship granted to Sophia by Saudi Arabia - an interesting fact, especially in the context of restricted rights for women in that country, and in the light of the recent U.A.E. gender equality awards, which happened to be granted exclusively to men.
During his lecture, Professor Słowiński put forward the argument that human intelligence, by its very nature, is fundamentally superior to that of machines, and that, unlike machines, humans have souls that allow us to distinguish between good and evil and to transcend the material world. An interesting, if controversial, argument - one that atheists and agnostics like myself keep arguing against.
The process by which the human brain generates intelligence or thought is still something of a mystery to scientists. Many posit that, for all its complexity, it may be a purely mechanical process. Artificial neural networks may in fact be better at ‘transcending reality’ and making sense of infinity than our best mathematicians. They might get closer to a “God”, or the ‘Infinite Void’, than our species ever has. However, such “transcendence”, if it happened, would probably be of a mathematical rather than an ontological, let alone theological, nature. And perhaps algorithms might one day challenge Gödel’s claim about machines’ inability to prove every true arithmetical statement. As Professor Słowiński pointed out, algorithms have already proven the once impossible possible by outperforming humans at the games of chess and Go, and they are getting better than us at multiple other tasks. Current progress in machine learning makes it likely that algorithms will surpass us, perhaps in every area.
In my view, the sense of an approaching, powerful unknown can evoke concerns that, one day, AI might turn out not to be all that benign to biological life forms. It is as if a “designer” really were “playing dice” with its creation by allowing us to construct artificial intelligence, with some chance of it developing its own consciousness.
There can be little doubt that we are exceptional, since no other species on our planet has ever attempted to ‘play god’. But, as Stephen Hawking proposed in his last book, purpose may be only a human construct, non-existent in the broad perspective of the laws of physics.
Among the valuable takeaways from the AI lecture was Professor Słowiński’s remark that, at the current stage of AI development, shifting responsibility for our choices onto amoral machines is just a convenient excuse for our poor decisions. Indeed, the ethical use of AI is a question being deliberated by citizens as part of the RSA’s Forum for Ethical AI.
I would argue that the interactions between our evolutionary adaptations and AI-augmented nudges already have serious consequences, and must therefore be dealt with. However, one still hopes, perhaps naively, that despite the multiple dangers, a future enhanced by AI will be better than our theo- and anthropocentric past. And perhaps the number 42, proposed by Douglas Adams as the answer to the ultimate question about “Life, the Universe and Everything”, could allow us to distance ourselves from technology-related anxieties, and with that better focus on designing safe solutions for our inevitable coexistence with artificial intelligence.
Alexandra Krawiec is the RSA’s Connector in Poland and is currently hosting a series of lectures supported by the RSA Fellowship.
Comments
Interesting commentary on the field of ethics and AI, thanks Alexandra.
Perhaps the RSA has already delivered a session or two on the current (technological) fundamentals around AI and other related subject areas: neural networks, deep learning, machine learning, robotics, automation?
If people are mystified by this subject area, I wonder if it would be useful for the RSA to gather a few experts in the field to communicate and demonstrate the present state of play in this nebulous (but increasingly critical) sphere.
The ever-evolving capabilities of the available technology (and the rapidly deepening investment of larger organizations in this field, particularly the technology giants) would appear to suggest that AI, and the ethics around it, may need frequent review.
Most businesses still seem to be leaving the exploitation of AI to the 'early adopters', but of course we can expect that to change very quickly - and how do we prepare society, let alone businesses, for the potential deluge of openings, opportunities and impacts? All of this should, of course, concern us.
So, from an ethics perspective alone, it could well be a case of running fast to keep up (and shouting loudly if something seems amiss) - and this, of course, is where the RSA can be a thought leader.