The Beginning of AI Revolution & Human Evolution – Part 2

Shankar Balakrishnan
8 min read · Apr 9, 2023

Thanks for coming here to read, especially if you have read the first part of this blog series.

It’s going to be a challenge to keep pace with everything happening around #ArtificialIntelligence, #AI, or #AGI, or whatever newer terms keep cropping up. I have had to split this blog series into further parts; this part focuses only on AI Governance and the AI Adoption Curve.

In subsequent parts I will write about AI Quick Learning & Scenarios, and about AI Philosophy & Spirituality. I will try to provide a different perspective on those topics, one still connected to this series’ title. (Looks like the blog series is turning into something like The Matrix trilogy. Whew! I should really join their fandom.)

Stages of Revolution — In this blog

AI Governance

As I wrote in the first part, many of the fears and negative lines of thought seem to have triggered heated debate in just a week or so, starting with a few folks signing a call for a moratorium and others going so far as to propose shutting AI down. As I posted in one of the early LinkedIn discussions, the rate at which AI development, along with the related debates, news, perceptions, and social media posts, has picked up, dividing many into extreme views, has overwhelmed people. If you aren’t a social media buff, or haven’t been following the debate, I suggest you do so, at least cursorily, to know what the debate is about and where the collective consciousness of different thoughts and perspectives will lead us.

The debate rages on; a few links for your reference: [ Yann LeCun 1 ] [ Yann LeCun 2 ] [ Richard Socher ] [ Andrew Ng ] [ Melanie Mitchell ] (I have deliberately avoided the doomsayers in favor of folks who bring more realistic perspectives.)

I have been posting lately, especially on Twitter, to bring some alignment and balance (for my own satisfaction, of course). Those hyped-up existential risks, and the ensuing heated debates that trigger emotional and egoistic reactions, could actually be more destructive than AI itself.

Our beliefs and reactions have been causing more problems than weapons of mass destruction.

The book The Logic of Self-Destruction: The Algorithm of Human Rationality by freelance author Matthew Blakeway explores this topic. (This book review can give you a summary as well.)

We are already addicted to technology (most of us, I presume). AI was in use even before ChatGPT went viral with its conversational AI; Tesla, for example, uses AI for its self-driving capabilities. Just as compliance frameworks for genetic technology have been framed or are evolving, and just as regulations on cryptocurrencies are evolving with their own struggles, a world governing council for AI will be needed. Whatever term we ascribe to it (anything except a moratorium), there is a need for governance and regulation to guide all the pillars: AI creators, AI consumers, and AI itself. We cannot pause or stop it, especially at the current stage; we have never done anything like that before in the technology world.

It would be wiser to adopt AI, weigh the risks as they come, and bring in self-governance for all the pillars.

Evolve further into a ‘shared and distributed’ alignment at a larger level, built around a foundation of human-centered principles.

This will regulate and bring some semblance of order toward building a responsible and purposeful AI or AGI, very much like how cybersecurity principles and GRC have evolved to help manage known and unknown, perceived and real risks.

Isaac Asimov, one of the most famous science fiction authors, introduced the Three Laws of Robotics in his ‘Robot’ series of stories, beginning in the 1940s, as a safety mechanism applied within his fiction.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
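As a thought experiment, the Laws read like a priority-ordered constraint check, which can be sketched in code. This is purely my own toy illustration under a deliberately simplified reading; the `Action` fields and the `permitted` function are invented for this sketch, not any real safety API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action a robot might take, described by its effects."""
    harms_humanity: bool = False
    harms_human: bool = False
    is_ordered_by_human: bool = True
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    """Check the Laws in priority order: Zeroth, then First, Second, Third."""
    if action.harms_humanity:            # Zeroth Law veto
        return False
    if action.harms_human:               # First Law veto
        return False
    if not action.is_ordered_by_human:   # Second Law: obey human orders
        return False
    # Third Law: self-preservation is permitted only because
    # no higher-priority law has objected above.
    return True

print(permitted(Action()))                     # a harmless, ordered action
print(permitted(Action(harms_human=True)))     # vetoed by the First Law
```

The point of the sketch is the ordering: each law only gets a say if every higher-priority law is satisfied, which is exactly how Asimov’s hierarchy resolves conflicts between them.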

Source: https://i.gifer.com/4n0H.gif

Governance is a widely used term with different connotations. Having spent my career in software service delivery, I am familiar with managing risk and compliance through strong governance mechanisms. Hence my initial thoughts on AI Governance sprang up years ago when I wrote a blog on it. My current thinking would be to structure it as below:

Pillars of Governance, Foundations and Alignment

For instance, to keep regulation and control we can:

  • Reset the AI at each pillar
  • Build learning and unlearning into the model
  • Adhere to a set of ‘AI moral codes’

One such rule could be: avoid 100% AI in any business scenario, allowing at least a small percentage of human involvement within any ‘AI job’.
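Such a rule is easy to make mechanical. Here is a hypothetical sketch of how the ‘no 100% AI’ check might look; the threshold, the `compliant` function, and its parameters are all my own illustration, not part of any real compliance framework:

```python
# Minimum fraction of tasks in a workflow that must involve a human,
# per the 'small percentage of human element' rule above. The 5% figure
# is an arbitrary placeholder for illustration.
MIN_HUMAN_FRACTION = 0.05

def compliant(total_tasks: int, human_reviewed: int) -> bool:
    """Return True if the workflow retains the required human element."""
    if total_tasks == 0:
        return True  # an empty workflow trivially complies
    return human_reviewed / total_tasks >= MIN_HUMAN_FRACTION

print(compliant(100, 5))   # exactly 5% human-reviewed: passes
print(compliant(100, 2))   # effectively '100% AI': fails
```

A real enforcement mechanism would of course need auditable definitions of ‘task’ and ‘human review’, but even a crude ratio like this makes the rule testable rather than aspirational.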

Self-regulation at the consumption level can be enforced to de-risk areas where we want to avoid a strong proliferation of AI use.

AI learning — like our mind — feeds on what is fed to it, and gets reinforced.

The social media ‘recommendation’ and ‘attention-grabbing’ algorithms are based on similar psychology. When the internet became a sensation, I remember similar hype about how it would change the world and the benefits of its adoption: that it could help remove illiteracy by remotely connecting the best universities with those who could not travel far and wide. While some of these benefits did materialize over time, consumption shifted toward e-commerce, entertainment content, news, and media, which proliferated across smart gadgets as well. Hence regulation of AI, at many levels, will be necessary.

Bringing in governance will surely take some time, and there will be risks associated with any kind of governance too. Ineffectiveness at any layer, scientific or political, will create more problems to deal with. We can either choose to live by fear-based principles or bring a positive mindset: accept, manage at our own level, and move on.

Source: https://tenor.com/tyw5.gif

“You have to let it all go, Neo. Fear, doubt, and disbelief. Free your mind.”

Morpheus, The Matrix

I am sure we can take help from AI itself here to align our thinking and form an AI World Governance Forum. After all, we all agree that AI can be more intelligent and faster, and can learn; if we can teach AI (and humans!) certain ‘moral’ principles, with adherence serving as a trust factor when AI interacts with humans or with other AIs, there could be a balanced way forward. Or better still, like in The Matrix, we could keep a persistent learning model (updated like security patches) available for all AIs to upload and comply with: a set of common tenets developed and maintained by the governance forum. (And these can be localized as well.)

Source: https://youtu.be/w_8NsPQBdV0
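To make the ‘tenets as security patches’ idea concrete, here is a toy sketch. Every name in it, from `CURRENT_TENETS` to the `Agent` class and its fields, is a hypothetical illustration of the mechanism, not a real system or API:

```python
# A versioned policy bundle, as a governance forum might publish it.
# The tenets and the 'locale' field (for the localization idea above)
# are invented examples.
CURRENT_TENETS = {
    "version": 7,
    "locale": "global",
    "tenets": [
        "do not deceive users",
        "disclose AI identity on request",
        "defer to human override",
    ],
}

class Agent:
    """A toy AI agent that carries its own copy of the tenets."""

    def __init__(self, tenets: dict):
        self.tenets = tenets

    def is_compliant(self, published: dict) -> bool:
        # An agent is compliant only when its local tenets are at least
        # at the published version, like being up to date on patches.
        return self.tenets["version"] >= published["version"]

agent = Agent({"version": 6, "locale": "global", "tenets": []})
print(agent.is_compliant(CURRENT_TENETS))  # stale: needs to 'patch' its tenets
agent.tenets = CURRENT_TENETS
print(agent.is_compliant(CURRENT_TENETS))  # up to date after the upload
```

The version check is the whole trick: compliance becomes a property an agent (or anyone interacting with it) can verify mechanically, rather than a promise.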

Some of this is nothing new, but humans will have to adopt and adapt to a cohesive and coherent model, as technology evolves faster than we can keep pace with it. Writing this very blog has been a roller-coaster ride: drafting it between other aspects of life and struggling to keep it relevant as of today. During my research, I noticed that many people have advocated along similar lines of thought, published similar blogs and social media posts, and also provided a good number of resources for us to start using.

A few pointers for your reference:

[ TechTarget ] [ IBM AI ][ WEForum ]

[ Steve Nouri ][ Sam Altman ][ Robin Hanson ][ Melanie Mitchell ]

AI Adoption Curve

I think the AI Adoption phase will have to be marked pre-ChatGPT and post-ChatGPT.

Some of the adoption studies and statistics published pre-ChatGPT indicate where the focus lay: on AI/ML research and the industries that were adopting it.

This O’Reilly study from 2021/22 observed that “when expectations about what AI can deliver are at their peak, everyone says they’re doing it, whether or not they really are.” A McKinsey report indicated that “telecom, high-tech, and financial-services firms are leading the way in overall adoption,” while an IBM report said, “Automation use cases are at the forefront as companies use AI to stay competitive and operate more efficiently.” And a NASSCOM report on India mentions that while AI investment in India was ~1.5% of worldwide spend, India remained near the top in AI skills and publications.

Post-ChatGPT, there is already plenty of content being published by individuals (Steve Nouri) and organizations (with the help of GPT-4 itself) on which jobs could potentially be replaced by AI. With the perception of #ConversationalAI shifting from mere chatbots to extremely high-quality language conversations that solve problems, it becomes a productivity booster at a new level. There will definitely be early adopters, with some waiting to jump in during the next phase. The Gartner hype cycle captures the pattern that any such technology change triggers; the time scale in the graph below only seems to be getting compressed.

Source: Wikipedia. By Jeremykemp at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10547051

The impact of this adoption on the culture of the world and its civilization, however long it takes, is going to be tremendous. While the world will move on, there will be a negative side to it. The earlier industrial revolution drew criticism for ecological collapse, mental illness, pollution, and for valuing profits and ‘standards of living’ over life and well-being. I am not referring here to doomsday predictions about AI, or to the heated debate I mentioned earlier in this blog, but to the cultural and civilizational impact that arises whenever new technology gets adopted, in this case AI. So it is important that, unlike in past industrial revolutions, we show wisdom by learning from the past, bringing the right understanding and correct thinking to technology and the use of AI, and also working toward a better quality of life as a new generation takes it on. It is important to build wisdom bridges between generations so humanity can evolve as such changes shape civilization. I want to emphasize this aspect in this section, because only AI adoption will let us learn the actual risks and handle them better. This is the challenge for the collective consciousness of humanity.

In the next part we will cover the remaining topics: the importance of learning to use AI at a personal level, and certain scenarios for quick adoption.

Blogger’s view — Interaction with ChatGPT

I will leave you here with a blogger’s view of AI: what I was thinking as I started this blog, and what ChatGPT itself would have created versus what I wanted to bring into view.

(Edited: Sharing the ChatGPT Link here — https://chat.openai.com/share/a5d99f34-7dd2-4da2-9ffc-643728f72b71 , and deleting the screenshots)
