The AIvolution

The topic of Artificial Intelligence needs no introduction. In popular culture – movies and books – various science fiction gurus have predicted and theorized about what could and would happen. Being fictional tales, some of the technologies may not be wholly realistic (at least not yet), or they might have chosen the wrong timeline (perhaps the most notable example is people thinking we’d have flying cars by now).

In research terms, the idea of sentient machines indistinguishable from humans goes back to Alan Turing (1950), and neural networks / deep learning to Warren McCulloch and Walter Pitts (1943).

And people at large, including the leaders of huge companies, seem to be somehow completely oblivious to what is going on – shocked by and unprepared for the newest and greatest capabilities of AI. In recent days and weeks I’ve been reading a bunch of articles about a “new” AI arms race and concerns about safety. One particularly eye-catching headline was how ChatGPT became the fastest-growing app ever, surpassing 100 million users in just two months. I think those concerns need to be repeated over and over, as we can never be too careful in this case.

I have always been fascinated by artificial intelligence (under whatever name it goes by), both through the lens of science fiction and through mathematics / physics, so here are a couple of things I want to comment on, each leading into the next.

The Silent Evolution

For anyone paying attention, the capabilities of modern tools like ChatGPT, Midjourney and so many others are nothing more than proofs of concept – technology finally catching up to theoretical models that we’ve had for decades. I’m not surprised that the general public would be amazed by this. Most people probably didn’t really preoccupy themselves with Google’s AlphaGo winning against a human Go champion in 2017, or OpenAI Five defeating Dota 2 champions in 2019.

There are so many other things going on in our lives and in the world that I wouldn’t expect anyone who’s not a complete geek and nerd (like me) to keep up with it all and extrapolate the potential.

What I am surprised about, though, is just how oblivious even the biggest tech companies seemed to be. Of course there’s a bunch of things going on behind the scenes that I don’t have any access to, but it sure does seem like Microsoft, Google, Meta and others were not expecting the tectonic shift from “this is fun but it’s just games, who cares?” to “holy shit, look at what ChatGPT can do!”

I feel like, more than with any other technology in my lifetime, the evolution of AI was happening right in front of our eyes, and yet very few people took it seriously. It’s as if AI is either a ridiculously stupid spam filter or a straight-up war machine from apocalyptic movies – nothing in between (which is not how progress works, obviously). Which brings me to my next point.

What is Learning Anyway?

Even though the applications of AI models have become increasingly impressive, nobody really knows what’s actually happening under the hood. Why is that?

Here’s a very simplified explanation:

The way we’ve modeled neural networks in computers is by mimicking our own brains. The organ in our heads is made up of about 86 billion brain cells (neurons), which are interconnected through complex structures (synapses). These connections can form and get either weakened or strengthened depending on how often they get used. For example, if two neurons are active at the same time, their connection strengthens (the classic Hebbian principle: “neurons that fire together wire together”). As a result, it becomes easier for one of the pair to activate when the other one does.
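
To make that a bit more concrete, here is a minimal sketch of a Hebbian-style weight update in Python. Everything in it – the activation values, the sizes, the learning rate – is a toy assumption for illustration, not how any real brain or production model works:

```python
import numpy as np

# Hebbian-style learning: "neurons that fire together wire together".
# All values below (activations, sizes, learning rate) are toy assumptions.
pre = np.array([1.0, 0.0, 1.0])    # activity of three "presynaptic" neurons
post = np.array([1.0, 0.0])        # activity of two "postsynaptic" neurons
weights = np.zeros((2, 3))         # connection strengths (synapses)

learning_rate = 0.1
# A connection strengthens in proportion to how strongly both of its
# neurons are active at the same time (the outer product of activities).
weights += learning_rate * np.outer(post, pre)
print(weights)  # only the links between co-active neurons grew stronger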

Think of practicing something: we study, we repeat it, we learn and commit it to memory. The more often we practice a new language, the easier it becomes to use, because the various brain cells form a cohesive network of language understanding.

Such behavior is modeled in machine learning by using nodes (neurons) and weights (synapses). There’s currently no way of making something as complex as a human brain, because of its sheer scale. On the other hand, we’re able to make very specific, task-oriented machine learning models that can outperform humans in a given case. We train those neural networks to “learn” and adapt. We see the end result, but very much like with humans, we’re unable to completely understand the connections that were formed as part of learning. The weights in these models are just numbers, and they are incredibly difficult (impossible?) to interpret.
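
As a small illustration of that last point, here is a toy two-layer network trained on XOR. The architecture, seed, learning rate and step count are arbitrary choices of mine, not any standard recipe – the point is only that what the network “learned” ends up stored as grids of floats:

```python
import numpy as np

# A toy network that learns XOR; all hyperparameters are arbitrary choices.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden "synapses"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output "synapses"
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.ravel())  # typically close to [0, 1, 1, 0] -- it learned XOR
print(W1, W2)       # ...but the "knowledge" is just grids of floats
```

Even with this tiny network, nobody can point at a particular weight and say what it “means”; now scale that up to billions of weights.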

A good analogy to think of here is the standard RGB scale. If I state that Phoenician Red (Tyrian Purple) has the values (102, 2, 60), most everyone will know instantly what I’m talking about. You could think of this as three nodes (Red, Green and Blue), each contributing a certain part of itself (weight) to the final color. This is pretty simple to interpret, telling us that Tyrian Purple is mostly red, somewhat blue and has almost no green. But what if we had 5 color nodes? What if we had 10? Or 100? We might get more precise results, but it would be way more difficult for anyone to make any sense of them.
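
To stretch the analogy in code – the 100-dimensional “color basis” below is entirely made up, purely to show how interpretability degrades with scale:

```python
import numpy as np

# Three weights are easy to read off at a glance...
tyrian_purple = {"red": 102, "green": 2, "blue": 60}
# ...mostly red, somewhat blue, almost no green.

# But with 100 made-up "color nodes", which weights explain the result?
rng = np.random.default_rng(0)
weights = rng.integers(0, 256, size=100)
print(weights[:10])  # just numbers; what each node contributes is opaque
```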

A good showcase of unexpected learning was Microsoft’s Tay chatbot, which in 2016 absorbed the toxic wasteland that is the internet. It seems we’re unable to predict exactly what and how these bots will learn, and are only able to adjust after the fact. Which culminates in the last point I’m going to try and make.

A Terrible Arms Race

The rush for the newest and greatest AI tools could not have come at a more terrible time. We clearly still struggle to figure out just how to accurately train these new tools, as evidenced by the various articles about the missteps of Google Bard and Microsoft Bing – and the list just goes on.

And these companies have now started to double down on competing, because somebody in a position of power somewhere has finally understood that whoever creates an accurate and advanced AI will be able to take over the market, much like Alphabet has done with the Google search engine.

Whenever there’s a race, there’s a tendency to disregard warning signs and throw caution to the wind – especially in a situation that is very likely “winner takes all” (again, look at Google and how nothing can dethrone it). As a result of this reckless pursuit, we’ll see the release of AI tools that are not quite ready to be used, and a push towards faster innovation with less testing.

Adding to the issue is the fact that the tech world in the U.S. has been experiencing layoffs in the tens of thousands, with no sign of stopping – all in the name of a better bottom line and appeasing investors. The problem, however, is that the people who get let go are often the ones in the less glamorous positions. Caring about security. About cohesiveness. About safeguards. Those things don’t typically generate revenue, at least not in the short term, so they are some of the first to go.

To me, this all looks like we’re rushing towards some kind of major disaster. I very much doubt it will be a Skynet-type event from Terminator; I think instead there’s going to be a catastrophe long before we are able to make actually sentient AI. In my view, this will only lead to the collapse of some current leaders in the tech world, and/or perhaps a restructuring of the current companies.

One final thing I’d like to mention here: if we rush into full automation and adoption of AI too quickly, the world will not be ready for it. How will jobs change? Will some become obsolete? Will we need to add new ones? Will we finally have a universal basic income to offset the labor that will be done by automation? These and similar questions need answers, and to my knowledge we don’t have them. Unfortunately, I’m afraid AI technology will outpace the ethical and moral debates, as well as the regulations surrounding it – but hey, I’m sure it’s going to make for a great Netflix or HBO series one day.

