The Universal Socket: A Better Way To Understand Artificial Neural Networks


It seems like there’s a never-ending debate about autonomous vehicles and the neural networks that drive them. While opinions vary widely, most fall into either the optimist or the pessimist camp. The most optimistic of the optimists believe not only that building an autonomous vehicle is possible, but that such a vehicle is conscious and alive in some way, or that true artificial general intelligence isn’t far off. The most pessimistic think that not only will Tesla fail at creating Full Self-Driving, but that everyone who tries will fail. Even the milder pessimists, who concede that Tesla and other companies may eventually succeed, point to past failures and current safety concerns.

A diagram of a very simple artificial neural network. Image by Glosser.ca, CC-BY-SA 3.0 license.

As often happens when two extremes are arguing, the truth is somewhere in between. To get at that truth, we need to peel back some of the mysticism and look at what artificial neural networks really are.

For the data scientists out there, I know this is going to be an oversimplification, but at its core, an artificial neural network is a large number of layered self-adjusting functions. In other words, lots of little bits of statistical math adjust themselves to fit the training data (with varying levels of supervision) to produce the right outputs.
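To make that concrete, here’s a minimal sketch in Python of one of those self-adjusting bits of math: a single artificial “neuron” with two numbers that nudge themselves until the outputs match the training data. The toy data and numbers below are invented for illustration, and real networks stack millions of these, but the adjust-to-fit loop is the same basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 100 inputs x and the outputs we want (y = 2x + 1).
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0

w, b = 0.0, 0.0   # the neuron's two adjustable numbers
lr = 0.1          # learning rate: how big each self-adjustment is

for step in range(500):
    pred = w * x + b    # the neuron's current guesses
    error = pred - y    # how wrong those guesses are
    # Nudge w and b in the direction that shrinks the average error.
    w -= lr * np.mean(error * x)
    b -= lr * np.mean(error)

print(f"learned w = {w:.3f}, b = {b:.3f}")  # settles near w = 2, b = 1
```

Nobody tells the neuron that the answer is “multiply by 2 and add 1.” It adjusts itself into that shape because that’s what the training data rewards.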

For example, each pixel of an image could be an input, and the artificial neural network decides whether the image is one thing or another, but only if the training data that was used reflects the real world well. Put more simply, an artificial neural network is only as good as its training data.
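As a rough illustration of that “pixels in, decision out” idea, here’s a toy forward pass in Python. The 8×8 image and the weights are random stand-ins I made up, since a real network would have learned its weights from training data, but it shows how every pixel becomes one input and the output is just a score for each possible answer.

```python
import numpy as np

rng = np.random.default_rng(1)

image = rng.uniform(0, 1, size=(8, 8))  # stand-in for an 8x8 grayscale image
pixels = image.flatten()                # 64 pixels become 64 inputs

# One hidden layer (64 -> 16) and an output layer (16 -> 2 classes).
# Random weights here; training would have fitted them to labeled examples.
W1 = rng.normal(scale=0.1, size=(16, 64))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(2, 16))
b2 = np.zeros(2)

hidden = np.maximum(0.0, W1 @ pixels + b1)     # ReLU: keep positive signals
scores = W2 @ hidden + b2                      # one raw score per class
probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> probabilities

print("class probabilities:", probs)  # e.g. "one thing" vs. "another thing"
```

Nothing in there “sees” the image the way we do; it just multiplies and adds. Whether those multiplications end up meaning anything depends entirely on how well the training data reflected the real world.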

Unlike the simple example in the diagram above, an artificial neural network that’s doing something useful might have millions or even billions of inputs, even more nodes doing the calculating, and a great number of outputs. Multiple networks can then be programmed to work in concert, with different networks handling different tasks.

But at its core, each neural network is still just a bunch of nodes adjusted to produce the right outputs for the training data, and it can’t really be adjusted on the fly to fit even minimally novel situations. Let’s take neural networks that recognize handwritten digits as an example:

As big and complex as they are, they’re only built for deciding which number (0–9) a given digit is. Present them with an edge case and they’ll fail to produce the right answer, or produce no answer at all. But edge cases happen all the time in the real world, even with numbers.
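Here’s a small Python sketch of why that failure is baked in. A classifier built for the digits 0–9 typically ends in a softmax layer, which always spreads 100% of the model’s confidence across those ten options, so even a nonsense input gets assigned to some digit. The random weights below are a made-up stand-in for a trained digit model.

```python
import numpy as np

rng = np.random.default_rng(2)

# A pretend digit classifier: 64 pixel inputs -> 10 digit scores.
# Random weights stand in for weights a real model would have learned.
W = rng.normal(scale=0.5, size=(10, 64))

nonsense = rng.uniform(0, 1, size=64)          # random static, not a digit at all
scores = W @ nonsense
probs = np.exp(scores) / np.exp(scores).sum()  # softmax always sums to 1

print(f"best guess: digit {probs.argmax()} "
      f"({probs.max():.0%} of the model's confidence)")
# "None of the above" isn't an option unless the designers built one in --
# the network has to hand its confidence to some digit, right or wrong.
```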

For example, there’s the common “backwards nine” seen among Chinese populations. While there are other Chinese numbering systems, the Hindu–Arabic numeral system that’s common in the West made its way along the Silk Road and is now in everyday use in East Asia, but with some changes. The most notable is the “backwards” nine (it’s the right way around for them), a “loop on a stick” version of the digit. Chinese populations also use other number systems, like the Chinese characters for numbers (much as we write out “seventy-three” instead of “73” when writing a check) and less common ancient systems based on counting rods, but that’s another story, beyond the scope of this article.

Even if we assume that only Hindu–Arabic numerals will be used, Chinese variants like the backwards nine are hardly an edge case, globally speaking. Chinese languages have by far the most native speakers on Earth, and that’s before you consider other East Asian cultures that might use Hindu–Arabic numerals somewhat differently. The “edge case” may, in fact, be more globally common than the Western norm.

But if you build and train a neural network that doesn’t know about backwards nines, it’s going to run into problems when it encounters them. Sure, you can build an improved version with better training data that incorporates all the number systems and their common regional variants, but the original one, built for Western use, simply can’t adapt itself or learn from context the way we can.

For example, if you clicked the link about backwards nines, it’s obvious from the context of the restaurant check that the digit is a nine. Our brains can adapt and improvise based on context and figure that out. We are conscious beings, not “meat computers,” so we’re not helpless when we encounter something slightly different or unusual.

The Universal Socket: The Best Way To Understand The True Usefulness Of Artificial Neural Networks

Don’t get me wrong. Artificial neural networks really are amazing. They can do many impressive things, as any Tesla owner knows. I’m not trying to put them down or mock them. What I’m trying to do is put them in context, so we can appreciate them for the remarkable tools they are and not be disappointed when they fall short of unrealistic expectations.

If we expect human or even animal levels of intelligence from artificial neural networks, they’ll consistently let us down. They can’t adapt, consider context, weigh cultural values, or improvise the way that we can. They’re simply not built to do what we do, which is basically the following:

A visualization of Colonel John Boyd’s OODA Loop concept. Image by Edwin Moran, CC-BY license.

Artificial neural networks can probably do some of the things in the OODA Loop, but they can’t do all of it. If we expect them to, they’ll disappoint us.

Instead, I propose we compare them to the “Universal Socket,” a tool that self-adjusts to fit different nuts and bolts.

Obviously, there are many tasks that the “Universal Socket,” “Magic Socket,” or whatever you call it can’t perform, but that doesn’t mean it isn’t extremely useful. It’s about the only way to use a power tool to drive a hook or loop, for example. The fact that it has limits doesn’t mean it’s worthless.

Normal “if-then” programming is like a normal socket wrench: each socket fits one specific size of nut or bolt, just as a traditional computer program can never fit anything but what it was built for. Artificial neural networks are like the “Universal Socket.” They’re more adaptable than normal programming, but not adaptable to nearly any situation the way the human mind is.
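If you like, here’s the whole analogy as toy Python. The fixed socket is classic if-then code that fits only the exact cases it was written for; the universal socket fits itself to examples and handles inputs near its training data, but drifts into nonsense far outside it. All the numbers are invented for illustration.

```python
def fixed_socket(x: float) -> float:
    """Classic if-then code: fits only the exact cases it was written for."""
    if x == 1.0:
        return 3.0
    if x == 2.0:
        return 5.0
    raise ValueError("this socket doesn't fit that bolt")

def universal_socket(xs, ys, lr=0.1, steps=2000):
    """Fit y = w*x + b to example pairs; the 'socket' reshapes itself."""
    w = b = 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return lambda x: w * x + b

fit = universal_socket([1.0, 2.0], [3.0, 5.0])
print(fit(1.5))     # ~4.0: an in-between size it never saw, handled sensibly
print(fit(1000.0))  # still returns a number, but with no guarantee it makes
                    # sense -- the adaptability has limits, like the real tool
```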

Like I said, artificial neural networks are amazing, but they’re even more useful if we appreciate them for what they are instead of expecting things that are beyond their reach. Whether driving a car in nearly all conditions and locations is beyond that reach is still up for debate, and I don’t think anybody knows for sure, but we already know they can do a good job of assisting us when used responsibly.


