It took mankind about 200,000 years to figure out how to get an airplane to fly. The event took place Dec. 17, 1903, on a field in Kitty Hawk, N.C., when Orville Wright piloted the airplane he had made with his brother, Wilbur, into the air. The flight lasted 12 seconds and covered a distance of 120 feet. The next two flights went a distance of 175 and 200 feet, respectively. The last flight of the day, with Wilbur at the controls, covered a little more than 800 feet. The plane stayed in the air for 59 seconds, at an altitude never greater than 10 feet, before striking the ground and incurring minor damage.
Although the flights were a significant technological breakthrough, they had absolutely no commercial value. Other forms of transportation could go farther and travel for longer periods.
Almost two years later, having learned from their mistakes, the Wright brothers were able to keep an airplane flying for 38 minutes, covering a distance of 25 miles at a speed of 40 mph. The aircraft was 30 percent faster than a horse at full gallop, but still slower than a passenger train running at an average speed of 60 mph. Commercial viability was in sight.
In June 1919, 16 years after the Wrights’ first flight at Kitty Hawk, Capt. John Alcock and Lt. Arthur Whitten Brown of the United Kingdom flew 1,690 miles from Newfoundland to Clifden, Ireland, in a little more than 16 hours. Fifty years later, it took 73 hours and 5 minutes for the crew of Apollo 11 to fly the 238,857 miles from Cape Canaveral, Fla., to the moon.
Think about it. It took mankind 200,000 years of technological progress to get a plane in the air, but only 66 years after that to travel to the moon. It’s pretty miraculous, in a way. Technology does not just move forward. It careens ahead exponentially.
Figure 1: Mankind was able to travel from 120 feet to 238,857 miles by air in 66 years
What was the stuff of science fiction in my childhood—think video conferencing—is commonplace today. And it’s cheaper than we ever imagined possible. The future possibilities of technology are limitless. Anything can come to be, given enough time.
But, when it comes to considering artificial intelligence and machine autonomy, there are still a lot of people out there who think that the notion of machines taking over most of the jobs that humans do is far-fetched, if not daffy. According to conventional wisdom, history’s pattern is that for every job eliminated by technology, more are produced. And should machines displace humans from the workforce entirely, that day is a long way off.
That’s what my grandmother thought. She was born in 1900. If you had told her when she was a teenage girl that the flying contraption she was reading about in the newspapers would lead to putting a man on the moon in her lifetime, she would have thought you crazy, and rightly so. She had no historical precedent to think otherwise. Remember, in her day horses still pulled plows. Everything except birds and hot air balloons was land-bound. Yet, despite her disbelief, there came a time when she sat in front of a television to watch Neil Armstrong descend from the Apollo 11 Lunar Module to put the first human footprint on the moon. She thought it would never happen. And yet, it did.
Yes, the historical pattern has been that technology eliminates jobs and creates new ones for humans to do. When the automobile replaced the horse, the out-of-work blacksmiths went to work in factories that filled the ever-expanding industrial landscape. And, with each wave of technological innovation, there were the harbingers of doom predicting the demise of human labor. Each generation of naysayers uttered, “This time it’s different.” Conventional wisdom points out that it never is.
That was then and this is now. This is the time of the autonomous machine. This time it is different.
The Rise of the Autonomous Machine
Machine autonomy is the ability of a device to make decisions and conduct itself independently of ongoing instruction, much the way a human does. The most telling example is an automated stock trading program used by financial investors. The program is continually at work buying and selling stock with the goal of turning a profit. Certain safeguards are built in to prevent disaster. But for the most part, the application is left on its own to achieve its goal: to make money.
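To make the idea concrete, here is a minimal sketch of what such an autonomous trading loop might look like. Everything here is hypothetical: the price feed is simulated, and a real system would connect to a broker API. The point is the structure: the program pursues its goal on its own, with a built-in safeguard that halts trading if losses exceed a threshold.

```python
import random

def get_price(symbol):
    """Hypothetical price feed; a real system would query a broker API."""
    return 100 + random.uniform(-5, 5)

def trade_autonomously(symbol, cash, max_loss, steps=100):
    """Buy below the recent average, sell above it, and stop entirely
    if total losses exceed the max_loss safeguard."""
    start_cash = cash
    shares = 0
    history = []
    for _ in range(steps):
        price = get_price(symbol)
        history.append(price)
        avg = sum(history[-20:]) / len(history[-20:])
        # Safeguard: halt all trading if the portfolio falls too far.
        if cash + shares * price < start_cash - max_loss:
            break
        if price < avg and cash >= price:    # price below trend: buy one share
            shares += 1
            cash -= price
        elif price > avg and shares > 0:     # price above trend: sell one share
            shares -= 1
            cash += price
    return cash + shares * get_price(symbol)  # final portfolio value
```

Once launched, no human tells the program when to buy or sell; the loop decides on its own, step after step, which is exactly the independence from ongoing instruction described above.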
Until very recently, machine autonomy was confined to software applications such as the stock trading example described above. However, modern robotics makes it so that not only can a machine “think” autonomously, it now also can move autonomously. The most apparent example is the self-driving car.
Integrating physical and cognitive autonomy into machine behavior changes the game. In prior days, a taxi required a human driver to get the passenger’s destination, find the best route, drive the vehicle and collect payment for the ride. All a passenger needed to do was jump in the back seat and tell the driver where to go. The human driver figured out the rest. As technology improves, the passenger still will jump into the back seat, but in an autonomous vehicle that gets routing information and traffic conditions from a wireless connection to the internet.
The ramifications are profound.
For example, let’s imagine we have a diseased tree in our front yard that needs to be removed. Today, we call a landscaping company. The tree removal crew shows up, we point to the diseased tree and ask the foreman to remove it. That’s the only instruction we need to give. The landscaping crew has the ability to act autonomously to do whatever is necessary to remove the tree safely.
Now, imagine the crew brings along a robot to help with the task. At this point, it’s safe to say that current robotic technology requires a lot of human guidance to remove the tree; the robot is more an aid than an independent worker. The robot might have no other ability than to pick up tree cuttings and take them to a truck to be hauled away. Or, the robot might be able to handle a tree-cutting tool to assist with demolition. Still, the independence of the robot is limited. Either its activities are confined to narrow, repeatable tasks, or it acts in an ad hoc manner, taking one instruction at a time from a human. The robot is dependent on human instruction to get work done.
However, let’s imagine that robot technology is on the trajectory of exponential innovation that has been the historical norm. Remember, in the scheme of 200,000 years of Homo sapiens activity, 66 years from a field in Kitty Hawk to the moon is but a blip in the historical timeline. Autonomous lumberjack robots are entirely possible.
You call up a landscaping company. A crew of three lumberjack robots shows up in an autonomously driven vehicle. These robots know everything there is to know about tree removal, and they can do all the physical activities required. The “lead” robot asks you which tree needs removing. You point to the tree. The crew figures out the rest on its own, right down to charging your credit card for work done. Plausible? If you answer no, pretend you are my grandmother and you’ve just been asked whether you think it’s plausible that a man will walk on the moon in your lifetime.
So the question remains: what happened to the human crew? Did they become data scientists? Don’t answer this question yet. We’ll get to it in a moment. First, we need to talk about the next milestone on the road to complete machine autonomy: artificial general intelligence (AGI).
Narrow AI and AGI
In the tree-removal scenario described above, all robotic activity—autonomous or otherwise—was confined to a single task: removing a tree. The scope of work is limited. Artificial intelligence that operates within such a limited scope of behavior is called narrow AI.
Examples of narrow AI are cutting down a tree, trading a stock, finding a date on Friday night, composing a song, making a pizza, etc. We’re going to see a lot of narrow AI emerge in the next few years. Venture capitalists such as Khosla Ventures, Greylock Partners and Goldman Sachs are already putting money into advancing narrow AI technologies. However, narrow AI is but a stepping stone to get across the pond to the final destination: general AI.
Think of it this way: If Kitty Hawk is the starting point of AI, the first transatlantic flight is narrow AI. AGI is landing on the moon.
What will general AI look like? Let’s go back to the landscaping example.
We’re back in your front yard, sitting in some lawn chairs having iced tea on a hot summer’s night. A driverless vehicle shows up. Two robots emerge. One approaches you and asks your permission to deposit $500 in your PayPal account in exchange for cutting down a diseased tree in your front yard. “OK,” you say, “but how do you know the tree is diseased, and why are you giving me $500?”
While the second robot gets to work, the first robot responds, “The county keeps drones in the air 24/7 analyzing the landscape. A drone did a spectral analysis of your property and noticed the diseased tree. In addition to reporting back the location of the tree, the drone also included a full biological profile. If you don’t remove the tree, the disease will spread. That’s the bad news. The good news is that it’s a cherry tree. And it seems that only 10 percent of the tree is diseased. The rest is perfectly good wood. The price of cherry wood is quite high on the commodity markets right now. Also, there’s a legal statute that requires the county to reimburse property owners when a private landscape is altered for general good. So, the county sold a futures contract on the wood it can reclaim from your tree. The $500 we want to put in your PayPal account reflects your share of the revenue received from the futures contract plus the amount of the reimbursement entitlement required by law.”
This is what AGI looks like. AGI is the ability of a machine to think in broad terms, drawing on a multitude of knowledge domains. Combine AGI with the advances in physical robotic capabilities expected on the horizon, and machine autonomy will not only be possible, it will exceed human capability.
If you think AGI is a fantastic notion, go talk to Yves Bergquist, CEO over at Novamente. The company is actively working to improve AGI for general commercial use. One application Novamente is developing uses AGI to determine, if not create, movie scripts with a high probability for profitability. To quote Bergquist: “A story is an algorithm.”
Making a modern-day Hollywood blockbuster is a billion-dollar undertaking. Given the choice, studios would rather not roll the dice. However, when it comes to scripts, predicting a winner is difficult. Of all the human endeavors that are tough to emulate with artificial intelligence, creative storytelling ranks high. Writing a script requires working in a multitude of knowledge domains: story, character, plotlines, scene design and historical continuity, to name a few. Narrow AI is not suited to the task. There’s too much to consider. You need AGI. In a way, it’s surprising to think of Hollywood as the perfect incubator for developing real-world AGI. But it makes sense. AGI might produce a few flops. But, it’s a whole lot safer to lose at the box office than to lose on the battlefield.
Given companies such as Novamente, the entertainment industry might well become the incubator in which AGI is perfected. Then, if the trend in rate of technological innovation continues, eventually AGI will become viable everywhere. Then what? What happens when a machine can do everything a human does only faster and better?
A Turkey’s Life
Understanding the long-term impacts of a given technology has not been one of the highlights of human endeavor. As cars became commonplace, few people considered the possibility that too many would create too much exhaust pollution and thus threaten the health of the earth’s population. And nobody really thought about what was going to happen to the displaced blacksmith who was no longer needed to make shoes for a diminishing number of horses. Events just ran their course with little or no forethought.
But, it’s getting better. Some people do think ahead. Military planners consider the improbable, as do financial analysts. It’s called risk analysis. These are the people who get paid to think outside the box, so to speak. They understand that even small possibilities might happen. Whether they are heeded or not is another story.
Today, machine autonomy is real. The possibility exists that in our lifetime automated AI will indeed be able to do everything a human does. And, the machine will do it better at less expense. There will be no new jobs for humans to do as old ones are destroyed. A machine will be able to do the new job faster and cheaper—even those jobs that require a significant amount of AGI. Then, what will happen to the humans?
We have to think about it today!
Sadly, few are. We are not hearing any articulation in general industry or in government, at any level, of ideas to address the impact of automation on human employment as machine autonomy and AGI become more prevalent in the economy. It seems as though most people are relying on the old logic of conventional wisdom.
Conventional wisdom is easy to accept. The U.S. economy is reported to be at practically full employment levels. Employers are complaining about how difficult it is to find qualified workers. The stock market is achieving historic heights. Given history and current events, of course it’s reasonable to say there always will be jobs for humans to do.
This assertion works until the day fully autonomous machines appear on the landscape. Then, conventional wisdom will be that which is uttered by turkeys on the day before Thanksgiving.
What do I mean? Allow me to elaborate using a borrowed analogy.
Imagine you are a turkey on a farm. Every day the farmer comes by to feed you. This goes on for one week, two weeks, 10 weeks. Given your history, you are completely justified in predicting that your future will be one of satisfying meals. It’s always been this way; how could it be otherwise? You go day in and day out predicting that the next day will be full of food and sunshine. You are correct … right up until the day before Thanksgiving.
In other words, there are things that happen independent of historical precedent. The trick is to imagine what such events might be.
Everybody agrees that AI is here and that machine autonomy is growing. The conventional wisdom, based on historical observation, is that there have always been jobs as technology grows. Many think the full replacement of human labor by autonomous machines is a possibility, but one that is coming in the far, distant future. When Wilbur Wright took off in 1903, my grandmother’s distant future was 1969. Those 66 years passed in no time. She lived to see a man land on the moon. Those of us alive today are seeing the beginnings of true machine autonomy appearing on the technology landscape. A world of full machine autonomy is more than probable in the lifetimes of our grandchildren. It will happen. What will be the impact? What will they do if their labor is no longer required?
Back in 1903, as automobiles began to chug along the planet’s roads, we missed predicting the possibility of air pollution and global warming. Today, will we allow conventional wisdom to prevail and miss preparing for the impact that full machine autonomy will have on human employment? Or will we plan ahead? Hopefully, we’ve learned from our mistakes.
I’ll leave it to you to decide.
Author’s Note: The analogy of prediction in terms of historical precedent is taken from the book “The Black Swan,” by Nassim Nicholas Taleb.