Prevent AI from outsmarting us – but will it?

Review of Stephen Hawking’s “Brief Answers to the Big Questions” - Part 6 and end

In the chapter titled “Will artificial intelligence outsmart us?”, Hawking again claims that it is a “triumph” that we as human beings, “who are ourselves mere stardust, have come to such a detailed understanding of the universe in which we live.” (183) Again, he doesn’t clarify what he means by “triumph”. Triumph over what, exactly? What’s the triumph in just “understanding” stuff? For what purpose?

(And, by the way, his choice of word is, let’s say, interesting. One of Hawking’s colleagues, the Nobel Prize-winning physicist Eugene Wigner, writing in 1960 about the same phenomenon, spoke not of a “triumph” but of a “miracle”. Here’s what he said: “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.” It is telling that Hawking does not so much as mention Wigner and his words in his book, let alone engage with them.)

Whatever he thinks the purpose behind our “triumph” is, Hawking is worried. Computers might become independent of us. What then? Artificial intelligence (AI) might at some point be able to “recursively improve itself without human help.” (184) What then? However, here Hawking’s failure to address the issue of purpose becomes a real stumbling block in his narrative. For why on earth should a machine want to “improve” itself? For what purpose? It would have to have a sense of purpose, and that can only be imputed by humans. By some standard, the machines would have to be able to measure their attainment relative to that purpose. That standard, too, would initially have to be imputed by humans. Granted, once that is done, it is in principle possible that AI machines will be able to improve themselves. But they will always be working towards a goal or goals set originally by humans.
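To make that point concrete, here is a minimal, purely illustrative sketch (my own, not anything from Hawking’s book) of a “self-improving” routine. Notice that both the objective and the yardstick for what counts as “better” are supplied by the human designer; the machine merely climbs towards a target it was given.

```python
import random

# Human-imputed purpose: both the objective function and the yardstick for
# "better" are chosen by the designer, not by the machine itself.
def objective(params):
    # Illustrative target: the machine is told to get as close to 42 as possible.
    return -abs(params - 42)

def self_improve(params, steps=1000):
    """Hill-climbing loop: the 'self-improvement' is only ever measured
    against the human-supplied objective above."""
    score = objective(params)
    for _ in range(steps):
        candidate = params + random.uniform(-1, 1)  # propose a change to itself
        candidate_score = objective(candidate)
        if candidate_score > score:                 # "better" as defined by humans
            params, score = candidate, candidate_score
    return params

print(self_improve(0.0))  # converges towards 42, the goal set by its designers
```

However much more sophisticated real systems are than this toy, the logical point stands: “improvement” is always improvement by some standard, and the initial standard comes from outside the machine.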

Hawking is worried that governments are “considering starting an arms race in autonomous-weapon systems that can choose and eliminate their own targets.” (186) On the other hand, “[i]n the medium term, AI may automate our jobs, to bring both great prosperity and equality.” (187) “In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity.” (188) 

Thus Hawking is not altogether wrong when he states that “we will need to ensure that the computers have goals aligned with ours.” (184) For if those goals “aren’t aligned with ours we’re in trouble.” (188) However, who does he mean when he says “our” goals? Surely not the people who voted for Brexit, even if they constituted a majority. Hawking’s barbs against Brexit in his book make clear that he wouldn’t agree with the great unwashed being allowed anywhere near the decision-making process regarding AI. But who exactly does he mean? It will have to remain a mystery. That’s a pity, because Hawking insists that the task is hugely important. So important that, with regard to AI, “we” (who?) “should . . . plan ahead and aim to get things right the first time, because it may be the only chance we will get.” (196)

Hawking is here making the perennial mistake of “the expert”: “We” know best. Let me say to anyone who thinks thus: No, you don’t. Or rather: You may know better than anyone else about a particular subject, but even you don’t know enough to give you the authority to make a decision with such far-reaching implications for everyone else’s lives. Not nearly enough. And neither does a committee of you and people like you. It’s this kind of hubris that was the downfall of the Soviet Union, the very fitting death knell of which was the disaster at the committee-planned and -led nuclear reactor of Chernobyl. The same kind of disaster awaits us with regard to Covid and climate change, because these issues are being tackled with the top-down decisions of committees of “experts”.

Instead, let the markets decide, with the help of insurers. Today, experts are (over-)paid with money fleeced from taxpayers who have little if any say in how their money is used. They operate in a top-down, government-controlled setting. In a market setting, experts would be employed by insurers, which have to be profitable in a competitive environment and will therefore have quality controls (remember those from the creation week?) in place even for their best experts. Then the experts will have “skin in the game”. In a top-down, government-controlled setting, they have very little skin in the game, if any, as long as they keep to the procedural rules. In a market setting, they will be less prone to hubris and more likely to make decisions that serve their paying customers. That creates much more confidence in good decision-making than pure reliance on good will and benevolence.

Later in the chapter comes a sentence that is surprising, considering what Hawking said earlier about human achievement so far being a “triumph”. He welcomes the fact that, at the Leverhulme Centre for the Future of Intelligence, “people are studying . . . the future” instead of history, which, “let’s face it, is mostly the history of stupidity” (190).

He tells us why he tries “to figure out how the universe works, using the laws of physics.” Namely: “If you know how something works, you can control it.” (200) This is of course nothing other than a reiteration of God’s famous first commandment to humankind: subdue the earth. “Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.” (Genesis 1:28, NIV)

However, in this context it is worth pointing out something the economics professor Hans-Hermann Hoppe has said: technology comes before science. According to Hoppe, there is “a fundamental misconception regarding the interrelationship between science on the one hand and engineering or technology on the other.”

Hoppe continues:

“This is the popular misconception of regarding science as coming before, having priority, and assuming a higher rank and dignity vis-à-vis engineering and technology, as only secondary and inferior intellectual enterprises, i.e. as mere “applied” science. In fact, however, matters are exactly the other way around. What comes methodologically first, and what makes science as we know it at all possible and at the same time provides its ultimate foundation, is human engineering and construction. Put plainly and bluntly: Without such purposefully designed and constructed instruments such as measuring rods, clocks, planes, rectangles, scales, counters, lenses, microscopes, telescopes, audiometers, thermometers, spectrometers, x-ray and ultrasound machines, particle accelerators, and on and on, no empirical and experimental science as we know it would be possible.

Or to put it in the words of the great late German philosopher-scientist Peter Janich: “Handwerk” comes before and provides the stable foundation and groundwork of “Mundwerk.” Whatever controversies or quibbles scientists may have, they are always controversies and quibbles within a stable operational framework and reference system defined by a given state of technology. And in the field of human engineering, no one would ever throw out or “falsify” a working instrument until and unless he had another, better working instrument available.”

So, the future may depend more than ever on science and technology, but also on economics and engineering. Hawking, however, like most scientists, would probably be horrified at the notion that engineering and technology come before science. Yet until Hoppe’s words on this subject are convincingly refuted, they stand as a condemnation of any scientific “expert” claiming authority over what we should or should not do about anything.

Hawking is confident that the future of the young generation of today “will depend more on science and technology than any previous generation’s has done.” (203) However, he worries that “[d]ue to the recent global financial crisis and austerity measures, funding is being significantly cut to all areas of science, but in particular the fundamental sciences have been badly affected.” (201 f.)

Here’s the problem with funding science and education: science and education are not neutral. “Facts are sacred,” says a UK daily broadsheet newspaper. However, facts can be arranged and re-arranged to suit somebody’s view. We have all heard the phrase “lies, damned lies and statistics”.

In the Wikipedia entry on “Lies, damned lies, and statistics” (or rather “liars, damned liars and experts”), we read: “That phrase is found in the science journal, Nature, in November 1885: ‘A well-known lawyer, now a judge, once grouped witnesses into three classes: simple liars, damned liars, and experts. He did not mean that the expert uttered things which he knew to be untrue, but that by the emphasis which he laid on certain statements, and by what has been defined as a highly cultivated faculty of evasion, the effect was actually worse than if he had,’ where the reference to the judge may be to Baron Bramwell.”

So, what follows from that? It is morally wrong to force people via tax to pay for research they are not interested in and for education they don’t agree with. Science and education therefore need to be taken out of the hands of government. There are counter-arguments. First: Fundamental science takes ages to generate profit, if it ever does. True. But that is just another way of saying that fundamental science takes ages to produce anything people are happy to pay money for. Second: Fundamental science can produce results nobody expected and nobody was even looking for, which nevertheless turn out to be very useful. Two questions, however, follow from this: Useful to whom? And for what? The results are more likely than not to be useful to the funder, for purposes that enhance the position of the funder. If that funder is the state, that is not necessarily good news for the state’s citizens.

For the same reason, education needs to be privatised. Main objection here: Some people can’t afford education for themselves, let alone for their children. Retort: That is where charity should come in. Objection: There might not be enough money raised charitably. OK, so then we need to ask: why not? Maybe the product does not convince the potential giver. Second objection: Who will control the content? Here we come to the nub of the matter: Why should the state and its “experts” control the content taught to children? Why should parents hand over their children to “experts”? For that matter, why should anyone be forced to pay for the education of other people’s children? We know the standard (economic) answer to that: Education enhances employability and reduces the danger of the child becoming a delinquent or in some other way burdensome to society. Yes, but that should be sufficient incentive for wealthy people to a) finance the education of the poor and b) supervise the content and the delivery. I suspect that it is this possibility of supervision by rich “non-experts” that horrifies the “experts”.

Experts are currently in control of education and the curriculum. Their funders, the taxpayers, again have little if any say. Financially and conceptually, it’s an experts’ paradise. Strangely, however, it still leaves people like Hawking thinking that humans could be “improved”. The “experts” are obviously doing something wrong. The reason is not malice, but the structure. There is no incentive for the experts to excel on behalf of the students and their parents; the structure is such that they excel instead on behalf of their own worldview. The results of that structure can be seen when walking down the streets.

At the end of the book, Hawking lists what he calls the “huge questions of existence” that still remain unanswered: “[H]ow did life begin on earth? What is consciousness? Is there anyone out there or are we alone in the universe?” (204) Especially regarding the last of those questions, Hawking says: “While the chances of communicating with an intelligent extra-terrestrial species may be slim, the importance of such a discovery means we must not give up trying.” (210)

Though it’s understandable that Hawking would be excited about such a discovery – who wouldn’t be? – in the context of his book this statement is again a little strange and contradictory. For he has stated earlier (see p. 86) that our meeting aliens might be like the American natives meeting Columbus. So why is he so positive here? Why is it so important to him that we discover them?

Two possible answers. One: It’s better to discover than to be discovered, better to be Columbus than (in this analogy) an American native. So we’d better get on with it, is what he’s saying. Two: Discovering aliens, especially intelligent aliens, might be another club with which to batter Christians. Secular alien-hunters may be hoping to say some day: “Made in God’s image? Don’t make me laugh. Look at them in that star system over there!” Well, tough. If God made the universe, he made intelligent aliens as well, if they exist. And who knows: whatever they look like outwardly, if they have survived long enough to develop a civilisation and an industry capable of conceptualising the universe as we see it, and the desire and ability to communicate with other civilisations beyond their home planet, then they too are the image of God.

However, that wouldn’t impress people like Hawking at all. After all, he “disagree[s]” with the claim that “humanity today is the pinnacle of evolution.” (204) He reiterates the vision of human beings somehow co-evolving with AI in order to improve the world. Of course this will only happen if “we” are “aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance.” (206) In other words: a world central committee of wise and benevolent people, chosen by . . . whom, exactly? Hawking does not say; he does not even acknowledge that such a structure is a necessary consequence of his idea.

He again mentions “near-exponential population growth” and the “possibility of nuclear war” (205) as reasons for going into space. There are good reasons for going into space, even for human colonisation of space. But we should beware of “political” reasons such as the ones Hawking is raising. They lead to a sense of entitlement among politicians to push us around, take our money and spend it as they see fit, because they don’t trust us to do the right thing – the “right” thing somehow always being the thing that suits them most. This even though it is their actions, their laws and measures, that often get us into the very mess they then claim to want to clear up – with the help, again, of our money. “Near-exponential population growth” and the “possibility of nuclear war” are cases in point.

The rate of world population growth is falling rapidly and is likely to reach zero by the end of this century. However, there definitely are pockets of overpopulation in the world today. The question to ask, before we ask what to do about that, is: how did it happen? It is partly a natural development, as shown by Hans Rosling. It may, however, in part be the consequence of well-meaning aid and charitable help – dispensing medicine, vaccines and so on. There is absolutely nothing wrong with this charitable help. However, if it is not accompanied by measures that eradicate the structural reasons for poverty and misery, the result will be (instead of high infant and child mortality) huge swathes of unemployed adults. Education will help in this regard, but not just any education. The content is key. If, for example, education includes no economics, or false economics, the people thus “educated” will set up wasteful economies prone to mismanagement and environmental degradation. If, in addition, education includes no ethics, or rather false ethics, the result will be corruption and continued abject poverty, and the mass migration of desperate millions to apparently richer lands.

So, in order to guard ourselves against “Armageddon”, it won’t be enough to “move out into space” (205). We will have to improve our spiritual shields.

The same applies to the “future of learning and education”, which Hawking sees in the internet. (208) The “internet connects us all together like the neurons in a giant brain. And with such an IQ, what cannot we be capable of?” (208) We will certainly be capable of lots of things unthinkable today – the good and the bad. Hawking completely disregards the latter possibility. So, here too, ultimately only a spiritual shield will protect us from annihilation.

That spiritual shield will include a sense of purpose – a term severely lacking in Hawking’s book.  

Hawking’s last book is an exposition of mostly ill-thought-through philosophical concepts which are currently, sadly, dominating not only the scientific community but also politics, education, the media and entertainment. However, I am confident that this reigning attitude will not prevail.

Having said all that, I’d like to emphasise that I do like Hawking’s character as a human being. As already mentioned, he hoped to be remembered as a great dad and grandad. He also often displays a remarkable level of humour, considering his circumstances. Here’s an example of his humour I enjoyed: “Yet imagination remains our most powerful attribute. With it, we can roam anywhere in space and time. We can witness nature’s most exotic phenomena while driving in a car, snoozing in bed or pretending to listen to someone boring at a party.” (200)

Of course, I wouldn’t dream of doing that, ever.