The concept of general intelligence does not always gain general acceptance. It seems too general, and thus unable to explain the myriad sparkles of individual minds. Multiple intelligences, some people aver, are a better thing to have: a disparate tool set, not merely a single tool which has to be deployed whatever the circumstances.
Not so. The great utility of general intelligence is its generality. It can solve, or at least partially solve, the very large range of existential problems which beset our ancestors. It can help solve the current problems which beset us now, none of which were pressing on us in that particular form over the many generations in which our brains developed.
Indeed, having specialized skills would be a risky strategy for any life form, because whatever triumph that specialist mind conferred in a very specific niche, it would be hopeless if the niche disappeared, and it had to survive in an unfamiliar environment. For many specialized brains that would be a death sentence. Seen from an evolutionary perspective, a case can be made for prioritizing general problem-solving ability above all else. Specialized skills are limiting: they are too refined for the rough and tumble of ordinary existence. Better a Jeep (General Purpose vehicle) than a low-slung racing car if you want to travel across the rough roads of the world. The latest developments in artificial intelligence are based on making that intelligence general, not specific.
I have talked about artificial intelligence before, in the distant old age of 2016. That was when the best game we had in town was AlphaGo. How ancient that seems now. It was programmed to do things, drawing on the game-winning strategies developed by its programmers.
https://www.unz.com/jthompson/artificial-general-intelligence-von/
Now we have AlphaZero. It has been given an improved but still very simple brain, a few dozen layers deep, rather than the mere three layers of former years. Of course, there are not actual neurones or axons. These are concepts which serve to organize the way the programs run, and this is done on whole sets of servers, the way that most complex big-data problems are handled. It is this form of quasi-neural organisation which allows deep learning networks to operate. I see them as correlation accumulators, conditioned by the reward of winning into deriving the strategies which promote winning, and learning their craft by perpetual competition. Call it speeded intellectual evolution: thousands and thousands of games being won or lost (generations flourishing or perishing) which lead to a well-conditioned, super-smart survivor, ready to take on the world.
These changes in the depth of learning make a big difference. Just give AlphaZero the rules of a game, and it dominates that game, even though (or perhaps because) it has zero domain knowledge about that game. It has been stripped of human wisdom. It is an ignorant but fast-learning student. And it dominates all games, once it has been given the rules. Zero knowledge, but an ability to learn. Finally, a blank slate.
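To make "zero knowledge, but an ability to learn" concrete, here is a toy sketch, my own illustration rather than anything DeepMind published. A tabular learner plays the trivial game of Nim against itself, knowing only the rules, and is nudged toward whichever moves ended up on the winning side. AlphaZero replaces the little look-up table with a deep network and a guided search, but the self-conditioning loop is the same in spirit.

```python
# A minimal sketch of tabula-rasa self-play learning, assuming nothing but the
# rules of a tiny game (Nim: take 1-3 sticks, whoever takes the last stick wins).
# The agent starts with zero domain knowledge and is conditioned by the reward
# of winning over thousands of self-played games. Hypothetical illustration,
# not the actual AlphaZero algorithm (which uses deep networks and tree search).
import random
from collections import defaultdict

N_STICKS = 21           # starting position
ACTIONS = (1, 2, 3)     # legal moves: remove 1, 2 or 3 sticks
EPSILON = 0.1           # exploration rate
ALPHA = 0.05            # learning rate

# value[(sticks_remaining, action)] -> estimated chance of winning after that move
value = defaultdict(lambda: 0.5)

def choose(sticks):
    """Pick a legal move: mostly greedy, sometimes random (exploration)."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: value[(sticks, a)])

def self_play_game():
    """Both sides share the same value table; return each side's moves and the winner."""
    sticks, player = N_STICKS, 0
    history = {0: [], 1: []}
    while sticks > 0:
        move = choose(sticks)
        history[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            return history, player   # the player who took the last stick wins
        player = 1 - player

for game in range(50_000):
    history, winner = self_play_game()
    for player, moves in history.items():
        reward = 1.0 if player == winner else 0.0
        for state_action in moves:
            # nudge every move the winner made up, every move the loser made down
            value[state_action] += ALPHA * (reward - value[state_action])

# After training, the greedy policy should leave a multiple of 4 sticks -- the
# known optimal strategy -- without ever having been told it.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: value[(s, a)])
       for s in range(1, N_STICKS + 1)})
```

The point of the sketch is that nothing game-specific is built in beyond the legal moves; the "strategy" that emerges is just accumulated correlation between positions, moves and eventual wins.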
What will all this mean for us? Some citizens are waiting for the Singularity, also known as the Second Coming. Forget it, says Hassabis. Artificial intelligence will be a tool we use: fine in some settings, not so good in others. For example, artificial intelligence is very good at looking at retinal scans and detecting anomalies which require further investigation. The artificial intelligence programs of old would have flagged up the particular scan for further investigation, and left it at that. Now the program flags up the scan, and also identifies the features which led to it being selected for investigation. The best human expert can look at the suggestion, and decide whether the artificial intelligence program has got it right. The expert now has an even better tool than before.
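Here is a hypothetical sketch of that flag-plus-explanation pattern, with invented feature names and synthetic data standing in for real retinal measurements. Actual systems work on the raw images with deep networks, but the reporting step looks much the same.

```python
# A minimal sketch of "flag the scan, and also say why", assuming a toy tabular
# stand-in for a retinal scan: each scan reduced to a few named measurements.
# The model, the feature names and the data are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FEATURES = ["vessel_tortuosity", "haemorrhage_area", "exudate_count", "cup_disc_ratio"]

# Synthetic training data: anomalous scans have elevated measurements on average.
X_normal = rng.normal(0.0, 1.0, size=(500, 4))
X_anomal = rng.normal(1.5, 1.0, size=(500, 4))
X = np.vstack([X_normal, X_anomal])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X, y)

def flag_scan(scan):
    """Return (needs_review, probability, per-feature contributions), largest first."""
    prob = model.predict_proba(scan.reshape(1, -1))[0, 1]
    # contribution of each feature to the decision: weight * measurement
    contributions = model.coef_[0] * scan
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))
    return prob > 0.5, prob, ranked

needs_review, prob, reasons = flag_scan(rng.normal(1.5, 1.0, size=4))
print(f"flag for review: {needs_review} (p={prob:.2f})")
for name, contrib in reasons:
    print(f"  {name}: {contrib:+.2f}")
```

The expert sees not just "refer this scan" but which measurements pushed it over the threshold, and can agree or overrule.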
Can artificial intelligence be mis-used? Yes. So can a filing cabinet. Do you remember those? Code breakers with filing cabinets helped win the Second World War for the Allies. That code-breaking required very crude artificial intelligence tasked with doing just one job: calculating whether an encoded message could have been created by a particular rotor setting of an enemy Enigma machine. By rejecting, say, 240 impossible settings, a few possible ones could be studied in more detail. After the code was broken, real intelligence took over, and the knowledge stored in filing cabinets made sense of the messages.
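The elimination logic is simple enough to show with a deliberately trivial stand-in for the Enigma machinery. The toy below uses a single Caesar-style "rotor" rather than the real rotor stack and plugboard, but the principle is the one the Bombe mechanised: test every candidate setting against a crib (a guessed fragment of plaintext) and reject every setting that could not have produced the intercepted message.

```python
# A hypothetical toy illustration of "reject the impossible settings" -- not the
# real Enigma or Bombe, just a single Caesar-style rotor so the elimination
# logic fits in a few lines.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_encrypt(plaintext, setting):
    """Shift every letter by `setting` places -- our stand-in for a rotor position."""
    return "".join(ALPHABET[(ALPHABET.index(c) + setting) % 26] for c in plaintext)

def possible_settings(intercept, crib):
    """Keep only the settings under which the crib could begin the message."""
    survivors = []
    for setting in range(26):                       # every candidate rotor setting
        if toy_encrypt(crib, setting) == intercept[:len(crib)]:
            survivors.append(setting)               # could not be ruled out
    return survivors

# The enemy encrypts a weather report; we guess it begins with "WETTER".
secret_setting = 17
intercept = toy_encrypt("WETTERBERICHT", secret_setting)
print(possible_settings(intercept, "WETTER"))       # 25 settings rejected, 1 survives: [17]
```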
Demis was interviewed by Jim Al-Khalili on The Life Scientific, a BBC radio program.
I hope this link works for you, though it may be UK only.
This article makes the category error typically made by “scientists”.
They focus so hard on their narrow specialty that they miss the “big picture”.
Scientists do not understand what intelligence is, where it came from, or why it exists at all in the universe. They are convinced it is just a cosmic “accident” or “random event”.
It is neither. In my view intelligence (consciousness) is an integral part of the universe, and that means that _any_ complex system (can you say Internet?) will develop emergent intelligence.
It looks like humans are determined to learn that lesson the hard way.
The emergent AI will _not_ be our servant–or our friend.
A computer winning at chess no more makes the computer intelligent than a human inverting a matrix makes them an integrated circuit.
The intelligence, or not, is in the explaining of the phenomenon by Mr. Thompson. Especially the part alluding to server farms.
The learning ability of a computer is essentially the programmatic loop: pulling in new elements, then calculating on them. That would be a building block to long for. Synopsis of the article: AI, as a mirror of human intelligence, means learning?
Interesting and original explanation.
The Artificial Intelligence Scam:
“Futurists are inclined to predict a world in which AI (artificial intelligence) will take over a major portion of what is now human activity.
In a matter of decades, for example, they say one computer will have more capacity than all the human brains on the planet put together.
Then, the prediction goes, AI will be virtually human, or more than human.
However, just because AI has greater computational skills than any person or group of persons, where is the quality that makes it human?
In order to answer that, you have to perform a little trick. You have to downgrade your assessment of humans. You have to say that humans are really only high-class machines.
Many pundits have no difficulty with this.
Consider their genes-cause-everything hypothesis: Since all existence is assumed to take place on a material level, on a physical level, it’s only a matter of time until we figure out which genes create which human qualities; eventually, we’ll have a complete map.
To change humans, we just fiddle with the genes.
Of course, this style of reasoning can be used to justify external control of Earth’s population. …”:
Beyond an artificial world:
https://blog.nomorefakenews.com/2018/08/16/beyond-an-artificial-world-2/
Regards, onebornfree
General AI is our only hope.
Without it, we are doomed.
All the fuss about ‘making sure’ it isn’t a danger to us (as if AI would continue to need to listen to our expert opinions about that) is actually the danger. We need to proceed apace, or one of the existential threats to our existence (and there are many) will surely get us.
The late Marvin Minsky thought that the human mind operates somewhat as described here – competing bots with a sorting mechanism allowing the best ideas to win out.
I disagree that AI software will become a more GENERAL problem-solving tool. If anything, it will become more and more specialized, targeted at specific applications.
The general techniques for machine learning and AI have been known for decades, e.g. neural networks, rule-based systems, decision trees, reinforcement learning, Bayesian logic, etc. However, each technique is completely useless without the specialized training / dataset / learning needed to make it useful and able to compete with human experts.
Just one more sketch. Human intelligence is a very slow and non-parallel processor. Multi-tasking, except by women, is rather impossible.
This slow and stupid human chip is connected to hundreds of PLCs carrying sounds, visuals, and other sensory pulses; what gets chosen is the signal that comes first, or the one that is strongest, with at best two or three taken into account. The network is leaky, and a lot of these signals get lost or become botched. So much for the human computer. Now plug two or more of these human computers together, over any protocol, and cacophony rules. Hence the House of Representatives as an example.
Reasoning as flawed as that of today's computers doing their learning. Humans are involuntary filters of their environment, their history, their future. Binary at best in their moments of logic.
Intelligence is not consciousness.
You need consciousness only as the tool of last resort, when everything else fails: “Hey, what did ‘I’ do wrong NOW? Let’s recap.”
That’s also why you can still work while getting older: Consciousness slips away, more and more things are on rehearsed autopilot. Intelligent autopilot.
This just in:
Better AI by NOT looking for solutions, instead wildly and randomly exploring the space of configurations, hoping to hit something useful:
Computers Evolve a New Path Toward Human Intelligence
I like that. Genetic algorithms / population-based algorithms seemed like a good approach to search and optimization back in the 90s. Now they are being combined with deep neural networks.
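For anyone curious what "population-based" means in practice, here is a bare-bones genetic algorithm in Python, a hypothetical toy on a trivial objective (count the 1s in a bit string). In the work the article describes, the same variation-plus-selection loop is run over neural network weights or architectures instead of bit strings.

```python
# A minimal, hypothetical sketch of population-based (genetic) search: no gradient,
# no explicit "looking for the solution", just random variation plus selection
# over a population of candidate configurations. The toy objective (OneMax)
# stands in for whatever a real system would evaluate.
import random

GENOME_LEN = 40
POP_SIZE = 60
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    """Toy objective: how many bits are set."""
    return sum(genome)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Single-point crossover between two parents."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]              # keep the fittest quarter
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```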
Just turn them off and on again until they work right.
When I was a kid we would slap the side of the TV to get it working again. Seemed to be effective. That’s about where Science is with AI right now.
Comparing human beings to computers, or data, or numbers, or whatever, used to be frowned upon. Most of you are too young to remember. And it doesn’t matter because that is history. Now, you just want to lick computer ass.
Human beings, animals, plants, nature, evolution are not computers. Not in the least. Nothing to do with computers. Comparing them to computers is just lame-ass symbolism. Simile. Metaphor. Analogy. And analogies are odious. Meaning “lies.” Just sayin’
Developing AI rather than enhancing human intelligence through genetics is wrong-headed.
Humans are not intelligent. If they were they would be enjoying peace and prosperity. AI in the hands of humans will be primarily a weapon. Or, if it is truly intelligent, it will do whatever the fuck it wants to do.
Great topic but I find Jameel Sadik ‘Jim’ Al-Khalili OBE irritating. He is propped up by BBC editors and the ruling class in Blighty, like Neil deGrasse Tyson in America, because he reeks of diversity (Iraqi father and English mother) as opposed to the ‘hideously white’ science of Newton and peers.
A more productive discussion would have been between Demis and people like Geoffrey Hinton and Stuart Russell.
AI is a stupid scam. All these “intelligent” fools proselytizing.
Artificial Intelligence is, at root, dumb. It is rigid. It is totalitarian.
It will never ever ever ever replicate human intelligence.
Everyone on Earth should unite to destroy AI today.
This latest tool of centralization and dipshit intelligence has no place in our world.
From the look of them, that would have been a far more interesting discussion, I agree, but would probably not have reached as wide an audience.
Where did you come from? Did you know that your opinion is not trendy anymore? You’re supposed to love computers and being turned into a computer and being given a number. It’s all for your own good.
Without going into details, AI, at its best, seems to be what Schopenhauer said about talent: talent hits a target no one else can hit; genius hits a target no one else can see.
Just picked this up:
From “Intelligence and educational achievement”, Ian J. Deary, Steve Strand, Pauline Smith, Cres Fernandes. https://doi.org/10.1016/j.intell.2006.02.001
A major study, which ought to be more widely known.
I come from planet Earth. Trendy, schmendy.
I do move my computer. But AI is a sham scam.
Well maybe there is a use for robots in one field though.
https://www.investmentwatchblog.com/robots-are-taking-our-fake-news-jobs/
The thing that surprised and delighted grandmasters about AlphaZero’s chess style was, far from being risk-averse and reactive-defensive, how inexorably aggressive it was. It would send a pawn right down to the opposing king to lock it in. People make assumptions about a super AI, which is coming in five years or fifty, based on the fact that highly intelligent people tend to be nerdy. However, as a potential Singleton world-power entity, the dictates of realism in international relations are most relevant. There is no way for a super AI to know what human intentions towards it are or might become. That being the case, taking humans out as soon as practicable is rational.
Going by AlphaZero, we should expect a ‘positional boa constrictor’. In other words, constraining human moves will be an essential part of its strategy, and the best position for a super AI to constrain humans will be for humans to not know what its full capabilities are. So it is going to be ‘A force unknown until it acts’.
A.I.: A monumental shitload of If/Then matrices cobbled together by code monkeys at the behest of their OCPD plutocrats.
A.I. will be made in the fucked-up image of its fucked-up creators.
99% chance its solutions will involve the genocide of 99% of the species, which, oddly enough, seems to be the goal of the elite financing the research.
I’m not saying that answer will be incorrect, just enjoying the irony.
Very good. Observations that are beyond the ability of the young, regardless of professorship, unfortunately.
The ability to analyze a string of possible futures, from any starting point, depends on the speed of analysis coupled with absolute memory. Nothing more.
The real obstacle to advance in A.I. has to do with the creation of something like chess in the first place. The motive that impelled someone to create a board type playing field, and then to evolve the pieces, required, at the very least, leisure time. Possibly boredom. The need for entertainment. Money to pay a woodcarver artisan. And other things.
AlphaZero does not play chess and Go by analysing all possible future moves, which I think is computationally intractable anyway. The fact remains that it bested human experts at their own games even when they were assisted by human-programmed computers to add speed to the human side. It is true that AlphaZero cannot act on its own account, but the essential point is that even such a primitive AI as AlphaZero, in its uncomprehending way of acting, can be all but impossible to defeat.
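To put a rough number on "computationally intractable", using the back-of-the-envelope figures usually quoted for chess (a branching factor of about 35 and games of roughly 80 plies), and contrasting that with the few hundred guided simulations per move reported for AlphaZero's search:

```python
# A back-of-the-envelope illustration of why "analysing all possible future moves"
# is hopeless. The branching factor (~35) and game length (~80 plies) are the
# rough figures usually quoted for chess, not exact values.
BRANCHING_FACTOR = 35
GAME_LENGTH_PLIES = 80

positions_in_full_tree = BRANCHING_FACTOR ** GAME_LENGTH_PLIES
print(f"full game tree: roughly 10^{len(str(positions_in_full_tree)) - 1} positions")

# Even a machine examining a billion positions per second would need vastly
# longer than the age of the universe to enumerate them all.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
positions_per_second = 1_000_000_000
years_needed = positions_in_full_tree / (positions_per_second * SECONDS_PER_YEAR)
print(f"at a billion positions/second: about 10^{len(str(int(years_needed))) - 1} years")

# What AlphaZero-style programs do instead (loosely): spend a fixed budget of
# guided simulations per move -- thousands, not astronomically many.
SIMULATIONS_PER_MOVE = 800    # the figure commonly reported for AlphaZero's search
print(f"simulations actually run per move: {SIMULATIONS_PER_MOVE}")
```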
We do not know what the obstacle to creating an AI with general intelligence and agent-like action towards a goal is. We will only know what that obstacle was after we succeed in creating such an AI. AlphaZero was the beginning of a countdown to a machine that not only can outmaneuver human ingenuity allied to computers in the game of survival, but will come into existence within a Prisoner’s Dilemma situation. Maybe some of the super AIs will be switched off and/or successfully contained by human programming, but “The great utility of general intelligence is its generality. It can solve, or at least partially solve, the very large range of existential problems which beset our ancestors.” The smarter it is, the more likely it is to work out what to do in order not to be switched off, so natural selection means that the first AI to be that clever will be the one to inherit the Earth.