What are the Limitations of Machine Learning in Causal Analysis?

In summary, machine learning can make accurate predictions from data in an automated, algorithmic way, but it is not capable of making causal inferences, which still rely on human judgment and intuition. Human-level intelligence and intuition are necessary for proper causal analysis: humans have a better grasp of what causality is and can use deductive reasoning to narrow down potential causal factors. While machines may eventually surpass human intelligence, they currently struggle with tasks that require intuition and pattern recognition. The ultimate goal of AI is not to replace human intelligence, but to better understand how humans think and to incorporate that understanding into machines.
  • #36
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
To be honest, I think there is some false equivalence here, not that I can necessarily produce a more equitable comparison myself. The very nature of "learning" is, in my opinion, extremely dependent not only on the capability and "suitability" of the pupil (a person without legs would struggle to learn to "walk", to take an extreme example), but also on the particulars of the subject and the manner of tuition.

A human teenager has an immense advantage because they already know that a cyclist up ahead is a cyclist and might move onto the road, whereas a lamppost is less likely to do so, and they have instinctive emotional reactions that can prompt responses such as slamming on the brakes. Of course, autonomous vehicles have other advantages that humans do not, but again there is no direct equivalence here, and I don't believe such comparisons lead to a 'fair' appraisal of human and machine learning.

By that kind of logic, one might argue that deer are much faster learners than humans just because a deer can walk minutes after birth.

There is a wealth of additional context, information and 'technique' that humans have already acquired and can apply while learning to drive, which isn't necessarily true of machine-learning-based autonomous driving tech.
By the time a teenager starts learning to drive, they have already seen and understood traffic lights; they know not to stop in the middle of the motorway; they know there are some roads they might not be able to drive down; and they understand enough basic mechanics to know that driving faster means less control, more risk and longer stopping distances. All these factors, intuitions and preconceptions are built up over the course of the teenager's life, and are therefore already in place before the 60 hours of learning begin. The laws that make up highway codes have been developed over decades, and other knowledge about the world has been passed down through generations from many varied sources.

Before a machine can learn to drive a car, it needs to learn pattern recognition, and the data needs to be amassed, collated and prepared in a format that can be used effectively. This also requires that the data-retrieval aspects of the algorithms are developed AND EFFICIENT ENOUGH to work in the real-time scenario of driving a car.
Yes, computers process gajillions of teraflops a second, but the number of data points to be processed (in visual processing alone) is also incredibly high.
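To put a rough number on that, here is a quick back-of-envelope calculation (a single 1080p colour camera at 30 fps is assumed purely for illustration; real autonomous vehicles carry several cameras plus lidar and radar):

```python
# Back-of-envelope: raw pixel throughput of one 1080p RGB camera at 30 fps.
width, height, fps, channels = 1920, 1080, 30, 3

pixels_per_second = width * height * fps
bytes_per_second = pixels_per_second * channels   # 8-bit samples

print(f"{pixels_per_second / 1e6:.0f} million pixels/s")  # ~62 million pixels/s
print(f"{bytes_per_second / 1e6:.0f} MB/s of raw video")  # ~187 MB/s
```

And that is before any feature extraction or object detection has been run on a single frame.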

Humans have had BILLIONS of years of evolution to refine that image processing. I've not used the best examples, and I appreciate that some elements are cheap to pick up (it doesn't add much to a 60-hour learning time to learn that red = stop), but I hope the point is clear at least.
 
  • Like
Likes Dale
  • #37
_PJ_ said:
Humans have had BILLIONS of years of evolution to refine that image processing.
I guess if you assume bacteria had image-processing abilities, then just maybe those genes found their way into our DNA, but that's a stretch...
_PJ_ said:
but I hope the point is clear at least.
If there were sub-routines specifically devoted to threat detection, and they were programmed effectively, that would at least make the rest of the systems straightforward.
 
  • #38
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
The human can transfer other knowledge accumulated during their lifetime to the task of driving. For example, what a pedestrian looks like, what color the sky is, how to walk. Now try to teach a newborn to drive in 60 hours.
 
  • Like
Likes Auto-Didact
  • #39
Khashishi said:
Now try to teach a newborn to drive in 60 hours.
All I can think of is The Simpsons intro where Marge is driving and Maggie is in her car seat mimicking her every move. :-p
 
  • Like
Likes Auto-Didact
  • #40
Khashishi said:
The human can transfer other knowledge accumulated during their lifetime to the task of driving.
That is a big part of what makes humans so good at learning, and computers not.

_PJ_ said:
By that kind of logic, one might argue that deer are much faster learners than humans just because a deer can walk minutes after birth.
Interesting point! I wonder if deer actually learn to walk or if it is already hardwired in? I don’t know the answer to that, but in general it seems to me that humans do a lot more learning than other species.
 
  • #41
True. Less reliance on instinct is probably why humans are so good at learning new things; we have some remarkable neuroplasticity. If we hooked up robot appendages to a baby, with the proper connections, it could probably learn to walk on robot legs.
 
  • #42


The title is a bit clickbaity, but the research mentioned seems remarkable, to say the least. In particular, the democratization of ML that automatic ML brings, from researchers to basically anyone for any task, seems like a major revolution still waiting to happen.
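For anyone wondering what 'automatic ML' looks like at its most basic, here is a minimal sketch using scikit-learn's automated hyperparameter search (the dataset and parameter grid are purely illustrative; full AutoML systems also search over model families and preprocessing pipelines):

```python
# Automated model tuning: grid-search over hyperparameters with cross-validation,
# so nobody has to hand-tune the model.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [5, None]},
    cv=5,  # 5-fold cross-validation scores each candidate automatically
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The user only declares what to try; cross-validation picks the winner.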

@Danko Nikolic, might this be a precursor to your AI kindergarten? I did not read their paper, so I am not sure whether they refer to your papers, but their automatic-ML reinforcement learning reminds me an awful lot of your practopoietic learning-to-learn explanation. If they haven't referenced you, they should, or maybe you should give a Google Tech Talk.
 
  • #43
Here we see the current limits on machine learning in action: Sofia vs Penrose!
 
  • #44
phyzguy said:
I think you're missing the point of how machine learning is increasingly being done today. Many (perhaps most) machine learning tools today are not algorithmic in nature. They use neural networks configured in ways similar to the human brain, and then train these networks on training sets, just as a human is trained to recognize patterns. Even the (human) designer of the neural network doesn't know how the machine will respond to a given situation. Given this, I don't see why these artificial neural networks cannot match or eventually exceed human capability. Indeed, I think Google's facial recognition software already exceeds human capability. Granted, this is in a controlled environment, but given time and increasing complexity of the networks (and increasing input from the environment), I think you will see these machines able to do anything a human mind can do.

Yes, but these techniques need a clear set of rules and a definition of 'winning' to optimize against, as in Go, chess or poker (which, by the way, was not a deep-learning algorithm). The 'rules' and the definition of 'winning' for a task like driving are much more nebulous and complex, let alone the objective function for a general AI.
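To make that concrete: reinforcement learning optimizes exactly the reward function you write down, and nothing else. A minimal sketch (a toy one-dimensional track with made-up rewards, not any real system):

```python
# Tabular Q-learning on a tiny 1-D track. The agent learns to value only
# what the reward function tells it to value.
import random

N_STATES, GOAL = 6, 5          # states 0..5, goal at state 5
ACTIONS = (-1, +1)             # step left / step right

def reward(state):
    # One line for a toy problem; for driving, writing this down is the hard part.
    return 1.0 if state == GOAL else 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(s):
    if random.random() < eps:                       # explore
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)          # exploit, ties broken randomly
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):                                # 500 training episodes
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)       # walls at both ends
        target = reward(s2) + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # standard Q-learning update
        s = s2

# Learned state values rise toward the goal the reward function defined.
print([round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)])
```

Swap in 'drive well' for 'reach state 5' and that one-line reward function becomes the nebulous part.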
 
  • #45
FallenApple said:
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.

But for any inference that is going to deal with ideas of causality, it's primarily a subject matter concern, which relies on mostly on judgment calls and intuition.

So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?

Here's an example where there might be issues.

Say an ML algorithm finds that low socioeconomic status (SES) is associated with diabetes, with a significant p-value. We clearly know that diabetes is a biological phenomenon, so any possible (and this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps within the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We logically know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES suggests that maybe we should collect data on those missing variables, and then redo the analysis with them in the model.

But there's no way a learning algorithm can make any of those connections, because those deductions are mostly intuition and logic, which are not statistical. Not to mention: how would ML handle confounders?
Let me offer some opinions as someone who used AI early in the game. The human brain is perhaps the closest thing to infinite we've ever found; AI runs on a computer that isn't even as brilliant as a lizard. There are arguments among authorities over whether a brain works using binary logic or analog logic, with me leaning towards analog. We can speak of quantum computers, but these are not really analog either: they consist of a logic of 0, maybe, and 1. AI is really a method of bypassing a lot of hardware by using training algorithms, but this has the same limitation as the hardware: it can only do what you train it to do. It can only use methodology you show it. It cannot actually invent anything of its own unless you train it in a method for doing so. What we're speaking of here is teaching a computer how to be original, and since we cannot even tell ourselves how to be original, beyond stupid things like "think outside the box", you're not going to achieve that.

Self-driving cars will work fine if all the human drivers around them obey the rules and the road designers do as well. There is NO CHANCE of that. I ride a bicycle for sport, and believe me, I cannot ride one mile without seeing people break driving laws in such a dangerous manner that it surprises me we don't have 100 times the accident rate we do. The fact that we don't is because other drivers are ALWAYS expecting poor driving, and they are thinking further ahead than a self-driving car can.

Once, while I was waiting at a stop light, it turned green and NO ONE MOVED. I wondered why, and then a car exiting the bridge came through the red light a full 5 seconds after the light had changed, at 60+ mph and accelerating. Because of the speed and the time elapsed since the light changed, there isn't an automated driving system in the world that could have predicted that. The speed of that pickup put it 150 yards out, beyond the practical range of the detectors on a self-driving car. Put in better hardware? There are price limitations, after all.
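For what it's worth, the numbers in that story are self-consistent:

```python
# Sanity check: a vehicle doing 60 mph, crossing 5 s after the light changed.
mph_to_ft_per_s = 5280 / 3600         # 1 mph = 1.467 ft/s
speed = 60 * mph_to_ft_per_s          # 88 ft/s
distance = speed * 5                  # ground covered in those 5 seconds
print(f"{distance:.0f} ft = {distance / 3:.0f} yards")   # 440 ft ~ 147 yards
```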

But there is even more: you are insured, and if you get into an accident your insurance pays the bills if required. With self-driving cars, the system manufacturer is liable. No insurance company in its right mind would insure such a company; that is like hanging out a sign saying "Millions of dollars, free for the taking".

So the single largest source of R&D money for AI research is essentially removed from the market, and each company must assume the liabilities via its shareholders. I'll leave you to decide how successful that is going to be. Tesla has already stopped calling it an "autopilot", now calls it a navigation device, and requires you to keep both hands on the wheel the entire time it is engaged. And I'll bet there are now detectors built in to remove all liability from Tesla if you take a hand off before a wreck.

Having a smartphone do simple tasks under AI might give those of the Star Wars generation visions of C-3PO, but it isn't that, and it won't improve much.

Google's use of AI is little more than delivering the news it thinks you want: anti-Trump material for one side, pro-Trump for the other. And Google guards this data jealously, because if it becomes too widely understood that users are essentially being manipulated, there is no telling the price it will pay. Go to YouTube and play a Glenn Miller piece, and almost everything suggested next is from the '30s to the '50s. This isn't smart; in fact it is rather stupid. Playing a Huey Lewis and the News track doesn't mean you want to listen to everything else from the '70s.

Can AI and deep learning be improved? What couldn't be? But will that improvement evade the real limitations of machine intelligence? That is highly doubtful.
 
  • Like
Likes Auto-Didact and BWV
  • #46
I'm fascinated by the thought of analog computing returning. I have no business even entering this conversation with my level of knowledge; that said, it seems the potential of analog computing opens up avenues that are far more complex than straight binary. The argument against analog that I've heard is that at the very base of physics lies the state of is, or is not: binary. Yet it seems to me that synthesis and learning are more of an analog affair. "Probably" is best described as a range or spectrum (analog). "Definitely" is singular and linear (binary).
I wonder if machine learning and AI will advance through further investigation of analog computing, possibly crossed with binary, much as nonlinear equations turned out not to be junk but to be much better described as three-dimensional entities. Is it possible that simultaneous data running on multiple channels in an analog computer might experience resonance that could be a source of insight? Clock speeds and amplitudes could result in intersections of data that might prove profound. How one would configure a system like this is beyond me, but I have to wonder whether the answer to ultimate machine learning can be found solely in a binary state. I don't believe there is a hard limit to machine learning. Humans have barely scratched the surface of the AI field, yet from my age perspective the advances are staggering. So: carry on. All is as it should be.
 
  • #47
Trawlerman said:
I'm fascinated by the thought of analog computing returning. I have no business even entering this conversation with my level of knowledge; that said, it seems the potential of analog computing opens up avenues that are far more complex than straight binary. The argument against analog that I've heard is that at the very base of physics lies the state of is, or is not: binary. Yet it seems to me that synthesis and learning are more of an analog affair. "Probably" is best described as a range or spectrum (analog). "Definitely" is singular and linear (binary).
I wonder if machine learning and AI will advance through further investigation of analog computing, possibly crossed with binary, much as nonlinear equations turned out not to be junk but to be much better described as three-dimensional entities. Is it possible that simultaneous data running on multiple channels in an analog computer might experience resonance that could be a source of insight? Clock speeds and amplitudes could result in intersections of data that might prove profound. How one would configure a system like this is beyond me, but I have to wonder whether the answer to ultimate machine learning can be found solely in a binary state. I don't believe there is a hard limit to machine learning. Humans have barely scratched the surface of the AI field, yet from my age perspective the advances are staggering. So: carry on. All is as it should be.
Digital logic is so fast because it has only two states. Analog has a theoretically infinite number of states, but in practice accuracy comes down to settling time, and that is a great deal slower. The increased accuracy of a smaller number of "bits" hasn't been worth it up to this point, but I will have to think about that. I know a couple of REAL analog engineers who are more than worth their salt: national awards and all.
 
  • Like
Likes Auto-Didact
  • #48
Tom Kunich said:
Digital logic is so fast because it has only two states. Analog has a theoretically infinite number of states, but in practice accuracy comes down to settling time, and that is a great deal slower. The increased accuracy of a smaller number of "bits" hasn't been worth it up to this point, but I will have to think about that. I know a couple of REAL analog engineers who are more than worth their salt: national awards and all.
OK, I went to the people who know the real stuff, and it seems the settling time needed to reach an accuracy of one part in one thousand would be about 100 times the minimum response time of the circuit. That rather kills the idea, since it would take about a millisecond to reach that accuracy in a common op-amp. You can push the envelope with high-speed op-amp designs and the like, but digital circuitry could do this in a hundredth of that time.

Shrinking the circuitry could speed it up or slow it down depending on many things: at lower power, the effects of stray capacitance and conductor resistance grow, etc.
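For anyone who wants to reproduce the arithmetic: an idealized single-pole stage settles exponentially, so reaching a fraction ε of full scale takes t = τ·ln(1/ε); real op-amps, with multiple poles and slew limiting, are considerably slower, which is where estimates like the 100× figure above come from. A quick sketch with an illustrative time constant:

```python
# Exponential settling: the error decays as exp(-t/tau), so t = tau * ln(1/eps).
import math

tau = 1e-6                          # illustrative 1 us time constant
for eps in (1e-2, 1e-3, 1e-4):      # target accuracy: 1%, 0.1%, 0.01%
    t = tau * math.log(1 / eps)
    print(f"1 part in {1 / eps:>6,.0f}: {t / tau:4.1f} time constants "
          f"({t * 1e6:.1f} us at tau = 1 us)")
```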

I'm far more of a digital designer, and after thinking about quantum computing I have to learn more about it. But I think I have an idea for making a simple cell.
 
  • #49
FallenApple said:
Say an ML algorithm finds that low socioeconomic status (SES) is associated with diabetes, with a significant p-value. We clearly know that diabetes is a biological phenomenon, so any possible (and this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps within the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We logically know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES suggests that maybe we should collect data on those missing variables, and then redo the analysis with them in the model.

But there's no way a learning algorithm can make any of those connections, because those deductions are mostly intuition and logic, which are not statistical. Not to mention: how would ML handle confounders?

An ML algorithm with advanced language-processing capabilities could learn about diabetes from the existing literature and identify the confounding variables that need to be accounted for. Artificial intelligence can theoretically be superior to the human brain at everything, and I'm pretty sure it's just a matter of time until that is achieved.
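To make the SES/diabetes example above concrete, here is a minimal simulation (hypothetical variable names and made-up effect sizes, using numpy and statsmodels): regress diabetes on SES alone, then add the hypothesized intermediate variables, and the SES coefficient collapses because the effect runs entirely through them:

```python
# Simulated SES -> (stress, diet) -> diabetes chain. The SES effect is entirely
# mediated, so adding the intermediate variables shrinks its coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
ses = rng.normal(size=n)                      # low values = low status
stress = -0.8 * ses + rng.normal(size=n)      # low SES -> more stress
diet = -0.5 * ses + rng.normal(size=n)        # low SES -> worse diet
risk = 0.7 * stress + 0.6 * diet + rng.normal(size=n)
diabetes = (risk > 1.0).astype(float)

for label, cols in [("SES only", [ses]), ("SES + mediators", [ses, stress, diet])]:
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(diabetes, X).fit(disp=0)
    print(label, "-> SES coefficient:", round(fit.params[1], 3))
```

The simulation knows its own causal structure, of course; in real data the algorithm doesn't, which is exactly the point under debate.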
 
  • #50
ZeGato said:
Artificial intelligence can theoretically be superior to the human brain at everything,
Please cite the professional scientific reference that supports this claim. It seems improbable to me.
 
  • #51
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
It seems to me that we don't even know how a human being thinks, how memories are stored, or in what order. Humans have trillions of possible neural connections; if an AI had even a couple of hundred thousand, it would be a great deal better than any AI presently is. And remember: you cannot teach a machine to do something for which there is no presently known answer. Recent claims of one AI teaching another AI to do something better than a human could seem a bit far-fetched to me; I should think that in reality it is one AI teaching another to do the same thing it can already do, only more rapidly. Algorithms are limited to the operations they were designed to perform. This is not comic-book fantasy.
 
  • Like
Likes Dale
  • #52
I'll just bump this again. ML is nowhere near replacing brains. It does things completely differently, which lets it excel at some things (but totally fail at others).

 
  • #53
Dale said:
Please cite the professional scientific reference that supports this claim. It seems improbable to me.
I don't need a source to state that it's theoretically possible to make a physical system that performs the brain's functions better than the human brain does. I think it's shortsighted to say it's improbable that the human brain will ever be surpassed by AI.
 
  • #54
ZeGato said:
I don't need a source to state that it's theoretically possible to make a physical system that performs the brain's functions better than the human brain does.
On this site, yes you do: for instance, a reference to the relevant theory by which you conclude that it is "theoretically possible". With no relevant theory it is not a theoretical possibility, it is merely a personal speculation.
 
Last edited:
  • #55
ZeGato said:
I don't need a source to state that it's theoretically possible to make a physical system that performs the brain's functions better than the human brain does. I think it's shortsighted to say it's improbable that the human brain will ever be surpassed by AI.

That's already a vague statement. AI will surpass the brain in which domains? All of them? Some of them? AI already surpasses brains in very specific, crafted cases. But would an AI ever be able to detect and see to the emotional and social needs of others?
 
  • #56
I love the dose of skepticism in this thread towards AI being able to outperform actual human intelligence, or consciousness more generally. For those unaware of the terminology in academia: the idea that modern AI, or any purely computational algorithm for that matter, is fundamentally incapable of outperforming or even matching actual human intelligence (i.e. consciousness) without copying, implementing or improving on the actual neurobiological network design is basically a variant of the Gödelian Lucas-Penrose argument.

Going by the replies in this thread, or at least on this page of it, it might seem that Penrose wrote The Emperor's New Mind about 30 years too soon. Back in '89 he was universally panned by the academic AI community, who pretty much all, under the domineering stewardship of Marvin Minsky, Ray Kurzweil et al., came to believe that human intelligence was essentially nothing but raw computation, and therefore soon to be overtaken by a rapidly evolving brute-force AI.

Today most people don't think it will be brute-force AI, but rather a combination of ML, decision theory, network theory and AI techniques that will outperform humans in most non-subjective aspects of intelligence or consciousness. More and more people, like Elon Musk and Sam Harris, are afraid of this possibility, and I believe rightfully so, precisely because experts do not yet fully understand the intricacies of human intelligence, while non-experts are willing to replace humans with robots regardless, simply for financial reasons.
 
  • Like
Likes gleem
  • #57
FallenApple said:
So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?
Causal analysis can be done only by an agent that can interrogate Nature by getting experiments performed; but then there are no limits.
FallenApple said:
Essentially it takes detective work to do causal inference.
This would not be an obstacle. Computer programs can already find needles in haystacks...
atyy said:
I agree that machines have a long way to go before reaching human level performance. But is it true that they have access to the same data as humans? For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen already spent 16 years acquiring other sorts of data while growing up. Similarly, the toddler is able to crawl about and interact in the real world, which is a means of data acquisition the computers don't have.
They don't have much experience of the real world, which accounts for most of the superiority of humans on real-world tasks. A baby can do very little until it is able to make sense of raw data, which takes a long time...
Khashishi said:
The human can transfer other knowledge accumulated during their lifetime to the task of driving. For example, what a pedestrian looks like, what color the sky is, how to walk. Now try to teach a newborn to drive in 60 hours.
Transfer is easy, once knowledge is properly organized.
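In current ML practice this kind of transfer is what fine-tuning does: keep the features a network has already learned and retrain only a small head for the new task. A minimal sketch, assuming PyTorch and a torchvision backbone pretrained on ImageNet (the 10-class head is an illustrative placeholder):

```python
# Transfer learning: reuse ImageNet features, retrain only a new final layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                   # freeze the transferred knowledge

model.fc = nn.Linear(model.fc.in_features, 10)    # fresh, trainable head for the new task

# Only the head's parameters are handed to the optimizer:
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# ...then train on the new dataset as usual; the backbone's weights never change.
```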
Auto-Didact said:
Today most people don't think it will be brute-force AI, but rather a combination of ML, decision theory, network theory and AI techniques that will outperform humans in most non-subjective aspects of intelligence or consciousness. More and more people, like Elon Musk and Sam Harris, are afraid of this possibility, and I believe rightfully so, precisely because experts do not yet fully understand the intricacies of human intelligence, while non-experts are willing to replace humans with robots regardless, simply for financial reasons.

Since 2001 I have been giving courses on AI for mathematicians, and I am giving such a course this term, which started 9 days ago. In the first week I covered a rough overview, with a backbone given in these slides. Many AI techniques are needed in addition to machine learning, and they are developing at a rapid pace.

My research group in Vienna is working on creating an agent that can study mathematics like a human student (https://www.physicsforums.com/insights/self-study-basic-high-school-mathematics/) and succeed in getting a PhD. It is still science fiction, but it looks realizable within my lifetime, or I wouldn't spend my time on it. If any of you with enough time and computer skills has an interest in helping me (sorry, unpaid, but very exciting), please write me an email!

Conceptually, everything in human experience has an analogue in the world of agents in general. There is no visible limit to artificial capabilities, only degrees of quality. It will probably take only 20-30 years until some human-created agents can outperform humans in every particular aspect (though probably different agents for different tasks).
 
  • Like
Likes Auto-Didact
  • #58
Tom Kunich said:
it surprises me we don't have 100 times the accident rate we do. The fact that we don't is because other drivers are ALWAYS expecting poor driving, and they are thinking further ahead than a self-driving car can.
Perhaps at present.

But nothing prevents developers from programming car-driving software to ALWAYS expect poor driving and to think ahead. Thinking ahead is needed for many control tasks that robots already do quite well.

Tom Kunich said:
you cannot teach a machine to do something for which there is no presently known answer.
Automatic theorem provers have proved at least one mathematical theorem that humans had conjectured but could not prove (the Robbins conjecture, proved by the EQP prover in 1996). This is not much, but it is the beginning of a counterexample to your claim.
 
  • #59
A. Neumaier said:
This would not be an obstacle. Computer programs can already find needles in haystacks...
Finding 'the needle' is very much a problem-dependent issue; this is essentially what the entire fields of computational complexity theory and computability theory are about.
A. Neumaier said:
Automatic theorem provers have proved at least one mathematical theorem that humans conjectured but could not prove. This is not much, but the beginnings of counterexamples to your claim.
'Could not prove (yet)' does not imply 'unable to prove in principle'. Moreover, a computer generating a proof of a well-defined problem before any human has done so does not in any way, shape or form imply that computers can also generate proofs for problems that are not yet well defined; humans, on the other hand, are actually capable of solving many such problems by approximation, analogy and/or definite abstraction.
 
  • #60
Auto-Didact said:
a computer generating a proof of a well-defined problem before any human has done so does not in any way, shape or form imply that computers can also generate proofs for problems that are not yet well defined; humans, on the other hand, are actually capable of solving many such problems by approximation, analogy and/or definite abstraction.
Many classification problems solved by computers better than by humans are also not well-defined problems.
 
  • Like
Likes Auto-Didact
  • #61
I agree with the idea that machines have no limit, but comparing them to human intelligence seems arbitrary and anthropocentric. We also have to be clear that we're talking about very long periods of time. 100 years from now? Yes, there are still a lot of limits, and I don't see a Bender Rodriguez strutting down the street in that time frame, but machines will certainly be driving our cars and flying our planes. 1,000 years? Mmm... maybe? 10,000? Absolutely.

That is, of course, assuming humans don't destroy ourselves first, which, in theory, we don't have to.

It shouldn't really even be debatable whether an AI could ever match a human. The human brain is a chemical machine, and we have programmable mathematics that describes that chemistry. Given enough computing power, you could build a brain from the atoms up inside a computer. It's unlikely we'd ever need to do anything like that, and I don't see humanity having that much computing power any time soon, but nothing really prevents it in theory. The only real limits are energy and matter.

Natural selection gave us our thought process in about 600 million years; I'd think intelligent design could beat that by several orders of magnitude. I'm wary of AI in the long term. I don't think anyone alive today has to worry about it, but I see it as one of the potential "great filters" for life in the Fermi paradox. I see no reason to fear any individual AI, but the fact that they are immortal means that their numbers will grow very rapidly. I think they'll be very individualized, and be a result of their "upbringing". They'll be as diverse as humans, and while I believe most humans are good... Hitler existed.
 
  • #62
A. Neumaier said:
Many classification problems solved by computers better than by humans are also not well-defined problems.
I don't doubt that at all, but I wouldn't lump all non-well-defined problems into the same category. There are several degrees of being badly defined, some of which are still perfectly solvable, sometimes even with trivial ease, by some humans, despite all their vagueness.
 
  • #63
Auto-Didact said:
I don't doubt that at all, but I wouldn't lump all non-well-defined problems into the same category. There are several degrees of being badly defined, some of which are still perfectly solvable, sometimes even with trivial ease, by some humans, despite all their vagueness.
Please give examples.
 
  • #64
A. Neumaier said:
Please give examples.
I'm currently publishing two papers on this topic; I will link them when they are done.
 
