Is Bing Chat Reliable for Math and Physics Queries?

  • Thread starter Slimy0233
  • #1
Slimy0233
I realize and understand the criticisms of ChatGPT, and I have personally seen how bad it can be. Once I asked it to count the number of days until a random date, given the present date, and it failed miserably, again and again. Trust me, I get the criticism. But what about Bing Chat Bot?

Have you ever tried asking it physics- and maths-related questions? I was coding a while ago and had a pretty complex question that a very popular Reddit coding community could not solve, but Bing Chatbot answered it in an instant! I was genuinely impressed. Apparently it checks multiple webpages on the internet, reads and understands what it finds, and gives an answer after combining the knowledge gained from its search. Again, the question I asked was pretty complex, yet it answered it instantly, and the answer was right! With coding it's pretty hard to get the right answer on the first try; I have found it's more "trial and error".

So yeah!
1. Can I rely partially on Bing Chatbot for math questions?
2. If not, can I ask it to form a query that encapsulates my question perfectly?
3. If not, should I ask it to "answer this question and cite your sources"?
4. Can I do something more, i.e., along the lines of 3? What are your thoughts on this?

I won't be able to reply to each of your comments anytime soon, but know that I deeply appreciate this community, its members, and their help :')
 
  • #2
I am convinced that any AI necessarily fails in math because math is not a collection of facts and you cannot judge the truth value of a statement from any number of samples of similar cases.

E.g., I will never forget an exam for which I wrote the protocol.
"What is a linear function?"
"A linear function is a function ##f## such that ##f(x+y)=f(x)+f(y)## and ##f(a\cdot x)= a\cdot f(x).##
"Correct, but what is it?"
"I don't understand!"
"Can you give us an example?"
"Any function for which ##f(x+y)=f(x)+f(y)## and ##f(a\cdot x)= a\cdot f(x)## holds."
I don't remember how long that exchange actually went on, but I do remember that the student didn't understand why his grade was only a C.

AI can give many correct answers to such a question but it cannot understand it. And there will always be methods to demonstrate the difference.
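To make the samples-versus-proof point concrete, here is a minimal Python sketch of my own (an illustration, not anything produced in that exam or by a chatbot): it spot-checks the two defining identities of linearity on random inputs. A function can pass thousands of such checks, which is evidence of the right kind of behaviour but still not a proof, and certainly not an understanding of what a linear function is.

```python
import random

def looks_linear(f, trials=10_000, tol=1e-9):
    """Spot-check f(x+y) = f(x) + f(y) and f(a*x) = a*f(x) on random samples.
    Passing every check is evidence, not a proof of linearity."""
    for _ in range(trials):
        x, y, a = (random.uniform(-100.0, 100.0) for _ in range(3))
        if abs(f(x + y) - (f(x) + f(y))) > tol:
            return False
        if abs(f(a * x) - a * f(x)) > tol:
            return False
    return True

print(looks_linear(lambda x: 3 * x))   # True: f(x) = 3x is linear
print(looks_linear(lambda x: x + 1))   # False: the affine shift breaks additivity
```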
 
  • #3
Another example is the four-color theorem. It is proven. Well, we needed a computer to check many exotic cases, and if I remember correctly we even had to review and correct the computer part a few times. Nevertheless, it is considered proven now.

en.Wikipedia said:
The four color theorem was proved in 1976 by Kenneth Appel and Wolfgang Haken after many false proofs and counterexamples (unlike the five color theorem, proved in the 1800s, which states that five colors are enough to color a map). To dispel any remaining doubts about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. In 2005, the theorem was also proved by Georges Gonthier with general-purpose theorem-proving software.
Wikipedia.de said:
Computing a 4-coloring of a planar graph with ##n## nodes is possible in ##O(n^2)## time. On the other hand, deciding whether three colors are sufficient is NP-complete.

Question: Do the proofs give us any insight into why it is true? I assume yes, since many insights into graph theory were necessary and the computer part had to be prepared. But I think the answer is also no. We do not really understand what makes one problem P - well, we know, it is the algorithm - and another NP-complete. The underlying understanding of what is difficult has not yet been achieved. Things are easy if we have an algorithm, but we do not know when, and primarily why, we fail to find one for certain problems.
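As a small illustration of that asymmetry (my own sketch, not the quadratic four-coloring algorithm from the quote): producing some proper coloring of a graph is algorithmically cheap, for example with a greedy pass over the vertices, while deciding whether three colors suffice for an arbitrary planar graph is NP-complete.

```python
def greedy_coloring(adj):
    """Give each vertex the smallest color not already used by a colored neighbour."""
    colors = {}
    for v in adj:                                   # visiting order affects how many colors get used
        taken = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

# The octahedron: a planar graph that needs exactly three colors.
octahedron = {
    0: [1, 2, 3, 4], 1: [0, 2, 4, 5], 2: [0, 1, 3, 5],
    3: [0, 2, 4, 5], 4: [0, 1, 3, 5], 5: [1, 2, 3, 4],
}
print(greedy_coloring(octahedron))   # uses colors 0, 1, 2
```

The greedy pass runs in polynomial time but gives no guarantee of using only four colors on every planar graph; that guarantee is exactly what the computer-assisted proof and the ##O(n^2)## algorithm provide.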
 
  • #4
AI can already re-discover Kepler's third law and Einstein's time dilation.

paper said:
Combining data and theory for derivable scientific discovery with AI-Descartes
...
Abstract
Scientists aim to discover meaningful formulae that accurately describe experimental data. Mathematical models of natural phenomena can be manually created from domain knowledge and fitted to data, or, in contrast, created automatically from large datasets with machine-learning algorithms. The problem of incorporating prior knowledge expressed as constraints on the functional form of a learned model has been studied before, while finding models that are consistent with prior knowledge expressed via general logical axioms is an open problem. We develop a method to enable principled derivations of models of natural phenomena from axiomatic knowledge and experimental data by combining logical reasoning with symbolic regression. We demonstrate these concepts for Kepler’s third law of planetary motion, Einstein’s relativistic time-dilation law, and Langmuir’s theory of adsorption. We show we can discover governing laws from few data points when logical reasoning is used to distinguish between candidate formulae having similar error on the data.

Source:
https://www.nature.com/articles/s41467-023-37236-y
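Just to illustrate the simplest part of such a "rediscovery" (a sketch of my own, not the AI-Descartes pipeline from the paper): a log-log fit of orbital period against semi-major axis for the six classical planets recovers the exponent 3/2 of Kepler's third law.

```python
import numpy as np

# Semi-major axis a (AU) and orbital period T (years) for six planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Kepler's third law T^2 ∝ a^3 means log T = 1.5 * log a + const.
slope, intercept = np.polyfit(np.log(a), np.log(T), deg=1)
print(f"fitted exponent: {slope:.3f}")   # ≈ 1.500
```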
 
  • #5
Sagittarius A-Star said:
AI can already re-discover Kepler's third law ...

See, we still celebrate the simplest algebra as an achievement!
I rest my case.

AI can certainly find malignant melanoma. (I think the success rate is currently above 90%.)
However, we have checked the Riemann hypothesis up to ##10^{13}##. So what? We still do not know whether it is true or not. Computing power makes us believe it is true, but a proof is something significantly different.
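For what it's worth, this kind of numerical evidence is easy to reproduce on a small scale. A minimal sketch, assuming the mpmath library is installed: it locates the first few nontrivial zeros and confirms that they lie on the critical line and that ##\zeta## really vanishes there. Persuasive data, but, as said, not a proof.

```python
from mpmath import mp, zeta, zetazero

mp.dps = 30  # working precision in decimal digits

# Locate the first few nontrivial zeros and check them against zeta itself.
for n in range(1, 6):
    rho = zetazero(n)                    # n-th nontrivial zero on the critical line
    print(n, rho.real, abs(zeta(rho)))   # real part 1/2, |zeta(rho)| ≈ 0
```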
 
  • #6
Slimy0233 said:
Apparently it checks for answers on multiple webpages on the internet, it reads and understands what it reads and it gives the answer to it after combining the knowledge it gained from it's search results.
Without examining the search results and their sensitivity to the way you phrased your question to the bot, there's no way of knowing whether your description or my marked-up revision more accurately describes what the thing is doing. It takes serious intelligence and insight to discern a needle in a haystack - unless the algorithm is "look at every straw".
 
  • #7
Nugatory said:
Without examining the search results and their sensitivity to the way you phrased your question to the bot, there's no way of knowing whether your description or my marked-up revision more accurately describes what the thing is doing. It takes serious intelligence and insight to discern a needle in a haystack - unless the algorithm is "look at every straw".
Thank you! You make a good point. But I feel like I can depend on AI to give me some basic facts. I feel like Bing Chatbot is pretty good.

I am talking about simple questions like "What is a symmetric matrix?" or "What is the directional derivative of a vector field?"
 
  • #8
fresh_42 said:
AI can give many correct answers to such a question but it cannot understand it. And there will always be methods to demonstrate the difference.
thank you!
 
  • #9
Slimy0233 said:
Thank you! You make a good point. But I feel like I can depend on AI to give me some basic facts. I feel like Bing Chatbot is pretty good.

I am talking about simple questions like "What is a symmetric matrix?" or "What is the directional derivative of a vector field?"
Why don't you ask Wikipedia instead? Its articles have been reviewed hundreds of times, and they cite references for their claims! You can also switch between languages, depending on which ones you can read; speaking isn't necessary.
 
  • #10
Slimy0233 said:
Thank you! You make a good point. But I feel like I can depend on AI to give me some basic facts. I feel like Bing Chatbot is pretty good.

I am talking about simple questions like "What is a symmetric matrix?" or "What is the directional derivative of a vector field?"
What is it that makes you feel this way? That word - "feel" - seems out of place to me.
 
  • #11
fresh_42 said:
Why don't you ask Wikipedia instead?
Or Google, which in many cases will quote the thesis of the wiki article and then link it if you want more. I really don't see the upside of asking a chat-bot.
 
  • #12
russ_watters said:
Or Google, which in many cases will quote the thesis of the wiki article and then link it if you want more. I really don't see the upside of asking a chat-bot.
I regularly use Google with the pattern <subject> + pdf, where "subject" also determines the language and degree of difficulty (e.g. "Einführung in die Differentialgeometrie" + pdf, "Calculus 2" + pdf), when I write an Insights article or want to (re-)learn something. It leads me to hundreds of university servers (mainly in Europe, the US, and Canada) and lecture notes. You can really study topics at a professional level from home these days. It is a matter of discipline, not a matter of availability, and even less a matter of AI. The natural intelligence already does the job so much better!
 
  • #13
"We are all Artificial Intelligence. We all got our knowledge and logic from humans - our parents - which is, by definition, artificial. The only one of us with Natural Intelligence is Tarzan."
- DaveC426913, Oct 5, 2023
 
  • #14
DaveC426913 said:
"We are all Artificial Intelligence. We all got our knowledge and logic from humans - our parents - which is, by definition, artificial. The only one of us with Natural Intelligence is Tarzan."
- DaveC426913, Oct 5, 2023
The problem for me is that chatbots aren't AI in the strict sense of the word. It's a marketing mislabel. When/if we get real AI, it will be an actual game changer, not a risky overplayed hand.
 
  • #16
russ_watters said:
What is it that makes you feel this way?
Are you channeling ELIZA?
 

FAQ: Is Bing Chat Reliable for Math and Physics Queries?

Does Bing Chat give reliable answers to math and physics questions?

Bing Chat can provide useful and accurate answers to many math and physics questions, especially those that are straightforward and well-defined. However, its reliability may vary depending on the complexity of the question and the quality of the input provided by the user.

What are the limitations of Bing Chat in answering math and physics questions?

Limitations include potential misunderstanding of the question's context, difficulty with highly complex or multi-step problems, and occasional errors in computation or logic. Additionally, the chatbot may not always have access to the most up-to-date scientific knowledge or methods.

How can users improve the reliability of answers from Bing Chat?

Users can improve reliability by clearly and precisely framing their questions, providing all necessary details and context, and breaking down complex problems into simpler, more manageable parts. Verifying answers through additional sources is also recommended.

Is there ongoing work to improve the reliability of Bing Chat's responses to math and physics queries?

Yes, there is ongoing work to enhance the accuracy and reliability of AI-driven chatbots like Bing Chat. This includes improving natural language processing algorithms, integrating more advanced mathematical and scientific models, and continuously updating the knowledge base.

Can Bing Chat be integrated with specialized tools to enhance its math and physics problem-solving capabilities?

Yes, integrating Bing Chat with specialized computational tools and databases, such as WolframAlpha for mathematical computations or arXiv for recent physics research, can significantly enhance its ability to provide accurate and reliable answers to complex questions.
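A cross-check along these lines can be as simple as sending the same question to a computational engine and comparing answers. Below is a minimal sketch assuming WolframAlpha's public Short Answers API and an App ID obtained from WolframAlpha; the endpoint and parameter names are my reading of that API, and nothing here is functionality exposed by Bing Chat itself.

```python
import requests

def wolfram_short_answer(question: str, app_id: str) -> str:
    """Query the WolframAlpha Short Answers API and return its plain-text reply."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": app_id, "i": question},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# Example: compare this against whatever the chatbot claims.
# print(wolfram_short_answer("derivative of x^2 sin(x)", app_id="YOUR_APP_ID"))
```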
