PF Developing Policies for ChatGPT?

  • Thread starter Frabjous
  • Start date
  • Tags
    chatgpt
In summary, ChatGPT is a chatbot that is often used to generate answers to science questions. It is not an acceptable source for discussion on Physics Forums, and it is not reliable when it comes to providing accurate information.
  • #36
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
 
  • Like
Likes Hornbein, Petr Matas, PeroK and 2 others
  • #37
anorlunda said:
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
 
  • #38
anorlunda said:
Almost all of us humans fear that driverless cars are flawed and should be banned.
I don't think that's true. Do you have a source for that claim?
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Quantity: A user can "answer" tons of threads with long AI-generated nonsense, but they are unlikely to do that if they have to come up with answers on their own.
Future outlook: Tell the user that's wrong and they have a chance to learn and improve. You won't improve an AI with actions taken in a forum.
TeethWhitener said:
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
It doesn't look like spam unless you check it closely. It looks like detailed answers.
 
  • Like
Likes berkeman and PeroK
  • #39
mfb said:
I don't think that's true. Do you have a source for that claim?
Alas, no. Just a bit of hyperbole. I have been working on organizing a debate on driverless cars and I haven't been able to find a volunteer for the pro side. But the pool I have been drawing from is mostly older people.
 
  • Haha
Likes berkeman
  • #40
[Attachment: screenshot, 2023-01-17]
 
  • Like
  • Haha
Likes Petr Matas, dlgoff, russ_watters and 9 others
  • #41
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.

I have no trouble if PF considers this a DoS attack and responds accordingly.
 
  • Like
Likes Bystander, BillTre and russ_watters
  • #42
Vanadium 50 said:
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.
I like it when they bold their complaint against the moderators at the bottom of the post. It makes it easier to separate the bot from human-created content!
 
  • Like
  • Haha
Likes berkeman and BillTre
  • #43
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
 
  • Haha
Likes BillTre
  • #44
Vanadium 50 said:
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
This was the standard approach back in the Usenet era, when you had to be with an educational institution or a decent-sized tech company to have internet access. When bad stuff hit the Usenet feed, we would contact the sysadmin at the offender's institution, they would reply with something along the lines of "thank you for the heads-up - his account is deactivated until he and I have had a conversation", and the problem would be gone.

I am skeptical that anything like that can be made to work in today's internet. Our leverage over, for example, gmail.com is exactly and precisely zero.
 
  • Like
Likes Petr Matas and russ_watters
  • #45
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
 
  • #46
Vanadium 50 said:
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
Shush. :wink:
 
  • Wow
Likes BillTre
  • #47
I think that appropriate AI use with attribution and ideally a link to the AI chat should be allowed and the current AI policy should be reviewed. In my recent thread, I got into conflict with it unknowingly twice (for a different reason each time), but I believe that my AI use was legitimate and transparent.

The first case appeared in my opening post:
Petr Matas said:
I've come across a paradox I can't resolve.

[...]

I considered the effect of gravitational red shift, but it doesn't seem to resolve the paradox, because [...].

I also tried to resolve the paradox using ChatGPT (in Czech), which concluded that the system only reaches a quasi-stationary state because it takes too long to reach equilibrium. However, I don't think this resolves the paradox either, because the paradox consists in the conclusion that no state of thermodynamic equilibrium exists whatsoever. Yet an isolated system should have such a state, shouldn't it?
As you can see, I tried to resolve the paradox using ChatGPT (i.e. quickly and without bothering humans). It had worked for me many times before, but not this time. Therefore I had to ask humans. I felt that it would be useful to describe the unsuccessful approaches I took, including the result of the discussion with AI, to provide a link to the discussion, and to explicitly state that the answer was unsatisfactory.

After two hours and 10 posts of fruitful discussion, we were approaching the solution, and at that moment the thread was locked for about 14 hours for review due to a possible conflict with the AI policy. I was quite worried that the members trying to help me would abandon the thread, but fortunately they didn't. They gave me food for thought, which even allowed me to compose a proof showing exactly where the intuition leading to the paradox went wrong.

The second case:
Chestermiller said:
Even in an ideal gas, the molecules collide with each other to exchange energy. What is the mean free path of an oxygen molecule in a gas that is at room temperature and a pressure of 1 bar?
Petr Matas said:
ChatGPT says 71 nm
Chestermiller said:
In other words, each molecule experiences a multitude of collisions and energy transfers per unit time, which translates into significant heat conduction within the gas.
Vanadium 50 said:
Tell us again how you're not using an AI?
Petr Matas said:
This was the second time 😉. I prefer leaving tedious but simple work to AI (while checking its answers, of course) to save time for tasks which are currently beyond the AI's capabilities. I mark the AI answers clearly whenever I use them. Or would you prefer me to conceal the use of AI? Or not use AI at all? What is the point?
As you can see, Chestermiller asked a rather rhetorical question. I saw no point in looking up the formula and the values to be plugged in and doing the calculation myself, but the result was needed to allow us to move forward. So I asked ChatGPT, verified that the answer was in agreement with my expectation, cited the numeric result with attribution and provided a link to the ChatGPT discussion.
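As an aside, the 71 nm figure is easy to sanity-check without any AI. A minimal sketch of the standard hard-sphere estimate - the 293 K room temperature and the ~0.36 nm O2 kinetic diameter are assumed, commonly tabulated inputs, not values from this thread:

[CODE]
# Hard-sphere mean free path: lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0           # assumed room temperature, K
p = 1.0e5           # pressure, Pa (1 bar)
d = 3.6e-10         # assumed O2 kinetic diameter, m (~0.36 nm)

mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"mean free path ~ {mfp * 1e9:.0f} nm")  # prints ~ 70 nm
[/CODE]

This lands at about 70 nm, consistent with the answer ChatGPT gave.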

Although the thread was not locked this time, I later found that my carefully attributed four-word reply to the trivial question violated the current explicit ban on AI-generated replies.

I am afraid that overly strict rules won't prevent people from using AI, but will rather push them to conceal its use, and I am sure this is not what we want. These days, AI-generated text is becoming indistinguishable from human-written text, which makes our policies unenforceable. We should certainly avoid motivating people to conceal the use of AI.
 
  • Like
Likes PeroK
  • #48
Perhaps @Greg Bernhardt can get XenForo to add an AI quote feature in addition to the normal quote box, in which we can specify the AI engine/version, the prompt used and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iA Writer, as an example) allow you to mark text as coming from an AI vs a human and display it in a different color.
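Purely as an illustration of the idea (no such tag exists in XenForo today; the name and attributes below are invented), the markup for such a box might look something like this:

[CODE]
[AIQUOTE engine="ChatGPT" version="GPT-4" prompt="What is the mean free path of O2 at room temperature and 1 bar?"]
About 71 nm.
[/AIQUOTE]
[/CODE]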
 
  • Like
  • Skeptical
  • Informative
Likes DaTario, pines-demon, russ_watters and 4 others
  • #49
Can we have the same thing for the Magic 8 Ball?
 
  • Haha
Likes berkeman
  • #50
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
If there is ever a version trained to explain science or to be a teacher - yup. If it actually works, then a different quote box is a good idea.
 
  • Like
Likes russ_watters
  • #51
Rive said:
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
If there is ever a version trained to explain science or to be a teacher - yup. If it actually works, then a different quote box is a good idea.
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
 
  • Like
  • Informative
Likes Hornbein and renormalize
  • #52
Even if AI is perfected, we will always need a second opinion. Machines fail, and when they do we need to decide what to do, so a second opinion can help.

The danger is who will be giving the second opinion: a person, another AI, or even the same AI under a different guise. I can imagine the day when there are a few major players providing the AI service to a lot of mom-and-pop services that are specialized in some way.

One dark example is the funeral business. There are many "family" owned funeral parlors that sold their name to one of the major players from Canada. While they run the business, they do so using the "family" name.

Unsuspecting people who have used the funeral parlor for past funerals have no idea what transpired and think they are dealing with the same kind family.

https://www.memorials.com/info/largest-funeral-home-companies/
 
Last edited:
  • #53
Rive said:
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
Even if it is now, we do not know what is coming in the near future. The trouble with AI is the ease of generating tons of BS. How about requiring users either to understand and check the AI answers before posting them, or to explicitly declare that the answer has not been verified? Although attribution is crucial, I think that we should focus on the quality of posts rather than their origin.
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
AI is just another tool and it is difficult to predict what uses it will have in the future. If you buy a new CNC milling machine, I am sure you won't throw away all the other tools in your workshop.
 
  • #54
Petr Matas said:
I think that appropriate AI use with attribution and ideally a link to the AI chat should be allowed and the current AI policy should be reviewed. In my recent thread, I got into conflict with it unknowingly twice (for a different reason each time), but I believe that my AI use was legitimate and transparent.
I'll be frank about this: yours overall was a difficult case that was on the fuzzy line of what I'd consider acceptable, with or without AI. We get a fair number of questions like "ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'. Where can I publish it and how long do I have to wait to get the Noble Prize?" Those are obviously bad.

Answering a straightforward semi-rhetorical question? Ehh, I'm ok with it. Interestingly, Google's AI answers 155 nm, but it appears the ChatGPT answer was the right one.

My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended. Your initial paradox in fact wasn't (they never are), but the answer to the "paradox" was that you were correct about reality and just missed that physics does indeed point us there. You seemed to be very disappointed by that answer and sought further discussion of an issue that appeared to several contributors to be resolved (so we dropped out). ChatGPT is fine with an endless rabbit hole, but humans are not. I recognize that this will mean PF losing some traffic to that sort of question, but I'm ok with that (not sure if Greg is...).

And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary? If the answer is yes, then can you see how maybe it would have been better to come to us first? Both better for you and less annoying for us to clean up the mess?
 
  • Like
Likes PeterDonis
  • #55
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
There may not be one. But until then, do we want our role to just be ChatGPT's janitor?
 
  • Sad
Likes PeroK
  • #56
jedishrfu said:
Perhaps @Greg Bernhardt can get xenforo to add an AI quote feature in addition to the normal quote box where we can select the AI engine/version, the prompt used and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iaWriter as an example) allow you to mark text as coming from an AI vs a human and displays it in a different color.
A special AI quote feature is just telling others: "Verifying the exactitude of this quote is left to the reader as an exercise ... because I did not bother to do it myself."

At the risk of repeating myself:
jack action said:
I still don't understand what this fixation is about who - or what - wrote the text.

I'm asking the question again: If it makes sense and the information is verifiable, why would anyone want to delete it?
And - again - there is a very clear PF rule about what is an acceptable source:
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):

https://mjl.clarivate.com/home

If someone obtains a piece of information via an AI engine, why wouldn't they be able to corroborate this information with an acceptable source? Heck, Wikipedia can usually cite its sources, so why can't people ask the AI engine to find its sources for them as well? Once you have a reliable source, who cares if you found it first with Wikipedia or ChatGPT?

There is a limit to laziness.
 
  • #57
russ_watters said:
"ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'."
I see. Tons of easily generated BS.

russ_watters said:
My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended.
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong, and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I had made the mistake in the application of the laws of motion. In the end I found that answer myself, but I wouldn't have been able to do it without your help.

russ_watters said:
ChatGPT is fine with an endless rabbit hole, but humans are not.
I see. Each of ChatGPT's answers spurs several new questions, and it is happy to answer them. Oh, I like that so much... Isn't that a reason to avoid bothering people unless it is necessary?

russ_watters said:
And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary?
No. It was just unable to help me, unlike in several previous cases.
 
  • Like
Likes PeroK
  • #58
jack action said:
why can't people ask the AI engine to find its sources for them as well?
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
 
  • Like
Likes russ_watters
  • #59
Petr Matas said:
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
It may not provide ITS source but it might provide A source.
 
  • Informative
Likes Petr Matas
  • #60
jack action said:
It may not provide ITS source but it might provide A source.
I'm not sure you're understanding. ChatGPT itself is not searching the internet for sources. It has a "model" of what sources look like. So if you ask it for a source it will make one up. Sometimes they match real ones, but often (usually) they don't.
 
  • Skeptical
Likes PeroK
  • #61
russ_watters said:
So if you ask it for a source it will make one up. Sometimes they match real ones, but often (usually) they don't.
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.

For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer. If I ask it again for a source and it answers that The International System of Units, 9th edition (2019) is one of them, then I can verify it and state that value on PF - with my source if anyone challenges me. I do not need to state that ChatGPT helped me get the information quickly.

Of course, if I get a phony reference, I should investigate more - with ChatGPT or another tool - to make sure I have the correct information.

I can also ask Google and get a nice answer similar to what ChatGPT would give:
https://www.google.com/search?q=What+is+the+standard+acceleration+due+to+gravity%3F said:
9.80665 m/s²

A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag.
The fun thing is that Google gives a reference for it which, in this case, is Wikipedia. I do not have to ask another question to get it. Wikipedia may not be a reliable source but you can explore it further and, going from one reference to another, find this final one.

Once I've done all of that, am I going to say "Google says the acceleration due to gravity is 9.80665 m/s²" or, more directly, "According to the SI Brochure, 9th edition (2019), the acceleration due to gravity is 9.80665 m/s²"?

Why would ChatGPT be treated differently from Google?
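As a side note, a defined constant like this one can even be cross-checked programmatically; a minimal sketch, assuming SciPy is available:

[CODE]
# Cross-check the quoted value against SciPy's constants module,
# which carries the conventional standard gravity of 9.80665 m/s^2.
from scipy import constants

print(constants.g)             # 9.80665
assert constants.g == 9.80665  # exact, since the value is defined, not measured
[/CODE]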

[Attached image: "change my mind" meme]
 
  • Informative
Likes Petr Matas
  • #62
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
A common source of both understanding and confusion here is that we can often offer several slightly different answers or opinions simultaneously.

Also, we can offer heartfelt and sincere consolations, praises, understandings and facepalms.

P.S.: and punchlines too.
 
  • #63
jack action said:
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
Perhaps it seems like a minor quibble, but basically every link/reference that Google gives you will go to a real website/paper, whereas many links/references that LLMs give you don't exist. But sure, all need to be checked to ensure they exist and are valid.

But I submit that people are often using LLMs in place of real research and reading because they are lazy, so they are less likely to be doing that checking.
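A minimal sketch of the kind of existence check meant here (my own illustration, not anything PF prescribes; a HEAD request only confirms the URL resolves, so the source must still be read):

[CODE]
# Check that a cited URL at least resolves; a live link can still be
# cited for a claim it doesn't support, so reading it is still required.
import urllib.request

def link_exists(url: str, timeout: float = 10.0) -> bool:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

print(link_exists("https://mjl.clarivate.com/home"))  # the journal list cited earlier
[/CODE]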

jack action said:
For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer.
Like I said, for narrow, straightforward, mainstream questions (and @Petr Matas asked one), I don't have a problem with it. We already tell people they should read the wiki or Google before asking us about such questions/issues.

jack action said:
Why would ChatGPT be treated differently from Google?
For the narrow use case of a specific, straightforward, mainstream question I agree that PF shouldn't treat it differently than google. But this is an extremely limited use case and my concerns are about some of the bigger(functionally) ones. Perhaps a list of my opinion on different use cases:

1. Specific, straightforward, mainstream question = ok
2. Translation = ok
3. Explain a concept/summarize an article such as instead of reading a whole wiki article = ok (lazy, but ok)
4. "Help me articulate my question" = iffy
5. "Help me develop my new theory" = not ok.
6. "Help me down this rabbit hole" = not ok.
7. "I copied and pasted your response into ChatGPT and here's what it replied" = not ok.
 
Last edited:
  • Like
Likes jack action, Greg Bernhardt and Petr Matas
  • #64
Petr Matas said:
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I made the mistake in the application of the laws of motion. In the end I found that answer myself, but I wouldn't be able to do it without your help.
Since you are so interested in using ChatGPT as a technical reference, hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?

From a recent post of mine...

berkeman said:
Unfortunately, Google is using AI to "summarize" search results now, and returns that at the top of the search results list. Sigh.

I did a search today to see what percentile an 840 PGRE score is, and here is the AI summary. See any math issues in this?

[Attachment: screenshot of the Google AI search summary]
 
  • #65
jack action said:
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK.
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.

jack action said:
I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
 
  • Like
Likes berkeman and russ_watters
  • #66
jack action said:
Why would ChatGPT be treated differently from Google?
Everything in my post #65 just now would apply to Google as well.
 
  • #67
PeterDonis said:
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.


It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
PeterDonis said:
Everything in my post #65 just now would apply to Google as well.
I think we are saying the same thing.

I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason, since it is already taken care of by a more general statement. Introducing a special AI quote feature seems just ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
 
  • Like
Likes Petr Matas
  • #68
russ_watters said:
6. "Help me down this rabbit hole" = not ok.
Why not? I can have a discussion with AI, which can be interesting, fun and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not an AI-generated text), and here is the discussion with ChatGPT for your reference."

Of course, I should be required to summarize the result and the arguments myself, so that you don't need to read the discussion at all. Otherwise you would just argue with the AI, which really doesn't make sense.

berkeman said:
Since you are so interested in using ChatGPT as a technical reference,
I am not. I agree that "ChatGPT said so" is an invalid argument. Nevertheless, even though it can't be used as a reference, it can be used as a tool.

berkeman said:
hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?
I am not. I have already got a ton of wrong AI replies myself.

PeterDonis said:
Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer.
It is a de facto unsourced statement, like a majority of posts at PF. All these are OK until challenged. Then one has to provide a reference.
 
  • #69
Petr Matas said:
Why not? I can have a discussion with AI, which can be interesting, fun and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not an AI-generated text), and here is the discussion with ChatGPT for your reference."
Humans tend not to like open-ended questions exactly because they don't end. They are time-consuming (read: time-wasting) and never get to a conclusion. In the case of your initial question it was even more annoying because it had a clear/obvious end, and then it didn't. That's why people dropped out of the thread.

Or to put it another way: for me the "Eureka!" moment of teaching is enormously satisfying. To have it brushed aside is equally deflating.
 
  • #70
jack action said:
I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason, since it is already taken care of by a more general statement. Introducing a special AI quote feature seems just ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
If I google an answer, I pretty much always lead with "Google tells me..." To me it is at the very least polite to tell people who they are speaking to, and it saves time if a discussion of that point is needed ("How did you calculate that?"). If the issue is "ChatGPT helped me formulate this question...", it can help us understand why the question is such a mess and respond accordingly: "That's gibberish, so please tell us what you asked ChatGPT and what your actual understanding of the issue is..."

For questions that are just crackpot nonsense, it indeed doesn't matter whether the question/post was generated by a person or an LLM for the purpose of deciding what to do with it. But AI may increase the moderator workload by making it easier to generate such posts.
 
  • Like
Likes Petr Matas
