Is ChatGPT's contribution to technical discussions off-topic?

  • #1
Frabjous
Gold Member
In a technical thread, is “here's what ChatGPT says” on the topic of the thread considered on topic or off topic?
 
  • #2
In my opinion, yes. If you're thinking of the thread I'm thinking of (Sabine on GR), I think its only value is demonstrating how bad verbal reasoning is at physics. I reported it as not helping the thread.
 
  • #3
We are still trying to form a long-term workable policy on ChatGPT-generated content. However, a "here is what ChatGPT says" reply is little more than a "Let me Google that for you" (https://letmegooglethat.com/) - OP could have asked ChatGPT themselves, and pointing this out or doing it for them adds no value - so it is unlikely to fly even without a ChatGPT-specific policy.
 
  • #4
Ibix said:
In my opinion, yes. If you're thinking of the thread I'm thinking of (Sabine on GR), I think its only value is demonstrating how bad verbal reasoning is at physics. I reported it as not helping the thread.
LOL, I reported it as off topic, but felt that the subject needed to be discussed here.
 
  • #5
Nugatory said:
OP could have asked ChatGPT themselves
Can ChatGPT even be trusted with queries in physics? It might give a confident-looking wrong answer. Posting what ChatGPT says regarding something technical should be strictly avoided, in my opinion. There is a difference between Google search and ChatGPT: in the latter, we don't know the source of information because it is predicting the next word based on its training. Google search is safer than ChatGPT, I think.
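To make that last point concrete, here is a toy sketch of next-word prediction. It is purely illustrative and hypothetical - ChatGPT is a huge neural network, not a bigram table - but it shows why the output carries no citations: generated text is just a sample from training statistics, with no source attached to any word.

```python
# Hypothetical toy model, for illustration only - NOT ChatGPT's actual
# architecture. It "predicts the next word based on its training" by
# sampling from word-frequency statistics, with no record of sources.
import random
from collections import defaultdict

training_text = (
    "light bends near massive objects "
    "light travels in straight lines "
    "light bends in a gravitational field"
)

# Record which words follow which in the training text.
following = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(start, max_words=6):
    """Generate text one word at a time by sampling likely continuations."""
    output = [start]
    for _ in range(max_words):
        candidates = following.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # no citation, just statistics
    return " ".join(output)

print(generate("light"))  # e.g. "light bends in a gravitational field"
```

Nothing in this mechanism records where any claim came from, and different runs can assert different things - which is why, unlike a search result, you cannot trace the answer back to a source.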
 
  • #6
Wrichik Basu said:
Google search is safer than ChatGPT, I think.
I think so too. And seeing as how we don't allow lmgtfy replies, the case for disallowing ChatGPT replies is even stronger.
 
  • #7
Worth noting that we've already experienced a ChatGPT bombing attack, where a new account posted ChatGPT-generated text into a half-dozen or so open threads. All of these posts would have been rejected (ranging from content-free to outright misinformation) if generated by a human and we have been treating them accordingly - but the attacker was able to generate them more quickly than users could report them and the logged-in mentors could handle them.

The attacker was banned, prompting the mature and adult response "Shut the **** up you jealous little man because you dont have the answers" (banned users can still PM the mentors - it's part of the appeal process) which leads me to impute malicious intent to the whole episode.
 
  • #9
malawi_glenn said:
I said that we're trying to formulate a policy statement here....

But in the meantime, the ChatGPT-generated posts so far have all been removable under the existing rules even if they had been generated by a human. And do remember that this thread is discussing the least helpful sort of ChatGPT posts: a reply that is just the ChatGPT-generated text with no value added by the poster.
 
  • #10
1. People are responsible for what they post, whether the ultimate source was ChatGPT or a fortune cookie.
2. If someone cuts and pastes from ChatGPT and identifies it as such, the members should critique it as such, just as if he had posted "I got this from a fortune cookie".
3. If someone cuts and pastes from ChatGPT and does not identify it as such, this is pretty clearly antisocial behavior and the Mentors should deal with it. Ideally by showing the poster the door.
4. Speaking of fortune cookies, my favorite fortune is "that wasn't chicken."
 
  • #11
Wrichik Basu said:
Can ChatGPT even be trusted with queries in physics? It might give a confident-looking wrong answer. Posting what ChatGPT says regarding something technical should be strictly avoided, in my opinion. There is a difference between Google search and ChatGPT: in the latter, we don't know the source of information because it is predicting the next word based on its training. Google search is safer than ChatGPT, I think.
For how long?
Query-based search engines may be on the decline if the chatbots, and/or their bosses, soon figure out a way to make money out of the 'new' way of sharing information.
 
  • #12
256bits said:
For how long?
I think it stays that way as long as the human who started the query is at least interested in finding/checking/looking through the information among the various presented results.
ChatGPT, in this regard, is akin to presenting only the first result (which would often be some ad, of course) in an unquestionable way, like a con artist would.

Nugatory said:
we don't allow lmgtfy replies
Well. Duh. In some cases I do the Google work for members when a promising-looking topic gets stuck without answers.
Is that OK? Or am I guilty? (Serious question!)
 
  • #13
Wrichik Basu said:
Can ChatGPT even be trusted with queries in physics? It might give a confident-looking wrong answer. Posting what ChatGPT says regarding something technical should be strictly avoided, in my opinion. There is a difference between Google search and ChatGPT: in the latter, we don't know the source of information because it is predicting the next word based on its training. Google search is safer than ChatGPT, I think.

Isn't it sometimes helpful to point out when/how ChatGPT gives wrong answers, especially in a forum where people are capable of knowing the difference?

In the thread that triggered this discussion, the topic being discussed was the issue of potentially misleading or incorrect information being presented to the general public. And the particular physics topic happened to be one that perfectly demonstrated how ChatGPT can mislead. I completely understand deleting it as a basic rule, but don't you think the world is less informed because of it rather than more informed?

Like it or not, a lot of people are trying to learn physics from ChatGPT in the wild and ignorantly assuming it is reliable. It is probably a matter of time before we start getting tons of posts from beginners asking whether what ChatGPT said is correct. If PF isolates itself from a large part of the modern de facto education system, it arguably would be missing out on making the positive impact that it could.
 
  • #14
Jarvis323 said:
Isn't it sometimes helpful to point out when/how ChatGPT gives wrong answers
Even debunking the pseudoscience of living people is not really encouraged here: should debunking the output of linguistico-statistical neuro-gibberish con-engines be allowed?

P.S.:
Jarvis323 said:
It is probably a matter of time...
Maybe. Honestly, I don't know. But for now, at the very least, I would limit this kind of activity to Insights, or make it 'at mentor discretion only', like philosophy discussions.
 
  • #15
Jarvis323 said:
In the thread that triggered this discussion, the topic being discussed was the issue of potentially misleading or incorrect information being presented to the general public. And the particular physics topic happened to be one that perfectly demonstrated how ChatGPT can mislead. I completely understand deleting it as a basic rule, but don't you think the world is less informed because of it rather than more informed?
I was following that thread. Your post changed the direction of the conversation, so for me it was jarring and off topic.

I think we need guidelines for when ChatGPT is appropriate; otherwise, a more draconian solution will be applied, which will limit discussions that PF should be having.
 
  • #16
Jarvis323 said:
Isn't it sometimes helpful to point out when/how ChatGPT gives wrong answers, especially in a forum where people are capable of knowing the difference?
...
Like it or not, a lot of people are trying to learn physics from ChatGPT in the wild and ignorantly assuming it is reliable. It is probably a matter of time before we start getting tons of posts from beginners asking whether what ChatGPT said is correct. If PF isolates itself from a large part of the modern de facto education system, it arguably would be missing out on making the positive impact that it could.
Abridged by me

It appears that we have several questions in front of us.

Should the use of ChatGPT for answering technical questions be allowed?
Definitely not. It should be treated as spam.

Should quoting ChatGPT output on the main topic of the thread be allowed?
No. In this case, ChatGPT, as a source of information, should not be allowed, just like we have a list of banned sources. It's no use wasting time and energy on correcting what ChatGPT has said, because we cannot change it.

For many threads, we often ask the OP what research they have done. Should using ChatGPT qualify as a research effort?
This is a question that, I think, needs some discussion in the community before a policy is framed.

In any case, we can always be fooled if someone gets an answer from ChatGPT, reshapes it, and posts it without mentioning the source.
 
  • #17
Wrichik Basu said:
Abridged by me

It appears that we have several questions in front of us.

Should the use of ChatGPT for answering technical questions be allowed?
Definitely not. It should be treated as spam.

Should quoting ChatGPT output on the main topic of the thread be allowed?
No. In this case, ChatGPT, as a source of information, should not be allowed, just like we have a list of banned sources. It's no use wasting time and energy on correcting what ChatGPT has said, because we cannot change it.

For many threads, we often ask the OP what research they have done. Should using ChatGPT qualify as a research effort?
This is a question that, I think, needs some discussion in the community before a policy is framed.

In any case, we can always be fooled if someone gets an answer from ChatGPT, reshapes it, and posts it without mentioning the source.
I think that a policy should also explicitly allow conversations about ChatGPT. It is a technology that is going to be around, so it is a legitimate topic of discussion.
It would be nice if the proponents could provide other affirmative guidelines.
 
  • #18
Frabjous said:
I think that a policy should also explicitly allow conversations about ChatGPT.
I expect general agreement with that - IMO it is clearly appropriate.

I can also imagine an ongoing humor thread in GD: "Look at the stupid/silly/surprising thing the AI did this time!" It will pose the same sort of moderation burden as the ongoing jokes threads - occasional abuse, well within the envelope of what we can handle.
 
  • #19
I agree there is value in discussing what ChatGPT says about physics questions. The answers in the Sabine thread were utter nonsense, and that's interesting for what it reveals about the bot. But I don't think a thread where someone is asking a question is an appropriate place, because a digression about someone (or something <plays dramatic music>) else's misunderstanding is probably not helping the OP. And without feedback, it doesn't help the bot either.
 
  • #20
Vanadium 50 said:
1. People are responsible for what they post, whether the ultimate source was ChatGPT or a fortune cookie.
2. If someone cuts and pastes from ChatGPT and identifies it as such, the members should critique it as such, just as if he had posted "I got this from a fortune cookie".
3. If someone cuts and pastes from ChatGPT and does not identify it as such, this is pretty clearly antisocial behavior and the Mentors should deal with it. Ideally by showing the poster the door.
4. Speaking of fortune cookies, my favorite fortune is "that wasn't chicken."
The chances of getting real stuff from ChatGPT are even slimmer than finding it in a fortune cookie. But hey, you can totally use ChatGPT to whip up a fortune cookie, which is pretty fitting - I stumbled upon a site that does just that.
 
