PF Developing Policies for ChatGPT?

  • Thread starter Frabjous
In summary: ChatGPT is a chatbot that is often used to generate answers to science questions. It is not an acceptable source on Physics Forums, and it is not reliable at providing accurate information.
  • #1
Frabjous
Gold Member
Is PF developing policies for ChatGPT? I just saw an answer that used it, with acknowledgement.
 
  • #2
Link?
 
  • #4
I'm pretty sure Stack Overflow is attempting to ban it. I think we should discourage it, but I am unsure how to "ban" it here. Well worth a discussion. At a minimum, content from ChatGPT should be quoted.
 
  • #5
I think that while it can be appropriate, it will be hard to moderate. There will be plenty of instances where it only muddies the waters. I believe several topics are already forbidden here for that very reason.

On the other hand, detecting unattributed quotes will be a bear.
 
  • #7
By the way: on PF we often recommend that people search with Google or other tools before posting. I have seen ChatGPT described as a new way to search the Internet. I think that's accurate.

Perhaps we should start recommending that users do research using Google and/or ChatGPT before posting on PF.
 
  • #8
Isn't there already one?

https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):

https://mjl.clarivate.com/home

Use the search feature to search for journals by words in their titles.

In recent years, there has been an increasing number of "fringe" and Internet-only journals that appear to have lax reviewing standards. We do not generally accept references from such journals. Note that some of these fringe journals are listed in Thomson Reuters. Just because a journal is listed in Thomson Reuters does not mean it is acceptable.

References that appear only on http://www.arxiv.org/ (which is not peer-reviewed) are subject to review by the Mentors. We recognize that in some fields this is the accepted means of professional communication, but in other fields we prefer to wait until formal publication elsewhere. References that appear only on viXra (http://www.vixra.org) are never allowed.

  • Specifying ChatGPT as a source is against this policy;
  • Using ChatGPT without stating it and not being able to specify a valid source is also against this policy.
@Demystifier simply broke the PF rules in the example thread. Normally people ask for sources, but somehow @Greg Bernhardt chose to emphasize only the fact that it came from ChatGPT. I wonder if he would have done the same thing had the quote come from the Bible?

I can answer most questions asked on this forum by using Google. But if I cannot find another valid source to back up my answer, the answer is invalid.
 
  • #9
anorlunda said:
By the way: on PF we often recommend that people search with Google or other tools before posting. I have seen ChatGPT described as a new way to search the Internet. I think that's accurate.

Perhaps we should start recommending that users do research using Google and/or ChatGPT before posting on PF.
That presupposes a certain level of accuracy and transparency.
 
  • #10
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend using it to learn about anything, either. For entertainment purposes: sure, whatever.
 
  • #11
mfb said:
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question.
In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the Internet? Unless it can cite sources, I'd tend to suspect it of being more likely to talk rubbish than not on science topics.
 
  • #12
Ibix said:
Unless it can cite sources
It would be a very worthwhile improvement if chatbots could cite sources. But that may be easier said than done.

Note that Google has also started synthesizing answers, shown higher up on the page than the links. That blurs the boundary between chatbots and Internet search.

I'll stick my neck out and predict the near future: the boundaries between AI and non-AI resources will continue to blur, making it increasingly difficult to frame any pro- or anti-AI policies, or even to define what is AI and what is not.
 
  • #13
anorlunda said:
It would be a very worthwhile improvement if chatbots could cite sources.
For human posters too. Grumblegrumblemuttergrumble.
 
  • #14
Ibix said:
In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the Internet? Unless it can cite sources, I'd tend to suspect it of being more likely to talk rubbish than not on science topics.
It seems to have some weights for how credible different inputs are. I don't think it can cite anything, because the answer isn't coming from a small set of specific sources; it's coming from the overall knowledge it accumulated during training.
 
  • #15
I think we should add to the rules an explicit ban on posting chatbot-generated text. That won't make it go away, of course, but it will give us justification for acting when we do recognize it.
(There would be an exception when a chatbot itself is the topic of discussion, and I can imagine a long-running "look at what the bot did THIS time!" sort of humor thread.)

It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection. I am not long-term optimistic about watermarking because there will be more bots, not all will choose to watermark, and I'd expect that watermarks created by one bot can be removed by another. Nonetheless, the information ecosystem (that's everything from letters to the editor to letters to your legislative representative, from student essays to journal submissions, and pretty much every online discussion everywhere) will eventually evolve some way of dealing with this stuff, just as it did with spam. This is, I think, an argument for putting the policy in place early even with imperfect enforcement.
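
For the curious, here is a toy sketch of the recently proposed "green list" watermarking idea (an illustration of one published scheme, not a description of what any actual bot does; treating words as tokens and the 50/50 split are simplifying assumptions). Detection just asks whether suspiciously many tokens land on pseudo-random "green" lists, which is also why a second bot that paraphrases the text destroys the signal:

Code:
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary that is "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudo-randomly assign `token` to a green or red list, seeded by the
    # previous token. Real schemes hash token IDs from a model's vocabulary;
    # plain words are used here only to keep the toy readable.
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] < 256 * GAMMA

def watermark_z_score(tokens):
    # Count tokens that land on their step's green list, then compare that
    # count with the GAMMA fraction expected of unwatermarked text.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

# Ordinary human text should score near zero; a watermarking bot would bias
# its sampling toward green tokens, pushing the score well above zero.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))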
 
  • #16
I'm not likely to say anything that hasn't already been said, but I just looked at the post Frabjous was referring to, with the ChatGPT reference.

The problem I have with using this as some kind of "official" reference is that, well, it might be wrong. I mean, it sounds really good. And, mind you, I'm not saying that humans can't make mistakes when they post. (Heaven knows I do that all too often!) But we expect the people here to make the occasional mistake. That's why we want references that have some kind of (science-based) community approval. I might be willing to look something up on it, but since we can't know just how much ChatGPT actually knows, I wouldn't want to trust it as a recognized source.

Just my two cents.

-Dan
 
  • #17
Nugatory said:
I think we should add to the rules an explicit ban on posting chatbot-generated text. That won't make it go away, of course, but it will give us justification for acting when we do recognize it.
If a credible source cannot be provided, isn't that a good justification for acting?

Nugatory said:
It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection.
Would you delete an answer from a chatbot that is true and easily verifiable? What would be the point?
 
  • #18
Just to be clear, we're talking about replies as well as OPs.

Nugatory said:
It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection.
Stating a policy, or putting it in the guidelines, is OK. Making it known that we don't like AI text is better than silence. But the policy should be: no AI-generated text without citing the AI source.

Even then, I'm skeptical. It is entirely foreseeable that people who have trouble with English will be able to paste their clumsy, but original, text into an AI and get well-phrased text in return. The bot might even throw in translation along with the quality improvement. That's just a spell checker or grammar checker on steroids. So it is also foreseeable that the line between admirable and deplorable AI bots will itself get blurred. For sure, someone will claim that their spell checker is an AI.
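
As a concrete sketch of that "grammar checker on steroids" workflow (the client library, model name, and prompt here are my assumptions for illustration, not a recommendation of any particular service), the whole thing is a few lines:

Code:
import openai  # assumes the OpenAI Python client is installed

openai.api_key = "sk-..."  # placeholder; a real API key would go here

clumsy = "i has question about twin paradox, why one twin is more old than other?"

# Ask the model to polish the wording without changing the meaning.
response = openai.Completion.create(
    model="text-davinci-003",  # assumed model name
    prompt=(
        "Rewrite the following question in clear, polite English without "
        "changing its meaning:\n\n" + clumsy
    ),
    max_tokens=100,
    temperature=0.2,  # low temperature: faithful rewrite, minimal invention
)
print(response.choices[0].text.strip())

The output would read like the poster's own question, just cleaner, which is exactly why no detector or watermark is likely to flag it.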

But any attempt at enforcing an AI policy by deleting posts will likely be disastrous unless there is powerful evidence, like a watermark, that the text came from one of the "deplorable" AIs.

By the way, if we change the guidelines, how is that announced to the entire PF membership? I don't recall ever receiving notice that the guidelines changed.
 
  • #19
anorlunda said:
By the way, if we change the guidelines, how is that announced to the entire PF membership? I don't recall ever receiving notice that the guidelines changed.
I believe I recall instances when 'Feedback & Announcements' has been used, and Greg has posted a PF-wide announcement for substantial revisions/updates to the PF Guidelines in the past.
 
  • #20
I just tried an experiment. I asked ChatGPT the following.

I want to ask a physicist about the twin paradox, but I have trouble expressing the question. What is the best way to ask the question?
The reply said:
To ask a physicist about the twin paradox, you might try framing your question as follows:

"Could you explain the twin paradox in special relativity? How does the difference in time experienced by the two twins arise, and what does this tell us about the nature of time and motion?"

This question clearly states the topic you are interested in (the twin paradox) and asks for an explanation of the key concepts involved (time dilation and relative motion). It also invites the physicist to provide additional insights or elaboration on the significance of the paradox.

Should that use be banned?
 
  • #21
anorlunda said:
It is entirely foreseeable that people who have trouble with English will be able to paste their clumsy, but original, text into an AI and get well-phrased text in return.
That is indeed a possibility, and one that I would welcome. If this silver lining does develop, it won't show up just here at PF; it will show up across the entire information ecosystem (BTW, I just made that term up and I don't particularly like it) - for example, ESL students will want to use it for schoolwork. So I think we can wait and see; there's no reason for PF to be a leading innovator here.
 
  • #22
Perhaps a short-term approach is an explicit mention in the rules that we understand that AI support can be a good thing, but that it can also be abused and can produce fluent garbage (etc.). Therefore we aren't banning it, but we explicitly note that what mentors judge to be abuse of AI-generated content may be deleted and possibly infracted. And we'd note that the policy is under active review while we see how it all pans out.
 
  • #23
I still don't understand this fixation on who - or what - wrote the text.

I'm asking the question again: If it makes sense and the information is verifiable, why would anyone want to delete it?
 
  • #24
jack action said:
and the information is verifiable
Well, that's the question, isn't it?

I think the problem with AI-generated text is that it makes it very easy for me to produce something written confidently and clearly, yet completely wrong. An automated and improved way of copying and pasting from random papers found via keyword search, perhaps. That's why I was suggesting a rule against abuse of AI-generated text: if people post verifiable text, that's fine, but if they repeatedly post authoritative-sounding nonsense, then we explicitly note that "but ChatGPT said it" is not a defense.
 
  • #25
Maybe some of that promising future is already here. A quick test of ChatGPT's ability to assist with better writing is:
[screenshot of a ChatGPT writing-assistance test, omitted]

I think we need to enormously broaden our horizons about how people will use the bots and for what purposes. Factual questions and answers are just one of nearly infinite possibilities.

ChatGPT has only been out for a month. Jumping to conclusions now would be like saying, "I think there is a world market for maybe five computers."
 
  • #26
anorlunda said:
Maybe some of that promising future is already here. A quick test of ChatGPT's ability to assist with better writing is:
[screenshot omitted]

I think we need to enormously broaden our horizons about how people will use the bots and for what purposes. Factual questions and answers are just one of nearly infinite possibilities.

ChatGPT has only been out for a month. Jumping to conclusions now would be like saying, "I think there is a world market for maybe five computers."
I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments and answers.
 
  • #27
Ibix said:
That's why I was suggesting a rule against abuse of AI-generated text: if people post verifiable text, that's fine, but if they repeatedly post authoritative-sounding nonsense, then we explicitly note that "but ChatGPT said it" is not a defense.
Again, this is already covered by the existing PF rules. Here are reasons given in actual posts, from actual closed threads:

https://www.physicsforums.com/threads/nuclear-fusion-and-anti-nuclear-technology.1047161/post-6820264 said:
please keep in mind that thread starts in the technical PF forums need to be based on the mainstream literature, not opinions, and you should always include links to reputable sources when starting threads in the technical forums.
https://www.physicsforums.com/threads/hello-from-portugal.965451/post-6127308 said:
Please do your own search, and if you can find sources that meet the criteria for PF threads (peer-reviewed mainstream scientific articles), then start a thread in the technical forums with a link to that article, and ask your specific question.

Why wouldn't these reasons apply to a ChatGPT post?

As far as I know, ChatGPT is not "peer-reviewed mainstream scientific literature".
 
  • #28
Frabjous said:
I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments and answers.
You're really missing the point. Banning AI chatbot prose because it might be abused is like banning keyboards because they might be used to type nasty words.

Rules must focus on the abuse, not the tools.
 
  • #29
Frabjous said:
The concern is not people improving their English
I think that's a concern too. Online translators whose purpose is ... well, an honest attempt at translation do exist. Post-editing the output of a clumsy human translation or a clumsy online translator with a chatbot just muddles the issue; it can't actually fix the underlying problem.

In such cases I would rather work with the clumsy original.

Regarding ChatGPT usage in answers/questions: I too think that the general citation and source rules are applicable and, for the time being, sufficient (if kept in mind and actually applied as needed).
 
  • #30
I've used ChatGPT here for some frivolous poetry writing and for answering a react.js question. Its responses were quite impressive.

In the react.js case, I asked for citations as well. It answered the question and provided reasonable citations. I added some commentary and my own citation to show I did some research as well.

I don't know whether ChatGPT's responses are guaranteed to match its citations, but I have to say it is a great search feature, reminiscent of the computers on the original Star Trek.

I also have to say it’s a great search tool so far. I shudder to think how big business will muddy the waters with adverts and other such nonsense embedded within its answers.

Look at what Google has done with search, going the route of the AT&T Yellow Pages: it's sad that results are often intermixed with nonsense based on which keywords trigger which advertisements, a key part of Google's money-making operation.
 
  • #31
mfb said:
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend using it to learn about anything, either. For entertainment purposes: sure, whatever.
I recently played with ChatGPT.

I started a thread about using so-called "cheek microphones" in musical theater productions. ChatGPT replied that these mics have become commonplace on Broadway, but that it boils down to the director's preference and the available budget. Then I tried to generalize to operas, for which ChatGPT replied with almost the same text.

Yet I have virtually never seen (in the clips I have watched) Broadway actors wearing cheek mics. And from what I know, these mics are redundant in operas, since opera singers can produce loud sound with just the natural acoustics of the opera house.

PS: In Indonesia, cheek mics are always worn for musical numbers.
 
  • #32
We are very, very early in the game of forming policies and opinions about AI. In my opinion, it is not even a generational question. I think it will take 400 years or so for societies to sort out how to handle AI, and during those 400 years the AIs will change more rapidly than humans do.

As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers, who are also flawed and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
 
  • #33
400 years is a long time.
 
  • #34
anorlunda said:
As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers, who are also flawed and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
Humans are not flawed. They are the result of millions of years of evolution that gave them the capacity to adapt to an ever-changing environment.

The problem you are hoping to solve with your statement is to determine the thin line between boldness and recklessness.

The former is the courage to take risks; the latter is taking unnecessary risks. One is a quality, the other a flaw. The only true way of determining which category an action falls into is to evaluate the end result. Personally, I think that as living beings gain life experience, they classify more and more decisions as reckless rather than bold, which leads them to do less and less until they do nothing. And that is why one must die and be replaced by a new - inexperienced - being that will see life as an exciting adventure rather than an impossible obstacle course.

AI will not be able to determine where that frontier is any better than we can - better than any living being on this Earth, for that matter. It's the randomness of life that makes it impossible. Even if all the probabilities say you will most likely die if you take an action, someone must try it once in a while to verify that this is still true. The more people try and don't lose, the clearer the path becomes, and more and more people can follow it. That is the only way to adapt to an ever-changing environment.

This is the true reason most of us don't want to ban drunk drivers: they don't always get it wrong, even if they don't always get it right. An example of an action that always has a bad consequence is drinking a cleanser made of pure ammonia. That is such a clearly reckless act that we don't even feel the need for a law forbidding it. Smoking cigarettes? Not every smoker dies of a smoking-related disease. Some smokers use it to cope with anxiety, which may give them a life they wouldn't have dreamed of otherwise. Is there another way that could be better? Maybe. But nature doesn't care. As long as life finds its way through, it's good enough.
 
  • #35
Frabjous said:
The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments and answers.
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
 
