Is ChatGPT Becoming Teachers' Favorite Tool for AI Writing?

In summary, ChatGPT is rapidly becoming teachers' favorite tool, and why not? Notably, Khan Academy uses its own version of ChatGPT as a math tutor.
  • #2
And why not? If I were teaching today, I would use it. It is interesting that Khan Academy uses its own version of ChatGPT as a math tutor.
 
  • #3
For teachers, it can assist with the mundane tasks they must do, like writing brief notes to parents about specific kids' issues.
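
For example, a note like that can be drafted in a few lines of code. Here is a minimal sketch, assuming the official openai Python package (v1+) with an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative, and the teacher still reviews and edits the draft before sending it:

Python:
# Minimal sketch: drafting a brief parent note with the OpenAI API.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the
# environment; model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_parent_note(student: str, issue: str) -> str:
    """Return a short, tactful draft; the teacher edits before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You draft brief, tactful notes from a teacher to a parent."},
            {"role": "user",
             "content": f"Write a three-sentence note to the parents of {student} "
                        f"about this classroom issue: {issue}"},
        ],
    )
    return response.choices[0].message.content

print(draft_parent_note("Alex", "talks during independent work time"))

The bot only produces the first draft; the human still signs off on what goes out.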

As an aside, there was a recent video short by a teacher about "teacher speak" that parents don't grasp: when the teacher says your kid is very sociable, it means they talk too much; when they say your kid is a born leader, it means they are too bossy…
 
  • #4
My complaint is that if a communication is generated by AI, it is unclear to the recipient whether it is a sincere and truthful representation of the sender's state of affairs (for lack of a better term).

Personally, I think that ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should clearly indicate that it was generated by AI, or which parts of it were.
Otherwise, even without any intent to deceive, people will blur and obfuscate one another's understanding of each other. That will ultimately leave us isolated, confused, and unable to collaborate effectively.
 
  • #5
Jarvis323 said:
My complaint is that if a communication is generated by AI, it is unclear to the recipient whether it is a sincere and truthful representation of the sender's state of affairs (for lack of a better term).

Personally, I think that ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should clearly indicate that it was generated by AI, or which parts of it were.
Otherwise, even without any intent to deceive, people will blur and obfuscate one another's understanding of each other. That will ultimately leave us isolated, confused, and unable to collaborate effectively.
From the article:
In January, the New York City education department, which oversees the nation’s largest school district with more than 1 million students, blocked the use of ChatGPT by both students and teachers, citing concerns about safety, accuracy and negative impacts to student learning.
Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's no difference between asking a bot "write me a two-hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

It's the same reason why, when discussing rule changes on PF, the default/starting position was that nothing had changed: the poster is responsible for the content either way.

I alluded to this in the other thread where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.
 
  • #6
Jarvis323 said:
Personally, I think that ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should clearly indicate that it was generated by AI, or which parts of it were.
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
 
  • #7
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
That is why I said "ideally". Obviously, no communication a corporation puts out can reasonably be expected to be sincere. I don't honestly believe that Qunol is the brand Tony Hawk trusts. There is a great deal of communication in our society that is obviously insincere and misrepresentative. But that isn't a good thing; it's a bad thing.

When you receive incentive-based communications written insincerely by bots or assistants, they are basically spam or junk mail.

I just don't think teachers should be sending out junk mail instead of authentic mail without disclosing it, merely because it makes the job easier.
 
  • #8
russ_watters said:
From the article:

Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's no difference between asking a bot "write me a two-hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

It's the same reason why, when discussing rule changes on PF, the default/starting position was that nothing had changed: the poster is responsible for the content either way.

I alluded to this in the other thread where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.

Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, and didn't grade or comment on any of the work?
 
  • #10
Jarvis323 said:
Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, and didn't grade or comment on any of the work?
I don't see how that's possible. You can't have interaction if the teacher doesn't know the material.
 
  • #11
What about a document whose content was prescribed by a person but written by AI? That seems to me the most likely way personal correspondence will be written: the originator prescribes the content, the AI drafts it, and the originator reviews the result.
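
As a concrete illustration of that prescribe-draft-review loop, here is a minimal sketch under the same assumptions (the openai Python package, v1+, with an OPENAI_API_KEY in the environment); the prompts, model name, and "send" step are placeholders:

Python:
# Sketch of "content prescribed by a person, written by AI, reviewed
# by the originator". Assumes the `openai` package (v1+) and
# OPENAI_API_KEY; prompts, model name, and the send step are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_from_outline(points: list[str]) -> str:
    """Expand the originator's prescribed points into polished prose."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Expand the user's bullet points into a short, courteous "
                        "letter. Do not add claims beyond the given points."},
            {"role": "user", "content": "\n".join(f"- {p}" for p in points)},
        ],
    )
    return response.choices[0].message.content

draft = draft_from_outline([
    "thank the parents for attending the conference",
    "note the improvement in homework completion",
    "suggest ten minutes of reading per night",
])

print(draft)
# The originator reviews the draft before anything goes out.
if input("Send as-is? [y/N] ").strip().lower() == "y":
    print("...queued for sending (placeholder)")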

Indicating that AI wrote a document whose content was prescribed by a human would seemingly reduce the credibility of the content and the candor that might otherwise be inferred.

GLM/GLM
 
  • #13
gleem said:
Indicating that AI wrote a document whose content was prescribed by a human would seemingly reduce the credibility of the content and the candor that might otherwise be inferred.

GLM/GLM

If being honest and candid about the origin of the document would reduce the credibility and candor that would otherwise be inferred, then something is wrong with the system.

I hope we don't end up lying to each other more in an attempt to be perceived as more honest, and that becoming normal, so that most of us know we are lying to each other anyway but still do it. Maybe some portion of the population is still outright deceived (e.g., kids and people with low intelligence or disabilities), and the rest of us are just slightly psychologically manipulated while perhaps being conscious of it.

It reminds me of YouTube videos where the thumbnail implies the video includes something it doesn't. You watch the video and find it never delivers what was promised. After a while, you realize this is normal now; you don't expect the promise in the thumbnail to be kept. They might as well add a disclaimer up front that the promise is false. Except everyone is doing it, so the people who know they are being lied to accept it, and there are enough vulnerable new or gullible people to deceive to make the lie worth maintaining.

It's not much different from advertising in general. A company puts out a commercial that is obviously a big lie. You might think a reasonably intelligent person would form a negative view of the company or product and avoid it based on the dishonesty of the company/ad. But we don't, because it's normal.

It might end up being the same way with all kinds of other forms of communication.
 
  • #14
gleem said:
Indicating that AI wrote a document whose content was prescribed by a human would seemingly reduce the credibility of the content and the candor that might otherwise be inferred.
Yup. Or, rather, being AI-written is what reduces the credibility; indicating it merely announces it.
 

FAQ: Is ChatGPT Becoming Teachers' Favorite Tool for AI Writing?

How are teachers using ChatGPT in the classroom?

Teachers are using ChatGPT to generate writing prompts, provide instant feedback on student essays, and assist with lesson planning. It helps in creating personalized learning experiences and can also be used to explain complex topics in simpler terms.
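
For instance, generating a batch of writing prompts is a single API call. A minimal sketch, assuming the openai Python package (v1+) with an OPENAI_API_KEY in the environment; the topic, count, and model name are illustrative:

Python:
# Sketch: generating writing prompts for a class. Assumes the `openai`
# package (v1+) and OPENAI_API_KEY; topic, count, and model name are
# illustrative.
from openai import OpenAI

client = OpenAI()

def writing_prompts(topic: str, count: int = 5) -> list[str]:
    """Ask for `count` one-sentence prompts, one per line, then split them."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user",
             "content": f"Give me {count} one-sentence creative writing prompts "
                        f"about {topic}, one per line, with no numbering."},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

for p in writing_prompts("the water cycle"):
    print("-", p)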

What are the benefits of using ChatGPT for writing assistance in education?

ChatGPT offers several benefits, including saving teachers time, providing immediate feedback to students, and enhancing student engagement. It can also help students improve their writing skills by offering suggestions and corrections in real time.

Are there any concerns about using ChatGPT in educational settings?

Yes, there are concerns about over-reliance on AI, the accuracy of the information provided, and the potential for students to use ChatGPT to complete assignments without truly understanding the material. Issues of data privacy and the ethical use of AI in education are also significant concerns.

How does ChatGPT compare to traditional teaching methods?

ChatGPT can complement traditional teaching methods by providing additional resources and support. While it cannot replace the human touch and expertise of a teacher, it can enhance the learning experience by offering quick assistance and personalized feedback.

What is the future potential of ChatGPT in education?

The future potential of ChatGPT in education is vast. It could be used to develop adaptive learning systems, create more interactive and engaging educational content, and support teachers in managing larger classrooms more effectively. As AI technology advances, its role in education is likely to expand further.
