Teachers embrace ChatGPT
https://www.forbes.com/sites/emmawh...st-becoming-the-teachers-pet/?sh=60c039af5177
From the article:

In January, the New York City education department, which oversees the nation's largest school district with more than 1 million students, blocked the use of ChatGPT by both students and teachers, citing concerns about safety, accuracy and negative impacts to student learning.

Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's nothing wrong with, and no difference between, asking a bot "write me a two-hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

Jarvis323 said:
My complaint is that if a communication is generated by AI, it is unclear to anyone receiving it whether it is a sincere and truthful representation of the state of affairs (for lack of better terms) of the sender.

Personally, I think that ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether it is a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should indicate clearly that it, or which parts of it, were generated by AI.

Otherwise, even without any intent to deceive or confuse, people will be blurring and obfuscating each other's understanding of one another. It will ultimately make us isolated, confused, and unable to collaborate effectively.
Jarvis323 said:
Personally, I think that ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether it is a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should indicate clearly that it, or which parts of it, were generated by AI.

If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?

That is why I said "ideally". Obviously no communication a corporation puts out can reasonably be expected to be sincere. I don't honestly believe that Qunol is the brand Tony Hawk trusts. There is a great deal of communication in our society that is obviously insincere and misrepresentative. But that isn't a good thing; it's a bad thing.
russ_watters said:
Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's nothing wrong with, and no difference between, asking a bot "write me a two-hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

It's the same reason why, when discussing rule changes on PF, the default/starting position was that nothing had changed. The poster is responsible for the content either way.

I alluded to this in the other thread, where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant?

It should, yes: https://smallbusiness.chron.com/reference-notation-business-letters-21548.html
Jarvis323 said:
Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, didn't grade or comment on any of the work, etc.?

I don't see how that's possible. You can't have interaction if the teacher doesn't know the material.
russ_watters said:
It should, yes: https://smallbusiness.chron.com/reference-notation-business-letters-21548.html

Thanks. It's good to know that there are recognized attribution standards for official letters. But it would be even better if that practice could be extended to public press releases and advertising!
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and the candor that might otherwise be inferred.
GLM/GLM
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and the candor that might otherwise be inferred.

Yup. Or, rather, being AI-written reduces the credibility. Indicating just announces it.
FAQ

How are teachers using ChatGPT?
Teachers are using ChatGPT to generate writing prompts, provide instant feedback on student essays, and assist with lesson planning. It helps in creating personalized learning experiences and can also be used to explain complex topics in simpler terms. (A minimal sketch of what such a call can look like follows this FAQ.)

What are the benefits of ChatGPT in the classroom?
ChatGPT offers several benefits, including saving time for teachers, providing immediate feedback to students, and enhancing student engagement. It can also help students improve their writing skills by offering suggestions and corrections in real time.

Are there concerns about using ChatGPT in education?
Yes, there are concerns about over-reliance on AI, the accuracy of the information provided, and the potential for students to use ChatGPT to complete assignments without truly understanding the material. Issues of data privacy and the ethical use of AI in education are also significant concerns.

Can ChatGPT replace traditional teaching methods?
ChatGPT can complement traditional teaching methods by providing additional resources and support. While it cannot replace the human touch and expertise of a teacher, it can enhance the learning experience by offering quick assistance and personalized feedback.

What is the future potential of ChatGPT in education?
The future potential of ChatGPT in education is vast. It could be used to develop adaptive learning systems, create more interactive and engaging educational content, and support teachers in managing larger classrooms more effectively. As AI technology advances, its role in education is likely to expand further.
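For the technically curious, here is roughly what the "instant feedback on student essays" use from the FAQ above looks like as code. This is a minimal sketch only, assuming the OpenAI Python SDK; the model name, prompt wording, and the essay_feedback helper are illustrative choices, not anything prescribed by the article or this thread.

Code:
# Minimal sketch: asking a chat model for feedback on a student essay.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def essay_feedback(essay: str, grade_level: str = "9th grade") -> str:
    """Return brief, constructive feedback on a student essay."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a {grade_level} writing teacher. Give brief, "
                    "constructive feedback: one strength, two concrete "
                    "suggestions for improvement, and any grammar fixes."
                ),
            },
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content

print(essay_feedback("The Battle of Gettysburg was a turning point in the war..."))

Whatever comes back, the teacher still has to read and vet it before passing it on; as the thread argues, the human issuing the feedback remains responsible for its content.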