Using ChatGPT to help story building

  • Thread starter: sbrothy
  • Tags: chatgpt
  • #1
sbrothy
I'm sure I'm not alone in walking around with a half-baked plot for a story percolating in the back of my mind. Yesterday I fed the overall plot to ChatGPT and (not surprisingly) it came up with a ton of good suggestions for developing the story. (It also told me my idea was a good one, but it probably says that to everyone *blush* :) )

So, between tvtropes.org and ChatGPT, writing a story is suddenly within my capabilities.

The question is probably either rhetorical or academic, but have serious authors already embraced it? Any semi-serious writers here who'll admit to using it in their daily routine?
 
  • #2
sbrothy said:
ChatGPT
I'm assuming you are talking about version 4, not 3.5, yes?

EDIT: Never mind. I figured out how to do it with 3.5.
 
  • #3
There is a lot of debate and strong emotion right now in the writing community about the ethics of using LLMs in the writing process. I think if you're looking to play around with one for personal interest, or use it as a prompt generator, or maybe even as an editor, that's one thing; but if you're interested in using them to produce a novel or story that you intend to sell commercially, there are a number of considerations.

For one, these algorithms have been trained and validated on the work of other artists without their consent or any direct reimbursement. Some people see this as a form of theft and/or plagiarism.

Something else I've observed is that people are substituting prompt generation for the creative process itself. People are crafting stories based on world-building exercises, but without much care for, or understanding of, the story-crafting process itself. In time I suspect this will become its own art form. But I've seen a lot of people who think that because ChatGPT spits out something that seems okay to them, they can run with it, yet they don't have the skills to properly edit it or integrate it into a greater story. Conversely, those who have already developed these skills are rarely satisfied with what the LLMs are producing. The TLDR on this is: if you can't be bothered to write it, why should someone else be bothered to read it?

As I understand it, Amazon is currently requiring that authors who use LLMs identify the work as such. I believe that's as far as it currently goes, but there is a concern in the community that if some of the lawsuits start turning against the LLM organizations, the large publishing sites will drop anything AI-generated like a hot potato.
 
  • #4
Choppy said:
For one, these algorithms have been trained and validated on the work of other artists without their consent or any direct reimbursement. Some people see this as a form of theft and/or plagiarism.
I ask the following not to be provocative but to gain understanding.

Every human has been influenced by all the stories, novels, movies, etc. that they have consumed over their lifetimes. When creating a new work, should a human author be required to seek the consent of, or offer reimbursement to, the artists that created those influential works, lest the human author be accused of theft or plagiarism? If not, why is an AI author to be treated differently? Is it a matter of the difference of scale, i.e., the presumably huge number of literary works that are "read" by the AI during its training?
 
  • #5
renormalize said:
... an AI author ...
The key may lie in this oxymoron. Is it really an author? I submit to you the Star Trek: The Next Generation episode "Elementary, Dear Data."

On the holodeck, Geordi and Data are in period costume as Watson and Holmes. Data, as Holmes, solves every Sherlock Holmes mystery effortlessly because he knows every one of the novels. Geordi is bored, so he asks the holodeck to create a new story, just for Data.

The computer creates a new story but, as it turns out, it is no more than a mish-mash of existing tropes pulled from the Holmes novels, tweaked to seem original. And Data, of course, is still able to solve it in minutes.

The computer, in short, is writing material that, while superficially seeming to be creative, is in fact merely derivative of the original works it has learned. It is not actually creating its own new work. And yes, when a human does this, their work is indeed panned as merely "derivative".
 
  • #6
Choppy said:
There is a lot of debate and strong emotion right now in the writing community about the ethics around the use of LLMs in the writing process. ... The TLDR on this is: if you can't be bothered to write it, why should someone else be bothered to read it?
I never intended for ChatGPT to write my story for me. Naturally I want it to be my own words. If I just copy/paste (or parrot ChatGPT, which would amount to the same thing, just indirectly), it's not really my story at all. That would just be silly. I'm not out to make millions here, just to write an enticing story people would want to read.

But if it can point out things like, for instance, a deficit in characterization versus a surplus of worldbuilding, I can use that.

Also, it doesn't strike me as dishonest. It's just another, albeit powerful, tool.
 
  • #7
renormalize said:
Every human has been influenced by all the stories, novels, movies, etc. that they have consumed over their lifetimes. When creating a new work, should a human author be required to seek the consent of, or offer reimbursement to, the artists that created those influential works, lest the human author be accused of theft or plagiarism? If not, why is an AI author to be treated differently? Is it a matter of the difference of scale, i.e., the presumably huge number of literary works that are "read" by the AI during its training?
I have wondered this myself.

One of the big differences I think is that human authors must integrate everything they have read with their own lived human experience. Their personal victories and failures, vulnerabilities, anxieties, fears, hopes and dreams all play a role in how the text is interpreted. So what comes out the other end, when a human is producing something creatively, derives from the sum of that experience.

There are other arguments as well. We know for example that when an author publishes something, the intention is that it will be consumed by other people, because that's what publishing is. I don't think we can say that the authors also intended for their work to train an AI.
 
  • #8
Choppy said:
One of the big differences I think is that human authors must integrate everything they have read with their own lived human experience.
But what if an author has minimal human experience? Helen Keller comes to mind.

Choppy said:
We know for example that when an author publishes something, the intention is that it will be consumed by other people, because that's what publishing is. I don't think we can say that the authors also intended for their work to train an AI.
We can't know one way or the other what most authors would have made of AI. Assigning them opinions based on current prejudices can't go well. And we have never respected the intention of authors who we disagree with and wish to prove wrong.
 
  • #9
Algr said:
But what if the authors have minimal human experience? Hellen Keller comes to mind.
That's still a human experience. Just because it's unique or limited compared to the common experience doesn't minimize it.

But that's beside the point I think. Unless you're talking about artificial general intelligence perhaps, the AI is producing output that completely derives from the input data set (and some randomness), and that's one of the differences between what an AI produces and what a human produces.

Algr said:
We can't know one way or the other what most authors would have made of AI. Assigning them opinions based on current prejudices can't go well. And we have never respected the intention of authors who we disagree with and wish to prove wrong.
There are a number of high-profile lawsuits against AI companies at the moment.
https://www.theverge.com/2023/9/20/23882140/george-r-r-martin-lawsuit-openai-copyright-infringement
https://www.forbes.com/sites/cyrusf...leged-copyright-infringement/?sh=876834c66a48
https://www.nytimes.com/2023/07/10/arts/sarah-silverman-lawsuit-openai-meta.html
 
  • #10
Neither of us really knows how the machines or the humans work. Can we really be so certain?

Not all lawyers are right. If a machine can do something, why would it still be "creative" when a human does the same thing?
 
  • #11
For me the issue is uniqueness. If the work is somehow a compendium of similar work, how is it different? One hundred identical monkeys will not produce Hamlet, no matter how long they labor at their Selectric typewriters.

I did ask Chat (we're now on a first-name basis) to render "three little pigs" in the style of Physical Review D. The result was sublime. I remain astounded.
 
  • #12
Neither would those monkeys produce anything resembling what today's AI can do. And that infamous Batman AI story from a few years ago was certainly unique.
 
  • #13
I don't know it... reference?
 
  • #14
Never mind, it wasn't really AI.
I haven't found any AI texts that impress me, but AI images certainly do.
[attached AI-generated image]
 
  • #15
[attached AI-generated image: old church interior, night, candles, ghosts]
 
  • #16
"Art is either plagiarism or revolution." – Paul Gauguin
 
  • #17
DaveC426913 said:
The key may lie in this oxymoron. Is it really an author? ... The computer, in short, is writing material that, while superficially seeming to be creative, is in fact merely derivative of original works it has learned. It is not actually creating its own new work.
I think that's a little unfair. Some of us may just be able to see that coming. I mean, in the city I live in I'm nicknamed "Søren Bog" ("bog" means "book") because I read books as I walk around in public (I'm somewhat of a public weirdo). Strangers approach me with books they think I should read. And yes, I've had more than one run-in with light poles; I've even stopped cars whose occupants got out to applaud because, basically, I looked like an idiot. (Nothing to do but get up and take the applause.) Point is, I've read a lot. I think I'd be able to see if what I'm writing is plagiarism.

And should I be in doubt, I'd get someone to proofread it. But again, the drive is not money.
 
  • #18
sbrothy said:
Point is that I've read a lot. I think I'd be able to see if what I'm writing is plagiarism.
I'm not sure I understand your point. Did someone suggest you couldn't?
 
  • #19
I may have interpreted your holodeck scenario a little too narrowly. Never mind.
 
  • #20
sbrothy said:
I may have interpreted your holodeck scenario a little too narrowly. Never mind.
I have reread the opening post and I see where the confusion came from.

I realize I was actually addressing this point by renormalize with my ST: TNG example:
renormalize said:
...should a human author be required to seek the consent of, or offer reimbursement to, the artists that created those influential works, lest the human author be accused of theft or plagiarism? If not, why is an AI author to be treated differently?

Sorry about that.

I wasn't comparing your writing to the Enterprise computer. :smile:
 
  • #21
DaveC426913 said:
I have reread the opening post and I see where the confusion came from.

I realize I was actually addressing this point by renormalize with my ST: TNG example. Sorry about that.

I wasn't comparing your writing to the Enterprise computer. :smile:
Good heavens thank you so much! :)
 
  • #22
IMO, if you want to write something that's popular, it should be about 95% derivative. This is why I never read New York Times bestsellers and avoid all fiction published after 1980.

I've perused writers' sites, and they are preoccupied with word count, with cranking out words. "How can I make myself do this," they cry.

Grinding out fiction is largely a mechanical process. That's why James Patterson, the most popular author of them all, does not write his books: he hires flunkies to do it. Smart guy. I don't read his stuff, but why should he care about me? He knows what will sell.

I don't see how having an AI write your book is any different, except that some hack doesn't get paid.
 
