How Are Reddit and Stack Planning to Monetize LLM Training?

  • Thread starter: jedishrfu
  • Tags: Work
In summary, content posted publicly on sites such as Reddit and Stack Exchange is already being used to train large language models, and both companies are reportedly looking at ways to charge for that use. The thread discusses whether publicly available posts can realistically be monetized this way, and what it might mean for smaller communities such as PF.
  • #2
If a social network can monetize my posts, I want my cut.
 
  • #3
Grelbr42 said:
If a social network can monetize my posts, I want my cut.
That's not how it works now. You are the product: the social media company makes its money by selling your eyes to advertisers, using your posts (including non-public ones) to target the ads. Your "payment" is the free use of the website.

I'm skeptical though that publicly available information can be monetized for LLM training. These posts are public because we want as many people to read them as possible.
 
  • #4
In case others didn't see this:

Greg Bernhardt said:
We're the #1 most represented physics domain in Google C4 dataset!

[attachment: chart of domain representation in Google's C4 dataset]
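For context, here is a rough sketch of how such a ranking could be checked independently. It streams the public allenai/c4 mirror from the Hugging Face Hub and tallies documents per domain; the dataset name, the field names, and the physicsforums.com domain string are assumptions on my part, and a small streamed sample only gives an estimate, not the full-corpus count.

Code:
from urllib.parse import urlparse
from collections import Counter
from datasets import load_dataset

# Stream the English C4 split so the multi-hundred-GB corpus is never downloaded.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

counts = Counter()
for i, example in enumerate(stream):
    # Each C4 record carries the source URL; tally documents per domain.
    counts[urlparse(example["url"]).netloc] += 1
    if i >= 100_000:  # stop after a sample of documents (adjust as needed)
        break

print(counts["www.physicsforums.com"], "documents from physicsforums in the sample")
print(counts.most_common(10))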
 
  • Wow
  • Like
Likes ComplexVar89, jedishrfu and FactChecker
  • #5
jedishrfu said:
What do you think, @Greg Bernhardt?
PF will have a search chatbot trained on our posts at some point, yes. I don't know about monetizing, though. Also, Reddit and Stack are many times larger than PF, so they have much more data to work with.
 
  • #6
Greg Bernhardt said:
I don't know about monetizing though.
You should get up to speed quickly!
I read somewhere that SE changed hands for billions! OK, let's be honest: it was only 1.8 billion.

6/3/21 said:
Stack Exchange, best known for Stack Overflow, has been sold to Prosus for $1.8 billion. This includes the TeX Stack Exchange Q&A site (TeX.SE). Prosus is a technology investor and holding company that already owns companies like Udemy, Codecademy and Brainly ("Your 24/7 homework helper"). After $153 million in investor funding, the hiring of a former investment banker as head, several rounds of layoffs, departing moderators, "efficiency" pushes to standardize the sites, and ongoing exit speculation, the sale wasn't really surprising. Official communications and Joel Spolsky's announcement say "business as usual" and that everything will continue as is. So, nothing to see here.
 
  • #7
fresh_42 said:
I read somewhere that SE changed hands for billions! OK, let's be honest: it was only 1.8 billion.
Pretty aggressive considering LLMs are likely going to put SE out of business.
 
  • #8
Greg Bernhardt said:
Pretty aggressive considering LLMs are likely going to put SE out of business.
Maybe SE but certainly not MO (and possibly other Overflows I don't know of). The discussions there are really on an academic level.

And with regard to SE: yes, they may have more sections than we do, but we are far better than they are.
We do not downvote users for correct answers, and we do not delete seemingly silly questions as long as the poster shows some effort to reach a conclusion, to mention just two things we do better, despite the users who think we are unfair. At least we try. SE punishes.
 
  • Like
Likes ComplexVar89 and jedishrfu
  • #9
fresh_42 said:
Maybe SE but certainly not MO (and possibly other Overflows I don't know of). The discussions there are really on an academic level.
You just wait a couple of years. I'm interested to see what Wolfram Alpha's plugin can do with ChatGPT.
 
  • Haha
Likes jedishrfu
  • #10
Greg Bernhardt said:
You just wait a couple of years. I'm interested to see what Wolfram Alpha's plugin can do with ChatGPT.
I would have expected ChatGPT to consult at least Wikipedia in general, and WA on my specific questions.

[attachment: screenshot of ChatGPT's answers]

Neither is the case: it failed to check Wikipedia on bismuth, and WA on primes. There is still a long way to go once the general hype has settled down.
 
  • Like
Likes jedishrfu
  • #11
fresh_42 said:
I would have expected ChatGPT to consult at least Wikipedia in general, and WA on my specific questions.

Neither is the case: it failed to check Wikipedia on bismuth, and WA on primes. There is still a long way to go once the general hype has settled down.
It can't browse the internet yet (there is an alpha version with browsing), but it is trained on a lot of Wikipedia pages. I don't disagree that there are issues, but the tech is moving quickly. We'll get there before too long.
 
  • Like
Likes ComplexVar89 and jedishrfu

FAQ: How Are Reddit and Stack Planning to Monetize LLM Training?

How do LLMs work?

LLMs, or Large Language Models, are deep neural networks (typically transformers) trained on vast text datasets to predict the next token in a sequence. By learning statistical patterns and relationships in that data, they can generate human-like text responses.
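As a minimal illustration of that next-token loop, the sketch below uses the Hugging Face transformers library with the small gpt2 checkpoint (both are assumptions; any causal language model works the same way) to continue a prompt one sampled token at a time.

Code:
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Stack Exchange and Reddit host large amounts of"
inputs = tokenizer(prompt, return_tensors="pt")

# Each decoding step feeds the growing sequence back into the model and
# samples the next token from the predicted probability distribution.
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))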

What are some common applications of LLMs?

LLMs are commonly used in natural language processing tasks such as text generation, language translation, sentiment analysis, and chatbots. They can also be used for tasks like summarization, question-answering, and content recommendation.
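For a concrete, hedged example of two of these applications, the transformers pipeline helper can run summarization and extractive question-answering in a few lines; the checkpoint names below are commonly used defaults I am assuming, not anything the thread prescribes.

Code:
from transformers import pipeline

# Summarization and question-answering pipelines with small distilled models.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

text = ("Reddit and Stack Exchange host millions of publicly posted questions and "
        "answers, which makes them attractive sources of training data for large "
        "language models. Both companies are reportedly considering charging for "
        "programmatic access to that content.")

print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
print(qa(question="What are the two sites considering?", context=text)["answer"])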

How are LLMs different from traditional machine learning models?

LLMs differ from traditional machine learning models in that they have orders of magnitude more parameters and are pretrained on significantly larger datasets, usually with a self-supervised next-token objective rather than task-specific labels. This allows them to capture more complex patterns in the data and generate more nuanced responses.
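To put the parameter-count difference in rough numbers, the sketch below compares a scikit-learn logistic regression with the small gpt2 checkpoint, which is used here only as a stand-in; production LLMs are thousands of times larger still.

Code:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from transformers import AutoModelForCausalLM

# A classical model: a handful of learned weights.
X, y = make_classification(n_samples=200, n_features=20)
clf = LogisticRegression(max_iter=1000).fit(X, y)
n_classic = clf.coef_.size + clf.intercept_.size  # 21 parameters here

# A (small) pretrained language model: roughly 124 million parameters.
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
n_llm = sum(p.numel() for p in gpt2.parameters())

print(f"logistic regression: {n_classic} parameters")
print(f"gpt2: {n_llm:,} parameters")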

What are some limitations of LLMs?

Some limitations of LLMs include their tendency to generate biased or inaccurate responses based on the data they were trained on. They also require significant computational resources and data to train effectively, making them inaccessible to many researchers and organizations.

How can LLMs be fine-tuned for specific tasks?

LLMs can be fine-tuned for specific tasks by continuing training on additional data that is relevant to the task at hand. By updating the model's weights on that data and tuning hyperparameters such as the learning rate, researchers can optimize the model for specific performance metrics and improve its accuracy on targeted tasks.
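A minimal fine-tuning sketch, assuming the Hugging Face transformers and datasets libraries; the gpt2 checkpoint, the wikitext stand-in dataset, and the hyperparameters are placeholders rather than a recommended recipe.

Code:
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Any text dataset with a "text" column works; wikitext-2 is just a small stand-in.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda ex: len(ex["text"].strip()) > 0)
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=raw.column_names)

# Continue training the pretrained weights on the task-relevant text.
args = TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1,
                         per_device_train_batch_size=4, learning_rate=5e-5)
trainer = Trainer(model=model, args=args, train_dataset=tokenized,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()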
