  • #71
Office_Shredder said:
It also can't do anything more complicated.
It did a valid proof by induction when I asked it to. Others found it could sometimes prove NP-hardness, although it is just as happy to prove wrong statements, or to confidently write an invalid, nonsense proof.

 
Last edited:
  • #72
Office_Shredder said:
The bot is just bad at math, stop making excuses for it. It can't do stuff that a calculator can do, what would be the point of inhibiting that? It also can't do anything more complicated.
Perhaps I will ask it some new questions:
Why can't you calculate?
Are you capable of learning?
Is everything you say just following a preprogrammed pattern?

I asked it a very simple question:

OmCheeto; "How many questions have people asked you so far?"

ChatGPT; "I am a large language model trained by OpenAI, and I do not have the ability to track the number of questions that people have asked me. As a language model, my purpose is to generate human-like text in response to the inputs that I receive. I do not have the ability to browse the internet or track the number of questions that people have asked me."

This reminds me of my response when someone claims that a particular person is smart because they are so knowledgeable.

Me; "A dictionary knows every word there is, yet I've never met one that could tie its own shoelaces. Some savants are like that. Your friend, although filled with a godlike plethora of facts and figures, is unfortunately, an idiot."

hmmmm.... That would be funny if the only thing holding it back from sentience was the simple ability to count.
 
  • #73
Jarvis, it's most likely to solve something if it's a standard problem you can find exact solutions of on the internet, which path independence is a great example of.

We tried this test on a recent mhb undergrad problem and it gave a pretty wrong proof. I tried it on a grad problem and the attempt was so bad I didn't even bother posting it.

But like, here are some examples for now

Prove every polynomial is differentiable
A polynomial is a mathematical expression of the form
p(x) = a_n * x^n + a_{n-1} * x^{n-1} + ... + a_1 * x + a_0
where a_n, a_{n-1}, ..., a_1, and a_0 are constants and n is a non-negative integer.
To prove that every polynomial is differentiable, we must show that the derivative of a polynomial exists at every point in its domain. The derivative of a polynomial p(x) can be computed using the following formula:
p'(x) = n * a_n * x^{n-1} + (n-1) * a_{n-1} * x^{n-2} + ... + a_1
Since this formula involves only constants and the variable x, it is a polynomial itself. Therefore, the derivative of a polynomial exists at every point in its domain, and every polynomial is differentiable.

Just... no. That was bad. It knows the shape of answers about polynomial derivatives and regurgitated one despite it being totally inappropriate.
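For contrast, here is one standard correct route (a sketch, not the only one): get the power rule from the limit definition via the binomial theorem, then let linearity do the rest.

$\frac{d}{dx}\,x^n = \lim_{h \to 0} \frac{(x+h)^n - x^n}{h} = \lim_{h \to 0} \left( n x^{n-1} + \binom{n}{2} x^{n-2} h + \cdots + h^{n-1} \right) = n x^{n-1}$

Each term $a_k x^k$ is therefore differentiable at every real $x$, and a finite sum of differentiable functions is differentiable. The quoted "proof" instead assumes the derivative formula it was supposed to establish.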

Here's a programming question that looks good, until it doesn't.

In python write code to find the minimum of a quadratic polynomial
Here is a simple Python function that computes the minimum of a quadratic polynomial with coefficients a, b, and c:

def quadratic_min(a, b, c):
    # Compute the value of x that minimizes the quadratic
    x_min = -b / (2 * a)

    # Plug this value into the quadratic formula to find the minimum value
    y_min = a * x_min**2 + b * x_min + c

    return x_min, y_min
Here's an example of how to use this function to find the minimum of the quadratic 3x^2 + 2x - 5:

# Define the coefficients of the quadratic
a = 3
b = 2
c = -5

# Find the minimum
x_min, y_min = quadratic_min(a, b, c)

# Print the result
print("The minimum of 3x^2 + 2x - 5 is at x = %.2f and has value y = %.2f" % (x_min, y_min))
This code should print the following output:

The minimum of 3x^2 + 2x - 5 is at x = -0.33 and has value y = -4.89
Note that this function will only work for quadratic polynomials (i.e. polynomials of the form ax^2 + bx + c). To find the minimum of a general polynomial, you would need to use a more general optimization algorithm.
Does that find the minimum of -x^2 correctly?
Yes, the function quadratic_min() will correctly find the minimum of the quadratic -x^2.

To find the minimum of -x^2, we would simply call the quadratic_min() function with the coefficients a=-1, b=0, and c=0, like this:

# Define the coefficients of the quadratic
a = -1
b = 0
c = 0

# Find the minimum
x_min, y_min = quadratic_min(a, b, c)

# Print the result
print("The minimum of -x^2 is at x = %.2f and has value y = %.2f" % (x_min, y_min))
This code should print the following output:

The minimum of -x^2 is at x = 0.00 and has value y = 0.00
Note that the minimum of -x^2 is at x=0 and has a value of y=0. This is because -x^2 is always non-positive, and it takes on its minimum value of 0 at x=0.

So close, but so wtf. Again, it knows what answers about the minima of quadratics look like, but it can't do anything other than basically regurgitate one back to you and hope it's what you're looking for.
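The bug is easy to state: the vertex x = -b/(2a) is a minimum only when a > 0. A minimal corrected sketch (the function name is my own, not from the quoted answer):

```python
def quadratic_extremum(a, b, c):
    """Vertex of a*x**2 + b*x + c, and whether it is a minimum or maximum.

    For a > 0 the parabola opens upward and the vertex is the minimum;
    for a < 0 it opens downward, the vertex is a maximum, and the
    quadratic has no minimum at all -- the case the quoted answer got
    wrong for -x**2.
    """
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    x_vertex = -b / (2 * a)
    y_vertex = a * x_vertex**2 + b * x_vertex + c
    return x_vertex, y_vertex, ("min" if a > 0 else "max")
```

With this, quadratic_extremum(-1, 0, 0) reports (0.0, 0.0, 'max'), making explicit that x = 0 is the maximum of -x^2, not its minimum.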
 
  • Like
Likes Jarvis323
  • #74
give a proof of euler's formula in latex

To prove Euler's formula, we first start with the definition of the complex exponential function:

$e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}$

We can then use the binomial theorem to expand the right side of the equation:

$e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!} = \sum_{n=0}^{\infty} \frac{i^n x^n}{n!}$

We can then split the sum into two parts, one for even values of $n$ and one for odd values of $n$:

$e^{ix} = \sum_{n=0}^{\infty} \frac{i^n x^n}{n!} = \sum_{n=0}^{\infty} \frac{(-1)^{n/2} x^{2n}}{(2n)!} + \sum_{n=0}^{\infty} \frac{(-1)^{(n-1)/2} x^{2n+1}}{(2n+1)!}$

We can then recognize that the first sum is the Taylor series for the cosine function, and the second sum is the Taylor series for the sine function. Therefore, we can write:

$e^{ix} = \sum_{n=0}^{\infty} \frac{(-1)^{n/2} x^{2n}}{(2n)!} + \sum_{n=0}^{\

(It froze up there and has been taking a while to retry so far.)
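For what it's worth, the place where it froze is the easy part. This is the standard textbook completion, not ChatGPT's output: writing even $n = 2k$ and odd $n = 2k+1$, and using $i^{2k} = (-1)^k$ and $i^{2k+1} = i(-1)^k$, the split sums become

$e^{ix} = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} + i \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!} = \cos x + i \sin x$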
 
  • #75
Me: Write 2 paragraphs about Schroedinger's cat in the style of James Joyce.

ChatGPT:
Schrödinger's feline, a creature so perplexing, embodies the strange world of quantum mechanics. A creature, trapped in a box, both alive and dead, a paradox so profound that it challenges the very fabric of our understanding. This conundrum, proposed by the great Austrian physicist Erwin Schrödinger, serves to illustrate the fundamental uncertainty at the heart of quantum mechanics. The cat, in its state of suspended animation, represents the superposition of states in quantum mechanics, where a particle can exist in multiple states simultaneously until observed.

Yet, the tale of Schrödinger's cat is more than just a theoretical exercise. It is a haunting metaphor for the elusive nature of reality and our limitations as observers. For until we open the box and observe the state of the cat, it exists in a state of indeterminacy, subject to the whims of probability and chance. Thus, Schrödinger's cat serves as a reminder of the fragility of our perceptions and the limits of our knowledge, a testament to the strange and wondrous world of quantum mechanics.

Me: Make it less serious and more stream-of-consciousness, as in Joyce's Ulysses.

ChatGPT:

Oh, the curious cat, Schrödinger's feline friend, a paradox wrapped in a riddle, in a box till the very end. Such a strange thing, this cat, both alive and dead, ha! What a mind-bending concept that leaves us all in bed.

But wait, there's more! The cat, you see, is a metaphor for all that is quantum, for the superposition, the states so frequent. Until we open the box and observe, the cat remains in a haze, a blur of probabilities, a game of quantum craze.

Ha! What a wild ride, this quantum world, with its mysteries and enigmas, a reality unfurled. So, let us raise a glass to Schrödinger's cat, a symbol of our limitations, and the strange world where that is at!

Me: Write about Schroedinger's cat as if it is Hamlet's soliloquy.
ChatGPT:

To be, or not to be, alive or dead, that is the question:
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles
And by opposing, end them.
To die, to sleep,
No more; and by a sleep, to say we end
The heartache and the thousand natural shocks
That flesh is heir to, tis a consummation
Devoutly to be wished!

But hold, what light through yonder box breaks?
It is the cat, and Schrödinger makes answer
With a paradox so strange, it boggles the mind:
A cat, trapped in a box, both alive and dead,
A superposition of states, until observed.

Alas, poor cat, in thy uncertain state,
Thou art a symbol of our human fate.
For we too exist in a world of chance,
Subject to the whims of probability and circumstance.
Thus, in this soliloquy, I ponder and muse,
On the strange and wondrous world of Schrödinger's cat and its news.
 
Last edited:
  • Like
Likes Trysse and WernerQH
  • #76
Bertie Wooster and Jeeves talk about QM, a la ChatGPT....

---------
Prompt: Write a conversation between Bertie Wooster and Jeeves about quantum mechanics.
---------

ChatGPT's output:

Bertie Wooster: Jeeves, I was at a dinner party last night and the topic of conversation turned to this newfangled thing called "quantum mechanics." Have you heard of it?

Jeeves: Yes, Mr. Wooster, I am familiar with the subject.

Bertie Wooster: Really? Well, I must admit, I was completely befuddled by the whole affair. The chap at the end of the table was speaking of particles being in two places at once, and I couldn't make heads or tails of it.

Jeeves: Indeed, sir. Quantum mechanics is a complex and abstract field that can be challenging to understand. It is the study of the behavior of matter and energy on the smallest scale.

Bertie Wooster: And this business about particles being in two places at once? What's all that about?

Jeeves: That is known as superposition, sir. It is a principle in quantum mechanics that states that a particle can exist in multiple states or locations simultaneously.

Bertie Wooster: I say, Jeeves, that's all a bit much for my brain to handle.

Jeeves: Yes, sir, the concepts of quantum mechanics can be difficult to grasp. However, they have been extremely influential in our understanding of the universe and have led to many technological advancements.

Bertie Wooster: Well, I suppose I shall have to leave the heavy lifting to you and the other bright chaps. I think I'll stick to the more straightforward things in life, like arranging a jolly good dinner party.

Jeeves: As you wish, sir. Shall I arrange for your dinner tonight?

Bertie Wooster: Oh, yes please, Jeeves. I think a good steak and a bottle of claret is just what the doctor ordered.
 
Last edited:
  • #77
This guy "convinced" ChatGPT that 4 + 9 = 15:

51i9mww8t0ha1.png
 

Attachments

  • chatgpt-4+9=15.pdf
    2.3 MB · Views: 118
  • #79
I asked ChatGPT who I am and this is what it returned. Some false things there :biggrin:

Greg Bernhardt is a physicist and the founder of the website Physics Forums, which is one of the most popular online discussion forums for physics and science enthusiasts. Bernhardt has a PhD in physics from the University of Pittsburgh and has also worked as a research scientist at the Air Force Research Laboratory.

In addition to his work in physics, Bernhardt is also interested in web development and has created several other websites, including Math Forums and Science Forums, which are similar to Physics Forums but focus on mathematics and general science topics, respectively. He is known for his contributions to online communities and for creating platforms that facilitate the sharing of knowledge and ideas among people with similar interests.
 
  • Wow
  • Like
Likes pinball1970, OmCheeto and Borg
  • #80
I've been using Chat GPT this morning to prepare for an interview. I asked it to give descriptions of the program's needs as described in the announcement. I then gave it some of my background and asked it to describe the top pros and cons of hiring me. During the conversation, I've given it information on other types of things that I've worked on and asked how those experiences could be integrated into the interview with respect to their needs.

Finally, I asked it for the pros and cons of hiring me, and then asked it to go into detail on how I can emphasize my pros while minimizing the cons. It's been very informative and has given me great input for the interview.
 
  • Like
Likes Hornbein
  • #83
The article complains about how Chat GPT is not moral enough, decisive enough, or able to make logical conclusions. It sounds like he wants to classify it as a politician. :oldwink:
 
  • Haha
  • Like
Likes Mondayman, OmCheeto, Tom.G and 1 other person
  • #84
If the choice is between Google and Chomsky, I will go with Chomsky. After all, what would you expect Google to say?
 
  • #86
Hyperfine said:
If the choice is between Google and Chomsky, I will go with Chomsky. After all, what would you expect Google to say?
Chomsky knows as much about machine learning as I know about underwater drilling.
 
  • Haha
  • Like
Likes Borg, PeroK, berkeman and 1 other person
  • #87
Greg Bernhardt said:
Chomsky knows as much about machine learning as I know about underwater drilling.
Piffle.
 
  • #88
South Park's latest episode, 'Deep Learning', according to the closing credits, was co-written by ChatGPT.

Southpark ChatGPT cowriter. 2023-03-09 at 00.46.28.png
 
Last edited:
  • Like
  • Haha
Likes PhDeezNutz and BillTre
  • #89
Chat GPT is a first-generation "conversational AI", as it tends to refer to itself. Chomsky is focused on comparing it to human behavior where it's convenient. Yes, it makes mistakes, but if you're going to anthropomorphize Chat GPT, it is essentially a child compared to where these new tools are headed.

Chomsky also doesn't appear to know much about the fields of research that are going on in artificial intelligence or understand where the real threats in AI are. A few examples:
  • The interaction with Chat GPT is initiated directly by the user who understands that they are talking to a computer. The real threat lies when people do not know that's the case.
  • Machine Learning algorithms are often a black box where we do not know how or why the algorithm has reached a particular decision. There is an entire field of research going on called Explainable AI (XAI) where researchers are actively working on changing this so that we can better trust the information that we are getting from them.
  • AI algorithms (like humans) have biases. This is a direct result of the information that is fed to them, as Chomsky notes with his reference to the infamous Microsoft Tay project. Tay is the low-hanging fruit of bias issues, where input data from internet trolls clearly created a monster. What about when the bias influence is less clear, or favors a decision that you don't agree with? Assuming that we understood everything about how an algorithm is reaching a decision, who decides which biases are acceptable?
 
Last edited:
  • Like
Likes Greg Bernhardt and BillTre
  • #90
Borg said:
  • The interaction with Chat GPT is initiated directly by the user who understands that they are talking to a computer. The real threat lies when people do not know that's the case.
I totally disagree with this statement. When people on a forum exchange views with what they think is another person, they will allow themselves to disagree with that person. If they think they are talking to a computer, they're more likely to think it must be right, since it possesses all the knowledge of mankind.

The real threat lies when people take the judgment of someone else over their own, whether being a supposed know-all person or a supposed know-all computer.

Borg said:
  • Machine Learning algorithms are often a black box where we do not know how or why the algorithm has reached a particular decision. There is an entire field of research going on called Explainable AI (XAI) where researchers are actively working on changing this so that we can better trust the information that we are getting from them.
How someone - or something - reached a decision shouldn't matter. You should still be able to analyze this new information and decide what to do with it. We should not let one person - even worse, a computer - make the final decision for us, especially as a group, as a society.

Borg said:
  • Assuming that we understood everything about how an algorithm is reaching a decision, who decides which biases are acceptable?
The people who receive and analyze the decision.

It is like you expect people in the future to blindly accept any decision given to them. A "super-computer" will not be better than a "super-human" at ruling the world. AI shouldn't be equivalent to ending democracy. It's just a tool, not a god. Whether a person or a computer tells you that you should become an architect or a plumber, it shouldn't seal your fate.

It also suggests that there is only one way to do things and that it will always be the best way. This is a claim against diversity, which is clearly a winning strategy in nature.
 
  • #91
It was not my intention to imply that a computer should be a final decision maker. I was only making comments with respect to the Chomsky article.
 
  • #92
I didn't want to create a super long post earlier but I guess that I need to clear up a few things. My second bullet about explainable AI was referring to understanding the decisions that an AI makes so that humans can choose whether they are worthwhile.

For example, a classic screwup in CNN training is the wolf-vs-dog case. You can train an algorithm to tell wolves from dogs with very high accuracy, and there have been cases where it worked absolutely fine until someone showed it a picture of a dog and it started classifying it as a wolf. This can be a real head-scratcher until you apply algorithms that show you what it's focusing on in the dog and wolf pictures, and you find out that it's focusing on snow, because all of the wolf pictures you trained it on had snow in them and the dog pictures didn't. Explainable AI seeks to have the algorithm expose the portions of imagery or semantic context that it is using to make its decision. It's up to the human at that point to decide if it's reasonable.
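The wolf/dog story can be made concrete with a toy occlusion-sensitivity check. Everything below is invented for illustration: the "classifier" is a stand-in that secretly keys on bottom-row brightness (the "snow"), not a real CNN.

```python
def snow_biased_score(image):
    """Toy 'wolf' score: mean brightness of the bottom half of the image.

    Stand-in for a badly trained classifier that learned the shortcut
    'bright bottom = snow = wolf' instead of looking at the animal.
    image is a list of rows of pixel values in [0, 1].
    """
    bottom = [p for row in image[len(image) // 2:] for p in row]
    return sum(bottom) / len(bottom)


def occlusion_drops(image, score):
    """Occlusion sensitivity: blank out each row in turn and record how
    much the score falls. Large drops mark the rows the model relies on."""
    base = score(image)
    drops = []
    for i in range(len(image)):
        occluded = [row[:] for row in image]   # copy, then zero one row
        occluded[i] = [0.0] * len(occluded[i])
        drops.append(base - score(occluded))
    return drops


# A 4x2 'wolf' photo: dark animal on top, bright snow on the bottom.
photo = [[0.2, 0.2], [0.2, 0.2], [0.9, 0.9], [0.9, 0.9]]
print(occlusion_drops(photo, snow_biased_score))
```

Occluding the animal rows changes nothing, while occluding the "snow" rows drops the score by roughly 0.45 each, which is exactly the kind of tell that exposes the snow shortcut.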

As for the third bullet, my point there is that an algorithm is going to have some biases based on its training. If you train an algorithm to believe that the earth is round, it's going to piss off the flat earth society. Countries like China train their algorithms to their belief systems, and politicians in Congress often complain about how Google, Facebook or Twitter algorithms aren't treating their party favorably. This is what I'm talking about with regard to biases. I'm definitely not in the camp of allowing computers to make all of our decisions. However, people do currently get information from these places without knowing how it was generated or the inherent biases that the algorithms have.

Again, I see these things only as tools to help make a decision - not as ultimate decision makers.
 
  • Like
Likes jack action
  • #93
Borg said:
For example, a classic screwup in CNN training is the wolf-vs-dog case. You can train an algorithm to tell wolves from dogs with very high accuracy, and there have been cases where it worked absolutely fine until someone showed it a picture of a dog and it started classifying it as a wolf. This can be a real head-scratcher until you apply algorithms that show you what it's focusing on in the dog and wolf pictures, and you find out that it's focusing on snow, because all of the wolf pictures you trained it on had snow in them and the dog pictures didn't. Explainable AI seeks to have the algorithm expose the portions of imagery or semantic context that it is using to make its decision. It's up to the human at that point to decide if it's reasonable.
This makes a lot of sense to me. Asking how the algorithm reached its conclusion is like PF mentors asking homework posters to "show us your work."

The snow in the picture reminded me of long-ago days in art history class. Professor Stoddard warned us that he had hundreds of photos, so we should not rely on "the blue volkswagen is parked in front of the Parthenon; the red Fiat is in front of the Pantheon."
 
  • #94
OmCheeto said:
South Park's latest episode, 'Deep Learning', according to the closing credits, was co-written by ChatGPT.

...
I added Trey Parker and ChatGPT as writers for the episode in the Wikipedia entry, but then it appears that two people got in a fight about it and removed ChatGPT. So I asked ChatGPT about it, and it denied helping write the script, so I guess that's settled.

me; "Did you cowrite the latest episode of South Park with Trey Parker?"​
bot; "As an AI language model, I did not co-write the latest episode of South Park with Trey Parker or any other human writer. However, I can assist you with any questions or information you may have about South Park or other related topics."​
me; "Are you sure? They listed you in the credits as a writer."​
bot; "As an AI language model, I don't have the ability to write or co-write TV episodes or any other creative works. While AI technology has been used in various creative fields, including music and art, it's still not capable of fully taking on the role of a human writer or artist. Therefore, it's unlikely that I was listed as a writer in the credits for the latest episode of South Park or any other TV show."​
me; "I watched the episode and you were listed as a co-writer. I am not making this up."​
bot; "I apologize for any confusion or misinformation in my previous responses. As an AI language model, I do not have the capability to co-write TV episodes or any other creative works on my own. However, it is possible that my language generation technology was used in the writing process for the episode, which may have led to my name being listed in the credits as a writer. Nonetheless, I am not an individual entity with creative input in the sense that a human writer would be."​
me; "Ok, thank you. And have a nice day."​
bot; "You're welcome, and you too! If you have any further questions or if there's anything else I can assist you with, please don't hesitate to ask."​
 
  • #95
I thought the Noam Chomsky article was a lot like arguing why infants shouldn't drive.

But then, I understand it only as a text completion engine. The fact that it can generate coherent sentences and code (regardless of accuracy) is impressive on its own, I do not project consciousness or inductive reasoning onto it.
 
  • Like
Likes Borg
  • #96
Pretty good except for the job board

Image 3-15-23 at 11.09 AM.jpeg
 
  • Like
Likes PhDeezNutz, OmCheeto and collinsmark
  • #97
Greg Bernhardt said:
except for the job board
We did have that area of PF for a while, but it was probably at least 10 years ago, no?
 
  • Like
Likes PhDeezNutz and Greg Bernhardt
