A couple of questions on set theory

In summary, the Cartesian product and Cartesian square can be constructed in a way that follows the format ##\{x\in A~\vert~P(x)\}##.
  • #106
Ok, great :) I'll get on it.

I have a question though. There are shapes whose area has to be found through integration. Will this area function also have to account for them? That would basically mean I'd need to construct an integral.
 
  • #107
Dods said:
Ok, great :) I'll get on it.

I have a question though. There are shapes whose area has to be found through integration. Will this area function also have to account for them? That would basically mean I'd need to construct an integral.

The Lebesgue measure can be constructed without any notion of integral. So the definition of area doesn't need integration. However, if you want to explicitly find the area, then integrals might be handy. I mean: with measures you can show that something has an area, but you might not be able to find its value.

The logical procedure is: first you construct measures, then you construct the associated integral.
 
  • #108
Right, so if I just want to show that an object has area I wouldn't need integrals, although for figuring out what that area is they might be needed.

I thought of starting by defining a rectangle's area, and then expanding that definition to other shapes. Do you think that's a good idea, or would I need to start at a more basic level, where I define some concepts, use them to show that a rectangle has an area, and then expand the definition?

I very much appreciate your advice on this :)
 
  • #109
Dods said:
Right, so if I just want to show that an object has area I wouldn't need integrals, although for figuring out what that area is they might be needed.

I thought of starting by defining a rectangle's area, and then expanding that definition to other shapes. Do you think that's a good idea, or would I need to start at a more basic level, where I define some concepts, use them to show that a rectangle has an area, and then expand the definition?

I very much appreciate your advice on this :)

Sure, starting with rectangles is a good idea. In fact, try to start with rectangles whose sides are parallel to the x-axis and y-axis. So rectangles of the form ##(a_1,a_2]\times (b_1,b_2]## (I exclude the sides because it is technically easier).
 
  • #110
Thanks micromass!

I don't think I'm reading that cartesian product right - wouldn't you need open intervals, not half-open ones to exclude the sides?

Also, at some point we defined a rectangle-like object before in this thread.

Anyway, I'll start coming up with it, and I'll post the start of it soon. :)
 
  • #111
Dods said:
Thanks micromass!

I don't think I'm reading that cartesian product right - wouldn't you need open intervals, not half-open ones to exclude the sides?

I only exclude two of the sides.

I do it this way because things end up being easier. Why? Because you can easily "paste" together rectangles this way. For example, the union of ##(0,1]\times (0,1]## and ##(1,2]\times (0,1]## is again a rectangle of the same sort and there is no overlap. If we take closed or open rectangles, then we have the sides to worry about.
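If you like, here is a quick Python sanity check of that pasting on a crude sample grid (the helpers `contains` and `area` are made up purely for illustration, not anything standard):

```python
# A rectangle (a1, a2] x (b1, b2] is stored as the tuple (a1, a2, b1, b2).

def contains(rect, p):
    """Is the point p = (x, y) in the half-open rectangle rect?"""
    a1, a2, b1, b2 = rect
    x, y = p
    return a1 < x <= a2 and b1 < y <= b2

def area(rect):
    """Area of (a1, a2] x (b1, b2]."""
    a1, a2, b1, b2 = rect
    return (a2 - a1) * (b2 - b1)

r1 = (0, 1, 0, 1)     # (0,1] x (0,1]
r2 = (1, 2, 0, 1)     # (1,2] x (0,1]
union = (0, 2, 0, 1)  # (0,2] x (0,1]

# No sample point lies in both pieces, every sample point of the union lies
# in one of the pieces, and the areas simply add.
samples = [(x / 10, y / 10) for x in range(-5, 30) for y in range(-5, 15)]
assert not any(contains(r1, p) and contains(r2, p) for p in samples)
assert all(contains(union, p) == (contains(r1, p) or contains(r2, p)) for p in samples)
assert area(r1) + area(r2) == area(union)
```

Of course the grid check is not a proof, but it shows why there are no boundary points to worry about with this family of rectangles.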

Also, at some point we defined a rectangle-like object before in this thread.

OK, but that's not the same rectangle as we'll use here.
 
  • #112
I've been doing another round of exams (last day of high school on Sunday! :]). I'll come back and update soon. I'm trying to come up with definitions that generalize readily to n-dimensional analogues of area and volume.

Dods
 
  • #113
Heya,

I finally have time to come back to this :) !

I have a question:

If I have a set that is a union of several sets, say [itex]Y = A \cup B \cup C[/itex], is there a way, using set-builder notation, to refer to the set (let's call it [itex]S_Y[/itex]) of all the sets that are part of the union?

That is [itex]S_Y = \{A, B, C\}[/itex] in this case. I'm not sure how to approach this, especially given that the same set can be described by different unions: we can have [itex]Y = A \cup B \cup C = E \cup F[/itex].

Thanks!
 
  • #114
If you're only given the set ##Y##, then there is no way to recover the specific sets whose union gives ##Y##.

I mean, you could find collections of sets whose union is ##Y##, but you're not going to find the specific sets that were originally used.
 
  • #115
By the way, in general if I have a list of elements where each successive element is connected to the one before it by the same operation, like:


[tex]x_1 + x_2 + x_3 + x_4 + x_5 + x_6[/tex]

or


[tex]x_1 \cdot x_2 \cdot x_3 \cdot x_4 \cdot x_5[/tex]

or even

[tex]x_1 \times x_2 \times x_3 \times x_4[/tex]

and I want to talk about the same list with [itex]n[/itex] elements, is it considered rigorous to express it as something like:

[tex]x_1 + ... + x_n[/tex]

or

[tex]x_1 \times ... \times x_n[/tex]

? Or is this shorthand for a more rigorous statement?

The issue just came up with some sets I was considering.

Thanks :)
 
  • #116
Good question. Let's look at the definition of addition. Addition is defined as a binary operation. What this means is that it is a function ##+:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}##. So to ##(a,b)##, we associate ##+(a,b)##. Since this looks weird, we just write it as ##a+b##.

From this definition, we see that something like ##a+b+c## does not make sense. We can't define it as ##+(a,b,c)##, since ##+## only operates on ##2## terms. However, the following is defined:

[tex](a+b)+c[/tex]

In more primitive notation, we have ##+(+(a,b),c)##. So we let ##+## operate on ##+(a,b)## and ##c##. But we can also define

[tex]a+(b+c)[/tex]

which of course means ##+(a,+(b,c))##. And it turns out that both are equal. So, we have

[tex]+(+(a,b),c) = +(a,+(b,c))[/tex]

It is for this reason that we can invent the notation ##a+b+c## to mean the value of the above. This is an abuse of notation. Purely rigorously, we can't write it as ##a+b+c##. But of course, the abuse is very convenient.
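If it helps, here is a minimal Python sketch of this abuse of notation: a binary operation is just a two-argument function, and the unbracketed sum is really an iterated (folded) application of it. The names `plus` and `minus` are just illustrative.

```python
from functools import reduce

# A binary operation is a function of two arguments.
def plus(a, b):
    return a + b

terms = [1, 2, 3, 4]

# "1 + 2 + 3 + 4" bracketed from the left: plus(plus(plus(1, 2), 3), 4).
left_fold = reduce(plus, terms)

# Bracketed from the right instead: plus(1, plus(2, plus(3, 4))).
right_fold = reduce(lambda a, b: plus(b, a), reversed(terms))

# Because + is associative, the bracketing doesn't matter, which is what
# licenses the unbracketed notation 1 + 2 + 3 + 4.
assert left_fold == right_fold == 10

# With a non-associative operation (like subtraction, or the Lie bracket
# mentioned below), the two bracketings disagree, so the unbracketed
# notation is not available.
def minus(a, b):
    return a - b

assert reduce(minus, terms) != reduce(lambda a, b: minus(b, a), reversed(terms))
```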

There is an operation, called the Lie bracket (denoted by ##[a,b]##). What it is, is of no importance. But it is also a binary operation. However, it is not associative in general. Thus we have

[tex][a,[b,c]] \neq [a,[b,c]][/tex]

This is a major example of a nonassociative operation. Because it is nonassociative, we do not write it as ##[a,b,c]##. We always write it as a binary operation.

The Cartesian product of sets is a bit more subtle. There, it does not hold that

[tex]A\times (B\times C) = (A\times B)\times C[/tex]

So the operation fails to be associative. However, we do write it as ##A\times B\times C##. The reason is that there is a natural bijection between the two sets. That is, we can identify the element ##(a,(b,c))\in A\times (B\times C)## with the element ##((a,b),c)\in (A\times B)\times C##. This is a canonical or natural identification. So when we write ##A\times (B\times C) = (A\times B)\times C##, we always keep this identification in mind.
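A small Python sketch of that identification on finite sets (the set choices and the helper name `reassociate` are just for illustration):

```python
from itertools import product

A, B, C = {1, 2}, {"x", "y"}, {True, False}

left_nested  = {(a, (b, c)) for a in A for (b, c) in product(B, C)}   # A x (B x C)
right_nested = {((a, b), c) for (a, b) in product(A, B) for c in C}   # (A x B) x C

# As sets they are genuinely different...
assert left_nested != right_nested

# ...but the map (a, (b, c)) -> ((a, b), c) is a bijection between them,
# which is the identification we keep in mind when writing A x B x C.
def reassociate(t):
    a, (b, c) = t
    return ((a, b), c)

assert {reassociate(t) for t in left_nested} == right_nested
```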
 
  • #117
Thank you for an excellent explanation!

micromass said:
Good question. Let's look at the definition of addition. Addition is defined as a binary operation. What this means is that it is a function ##+:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}##.

Could you say that for any[itex]_1[/itex] binary operation [itex]*[/itex], we can define it as ##*:S \times S\rightarrow S##?

That is, a function that takes two elements of the same set to another element of that set?

[itex]_1[/itex] I mean as a basic definition - I assume that you'd need to amend this for an operation like division over the real numbers (but not for division over the "real numbers not including zero") ?

micromass said:
In more primitive notation, we have ##+(+(a,b),c)##. So we let ##+## operate on ##+(a,b)## and ##c##. But we can also define

[tex]a+(b+c)[/tex]

which of course means ##+(a,+(b,c))##. And it turns out that both are equal. So, we have

[tex]+(+(a,b),c) = +(a,+(b,c))[/tex]

In Spivak's Calculus book he shows this for [itex] a + b + c + d[/itex] using properties of real numbers (I did it as an exercise and then checked the book). Is this how it turns out both are equal?

micromass said:
There is an operation, called the Lie bracket (denoted by ##[a,b]##). What it is, is of no importance. But it is also a binary operation. However, it is not associative in general. Thus we have

[tex][a,[b,c]] \neq [a,[b,c]][/tex]

This is a major example of a nonassociative operation. Because it is nonassociative, we do not write it as ##[a,b,c]##. We always write it as a binary operation.

Do you mean [tex][a,[b,c]] \neq [[a,b],c][/tex]?

micromass said:
The Cartesian product of sets is a bit more subtle. There, it does not hold that

[tex]A\times (B\times C) = (A\times B)\times C[/tex]

So the operation fails to be associative. However, we do write it as ##A\times B\times C##. The reason is that there is a natural bijection between the two sets. That is, we can identify the element ##(a,(b,c))\in A\times (B\times C)## with the element ##((a,b),c)\in (A\times B)\times C##. This is a canonical or natural identification. So when we write ##A\times (B\times C) = (A\times B)\times C##, we always keep this identification in mind.

That's very interesting. If for a minute we think about ordered 4-tuples as nested ordered pairs, then whether the definition "nests" them from the left

[tex](((x,y),z),w)[/tex]

or from the right

[tex](x,(y,(z,w)))[/tex]

does change the set's definition, but we associate both ways of defining it with the ordered 4-tuple ##(x,y,z,w)##...

Is that relevant or have I started a whole other thing here?
 
  • #118
Dods said:
Could you say that for any[itex]_1[/itex] binary operation [itex]*[/itex], we can define it as ##*:S \times S\rightarrow S##?

That is, a function that takes two elements of the same set to another element of that set?

Yes, that is essentially how a binary operation is defined.

[itex]_1[/itex] I mean as a basic definition - I assume that you'd need to amend this for an operation like division over the real numbers (but not for division over the "real numbers not including zero") ?

Division is usually not treated as a binary operation. It is possible to treat it as one, but this is annoying. In abstract algebra, we usually look at the unary operation

[tex]i:\mathbb{R}\setminus \{0\}\rightarrow \mathbb{R}:x\rightarrow 1/x[/tex]

then division ##a/b## is just ##a*i(b)##. This is the usual way to deal with this.
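In code this point of view is very short; here is a minimal Python sketch (the names `i` and `divide` are just illustrative):

```python
# The unary inverse is only defined away from 0, and a/b is just a * i(b).
def i(x):
    if x == 0:
        raise ValueError("0 has no multiplicative inverse")
    return 1 / x

def divide(a, b):
    return a * i(b)

assert divide(3, 4) == 3 / 4
assert divide(10, 2) == 5.0
```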

In Spivak's Calculus book he shows this for [itex] a + b + c + d[/itex] using properties of real numbers (I did it as an exercise and then checked the book). Is this how it turns out both are equal?

Yes.

Do you mean [tex][a,[b,c]] \neq [[a,b],c][/tex]?

Yes, sorry.

That's very interesting. If for a minute we think about ordered 4-tuples as nested ordered pairs, then whether the definition "nests" them from the left

[tex](((x,y),z),w)[/tex]

or from the right

[tex](x,(y,(z,w)))[/tex]

does change the set's definition, but we associate both ways of defining it with the ordered 4-tuple ##(x,y,z,w)##...

Is that relevant or have I started a whole other thing here?

No, what you said is exactly right. We "associate" ##(((x,y),z),w)## and ##(x,y,z,w)##. So although they aren't the same thing, we do identify them informally.
 
  • #119
I've been reviewing some of the material we've covered and I had some miscellaneous questions:

First of all, I've seen [itex]A \setminus D[/itex] type sets and while I've understood them from context, how is the "\" operation defined?

-----

Second, let's say we want the set of all functions from [itex]\{0,1\}[/itex] to [itex]\mathbb{R}[/itex]. We could write [itex]\{f: \{0,1\} \rightarrow \mathbb{R} \vert f \ \text{is a function} \}[/itex] and that'd be clear enough. But if we were being pedantically rigorous and we wanted that in ##\{x \in B~\vert~P(x)\}## format, could we write: [itex]\{f \in \mathcal{P}(\{0,1\}\times \mathbb{R})~\vert~\text{f is a function}\}[/itex], seeing as our ##f## is always a subset of ##\{0,1\} \times \mathbb{R}##?

As always, thanks!
 
  • #120
Dods said:
I've been reviewing some of the material we've covered and I had some miscellaneous questions:

First of all, I've seen [itex]A \setminus D[/itex] type sets and while I've understood them from context, how is the "\" operation defined?

The definition is [tex]A\setminus D = \{x\in A~\vert~x\notin D\}[/tex]
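On finite sets this is exactly the built-in set difference; a quick Python illustration:

```python
A = {1, 2, 3, 4, 5}
D = {2, 4, 6}

# The definition above, written out as a comprehension...
difference = {x for x in A if x not in D}

# ...which is exactly what Python's built-in set difference computes.
assert difference == A - D == {1, 3, 5}
```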

Second, let's say we want the set of all functions from [itex]\{0,1\}[/itex] to [itex]\mathbb{R}[/itex]. We could write [itex]\{f: \{0,1\} \rightarrow \mathbb{R} \vert f \ \text{is a function} \}[/itex] and that'd be clear enough. But if we were being pedantically rigorous and we wanted that in ##\{x \in B~\vert~P(x)\}## format, could we write: [itex]\{f \in \mathcal{P}(\{0,1\}\times \mathbb{R})~\vert~\text{f is a function}\}[/itex], seeing as our ##f## is always a subset of ##\{0,1\} \times \mathbb{R}##?

Yes, that is correct.
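If you want to see that construction concretely, here is a small Python sketch. Since ##\mathbb{R}## is infinite, it uses a small finite stand-in for the codomain, and the helper names (`powerset`, `is_function`) are made up for illustration:

```python
from itertools import chain, combinations, product

X = {0, 1}
Y = {-1, 0, 1}   # a small finite stand-in for R, so that the power set stays finite

def powerset(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def is_function(G, domain):
    # G is (the graph of) a function on `domain` iff every element of the
    # domain appears as a first coordinate in exactly one pair of G.
    return all(sum(1 for (x, y) in G if x == a) == 1 for a in domain)

cartesian = set(product(X, Y))
functions = [G for G in powerset(cartesian) if is_function(G, X)]

# There are |Y|^|X| = 3^2 = 9 functions from X to Y, each of them realised
# as a subset of X x Y picked out of the power set P(X x Y).
assert len(functions) == 9
```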
 
  • #121
Right.

So if I wanted the set of numbers [itex]1,2,3,...,n[/itex], I could write it as [tex](n+1) \setminus \{0\}[/tex] seeing as we said [itex]n+1 = \{0,1,2,...,n\}[/itex]?

Do you have any ideas for things I could either prove or define using set theory? I like the problems you give me, they're fun :)

Thanks a lot micro :)
 
  • #122
Dods said:
Right.

So if I wanted the set of numbers [itex]1,2,3,...,n[/itex], I could write it as [tex](n+1) \setminus \{0\}[/tex] seeing as we said [itex]n+1 = \{0,1,2,...,n\}[/itex]?

Do you have any ideas for things I could either prove or define using set theory? I like the problems you give me, they're fun :)

Thanks a lot micro :)

Seems right!

What about the following:

For any set ##A##, we define ##S(A) = A\cup \{A\}##. We say that a set ##X## is inductive if ##0:=\emptyset\in X## and if ##x\in X##, then ##S(x)\in X##. Let ##\mathcal{A}## be the collection of all inductive sets, and define

[tex]\mathbb{N} = \bigcap_{A\in \mathcal{A}} A[/tex]

So ##\mathbb{N}## is the smallest inductive set.

Maybe you can try to prove the Peano axioms:
  • For every ##n\in \mathbb{N}##, we have that ##S(n)\in \mathbb{N}##
  • For every ##n,m\in \mathbb{N}##, it holds that if ##S(n) = S(m)##, then ##n=m##.
  • There is no ##n\in \mathbb{N}## such that ##S(n) = 0##
  • If ##K\subseteq \mathbb{N}## satisfies ##0\in K## and if ##n\in K## implies ##S(n)\in K##, then it holds that ##K=\mathbb{N}##

Now, can you define addition and multiplication on ##\mathbb{N}##? Can you check the usual properties:

  • ##n + (m + k) = (n+m) + k##
  • ##n + 0 = n = 0 + n##
  • ##n + m = m + n##
  • If ##n+m = n + k##, then ##m=k##
  • ##n\cdot (m\cdot k) = (n\cdot m)\cdot k##
  • ##n\cdot m = m\cdot n##
  • ##n\cdot 1 = n = 1\cdot n##
  • If ##n\cdot m = 0##, then ##n=0## or ##m=0##
  • ##n\cdot (m + k) = (n\cdot m) + (n\cdot k)##

What about an ordering relation ##<##, can you define that?
Can you define the integers ##\mathbb{Z}##?
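If you want to experiment before proving things, here is a small Python sketch of these definitions using frozensets. The helper names (`S`, `nat`, `add`) are just illustrative, and the recursive `add` is only one possible way to set up addition:

```python
# Von Neumann naturals built with frozensets: 0 = {}, S(n) = n U {n}.
ZERO = frozenset()

def S(n):
    return n | frozenset({n})

def nat(k):
    """The von Neumann representative of the ordinary natural number k."""
    n = ZERO
    for _ in range(k):
        n = S(n)
    return n

# n + 1 = {0, 1, ..., n}: each natural is the set of its predecessors.
assert nat(3) == frozenset({nat(0), nat(1), nat(2)})
assert len(nat(5)) == 5

# Two of the Peano facts, checked on small cases:
# S(n) is never 0, and S is injective.
assert all(S(nat(k)) != ZERO for k in range(20))
assert all((S(nat(a)) == S(nat(b))) == (a == b) for a in range(10) for b in range(10))

def add(n, m):
    # n + 0 = n,  n + S(k) = S(n + k).  For a nonzero von Neumann natural m,
    # its predecessor k is the element of m with the most elements.
    if m == ZERO:
        return n
    k = max(m, key=len)
    return S(add(n, k))

assert add(nat(2), nat(3)) == nat(5)
```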
 
  • #123
micromass said:
Seems right!

Great, thanks :)

micromass said:
What about the following:

For any set ##A##, we define ##S(A) = A\cup \{A\}##. We say that a set ##X## is inductive if ##0:=\emptyset\in X## and if ##x\in X##, then ##S(x)\in X##. Let ##\mathcal{A}## be the collection of all inductive sets, and define

[tex]\mathbb{N} = \bigcap_{A\in \mathcal{A}} A[/tex]

So ##\mathbb{N}## is the smallest inductive set.

^ So you mean I can take this information as "given" or is there a part of this you want me to prove?

micromass said:
Maybe you can try to prove the Peano axioms:
  • For every ##n\in \mathbb{N}##, we have that ##S(n)\in \mathbb{N}##
  • For every ##n,m\in \mathbb{N}##, it holds that if ##S(n) = S(m)##, then ##n=m##.
  • There is no ##n\in \mathbb{N}## such that ##S(n) = 0##
  • If ##K\subseteq \mathbb{N}## satisfies ##0\in K## and if ##n\in K## implies ##S(n)\in K##, then it holds that ##K=\mathbb{N}##

Now, can you define addition and multiplication on ##\mathbb{N}##? Can you check the usual properties:

  • ##n + (m + k) = (n+m) + k##
  • ##n + 0 = n = 0 + n##
  • ##n + m = m + n##
  • If ##n+m = n + k##, then ##m=k##
  • ##n\cdot (m\cdot k) = (n\cdot m)\cdot k##
  • ##n\cdot m = m\cdot n##
  • ##n\cdot 1 = n = 1\cdot n##
  • If ##n\cdot m = 0##, then ##n=0## or ##m=0##
  • ##n\cdot (m + k) = (n\cdot m) + (n\cdot k)##

What about an ordering relation ##<##, can you define that?
Can you define the integers ##\mathbb{Z}##?

That should keep me busy! These look great, thank you!
 
  • #124
OK, assuming I understood you correctly and I can take the stated definitions as given, I'll start by proving the Peano axiom:

There is no [itex]n \in \mathbb{N}[/itex] such that [itex]S(n) = 0[/itex]

Proof -

Let's assume there does exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]. If we recall the definitions of [itex]S(n)[/itex] and ##0##, that means there exists an ##n## such that ##n \cup \{n\} = \emptyset##, or put another way, ##\{x\vert x\in n \vee x = n\} = \emptyset##.##^1## ##n## is either the empty set or it is nonempty. If ##n## is nonempty, there exists an ##x## in ##n## and therefore ##\{x\vert x\in n \vee x = n\}## is nonempty, so ##\{x\vert x\in n \vee x = n\} \neq \emptyset## - which contradicts our initial assumption. If ##n## is the empty set, we have ##\{x\vert x\in n \vee x = n\} =\{x\vert x =\emptyset \} = \{\emptyset\} \neq \emptyset## - also a contradiction. Either way, our initial assumption that "there does exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]" leads us to a contradiction and so must be false. Therefore there does not exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]. QED

##^1## I assume that although not in standard format, this set is acceptable because of the axiom of union (and possibly pairing?).

-----------

How's that?
 
  • #125
Dods said:
^ So you mean I can take this information as "given" or is there a part of this you want me to prove?

Oops, I didn't see this question. Those were all basically definitions, so you can take them as given.

Dods said:
OK, assuming I understood you correctly and I can take the stated definitions as given, I'll start by proving the Peano axiom:

There is no [itex]n \in \mathbb{N}[/itex] such that [itex]S(n) = 0[/itex]

Proof -

Let's assume there does exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]. If we recall the definitions of [itex]S(n)[/itex] and ##0##, that means there exists an ##n## such that ##n \cup \{n\} = \emptyset##, or put another way, ##\{x\vert x\in n \vee x = n\} = \emptyset##.##^1## ##n## is either the empty set or it is nonempty. If ##n## is nonempty, there exists an ##x## in ##n## and therefore ##\{x\vert x\in n \vee x = n\}## is nonempty, so ##\{x\vert x\in n \vee x = n\} \neq \emptyset## - which contradicts our initial assumption. If ##n## is the empty set, we have ##\{x\vert x\in n \vee x = n\} =\{x\vert x =\emptyset \} = \{\emptyset\} \neq \emptyset## - also a contradiction. Either way, our initial assumption that "there does exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]" leads us to a contradiction and so must be false. Therefore there does not exist an ##n## in ##\mathbb{N}## such that [itex]S(n) = 0[/itex]. QED

##^1## I assume that although not in standard format, this set is acceptable because of the axiom of union (and possibly pairing?).

-----------

How's that?

That's all fine. Even the ##^1## remark is fine.
 
  • #126
I'm pleased - I'm not that comfortable yet with verbose proofs; you don't really do those in high school, I guess :)

I have a question about some terminology I've seen, like "pair" and "triple". In several places I've seen phrases like "the pair ##(G,*)## is a group if..." or, in an alternate definition of a function:

Fredrik said:
Definition 2

A triple ##f=(X,Y,G)## such that ##G\subseteq X\times Y## is said to be a function from X into Y, if
(a) For all ##x\in X##, there's a ##y\in Y## such that ##(x,y)\in G##.
(b) For all ##x\in X## and all ##y,y'\in Y##, if ##(x,y)\in G## and ##(x,y')\in G##, then ##y=y'##.
X is said to be the domain of f. Y is said to be the codomain of f. G is said to be the graph of f.

What does "pair" and "triple" mean in these contexts? Is it just a handy way of referring to a bunch of interrelated sets that match a definition? Does a triple for instance have some set-theoretic or logical definition or sub-structure?

This might be a weird question if that's just notation, but for all I know a "triple" refers to some deep mathematical logic construction...:-p
 
  • #127
In this context it is nothing special. It's just a way of writing things down. We could just as well say "A group is given by a set ##G## and a function ##*## such that..." But using pair and triple notation is standard.

More formally, given two objects ##x## and ##y##, we define the pair ##(x,y) := \{\{x\},\{x,y\}\}##. The crucial point here is that ##(x,y)= (a,b)## implies that ##x=a## and ##y=b##.

A triple can then be defined as ##(x,y,z) := ((x,y),z)##. And we can go on: ##(x,y,z,w) := ((x,y,z),w)##.

It's nice that we can make formal definitions of these concepts, but they're not all that important. All we want to say is that we are given 2 or 3 objects.
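Here is a small Python sketch of that definition using frozensets (the names `pair` and `triple` are made up for illustration); it also checks the crucial property on a few sample values:

```python
# Kuratowski pairs with frozensets: (x, y) := {{x}, {x, y}}.
def pair(x, y):
    return frozenset({frozenset({x}), frozenset({x, y})})

def triple(x, y, z):
    return pair(pair(x, y), z)   # (x, y, z) := ((x, y), z)

# The characteristic property: pairs are equal exactly when their
# components are equal (checked here on a small sample of values).
values = [0, 1, 2, "a"]
for a in values:
    for b in values:
        for c in values:
            for d in values:
                assert (pair(a, b) == pair(c, d)) == (a == c and b == d)

# Unlike plain two-element sets, the order matters:
assert pair(1, 2) != pair(2, 1)
assert frozenset({1, 2}) == frozenset({2, 1})
assert triple(0, 1, 2) != triple(2, 1, 0)
```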
 
  • #128
Right, I see, so it's just like the ordered pairs we've discussed, extended to ordered n-tuples.

Thanks!
 
  • #129
I noticed something - in Fredrik's post that I quoted, he says that according to the definition of a function that we've been using, each function has many codomains, and according to the second definition, each function has one codomain.

I'm not sure I understand this.

I think I understand why in the definition we've been using there are many codomains. I mean, if we consider for example the functions ##f:\mathbb{R}\rightarrow \mathbb{R} : x \rightarrow x^2## and ##f:\mathbb{R}\rightarrow \mathbb{R_+} : x \rightarrow x^2##, they are equal but have different codomains (Still, this seems to me to rest on the fact that for both functions we can tell exactly what elements are in the respective ranges. I'm not sure if an arbitrary ##f:A\rightarrow B## has multiple codomains).

But I'm not seeing why in the second definition, each function has one codomain.

Dods :)
 
  • #130
Dods said:
I noticed something - in Fredrik's post that I quoted, he says that according to the definition of a function that we've been using, each function has many codomains, and according to the second definition, each function has one codomain.

I'm not sure I understand this.

I think I understand why in the definition we've been using there are many codomains. I mean, if we consider for example the functions ##f:\mathbb{R}\rightarrow \mathbb{R} : x \rightarrow x^2## and ##f:\mathbb{R}\rightarrow \mathbb{R_+} : x \rightarrow x^2##, they are equal but have different codomains (Still, this seems to me to rest on the fact that for both functions we can tell exactly what elements are in the respective ranges. I'm not sure if an arbitrary ##f:A\rightarrow B## has multiple codomains).

Exactly, two functions ##f:A\rightarrow B## and ##g:A\rightarrow C## are equal (in this case) if their graphs coincide, which means that for any ##x\in A##, we have ##f(x) = g(x)##. So ##B## and ##C## can be very different, yet each is called a codomain. So we see that it is perfectly possible to have more than one codomain. As an example, see the functions that you have given.

But I'm not seeing why in the second definition, each function has one codomain.

Because now, if we have two functions ##f:A\rightarrow B## and ##g:A\rightarrow C##, then we are actually given triples ##(A,f,B)## and ##(A,g,C)##. If these triples are equal, thus if ##(A,f,B) = (A,g,C)##, then ##f=g## (meaning that for any ##x\in A##, we have ##f(x) = g(x)##, as before), but we also have ##B=C## (and this is now called the codomain).

So in this case, your examples ##f:\mathbb{R}\rightarrow \mathbb{R} : x \rightarrow x^2## and ##f:\mathbb{R}\rightarrow \mathbb{R_+} : x \rightarrow x^2## describe two entirely different functions, although they were equal in the first case.
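To see the difference concretely, here is a small Python sketch on finite stand-ins (the sets below are just toy replacements for ##\mathbb{R}## and ##\mathbb{R_+}##, and a function is stored as a (domain, codomain, graph) triple, following Definition 2's ##(X, Y, G)## ordering):

```python
# The same squaring rule with two different declared codomains.
domain = frozenset({-2, -1, 0, 1, 2})
graph = frozenset({(x, x * x) for x in domain})

f = (domain, frozenset({-4, -2, 0, 1, 2, 4}), graph)   # codomain plays the role of R
g = (domain, frozenset({0, 1, 2, 4}), graph)           # codomain plays the role of R_+

# With the graph-only definition the two are "the same function"...
assert f[2] == g[2]
# ...but as triples they differ, because the codomain is part of the data.
assert f != g
```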
 
  • #131
I think I understand - the codomain isn't unique as a consequence of the definition; rather, it's unique by definition (as part of the triple).

I got to catch some sleep but this was bugging me on Wikipedia:

'About the same time as Wiener (1914), Felix Hausdorff proposed his definition:

[tex](a, b) := \{ \{a, 1\}, \{b, 2\}\}[/tex]

"where 1 and 2 are two distinct objects different from a and b."[3]'

Why do 1 and 2 need to be distinct for the characteristic property of ordered pairs to hold? I tried proving that [itex](a, b) = (c, d) \ \text{if and only if} \ a=c \wedge b=d[/itex] using this definition and it seems to work (maybe I made an error in the proof - I'll check in the morning). Am I just missing something basic?

Thanks :)
 
  • #132
Dods said:
I think I understand - there isn't a unique codomain as a result of the definitions, but rather by definition (of the triple).

I got to catch some sleep but this was bugging me on Wikipedia:

'About the same time as Wiener (1914), Felix Hausdorff proposed his definition:

[tex](a, b) := \{ \{a, 1\}, \{b, 2\}\}[/tex]

"where 1 and 2 are two distinct objects different from a and b."[3]'

Why do 1 and 2 need to be distinct for the characteristic property of ordered pairs to hold? I tried proving that [itex](a, b) = (c, d) \ \text{if and only if} \ a=c \wedge b=d[/itex] using this definition and it seems to work (maybe I made an error in the proof - I'll check in the morning). Am I just missing something basic?

Thanks :)

If you define ##(a,b) := \{\{a,1\},\{b,1\}\}##, then I really doubt you can prove the characteristic property. If you can't find the error in your proof, then please post it here and I'll try to find it.
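For what it's worth, a quick Python check shows exactly how the property breaks when the two tags coincide (the helper `hpair` is made up for illustration):

```python
# Hausdorff pairs with frozensets, parametrised by the two "tag" objects.
def hpair(a, b, tag1, tag2):
    return frozenset({frozenset({a, tag1}), frozenset({b, tag2})})

# With distinct tags the order is recorded:
assert hpair("a", "b", 1, 2) != hpair("b", "a", 1, 2)

# With identical tags the order is lost, so the characteristic property fails:
# (a, b) and (b, a) become the same set even though a and b differ.
assert hpair("a", "b", 1, 1) == hpair("b", "a", 1, 1)
```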
 
  • #133
Ahh, I read "where 1 and 2 are two distinct objects different from a and b" as meaning 1 and 2 are distinct from a and b, not that they are distinct from each other! In that case my proof does indeed collapse. :-p

I knew it was something silly like that. :)

But when ##1 \neq 2## I think the proof works.
 
