#1 ubergewehr273
Let a function ##f:X \to X## be defined.
Let A and B be sets such that ##A \subseteq X## and ##B \subseteq X##.
Then which of the following are correct?
a) ##f(A \cup B) = f(A) \cup f(B)##
b) ##f(A \cap B) = f(A) \cap f(B)##
c) ##f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B)##
d) ##f^{-1}(A \cap B) = f^{-1}(A) \cap f^{-1}(B)##
My attempt:
For option a, let an element ##x \in f(A \cup B)##
##\Leftrightarrow## ##f^{-1}(x) \in A \cup B##
##\Leftrightarrow## ##f^{-1} (x)\in A## or ##f^{-1}(x) \in B##
##\Leftrightarrow## ##x \in f(A)## or ##x \in f(B)##
##\Leftrightarrow## ##x \in f(A) \cup f(B)##
##\Rightarrow## ##f(A \cup B) = f(A) \cup f(B)##
A similar argument can be applied to options c and d as well.
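These four identities can also be sanity-checked by brute force on a small finite set. The sketch below is not part of the original post: it enumerates every function ##f : X \to X## on ##X=\{1,2,3\}## together with every pair of subsets ##A, B \subseteq X##, and the helper names `image`, `preimage` and `subsets` are my own.

```python
from itertools import chain, combinations, product

X = {1, 2, 3}

def image(f, S):
    """Image f(S) = {f(x) : x in S}."""
    return {f[x] for x in S}

def preimage(f, S):
    """Preimage f^{-1}(S) = {x : f(x) in S}."""
    return {x for x in f if f[x] in S}

def subsets(base):
    """All subsets of a finite set, as Python sets."""
    xs = sorted(base)
    return [set(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Enumerate every function f : X -> X (as a dict) and every pair of subsets A, B of X.
fails = {"a": 0, "b": 0, "c": 0, "d": 0}
for values in product(sorted(X), repeat=len(X)):
    f = dict(zip(sorted(X), values))
    for A in subsets(X):
        for B in subsets(X):
            if image(f, A | B) != image(f, A) | image(f, B):
                fails["a"] += 1
            if image(f, A & B) != image(f, A) & image(f, B):
                fails["b"] += 1
            if preimage(f, A | B) != preimage(f, A) | preimage(f, B):
                fails["c"] += 1
            if preimage(f, A & B) != preimage(f, A) & preimage(f, B):
                fails["d"] += 1

print(fails)  # only option b should show any failures
```

Running this check should report failures only for option b, which is consistent with the exam key quoted at the end of the post.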
However, option b doesn't seem to fit into this argument. Even though the approach above seems sound, a counterexample can be given to disprove option b. It goes as follows:
Let ##X=\left\{1,2,3 \right\}##, ##A=\left\{1 \right\}## and ##B=\left\{2 \right\}##
Let the function ##f## be defined by ##f(1)=3## and ##f(2)=3##.
Clearly ##A \cap B = \emptyset##, so ##f(A \cap B) = f(\emptyset) = \emptyset##, whereas ##f(A) \cap f(B) = \{3\} \cap \{3\} = \{3\}##.
This disproves option b.
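For concreteness, here is that counterexample evaluated directly on finite sets. This is only a sketch: the post does not specify ##f(3)##, so ##f(3)=3## below is an assumption, and the `image` helper mirrors the one in the earlier sketch.

```python
def image(f, S):
    """Image f(S) = {f(x) : x in S}."""
    return {f[x] for x in S}

A, B = {1}, {2}
f = {1: 3, 2: 3, 3: 3}  # f(3) = 3 is an assumption; the post only specifies f(1) and f(2)

print(image(f, A & B))            # f(A ∩ B) = f(∅) = ∅, printed as set()
print(image(f, A) & image(f, B))  # f(A) ∩ f(B) = {3} ∩ {3} = {3}
```

The two sides genuinely differ here (##\emptyset## versus ##\{3\}##), which is enough to rule out option b.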
But option d is correct even though option b is incorrect. Can somebody clarify this for me?
In simple terms, if option a is right, then why not option b? (Surely there must be some flaw in the above proof when it is applied to option b, but what is it?) And since option b is incorrect, how can option d be correct?
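As a quick check on option d, the same non-injective function can be tested with preimages. Again this is only a sketch, with the same assumed ##f(3)=3## and the `preimage` helper from the earlier code.

```python
def preimage(f, S):
    """Preimage f^{-1}(S) = {x : f(x) in S}."""
    return {x for x in f if f[x] in S}

f = {1: 3, 2: 3, 3: 3}  # same non-injective f as above, with f(3) = 3 assumed
A, B = {1}, {2}

print(preimage(f, A & B))               # f^{-1}(A ∩ B) = f^{-1}(∅) = ∅
print(preimage(f, A) & preimage(f, B))  # f^{-1}(A) ∩ f^{-1}(B) = ∅ ∩ ∅ = ∅
```

Both sides come out empty, so the particular pair of sets that breaks option b does not contradict option d.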
NOTE: The above question appeared in an exam, and the correct answers are options a, c and d.