"examples for proof"
Could it be possible to prove a general statement by providing enough examples in a certain context--with the proviso that the examples are in some way more "complicated" than your axiom system? This would be a proof of the form:
(a set of instances of the general statement, each differing from the others in some fundamental way, are all true) and (the axiom system is not complicated enough to support that many statements differing in that crucial way without the general statement actually being true) implies (the general statement is true).
For an example of the kind of thinking I mean: to show that the null space of a linear transformation T on R^n is all of R^n, it suffices to exhibit n linearly independent vectors that T maps to zero. T is, in a sense, not "complicated" enough to send n independent vectors to zero without sending every vector to zero.
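The linear-algebra example can be checked numerically. Here is a minimal sketch in NumPy (the dimension, matrix, and test vector are made up for illustration): if T sends each of n independent vectors to zero, then T @ B = 0 for an invertible B, which forces T = 0, so T kills every vector.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# n linearly independent vectors in R^n, taken as the columns of an
# invertible matrix B (a small perturbation of the identity).
B = np.eye(n) + 0.1 * rng.normal(size=(n, n))
assert abs(np.linalg.det(B)) > 1e-9  # the columns really are independent

# If T sends every column of B to zero, then T @ B = 0, and since B is
# invertible, T = 0 @ B^{-1} = 0: the null space of T is all of R^n.
Z = np.zeros((n, n))          # the images of the n independent vectors
T = Z @ np.linalg.inv(B)      # the only T consistent with those images

v = rng.normal(size=n)        # an arbitrary vector, not among the n given
print(np.allclose(T @ v, np.zeros(n)))  # True: T maps every vector to zero
```

The point of the sketch is the middle step: n independent images pinned to zero leave no freedom in T at all.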
For another example, suppose I'm looking at a sequence and I determine that its first n + 1 terms fit a certain polynomial, and I know that the sequence as a whole is given by a polynomial of degree n. Then I can conclude that the polynomial fitting those n + 1 terms fits all the remaining terms as well, since a degree-n polynomial is determined by n + 1 of its values.
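The polynomial example admits the same kind of numerical sketch (NumPy again; the degree and the generating polynomial are made up for illustration): fitting a degree-n polynomial to just the first n + 1 terms recovers the entire sequence.

```python
import numpy as np

n = 3  # the known degree of the polynomial behind the sequence
# A made-up sequence generated by the degree-3 polynomial p(k) = 5 - k + 2k^3.
p = np.polynomial.Polynomial([5, -1, 0, 2])
seq = p(np.arange(10))

# Fit a degree-n polynomial to only the first n + 1 terms...
q = np.polynomial.Polynomial.fit(np.arange(n + 1), seq[:n + 1], deg=n)

# ...and it necessarily matches every later term as well.
print(np.allclose(q(np.arange(10)), seq))  # True
```

Four values pin down all four coefficients of a cubic, so agreement on terms 0 through 3 forces agreement everywhere.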
But these are very narrow examples and they both have to do with linear algebra. Are there more general statements of this kind that have to do with general logic systems?