Wikipedia:Reference desk/Archives/Mathematics/2015 April 7

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 7

Generalization of a function

Let f : R+ → R be a monotonically increasing function on the positive real line, and require that f be greater than 1. Define g : N → R by g(n) = f(1)f(2)···f(n). Finally, let h : N → R be defined by where j is a positive continuous function defined on the positive real line. Under suitable conditions, how do I generalize g and h to the real numbers rather than the natural numbers N? Let f and j be themselves real analytic if that helps.--Jasper Deng (talk) 05:05, 7 April 2015 (UTC)
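For concreteness, here is a minimal numerical sketch of the discrete g under the product reading above (g(n) = f(1)f(2)···f(n)); the function names and the sample choice of f are only illustrative, and h is left out since its exact form is not pinned down here.

    import math

    def g(n, f):
        """Discrete g: the product f(1)*f(2)*...*f(n) (empty product for n = 0)."""
        result = 1.0
        for k in range(1, n + 1):
            result *= f(k)
        return result

    # Illustrative choice: f(x) = x + 1 is increasing and greater than 1 on R+.
    f = lambda x: x + 1
    print(g(4, f))   # 120.0, i.e. 2*3*4*5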

for 0≤x<1. Bo Jacoby (talk) 07:22, 7 April 2015 (UTC).
That's not continuous. -- Meni Rosenfeld (talk) 11:26, 7 April 2015 (UTC)
The first question reduces to interpolating the sum log f(1) + log f(2) + ··· + log f(n) by way of logarithms, and the second reduces to it by (maybe you give up some of the assumptions in the transformation, but I'm not sure it matters). So I'll focus on that form.
You'd be in a better position if you had . Then you could use Bo's construction . If converges, you can still make that work: . Then .
If instead you have f decreasing, there is a possibility that g has a power series around ∞ which converges for every n. Then it converges for every sufficiently large x and is unique, and can be used to define g for every x. -- Meni Rosenfeld (talk) 11:26, 7 April 2015 (UTC)
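As a hedged sketch of the logarithmic reduction above (the exact constructions in the post are not reproduced here, so this is just one simple possibility, not necessarily Bo's or Meni's): take logarithms so the product becomes a sum of log f(k), interpolate the partial sums linearly between integers, and exponentiate. The result agrees with g at the integers and is continuous, though certainly not the canonical interpolation being sought.

    import math

    def g_interp(x, f):
        """Continuous extension of g: linear interpolation of log g(n),
        i.e. of the partial sums of log f(k), then exponentiation."""
        n = math.floor(x)
        s = sum(math.log(f(k)) for k in range(1, n + 1))   # log g(n)
        s += (x - n) * math.log(f(n + 1))                  # fractional piece
        return math.exp(s)

    f = lambda t: t + 1
    print(g_interp(3.0, f))   # 24.0 = g(3)
    print(g_interp(3.5, f))   # about 53.7, between g(3) = 24 and g(4) = 120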
@Meni Rosenfeld: What if I extended the domain of f such that it's 1 on the negative real line? Then its logarithm there is always 0 and S = 0. Is my original question then not a special case of what you describe?--Jasper Deng (talk) 00:55, 9 April 2015 (UTC)
Yes. But you're looking for a "natural-seeming" generalization - you might not get one if you extend f in this arbitrary, non-smooth way. E.g., with f(x) = x + 1 it wouldn't give you the gamma function. -- Meni Rosenfeld (talk) 09:03, 9 April 2015 (UTC)
I'm not exactly sure what counts as a generalization, but one approach would be to use product integrals. Let and . This does not interpolate the g and h from the discrete case, unless one assumes that j and f are step functions. Sławomir Biały (talk) 12:04, 7 April 2015 (UTC)
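A hedged numerical sketch of the product-integral suggestion (the specific definitions in the post are assumptions here, including the lower limit of integration): the type-II product integral of f over [a, x] equals exp of the ordinary integral of ln f, which the helper below approximates. Consistent with the caveat above, it does not pass through the discrete values of g.

    import math

    def product_integral(f, a, b, steps=100000):
        """Type-II product integral of f over [a, b], i.e.
        exp( integral from a to b of ln f(t) dt ), via a midpoint rule."""
        h = (b - a) / steps
        total = sum(math.log(f(a + (i + 0.5) * h)) for i in range(steps))
        return math.exp(total * h)

    f = lambda t: t + 1
    print(product_integral(f, 0.0, 4.0))   # about 57.2, not g(4) = 120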
What happens when the function f(x) = x + 1 is plugged into your respective suggestions? Anything for g that reduces to the gamma function (Γ(x + 2) to be precise) in this case will pass a sanity test in my eyes. YohanN7 (talk) 14:06, 7 April 2015 (UTC)
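A quick numerical version of this sanity test, under the same assumed product form of g: with f(x) = x + 1 the discrete values are g(n) = 2·3···(n+1) = (n+1)! = Γ(n+2), so a candidate generalization should at least pass through these points.

    import math

    f = lambda x: x + 1
    for n in range(1, 6):
        g_n = math.prod(f(k) for k in range(1, n + 1))   # 2*3*...*(n+1)
        print(n, g_n, math.gamma(n + 2))                 # g(n) matches Gamma(n+2)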
Well, I'm not sure about the j business or that the goal should be to reproduce the gamma function. But my construction does not reduce to the gamma function. The most natural generalization of the gamma function is probably something like a Mellin transform, which also is not very helpful here. I think something along the lines of fractional calculus could work instead. For instance, if one can construct an auxiliary function k on [0,1] such that (here J^n is the nth iterated integral), then we could interpolate using the fractional integrals. I'm not sure how the details would work, but it might be worth thinking about it this way. Sławomir Biały (talk) 13:00, 9 April 2015 (UTC)
Basically, what I was looking for is a logical interpolation, in the style of what the gamma function does for the factorial.--Jasper Deng (talk) 18:08, 8 April 2015 (UTC)

Can the transitivity of multiplication be proven by a computer?

It was conjectured by Roger Penrose (in The Emperor's New Mind) that a computer could never prove that a×b always equals b×a. Has he been proven wrong? Can a computerized mathematical proof of this be generated from more elementary axioms? Same question for Penrose tiling - he asserts that a computer can never prove that Penrose tiling will fill a plane, since it does not repeat. — PhilHibbs | talk 21:12, 7 April 2015 (UTC)

I'm not sure what Penrose means by "computerized proof". To someone who accepts the Church–Turing thesis, the qualification seems redundant. Certainly a computer can simply enumerate all proofs until it finds one. So since commutativity can be proven in first-order Peano arithmetic, which is arguably more elementary, a computer can find a proof. Similarly, there exists a proof that the Penrose tiling is a tiling of the plane, so a computer can find a proof.--80.109.80.31 (talk) 21:48, 7 April 2015 (UTC)
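As one concrete illustration of "a computer can find a proof": commutativity of multiplication over the natural numbers is routinely verified by proof assistants. A minimal Lean 4 sketch is below; it leans on standard library lemmas about Nat rather than working from the raw Peano axioms, so it illustrates machine-checked proof by induction, not Penrose's exact setting.

    theorem mul_comm' (a b : Nat) : a * b = b * a := by
      induction b with
      | zero => simp [Nat.mul_zero, Nat.zero_mul]
      | succ n ih => simp [Nat.mul_succ, Nat.succ_mul, ih]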
More context for the first statement would be helpful. Penrose defines multiplication using the lambda calculus. So, a×b is something equivalent but not identical to b×a. If induction is not allowed as a rule of inference (so we're confined to something weaker than Peano arithmetic), then we might not even be able to prove equivalence. It depends on the rules. Sławomir Biały (talk) 11:33, 8 April 2015 (UTC)
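To make "equivalent but not identical" concrete, here is a small Python sketch using Church numerals (a standard lambda-calculus encoding, assumed here; Penrose's exact definitions may differ). The terms mul(a)(b) and mul(b)(a) are built differently, yet they compute the same numeral on every input, and proving that they agree in general is exactly where an induction principle comes in.

    # Church numerals: the numeral n takes a function f and returns its n-fold composition.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    mul  = lambda m: lambda n: lambda f: m(n(f))   # m-fold composition of the n-fold composition

    def to_int(n):
        """Convert a Church numeral to a Python int by counting applications."""
        return n(lambda k: k + 1)(0)

    two   = succ(succ(zero))
    three = succ(succ(succ(zero)))

    # Syntactically different terms, same behaviour:
    print(to_int(mul(two)(three)), to_int(mul(three)(two)))   # 6 6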
A little more... As Sławomir Biały says, it depends on the rules. However, Penrose is actually making his remark in a wider context. His underlying point is that he thinks of computers as being entirely bound by rules, and that these rules restrict the conclusions that a computer can arrive at. In particular, he is, I think, echoing (or being heavily influenced by) the ideas of John Lucas in his famous paper Minds, Machines and Gödel. Lucas argued that computers are unable to "see" the truth of statements that human mathematicians can see and prove. The reasoning is closely linked to the work of Alonzo Church and Alan Turing in their negative answer to David Hilbert's Entscheidungsproblem; in effect, they proved that there is no decision procedure that a computer can carry out that determines whether an arbitrarily chosen formula in the first order predicate calculus (i.e. the normal rules of logical deduction) is true or false. That is, Penrose joins Lucas in thinking that computers can't always decide whether a randomly chosen mathematical or logical statement is true or false, and thus that the "mind" of a computer is in some way qualitatively different from the mind of a human.
For what it's worth, there is substantial academic debate on the validity of Penrose's/Lucas' conclusions. RomanSpa (talk) 17:50, 9 April 2015 (UTC)
The statement listed by the OP appears to be a statement of commutativity rather than of transitivity. Is the OP really asking about a*b =? b*a, or about (a*b)*c =? a*(b*c)? Robert McClenon (talk) 19:03, 9 April 2015 (UTC)
Wait a minute. The second example is of associativity. Transitivity applies to relationships, not operations. Robert McClenon (talk) 19:32, 9 April 2015 (UTC)
Penrose and Lucas do appear to make the same argument. Lucas's paper starts out "Gödel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true", which is already wrong. Gödel's argument shows that the Gödel sentence G is provable iff ¬G is provable. If both are provable, the system is inconsistent (and G is false in the standard interpretation). If neither is provable, the system is incomplete (and G is true in the standard interpretation). Saying that we "see" that G is true (in the standard interpretation) is the same as saying that we "see" that the system is consistent, and that's merely an educated guess based on the information available to us. A computer limited to reasoning within a Gödelizable system knows as much as we know about its own Gödel sentence—that it's provable iff its negation is provable iff the system is inconsistent iff it's false in the standard interpretation, etc.—because all of that is provable within the system. There's no evidence that "seeing" (i.e. guessing like humans do) the consistency of the system is not also formalizable within the system. It's simply not defined well enough for anyone to argue anything about it. All of this is obvious to most mathematicians, and I don't think there are many who think the argument of Penrose and Lucas is valid or interesting. -- BenRG (talk) 23:37, 9 April 2015 (UTC)
Whilst I'm largely of the view that the Penrose/Lucas reasoning is ultimately unconvincing, because of a misunderstanding of the nature of what computers can do, I'd like to suggest that BenRG's comments don't seem to me to provide a rigorous critique of their position. I'm reluctant to act as apologist for Lucas, though, for personal reasons. RomanSpa (talk) 01:45, 10 April 2015 (UTC)