# number theory

## A178830

Sun, 2011-01-02 18:02

I just added an entry to the Online Encyclopedia of Integer Sequences. The entry is related to a problem recently discussed at Stack Exchange regarding partitioning the set {1,...,n} into two sets P and S with the product of the elements of P equal to the sum of the elements of S.

For instance, (2)(4)(11) = 1+3+5+6+7+8+9+10+12+13+14, so P={2,4,11} and S={1,3,5,6,7,8,9,10,12,13,14} form one such partition of {1,...,14}. Still with n=14, P={1,6,14} and P={1,3,5,6} also work, so these partitions are not necessarily unique.
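These identities are easy to check by machine; here's a quick sanity check in Python (the helper name is mine):

```python
from math import prod

def is_valid_partition(P, n):
    """True if the product of P equals the sum of {1,...,n} minus P."""
    S = set(range(1, n + 1)) - P
    return prod(P) == sum(S)

# The three partitions of {1,...,14} mentioned above:
for P in [{2, 4, 11}, {1, 6, 14}, {1, 3, 5, 6}]:
    print(P, is_valid_partition(P, 14))  # all True
```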

Here are some questions, answers and guesses:

1. Can you do this for any n?

Yes, if n>4. n=3 can be done with P={3}, S={1,2}, but n=4 is not doable. For n at least 5, you can use
P={1,(n-1)/2,n-1} if n is odd, and
P={1,(n-2)/2,n} if n is even.
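A quick Python check that this three-element construction works for every n from 5 up to 10000:

```python
from math import prod

def construction(n):
    """The three-element P described above (n >= 5)."""
    if n % 2 == 1:
        return {1, (n - 1) // 2, n - 1}
    return {1, (n - 2) // 2, n}

for n in range(5, 10001):
    P = construction(n)
    assert len(P) == 3
    assert prod(P) == n * (n + 1) // 2 - sum(P)
print("construction verified for 5 <= n <= 10000")
```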

2. Is there only one way to do this?

It appears that for n>34, you can always do this in at least 2 ways. In fact, based on calculations I've been doing, I conjecture that the number of ways to do this tends to infinity with n.

Derek Jennings on Stack Exchange noted that if n=4m and m>9, then P={8,m-1,m+1} works (the product is 8(m-1)(m+1) = 8m^2-8, and the remaining elements sum to 2m(4m+1) - (8+(m-1)+(m+1)) = 8m^2-8), so if n is a multiple of 4, there are at least 2 ways to do this.

3. Some numbers (n=10,17,26,36,...) can be partitioned using a P with only two elements (e.g., for n=10, P={6,7} works). Are there infinitely many such n? Do they have a positive density?

I expect that there are an infinite number of such n, as I have been able to find quite a few. Some n can be partitioned using 2-element P sets in more than one way. For instance, with n=906, P can be taken to be any of {505,811}, {615,666}, or {637,643}. n=855333 can be done with 19 different 2-element P sets.

We can formulate an equivalent condition. Writing P={a,b}, the requirement ab = n(n+1)/2 - a - b is the same as (a+1)(b+1) = n(n+1)/2 + 1, so the set {1,...,n} can be partitioned using a 2-element P set if and only if k = n(n+1)/2 + 1 has at least two divisors between n/2 and n. I see no reason at all why there should not be an infinite number of n with this property, and in fact, I expect I can prove it with some work.
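Since (a+1)(b+1) = n(n+1)/2 + 1, the 2-element P sets can be enumerated straight from the divisors of k; a sketch:

```python
def two_element_P_sets(n):
    """All {a,b} in {1,...,n} whose product equals the sum of the rest."""
    k = n * (n + 1) // 2 + 1
    found = []
    for d in range(2, n + 1):        # d plays the role of a+1
        if k % d == 0 and d < k // d <= n + 1:
            found.append({d - 1, k // d - 1})
    return found

print(two_element_P_sets(10))        # [{6, 7}]
print(len(two_element_P_sets(906)))  # 3
```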

4. Can all n be done in more than one way with |P|=2 or |P|=3?

I wish. It seems almost all n can be done in this way but, for instance, ...205,217,227,241,578,794,806,925,970,985... cannot be done in more than one way without using a P of more than 3 elements. This list might be finite, but I'm not counting on it. What this means is that a scheme that takes care of all n might need to use P sets with at least 4 elements (which just makes things more complicated).

Fun stuff!

## some number theoretic thoughts

Sun, 2010-11-07 21:17

I've been thinking a little more than the usual amount about number theoretic problems. This is the result of reading problems here and here. So I thought I'd put down some thoughts so I might refer to them later.

Let phi(n) be Euler's phi function: the number of positive integers less than n which are relatively prime to n. So, for instance, phi(6)=2, since only 1 and 5 are relatively prime to 6. Someone asked, essentially: is phi(n+2)/phi(n) dense in the positive reals? The answer is almost certainly yes, but I don't think this is a solved problem. The "2" is arbitrary, I think; the problem would be just as difficult with any other positive integer.
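A brute-force phi is enough to experiment with these ratios (a naive O(n) count, not how one would compute phi seriously):

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count."""
    if n == 1:
        return 1
    return sum(1 for m in range(1, n) if gcd(m, n) == 1)

# Sample the ratios phi(n+2)/phi(n) whose density is asked about above.
ratios = [phi(n + 2) / phi(n) for n in range(2, 50)]
print(min(ratios), max(ratios))
```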

A related, solvable problem: is phi(a)/phi(b) dense in the positive reals, when a and b run over all positive integers? This is true, but since a and b are unrelated, it doesn't have quite the same impact. However, I think more is true: for any positive rational r, there exist a and b with phi(a)/phi(b)=r. I'm pretty sure this is right, but I'll have to check.

These questions lead one to wonder: phi is a multiplicative function with a fairly steady growth. Can we replace phi by other multiplicative functions and get the same answers to these questions? Obviously not all multiplicative functions work. There must be a certain robustness to the prime factors that the function's values take on. What's a good way to measure this robustness (other than answering these questions)?

(Trivially, let f be the multiplicative function with f(p^k)=2^k for every prime p. Then f(a)/f(b) is always an integer power of 2, which is not dense in anything. So multiplicativity alone is not enough.)

I wonder if it would be enough to be multiplicative, and to have the property that for every prime power p^k, there is an n such that p^k divides f(n). Euler's phi certainly satisfies this: by Dirichlet's theorem there is a prime q congruent to 1 mod p^k, and then p^k divides phi(q)=q-1. This holds despite the fact that phi does not take on all positive values (14 is the smallest nontotient). Other functions, like the sum of divisors function, are similar in this way, while the number of divisors function is different: there are numbers with 1, 2, 3, etc. divisors, so it takes on every positive value and satisfies the property trivially.
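This divisibility property is easy to probe numerically; a sketch that finds, for a given prime power q, the smallest n with q dividing phi(n):

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count (fine at this small scale)."""
    if n == 1:
        return 1
    return sum(1 for m in range(1, n) if gcd(m, n) == 1)

def smallest_witness(q):
    """Smallest n such that q divides phi(n)."""
    n = 1
    while phi(n) % q != 0:
        n += 1
    return n

print([(q, smallest_witness(q)) for q in (2, 4, 8, 9, 27)])
# e.g. 8 first divides phi(15) = 8, and 27 first divides phi(81) = 54
```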

More later.

## generalization of highly composite numbers, 2

Mon, 2010-05-31 16:57

Just a short comment.

If we let f(n,k) be the sum of the divisors of n, each raised to the k-th power, then the highly composite numbers are those n which are record setters for k=0. For k=1 we get the highly abundant numbers, and for k=-1 we get the superabundant numbers.

A curious question to consider is the set of k values for which a given n is a record setter.

Using Pari/GP, I just calculated that 672 is a record setter for 0 < k < 0.3705405845106956751517917. Note that k>0 here, so 672 is not highly composite, but it only just misses: it is a record setter for arbitrarily small positive k, yet not for k=0. (I haven't rigorously verified the endpoint of that interval.)
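A sketch of this record-setting computation (brute-force divisor sums; the cutoff constant itself came from Pari/GP, not from this code):

```python
def f(n, k):
    """Sum of the k-th powers of the divisors of n."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def record_setters(k, limit):
    """All n <= limit with f(n,k) > f(j,k) for every j < n."""
    records, best = [], float("-inf")
    for n in range(1, limit + 1):
        v = f(n, k)
        if v > best:
            records.append(n)
            best = v
    return records

print(record_setters(0, 60))            # highly composite: [1, 2, 4, 6, 12, 24, 36, 48, 60]
print(672 in record_setters(0.2, 672))  # a record setter at k = 0.2
print(672 in record_setters(0, 672))    # but not at k = 0 (tau(360) = tau(672) = 24)
```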

Plouffe's Inverter gives me nothing on this 0.3705.. number. It's certainly some solution of a ridiculous exponential sum equation.

Here are more (correct) digits, just for fun: 0.3705405845106956751517917005244140444958735486823441966658021494305594049833761245395005733860464257

## generalization of highly composite numbers

Mon, 2010-05-24 00:02

Highly composite numbers are those positive integers with more divisors than any smaller number. In other words, they are the record setting numbers for the divisor function. The sequence begins 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ... .

Now, the divisor function τ(n) counts the number of divisors of n, including 1 and n itself. If n has the prime factorization (p1^a1)(p2^a2)...(pk^ak), then τ(n) = (a1+1)(a2+1)...(ak+1).

We can generalize this function by replacing each 1 in this product by a complex number z.

We could say τ_z(n) = (a1+z)(a2+z)...(ak+z).

Then we can define the highly z-composite numbers to be those positive integers n for which |τ_z(n)| is greater than |τ_z(j)| for all j < n.
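A small sketch for experimenting with this definition (trial-division factoring, so only suitable for small n; for z=1 the ordinary highly composite numbers should reappear):

```python
from math import prod

def exponents(n):
    """Exponents a_i in the prime factorization of n."""
    exps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            exps.append(e)
        d += 1
    if n > 1:
        exps.append(1)
    return exps

def tau_z(n, z):
    """Generalized divisor function: product of (a_i + z)."""
    return prod(a + z for a in exponents(n))

def highly_z_composite(z, limit):
    """Record setters for |tau_z| among 1..limit."""
    records, best = [], -1.0
    for n in range(1, limit + 1):
        v = abs(tau_z(n, z))
        if v > best:
            records.append(n)
            best = v
    return records

print(highly_z_composite(1, 60))         # z=1 recovers the highly composite numbers
print(highly_z_composite(0.5 + 1j, 60))  # a complex sample to draw pictures from
```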

Also, for each n, we can consider the set of complex numbers z for which n is highly z-composite. For most n, this set will be empty. But for some, it might be quite interesting.

I will have to draw some pictures.

## gcd grid explorer

Mon, 2010-04-26 23:35

I've posted a new applet at Open Processing for exploring the "gcd grid".

## a curiosity

Mon, 2010-02-15 23:29

Just calculated that
τ(819000)=τ(818999)τ(818998)τ(818997)τ(818996)
where τ(n) is the number of divisors of n (e.g., τ(6)=4 since 1, 2, 3, and 6 are the divisors of 6).

This is the smallest such number.

Since τ(n)≥2 for n>1,
and τ(n)=2 only if n is prime,
and among four consecutive integers there are at most two primes,
any number with the property that 819000 has must have at least 36 divisors.
In fact, only one integer among four consecutive integers can have exactly three divisors, so (2)(2)(3)(4)=48 divisors are required.

Now, is there an efficient way to generate only numbers which have at least 48 divisors? I suppose looping over the number of prime divisors, then over the exponents in the prime factorization might be reasonable. Complicated little piece of code that would be.
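I haven't written that generator, but the identity itself is cheap to double-check with a trial-division τ (a sketch, not the efficient search):

```python
def tau(n):
    """Number of divisors of n, via trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

n = 819000
lhs = tau(n)
rhs = tau(n - 1) * tau(n - 2) * tau(n - 3) * tau(n - 4)
print(lhs, rhs, lhs == rhs)
```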

## highly composite and highly abundant numbers

Tue, 2009-11-10 00:32

Someone I corresponded with years ago wrote me today to ask about a comment in a paper of Erdos (L. Alaoglu and P. Erdos, On Highly Composite and Similar Numbers, Transactions of the AMS 56 (1944), no. 3, pp. 448-469).

Highly composite numbers are those numbers with more divisors than any smaller number. That is, they are "record setting" numbers for the number of divisors function. The sequence of h.c. numbers starts: 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180, 240, 360, 720, 840, ....

Highly abundant numbers are those numbers for which the sum of their divisors is greater than the sum of the divisors of any smaller number. That is, they are the "record setting" numbers for the sum of divisors function. The sequence of h.a. numbers starts: 1, 2, 3, 4, 6, 8, 10, 12, 16, 18, 20, 24, 30, 36, 42, 48, 60, 72, 84, 90, 96, 108, 120, ... .

As suggested by these examples, the sequence of h.a. numbers is more dense than the sequence of h.c. numbers. Erdos and Alaoglu prove various things about the density of these, and other, sequences of numbers. At one point, they make the surprising comment that "only a finite number of h.a. numbers can be highly composite". This is surprising since every small h.c. number is also h.a., and there are an infinite number of h.c. numbers. However, not all h.c. numbers are h.a.: tonight I found the example
1084045767585249647898720000 = (2^8)(3^4)(5^4)(7^2)(11^2)(13)...(53)
from some lists of the first 1000 h.c. and h.a. numbers. This number is highly composite but is not highly abundant. So at least we know one sequence is not contained in the other.
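For small bounds the containment in the other direction can be watched directly; a quick sketch comparing the two record lists (brute-force divisor sums, fine at this scale):

```python
def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def divisor_sum(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def records(f, limit):
    """Record setters for f among 1..limit."""
    out, best = [], -1
    for n in range(1, limit + 1):
        if f(n) > best:
            out.append(n)
            best = f(n)
    return out

hc = records(divisor_count, 200)
ha = records(divisor_sum, 200)
print(hc)                  # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180]
print(set(hc) <= set(ha))  # True: every h.c. number this small is also h.a.
```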

Now, the comment in the paper is following a corollary that says something about the shape of the prime factorization of a highly abundant number. I will have to look into how the shape of a h.a. number compares to the shape of a h.c. number.

## integer sequence noise

Thu, 2009-06-04 21:50

I finally got around to creating a page with my noise from integer sequences.

Put your headphones on and check it out and let me know what you think (or hear), and if you have suggestions for other sequences.

## miscellany

Fri, 2007-10-19 17:45

This week I've been thinking about weird numbers, a number theoretic concept that, it turns out, hasn't been investigated nearly as heavily (from a computational point of view) as one might expect. With another editor at Wikipedia, I've been looking for better lower bounds on odd weird numbers (i.e., a value x such that any odd weird number n must satisfy n > x). I found a paper from 1976 that gives a method for calculating rather large weird numbers, but it says nothing about odd weird numbers. I might try some computations this weekend.
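For reference, a weird number is abundant (its proper divisors sum to more than it) but not semiperfect (no subset of those divisors sums to it exactly); a naive subset-sum check recovers the smallest one:

```python
def is_weird(n):
    """Weird: abundant, but no subset of proper divisors sums to n."""
    divs = [d for d in range(1, n) if n % d == 0]
    if sum(divs) <= n:   # not abundant
        return False
    reachable = {0}      # subset sums of the divisors, capped at n
    for d in divs:
        reachable |= {s + d for s in reachable if s + d <= n}
    return n not in reachable

print([n for n in range(2, 100) if is_weird(n)])  # [70], the smallest weird number
```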

Also this week, I had an MRI on Tuesday. This was my first MRI. My liver doctor wants me to have them once a year to make sure we detect any anomalies (read: tumors) in my liver as early as possible. I've been having liver ultrasounds every six months since my interferon treatment for hepatitis C, but now he wants to replace every other one with an MRI. I have to say, ultrasounds are more fun. The MRI required an IV, and although I'm not claustrophobic per se, it was a bit unpleasant being in the tube and subjected to quite loud noises (actually, it was kind of funny, too). On the plus side, the MRI is quite short, so it's really not that bad.