June 28th, 2013, 05:49 PM

Squaring all odd numbers in a range
So this is the code:
Code:
#include <stdio.h>
int main(void)
{
    int i, n, square;

    printf("Enter number of entries in table: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i += 2)
        printf("%10d %10d\n", i, i * i);
    return 0;
}
My question is, in the 'printf' call, is there a way to square i without invoking multiplication? If there is, please let me know. Thank you :)
June 28th, 2013, 10:50 PM

Sure!
#include<math.h>
/*...*/
exp(log(i)+log(i));/* square i without multiplication */
[code]
Code tags[/code] are essential for python code and Makefiles!
June 29th, 2013, 04:09 AM

cool!
y = i x i
ln(y) = ln(i) + ln(i)
e^ln(y) = y
=> e^(ln(i) + ln(i)) = i x i
Alternative square
pow(i, 2)
June 29th, 2013, 04:32 AM

Not if you are looking for a more efficient method, no. Integer multiplication is a single machine instruction on all but the simplest microprocessor architectures, so why would you want to do anything else? There is no operator for squaring or exponentiation to any power in C or C++, as there is in some languages, and even the languages that do support one generally do so only for floating-point types.
If you really want to be perverse about it though, there's b49P23TIvg's suggestion (also floating point) or simply;
Code:
int sqr( int i )
{
    int j, s = 0 ;
    for( j = 0; j < i; j++ )
    {
        s += i ;
    }
    return s ;
}
Then:
Code:
printf( "%10d %10d\n", i, sqr(i) ) ;
or
Code:
printf( "%10d %10d\n", i, (int)pow(i,2) ) ;
but the pow() function is floating point, and may in fact use multiplication in any case.
June 29th, 2013, 01:44 PM

Applications to powers
This question becomes interesting when applied to computing large powers. How many multiplications do we need to raise n to the 25th power? I can compute pow(n, 25) with 6 multiplications rather than 24. Using J (Iverson's notation implemented as an interactive language, whose system prompt is 3 space characters),
Code:
#: 25 NB. binary representation of 25
1 1 0 0 1
P1 =: i.9x
P1 NB. P1 is the extended precision integers from 0 to 8
0 1 2 3 4 5 6 7 8
P2 =: P1 * P1 NB. P2 is P1 squared
P4 =: P2 * P2 NB. P4 is P2 squared, which is P1 ^ 4
P8 =: P4 * P4
P16 =: P8 * P8
P16 * P8 * P1
0 1 33554432 847288609443 1125899906842624 298023223876953125 28430288029929701376 1341068619663964900807 37778931862957161709568
P1 ^ 25
0 1 33554432 847288609443 1125899906842624 298023223876953125 28430288029929701376 1341068619663964900807 37778931862957161709568
P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1 * P1
0 1 33554432 847288609443 1125899906842624 298023223876953125 28430288029929701376 1341068619663964900807 37778931862957161709568
And sometimes we need to compute matrices raised to large powers. For instance, suppose you want to find the probability of being in a certain state (condition) after "a long time". Given the matrix of transition probabilities P from each state to all the others, the answer is P^infinity, which you can easily compute by successively squaring P until you've reached your tolerance limit for change. P can be a large matrix, and this technique will of course be much faster than multiplying P mp P mp P mp P ..., where "mp" represents the matrix product. Reference: see Markov chain.
In the following example, I show how to compute large Fibonacci numbers by matrix multiplication. Each matrix multiplication doubles the term number, costing 8 multiplications and 4 additions. Thus I should be able to find the least significant 8 base-ten digits of the pow(10,20)-th Fibonacci number using fewer than 1000 multiplications, fewer than 1000 additions, and about as many modulus divisions, whereas the task is "impossible" using the straightforward term-by-term addition definition. Again, in J:
Code:
[F1 =: 1 1 ,: 1 2x NB. Fibonacci matrix
1 1
1 2
mp =: +/ . * NB. matrix product
F2 =: F1 mp F1
F4 =: F2 mp F2
F8 =: F4 mp F4
F16 =: F8 mp F8
F16 mp F8 mp F1
7778742049 12586269025
12586269025 20365011074
(i.56),.((, +/@:(_2&{.))^:54) 0 1x NB. Fibonacci numbers by addition
0 0
1 1
2 1
3 2
4 3
5 5
6 8
7 13
8 21
9 34
10 55
...
46 1836311903
47 2971215073
48 4807526976
49 7778742049
50 12586269025
51 20365011074
52 32951280099
53 53316291173
54 86267571272
55 139583862445
6 * 4 8 NB. additions and multiplications to find Fib 51 using matrix
24 48
The repeated number in the Fibonacci matrix is F_{2n}.
Last edited by b49P23TIvg; June 29th, 2013 at 01:48 PM.
June 29th, 2013, 07:55 PM

Originally Posted by b49P23TIvg
Sure!
#include<math.h>
/*...*/
exp(log(i)+log(i));/* square i without multiplication */
Thanks a lot for helping me out! :)
June 30th, 2013, 03:12 PM

Originally Posted by arman.khandaker
Thanks a lot for helping me out! :)
So b49P23TIvg, the OP is either determined to be perverse, did not get your sense of humour, or you just did his homework for him. ;)

Originally Posted by clifford
So b49P23TIvg, the OP is either determined to be perverse, did not get your sense of humour, or you just did his homework for him. ;)
I am not exactly trying to be perverse. There are certain exercises from the book I am learning to program from that I am facing difficulties with, and some of them demand perverse methods like not using multiplication. :/

And which book is that?
There is no utility in learning "do x without y" problems, because you're never going to do that in a real program. The most infamous of these is the "swap two variables without using a temporary" trick.
It's like learning a magic trick: it might impress some people, but to others, it's just a dull waste of time. It won't make you a better programmer.
There are better algorithms for multiplication than addition in a loop, and whilst b49P23TIvg's suggestion of logarithms is mathematically correct, it would be subject to all sorts of rounding errors on any finite machine.

Originally Posted by salem
And which book is that?
There is no utility in learning "do x without y" problems, because you're never going to do that in a real program. The most infamous of these is the "swap two variables without using a temporary" trick.
It's like learning a magic trick: it might impress some people, but to others, it's just a dull waste of time. It won't make you a better programmer.
There are better algorithms for multiplication than addition in a loop, and whilst b49P23TIvg's suggestion of logarithms is mathematically correct, it would be subject to all sorts of rounding errors on any finite machine.
The book's name is "C Programming: A Modern Approach". My problems seem perverse because each set of exercises follows a specific chapter, and thus the exercises must be completed without referring to concepts that will be introduced in later chapters.

Take this question as an example:
http://forums.devshed.com/cprogramming42/usingalooptofindthevalueofe947752.html
b49P23TIvg uses concepts which have not been introduced yet in the book I am referring to. If I had said instead, "Is there a way to code it without using arrays?", it would have seemed similarly perverse to you. I hope you guys understand my problem. :/

Originally Posted by arman.khandaker
If I had said instead, "Is there a way to code it without using arrays?", it would have seemed similarly perverse to you. I hope you guys understand my problem. :/
Not at all. Doing multiplication without using the most obvious and efficient method is perverse; using an array to store your input rather than independent variables is merely a design choice.

Originally Posted by arman.khandaker
The book's name is "C Programming: A Modern Approach".
Solutions to many of the exercises from the book can be found here (the author's web site).
I am not familiar with this book, but it seems extraordinarily expensive for a language as small and as well covered as C.
The "modern approach" to C programming is called C++ ;)

Originally Posted by clifford
Solutions to many of the exercises from the book can be found here (the author's web site).
I am not familiar with this book, but it seems extraordinarily expensive for a language as small and as well covered as C.
The "modern approach" to C programming is called C++ ;)
Yes, I am aware of that website. Answers to only a few of the exercises can be found there. Anyway, thanks :)

Originally Posted by arman.khandaker
My problems seem perverse because each set of exercises follows a specific chapter, and thus the exercises must be completed without referring to concepts that will be introduced in later chapters.
I would suggest that this is not a programming problem that depends on material in future chapters, but rather the problem is mathematical. For centuries, mathematicians have been developing numerical methods for solving problems by hand (and mind). Since they didn't have the raw computational power that we have in front of us right this very instant, they had to instead devise ingenious methods that took advantage of the properties of numbers. It can be argued that people in the past had to work a lot smarter than we do because they lacked the powerful tools and technology at our disposal ... or that we can afford to work a lot dumber than they could because we can often just resort to brute force to get the job done.
For example, we could easily write a program that adds up all the integers from 1 to 100, whereas having to add them all up by hand would be tedious and time-consuming. There is a story about a German primary school around 1787 where the boys were misbehaving, so as a punishment the teacher had them add up all the integers from 1 to 100, inclusive, before they could go home. Immediately, all the boys went to work. All except one boy, who just sat there looking out the window. Then he suddenly wrote a few things on his chalkboard and handed it to the teacher, who, seeing that he had the correct answer, let him go home. That little boy was Johann Carl Friedrich Gauß, and he had used the relationships of the numbers to each other to come up with a numerical method that only took seconds to execute. An example contrasting working smarter with using brute force.
Apparently there was a numerical method for calculating squares that did not involve multiplying. There was most likely a need for early programmers to know about such methods, since many of the early processors did not have instructions for multiply or divide or for floating-point, so those operations had to be done in software, which took a fair amount of what little processing power those early computers could muster. Early programmers had to be able to program smarter in order to squeeze all the efficiency they could out of the early computers, unlike our situation in which much of the inefficiency in our programs is hidden from us by the sheer massive processing power at our disposal.
You might want to use Google to look for discussions of the method that you are looking for; e.g., Google "calculate square without multiplication".