Object-Oriented Implementation of Numerical Methods
2002-04-01 The Java Specialists' Newsletter [Issue 044] - Review: Object-Oriented Implementation of Numerical Methods
Author:
Dr. Heinz M. Kabutz
If you are reading this and have not subscribed, please consider doing so now by going to our subscribe page. You can subscribe either via email or RSS.
Welcome to the 44th edition of The Java(tm) Specialists'
Newsletter, sent to 3101 (!) Java experts in over 77
countries. A special welcome to our first subscriber from
Morocco!
I'm still in Germany, enjoying the technology (we won't mention
the weather again, OK?), the people, the food, the fantastic beer.
Not enjoying the shops that don't take credit cards. In South
Africa I once bought a cabbage costing about US$ 0.13 on my
credit card (long story, I thought I had some money in my wallet
but it was completely empty).
Administrative Note: Before I get into this week's book
review, there has been a slight change in the way this newsletter
is going to be funded. I have purchased the rights to use an
idea by Vince Sabio, now comfortably retired author of
HumourNet,
in order to enrich myself. He has patented a concept called an
"Unsubscription Fee" for newsletters such as this. It's very
simple really: Though subscriptions are free, unsubs cost
US$5.00 to US$7.00, depending on your geographical location. The
charge automatically appears on your credit card (when you
unsubscribe) as "Maximum Solutions (South Africa)" (Please
remember this when you get your bill.) This idea will also
enable me to measure the percentage of intellectual proletariat
(TM Vince Sabio) on my distribution list.
Review: Object-Oriented Implementation of Numerical Methods by Dr. Didier Besset
As promised, this week I am going to look at an interesting book
that I think most of you will enjoy. The author is a subscriber
to this newsletter, but that's no surprise: most of the "who's who"
of Java authors are on this list :) My purpose in book reviews
is to tell you about interesting and different books
that I think you would appreciate.
In the days when I attended school we did not have computer
studies, so our universities could not make that a prerequisite
for Computer Science. The effect was that a wide variety of
talent arrived at our hallowed halls of learning. This presented
a problem for my old
Computer Science Department:
If they made CompSci too difficult, then those who had never seen
a computer would fail, and if they made it too easy, the hackers
would get full marks, get bored, not go to class, etc. They
therefore made the rule that you had to pass Mathematics II
before taking CompSci III. The hardest three years for
many a hacker was Mathematics II.
We learnt a whole lot of things in Mathematics, much of which I
never fully understood or appreciated. I think that playing with
computers was just so much more interesting than looking at a
blank piece of paper. I find it much more interesting figuring
out a computer program than reading mathematical proofs.
I wish I had had a copy of Dr. Didier Besset's (PhD in Physics from
the University of Geneva) book
Object-Oriented Implementation of Numerical Methods in those
days! It marries numerical methods and programming in a
very interesting way. Just listen to these algorithms,
implemented in Java and Smalltalk:
- Interpolations: Lagrange, Newton, Neville, Bulirsch-Stoer and Cubic Spline
- Zero of Function: Bisection Algorithm, Newton's Method, Roots of Polynomials (see the sketch after this list)
- Integration of Functions: Trapeze, Simpson, Romberg
- Series: Infinite Series, Continued Fractions, Incomplete Gamma Function, Incomplete Beta Function
- Linear Algebra: Vectors, Matrices, and all that you might dream of for linear algebra
- Elements of Statistics: Moments, Histograms, Random number generators, probability distributions
- Statistical Analysis: Fisher-Snedecor, many others
- Optimization: Hill-Climbing, Powell's Algorithm, Genetic Algorithm
- Data Mining: Data Server, Covariance, Mahalanobis Distance, Cluster Analysis
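To give you a taste of the "Zero of Function" chapter, here is a
minimal bisection sketch of my own. It is not Dr. Besset's code
from the book; the little Function interface is just something I
made up for the example.

// A minimal bisection sketch (my own illustration, not the book's code).
// Finds a root of f in [a, b], assuming f(a) and f(b) have opposite signs.
public class BisectionSketch {
  interface Function {
    double valueAt(double x);
  }

  static double bisect(Function f, double a, double b, double tolerance) {
    if (f.valueAt(a) * f.valueAt(b) > 0) {
      throw new IllegalArgumentException("f(a) and f(b) must differ in sign");
    }
    while (b - a > tolerance) {
      double middle = (a + b) / 2;
      // Keep the half-interval whose endpoints still bracket the root.
      if (f.valueAt(a) * f.valueAt(middle) <= 0) {
        b = middle;
      } else {
        a = middle;
      }
    }
    return (a + b) / 2;
  }

  public static void main(String[] args) {
    // The root of x*x - 2 between 1 and 2 is the square root of 2.
    Function f = new Function() {
      public double valueAt(double x) {
        return x * x - 2;
      }
    };
    System.out.println(bisect(f, 1, 2, 1e-10)); // prints 1.41421356...
  }
}

Each pass through the loop halves the bracketing interval, so you
pay one bit of precision per function evaluation: slow but utterly
reliable, which is why the book covers it before Newton's Method.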
It cuts, it slices, it dices! If you can't get excited by all
those algorithms, then you're in the wrong profession ;)
Dr. Besset sets the scene in the book by pulling out some
performance statistics. We all know Java is slow ... right?
Have a look at these stats from the book:
Operation                         | Units | C   | Smalltalk | Java
----------------------------------|-------|-----|-----------|-----
Polynomial 10th degree            | msec. | 1.1 | 27.7      | 9.0
Neville Interpolation (20 points) | msec. | 0.9 | 11.0      | 0.8
LUP matrix inversion (100 x 100)  | sec.  | 3.9 | 22.9      | 1.0
The C measurements were done using published algorithms, so Dr. Besset
didn't just add a whole lot of wait statements to the C code.
Dr. Besset says: "I want to emphasize here that all the code in
this book is real code that I have used personally in real
applications." Wow, that's certainly better than most books
nowadays :)
Comparing Doubles
Besides all the interesting algorithms, which are shown with
mathematical explanations and well-written Java code, Dr. Besset
also tackles issues such as the problems that happen when you
compare floating-point numbers. Here's an extract written by
him for the Smalltalk Chronicles (edited by me):
Dr. Didier Besset: "One classical caveat with
floating-point numbers is checking the equality between two
floating-point numbers. Now and then one bloke complains on some
newsgroup that Smalltalk does not compute right with
floating-point numbers. In the end it turns out that he was
computing a result with method A, the same result with method B,
and, to check the results, was evaluating the expression
'resultA == resultB'. The fact that this expression
evaluates to false has nothing to do with Smalltalk. It is a
fundamental problem with floating-point numbers [HK: also in
Java].
"A floatingpoint number is only an approximation of a
mathematical real number. A small introductory article like this
one is too short to explain things in depth, but I would like to
quickly recall a few principles.
"Floatingpoint numbers are used to keep the relative error
constant. This is valid of course for a given number. As soon
as numbers are combined together one must follow the propagation
of rounding errors. Because the relative error is kept constant,
nothing serious happens with multiplications and divisions. The
error on additions and subtractions, however, can become
prohibitively large, up to the point of generating something
utterly wrong. To illustrate this point, try running the
following Java program:
public class DoubleTest {
  public static void main(String[] args) {
    System.out.println(2.71828182845905 - 2.71828182845904);
    System.out.println(2.71828182845905 - 2.71828182845904
        + 0.00000000000001);
    System.out.println(2.71828182845905 == (2.71828182845904
        + 0.00000000000001));
  }
}
[HK: The answer is the following; a free unsubscription credit to
anyone who guessed it... ;-)]
9.769962616701378E-15
1.976996261670138E-14
false
"Mark the difference in the last digits! The result you will get
is 100% wrong.
"Unless you are a very good and courageous mathematician, I would
not recommend you to attempt to predict error propagation. The
easiest and surest thing to do is to measure error propagation
experimentally.
"After coding an algorithm, you can predict roughly where the
infinities or the nearly zero cases are located. I am not
speaking only about the result. All steps of the algorithm must
be checked against the occurrences of infinities. In these
areas, try to evaluate a few results by changing the values by a
very small amount (1012 or so for standard IEEE double format).
In general the difference between the results will be one or two
order of magnitudes larger than the original variation. If you
observe something much larger, the algorithm used is not made for
computers and must be adapted. I give several examples of such
modifications in my book. Other examples can be found in
Numerical Recipe for C
."
How should you compare Doubles?
Shortly after I started sending out my newsletter, a friend of
mine mentioned to me that he was surprised that Java programmers did
not know how to compare doubles. If you just use "==" as in our
example above, you will get incorrect results. Dr. Besset also
has a section about that in his book. It is my understanding
that in Java the precision of doubles and floats is defined by
the IEEE 754 floating-point format, so there should not be
differences between physical architectures, since in Java we are
running on a virtual machine. Please run these examples and tell
me if your results vary. The results were identical on Wintel, on an
AIX box running Java 1.3 on AIX version 4.3.3.0 (IBM 2 processor
4 cpu model 7044-270) and on a Solaris box (SunOS Mars2 5.6
Generic_105181-29 sun4u sparc SUNW,Ultra5_10).
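If you want to check this claim on your own machine, here is a
quick test of my own (not from the book): print the raw IEEE 754
bit pattern of a result. Two virtual machines that agree on the
bits agree on the value exactly.

// Print a double and its raw IEEE 754 bit pattern (my own quick
// check, not from the book). Identical hex on two machines means
// identical values.
public class BitPatternTest {
  public static void main(String[] args) {
    double d = 2.71828182845905 - 2.71828182845904;
    System.out.println(d);
    System.out.println(Long.toHexString(Double.doubleToLongBits(d)));
  }
}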
Dr. Besset presents the following algorithm for comparing double
precision numbers (reproduced with permission):
/**
 * This class determines the parameters of the floating-point
 * representation.
 * HK: I have refactored it somewhat to make it thread-safe and
 * to make it easier to understand and to fit into my newsletter.
 * The algorithms are the same as in the book.
 *
 * @author Didier H. Besset
 */
public final class DhbMath {
  /** Radix used by floating-point numbers. */
  private final static int radix = computeRadix();
  /** Largest positive value which, when added to 1.0, yields 1.0. */
  private final static double machinePrecision =
      computeMachinePrecision();
  /** Typical meaningful precision for numerical calculations. */
  private final static double defaultNumericalPrecision =
      Math.sqrt(machinePrecision);

  private static int computeRadix() {
    int radix = 0;
    double a = 1.0d;
    double tmp1, tmp2;
    // Double a until adding 1.0 to it no longer registers.
    do {
      a += a;
      tmp1 = a + 1.0d;
      tmp2 = tmp1 - a;
    } while (tmp2 - 1.0d != 0.0d);
    // The smallest increment that does register is the radix.
    double b = 1.0d;
    while (radix == 0) {
      b += b;
      tmp1 = a + b;
      radix = (int) (tmp1 - a);
    }
    return radix;
  }

  private static double computeMachinePrecision() {
    double floatingRadix = getRadix();
    double inverseRadix = 1.0d / floatingRadix;
    double machinePrecision = 1.0d;
    double tmp = 1.0d + machinePrecision;
    // Shrink by one radix digit at a time until 1.0 absorbs it.
    while (tmp - 1.0d != 0.0d) {
      machinePrecision *= inverseRadix;
      tmp = 1.0d + machinePrecision;
    }
    return machinePrecision;
  }

  public static int getRadix() {
    return radix;
  }

  public static double getMachinePrecision() {
    return machinePrecision;
  }

  public static double defaultNumericalPrecision() {
    return defaultNumericalPrecision;
  }

  /**
   * @return true if the difference between a and b is less than
   * the default numerical precision
   */
  public static boolean equals(double a, double b) {
    return equals(a, b, defaultNumericalPrecision());
  }

  /**
   * @return true if the relative difference between a and b is
   * less than precision
   */
  public static boolean equals(double a, double b, double precision) {
    double norm = Math.max(Math.abs(a), Math.abs(b));
    return norm < precision || Math.abs(a - b) < precision * norm;
  }
}
The book has details as to why the algorithms work the way they
do. Here is how you would use the equals() method:
public class BetterDoubleTest {
  public static void main(String[] args) {
    System.out.println("Floating-point machine parameters");
    System.out.println("---------------------------------");
    System.out.println("radix = " + DhbMath.getRadix());
    System.out.println("Machine precision = " +
        DhbMath.getMachinePrecision());
    System.out.println("Default numerical precision = " +
        DhbMath.defaultNumericalPrecision());
    System.out.println(DhbMath.equals(
        2.71828182845905,
        (2.71828182845904 + 0.00000000000001)));
    System.out.println(DhbMath.equals(
        2.71828182845905, 2.71828182845904));
    System.out.println(DhbMath.equals(
        2.718281828454, 2.718281828455));
    System.out.println(DhbMath.equals(
        2.7182814, 2.7182815));
  }
}
On the machines that I ran this test on, the output was:
Floating-point machine parameters
---------------------------------
radix = 2
Machine precision = 1.1102230246251565E-16
Default numerical precision = 1.0536712127723509E-8
true
true
true
false
That's it for this week, I hope you will consider these issues
when next you want to compare doubles. And if you like
interesting books, do yourself a favour and read
Didier Besset's book. No, there's not really an
unsubscription fee. Look at the date. Yes, I will get a
referral fee if you purchase the book via that link.
Heinz
This material from The Java(tm) Specialists' Newsletter by Maximum Solutions (South Africa). Please contact Maximum Solutions for more information.