gcd 0 0 = 0

Jan de Wit Jan de Wit" <jwit@cs.uu.nl
Wed, 19 Dec 2001 01:14:32 +0100


Why not define gcd a b as the largest (in the 'normal' order) integer d
such that the set of sums of multiples of a and b,
{na+mb | n <- Z, m <- Z}, is equal to the set of multiples of d,
{nd | n <- Z}? It is easy to understand, there is no talk of division,
lattices, rings, ideals etcetera, and it covers the cases with 0: for
a = b = 0 the first set is just {0}, and the only d with {nd} = {0} is
d = 0.
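
As a rough illustration only (my addition, not part of the definition
itself), here is a naive Haskell sketch of an equivalent reading: for
(a,b) /= (0,0) the set {na+mb} is generated by its smallest positive
element, while for (0,0) the only generator is 0. The name gcd' and the
coefficient bound are mine, and the search is far from efficient.

-- Naive sketch of the characterisation above; illustration only.
gcd' :: Integer -> Integer -> Integer
gcd' 0 0 = 0          -- {n*0 + m*0} = {0}, and only d = 0 generates {0}
gcd' a b = minimum [ s | n <- coeffs, m <- coeffs
                       , let s = n*a + m*b, s > 0 ]
  where coeffs = [negate k .. k]
        k      = abs a + abs b   -- large enough to reach the generator

With this, gcd' 0 0 gives 0 and gcd' 12 8 gives 4, matching the
sets-of-multiples reading above.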

Cheers, Jan de Wit