[Haskell-cafe] [Haskell-Cafe] Constructive Probability Theory and RandProc's σ-algebras
Edward Kmett
ekmett at gmail.com
Sun Jul 3 18:35:06 CEST 2011
On Sun, Jul 3, 2011 at 11:05 AM, David Banas <dbanas at banasfamily.net> wrote:
> v0.4 of `RandProc` has just been posted to Hackage:
> http://hackage.haskell.org/package/randproc
[NB: I transferred this discussion from haskell to haskell-cafe, and started
another thread]
I have been spending some time exploring constructive probability theory in
Haskell, and I have a couple of observations about working with random
processes in a constructive setting:
Because you are working in Haskell, a constructive setting, your σ-algebras
necessarily can't be actual σ-algebras! =/
Notably, they aren't closed under *countable* union/intersection/complement,
but merely under finite union/intersection/complement, and Gray and Davisson's
horrified objection from the bottom of page 39 of Random Processes applies:
> If a class of sets is only a field rather than a σ-field, that is, if it
> satisfies only (3.1) and (3.2), then there is no guarantee that the class
> will contain all limits of sets. Hence, for example knowing that a class of
> sets contains all half-open intervals of the form (a,b] for a and b finite
> does not ensure that it will also contain points or singleton sets! In fact,
> it is straightforward to show that the collection of all such half-open
> intervals together with the complements of such sets and all finite unions
> of the intervals and complements forms a field. The singleton sets, however,
> are not in the field. (See exercise 3.4)
> Thus if we tried to construct a probability theory based on only a field,
> we might have probabilities defined for events such as (a,b) meaning "the
> output voltage of a measurement is between a and b" yet not have
> probabilities defined for a singleton set {a} meaning "the output voltage is
> exactly a." By requiring that the event space be a σ-field instead of only a
> field, we are assured that all such limits are indeed events.
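To make that concrete, here is a minimal Haskell sketch along those lines (my
own toy representation for illustration, not RandProc's actual types): events
are finite unions of half-open pieces, which is closed under the finite
operations but has no way to express a countable limit like a singleton.

-- A toy sketch (not RandProc's types): an "event" is a finite union of
-- half-open pieces of the rational line.
data Piece = Piece Rational Rational   -- (lo, hi], assumed lo < hi
           | RayBelow Rational         -- (-oo, hi]
           | RayAbove Rational         -- (lo, +oo)
  deriving (Eq, Show)

newtype Event = Event [Piece] deriving (Eq, Show)

member :: Rational -> Event -> Bool
member x (Event ps) = any inPiece ps
  where
    inPiece (Piece lo hi) = lo < x && x <= hi
    inPiece (RayBelow hi) = x <= hi
    inPiece (RayAbove lo) = lo < x

-- Finite union is just concatenation (canonicalisation omitted).
union :: Event -> Event -> Event
union (Event ps) (Event qs) = Event (ps ++ qs)

-- The complement of a single piece is again a finite union of pieces, so
-- complements and finite intersections of events stay inside the class
-- (pushing them through by De Morgan is routine and omitted here).
complementPiece :: Piece -> [Piece]
complementPiece (Piece lo hi) = [RayBelow lo, RayAbove hi]
complementPiece (RayBelow hi) = [RayAbove hi]
complementPiece (RayAbove lo) = [RayBelow lo]

-- What we cannot write is a countable limit such as the singleton
-- {a} = intersection of (a - 1/n, a] over all n >= 1: no finite list of
-- pieces has exactly {a} as its members.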
This limitation, while annoying from some perspectives, is intrinsic to
tackling the problem intuitionistically. Practically, it means you need to be
careful when rederiving most of the later results, because you are limited to
what you can prove in a constructive measure theory.
In particular, the approaches of YK Chan, Brian Weatherson, and Glenn Shafer
to intuitionistic probability theory are probably appropriate reading.
Personally, I don't think this is all that bad; we get some nice properties
by being in a constructive setting. Weatherson noted that your measure
becomes additive without problems from Dutch book arguments in an
intuitionistic setting, because P(A v not A) is not necessarily 1!
You may recall that additivity would imply that P(A v B) = P(A) + P(B), given
that A and B are disjoint, but that it tends to fall apart in classical
probability theory.
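On the toy interval sketch above, for instance, finite additivity for disjoint
events is just arithmetic on piece lengths (again only an illustration, not
RandProc's measure):

-- A finitely additive "length" on events from the sketch above, defined
-- only when the pieces are bounded and disjoint; nothing here gives the
-- countable additivity a measure on a genuine σ-field would need.
measure :: Event -> Rational
measure (Event ps) = sum (map len ps)
  where
    len (Piece lo hi) = hi - lo
    len _             = error "measure: unbounded piece"

-- For disjoint events the lengths simply add, e.g.
--   measure (Event [Piece 0 (1/2)] `union` Event [Piece (1/2) 1]) == 1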
Intuitionistically, however, additivity is fine. That is to say: if you placed
bets at a rational booking agent, paying out with market-rate interest, on
both whether P = NP and whether P /= NP, it isn't just as good as having
placed a bet on truth, because intuitionistically it is possible that neither
of those bets will ever pay out; but in a classical setting P(P = NP v P /= NP)
= P(True) = 1, so we lose additivity to gain the excluded middle.
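Spelling that out in symbols, just restating the argument above:

  classically:        1 = P(A v not A) = P(A) + P(not A)
  intuitionistically: P(A) + P(not A) = P(A v not A), which need not be 1

so a bettor whose credences in A and not A sum to less than 1 can't
automatically be Dutch booked.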
The cost of additivity is giving up, or refining, a lot of results from that
text that talk about the limit of a random process. That, and the fact that
the "σ-fields" in question are mere fields. ;)
I am interested in exploring the consequences of this further.
-Edward