[Haskell-beginners] help with optimizing memory usage of thunk size of Monte Carlo backtracking search algorithm

KC kc1956 at gmail.com
Sun Sep 11 23:06:27 UTC 2016


Please run with a profiler
Programmers are notoriously bad at guessing where the bottlenecks are
Unless looking for beer 😃
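For example, with GHC something like this gives time and heap profiles
(file/module names here are placeholders for your actual program):

  ghc -O2 -prof -fprof-auto MonteCarlo.hs
  ./MonteCarlo +RTS -p -hc -s -RTS   # -p: time profile, -hc: heap profile by cost centre
  hp2ps -c MonteCarlo.hp             # render the .hp heap profile to PostScript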

For strictness there is the bang notation ! 😃
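For example, with the BangPatterns extension a bang on the score binding in
doStep forces each Double as soon as it is produced (a sketch, not tested
against the full program):

  {-# LANGUAGE BangPatterns #-}

      let doStep step = do
            let newStruct = oApply struct step
            -- force the score to a plain Double now; without the bang it can
            -- stay a thunk that retains the random completions built inside
            -- monteCarloEval
            !score <- monteCarloEval nExper newStruct
            return (score, newStruct)

Forcing the maximum inside monteCarloEval itself (e.g. with a strict fold)
should help for the same reason.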

--
--

Sent from an expensive device which will be obsolete in a few months! :D

Casey


On Sep 11, 2016 3:49 PM, "Dennis Raddle" <dennis.raddle at gmail.com> wrote:

> I'm a Haskell semi-beginner, and I've implemented a crude Monte Carlo
> search optimization algorithm, and I want to know how to use strict
> evaluation to prevent it from consuming too much memory.
>
> I am willing to learn new things to do this---if you want to point me in
> the right direction and maybe give some references, I'll take it from
> there. However, if there is something simple that can take care of the
> whole problem, please let me know.
>
> I have not run this with the profiler so I don't actually know precisely
> what it is doing now. I have not run out of memory using it in small cases,
> but I hope to use it on much larger cases.
>
> Here is the code so far:
>
>
>
> -- These extensions are needed for the multi-parameter class with a
> -- functional dependency defined below.
> {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
>
> import qualified Data.List as L
> import Control.Monad
> import Data.Function
>
>
> -- Suppose that we do backtracking search for an optimal arrangement of
> -- elements in some kind of "structure". The "structure" is built one step
> -- at a time.
> --
> -- An example would be searching for an optimal arrangement of furniture.
> -- The structure is a representation of the room and all items that have
> -- been placed in it so far. We could list the options (or "steps")
> -- available to us at any point in building the structure, that is, a list
> -- of furniture items and locations.
> --
> -- We have the notion that the structure, at some point in adding steps,
> -- becomes complete. We have an evaluation function providing a "goodness
> -- score" on either a partially built or complete structure.
> --
> -- The search will optimize the goodness score over all possible complete
> -- structures. (Or perhaps with Monte Carlo search, an approximate optimal
> -- value.)
> --
> -- Class Opt (for "optimization") defines a structure type "struct" and a
> -- step type "step".
>
> class Opt struct step | struct -> step where
>   -- number of steps chosen so far in current state of 'struct':
>   oSize  :: struct -> Int
>   -- list all the available steps to choose next. If this list is null,
>   -- then the structure is complete.
>   oList  :: struct -> [step]
>   -- apply a step to the structure to create a new structure:
>   oApply :: struct -> step -> struct
>   -- evaluate the "goodness" of the current state of a structure. Higher is
>   -- better.
>   oEval  :: struct -> Double
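>
> -- A toy instance, purely for illustration (the Digits type is made up, not
> -- part of the real problem): the "structure" is a list of digits built up
> -- to a fixed length, each step appends one digit, and the goodness score
> -- is the sum of the digits.
> newtype Digits = Digits [Int]
>
> instance Opt Digits Int where
>   oSize  (Digits xs)   = length xs
>   oList  (Digits xs)   = if length xs >= 5 then [] else [0 .. 9]
>   oApply (Digits xs) x = Digits (x : xs)
>   oEval  (Digits xs)   = fromIntegral (sum xs)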
>
>
>
> -- Implement a kind of Monte Carlo search. (I have a vague idea of the
> -- literature on Monte Carlo; this algorithm is my guess at something that
> -- does the job). We work in a state monad of class
> -- "RandMonad" which holds the StdGen data and provides several methods for
> -- accessing it. The only method we need from RandMonad is
> --
> --    rChooseItem :: RandMonad m => [a] -> m a
> --
> -- which chooses a random item from a non-null list.
> --
> -- The basic algorithm is this: at each point in building the structure, we
> -- have a structure S and a list of next steps step_list. We apply each
> -- step in step_list to S, in turn. After applying a step x to S, call the
> -- result S_x. We evaluate the "monte carlo fitness" of S_x by making
> -- random completions of it---that is, choosing a bunch of additional steps
> -- randomly---doing 'nExper' completions (nExper might be 1000 to 10000).
> -- The fitness as measured by oEval of the very best random completion
> -- becomes the "monte carlo fitness" of S_x. We then choose the step from
> -- step_list, x, that maximizes the "monte carlo fitness" of S_x.
> --
> -- We then repeat this process until S is complete.
>
>
> -- Function monteCarlo will take a partially complete structure, and
> -- optimize it over an investigation of 'nExper' possible "completions".
>
> monteCarlo :: (RandMonad r, Opt a b) => Int -> a -> r a
> monteCarlo nExper struct = case oList struct of
>   -- If structure is complete, then it is its own optimization.
>   []    -> return struct
>   -- Otherwise find optimal step according to "monte carlo fitness" and
>   -- recursively call 'monteCarlo'
>   steps -> do
>     let doStep step = do
>           let newStruct = oApply struct step
>           score <- monteCarloEval nExper newStruct
>           return (score,newStruct)
>     (_,winner) <- L.maximumBy (compare `on` fst) `liftM` mapM doStep steps
>     monteCarlo nExper winner
>
>
>
> -- monteCarloEval
> --
> -- Evaluate the "monte carlo fitness" of a structure 'struct' by completing
> -- it in nExper random ways (that is, make all remaining choices in a
> -- purely random way) and finding the maximum value of the evaluated final
> -- state among all nExper ways.
> --
> monteCarloEval :: (RandMonad r, Opt a b) => Int -> a -> r Double
> monteCarloEval nExper struct = case oList struct of
>   [] -> return $ oEval struct
>   _  -> do
>     scores <- replicateM nExper (randomComplete struct)
>     return . maximum . map oEval $ scores
>
>
> -- Make random choices of steps until a structure 's' is complete.
> randomComplete :: (RandMonad r, Opt a b) => a -> r a
> randomComplete s = case oList s of
>   []    -> return s
>   steps -> rChooseItem steps >>= (randomComplete . oApply s)
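>
>
> -- RandMonad itself lives elsewhere in the project; a minimal sketch that is
> -- sufficient for the code above (only rChooseItem) could be a State monad
> -- carrying a StdGen. This sketch needs FlexibleInstances plus
> --   import Control.Monad.State
> --   import System.Random
> class Monad m => RandMonad m where
>   rChooseItem :: [a] -> m a
>
> instance RandMonad (State StdGen) where
>   rChooseItem xs = do
>     g <- get
>     -- draw a uniformly random index into the (non-null) list
>     let (i, g') = randomR (0, length xs - 1) g
>     put g'
>     return (xs !! i)
>
> -- Example run with the illustrative Digits instance above:
> --   evalState (monteCarlo 1000 (Digits [])) (mkStdGen 42)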
>
>

