Announcing containers

David Feuer david.feuer at
Wed Aug 31 05:51:40 UTC 2016

There's a lot to see in this one. There are plenty of brand-new
functions in Data.Map, Data.Set, and Data.Sequence, including a
highly optimized, lens-inspired map alteration function and a brand-new
API for merging maps efficiently. Several key map, set, and sequence
functions have been sped up considerably. The full change log is below. I
have tried to give appropriate credit to all the non-maintainers who
contributed to this release, some of whom made very major
contributions indeed. If I missed anyone, please give a shout.

General package changes:

Remove all attempts to support nhc98 and any versions of GHC before 7.0.

Integrate benchmarks with Cabal. (Thanks, Gabriel Gonzalez!)

Make Cabal report required extensions properly, and stop using default
extensions. Note that we do not report extensions conditionally
enabled based on GHC version, as doing so would lead to a maintenance
nightmare with no obvious benefits.

Use BangPatterns throughout to reduce noise. This extension is now
required to compile containers.

Improve QuickCheck properties taking arbitrary functions by using
Test.QuickCheck.Function.Fun instead of evil Show instances for
functions.

Expose several internal modules through Cabal (as requested by Edward
Kmett). These remain completely unsupported.

New exports and instances:

Add alterF, restrictKeys, and withoutKeys to Data.Map and Data.IntMap.
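A quick sketch of how these can be used (insertLookup and toggle are
illustrative names, not part of the API):

```haskell
import Data.Functor.Identity (Identity (..))
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- alterF generalizes alter to any Functor. With the pair functor it
-- returns the old value and the updated map in a single traversal.
insertLookup :: Ord k => k -> v -> Map.Map k v -> (Maybe v, Map.Map k v)
insertLookup k v = Map.alterF (\old -> (old, Just v)) k

-- With Identity, alterF is just alter: insert if absent, delete if present.
toggle :: Ord k => v -> k -> Map.Map k v -> Map.Map k v
toggle v k = runIdentity . Map.alterF (Identity . maybe (Just v) (const Nothing)) k

-- restrictKeys/withoutKeys filter a map by a whole Set of keys at once.
demoRestrict :: Map.Map Int String
demoRestrict =
  Map.restrictKeys (Map.fromList [(1, "a"), (2, "b"), (3, "c")])
                   (Set.fromList [1, 3])
-- fromList [(1,"a"),(3,"c")]
```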

Add take, drop, splitAt, takeWhileAntitone, dropWhileAntitone, and
spanAntitone for Data.Map and Data.Set. Thanks to Cale Gibbard for
suggesting these.
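A small illustration with Data.Set. The predicate passed to the
*Antitone functions must be antitone: scanning the elements in
ascending order, once it turns False it must stay False.

```haskell
import qualified Data.Set as Set

s :: Set.Set Int
s = Set.fromList [1 .. 10]

-- take/drop/splitAt now operate directly on the ordered elements:
firstThree :: Set.Set Int
firstThree = Set.take 3 s
-- fromList [1,2,3]

-- (< 5) is antitone here: True for a prefix of the elements, then False.
smallLarge :: (Set.Set Int, Set.Set Int)
smallLarge = Set.spanAntitone (< 5) s
-- (fromList [1,2,3,4], fromList [5,6,7,8,9,10])
```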

Add merge, mergeA, and associated merge tactics for Data.Map. Many
thanks to Cale Gibbard, Ryan Trinkle, and Dan Doel for inspiring the
merge idea and helping refine the interface.
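As a sketch of the new API (in current releases the combinators live in
Data.Map.Merge.Strict and Data.Map.Merge.Lazy; the module name may
differ depending on your containers version):

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Merge.Strict
  (merge, preserveMissing, dropMissing, zipWithMatched)

-- unionWith (+) expressed with merge tactics: keep keys missing from
-- either side, and add the values for keys present in both.
addMaps :: (Ord k, Num v) => Map.Map k v -> Map.Map k v -> Map.Map k v
addMaps = merge preserveMissing preserveMissing
                (zipWithMatched (\_ x y -> x + y))

-- intersectionWith (,): drop keys present on only one side.
pairMaps :: Ord k => Map.Map k a -> Map.Map k b -> Map.Map k (a, b)
pairMaps = merge dropMissing dropMissing
                 (zipWithMatched (\_ x y -> (x, y)))
```

The point of the tactic-based interface is that one merge function,
given different "what to do with missing/matched keys" arguments, can
express unions, intersections, differences, and more, each in a single
pass over both maps.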

Add fromDescList, fromDescListWith, fromDescListWithKey, and
fromDistinctDescList to Data.Map.

Add fromDescList and fromDistinctDescList to Data.Set.
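For example (the input really must be descending, and strictly
descending for the Distinct variants):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- Building from an already-descending list skips the usual sorting pass,
-- mirroring the existing fromAscList family.
m :: Map.Map Int Char
m = Map.fromDescList [(3, 'c'), (2, 'b'), (1, 'a')]

s :: Set.Set Int
s = Set.fromDistinctDescList [9, 7, 5, 3, 1]
```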

Add Empty, :<|, and :|> pattern synonyms for Data.Sequence, as
originally envisioned in the finger tree paper by Hinze and Paterson.
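These let you pattern-match on either end of a sequence directly,
without going through viewl/viewr (total and lastElem are illustrative
names):

```haskell
{-# LANGUAGE PatternSynonyms #-}
import Data.Sequence (Seq, pattern Empty, pattern (:<|), pattern (:|>))
import qualified Data.Sequence as Seq

-- Match the empty sequence and the left end, list-style.
total :: Num a => Seq a -> a
total Empty      = 0
total (x :<| xs) = x + total xs

-- Match the right end just as easily.
lastElem :: Seq a -> Maybe a
lastElem Empty     = Nothing
lastElem (_ :|> x) = Just x
```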

Add adjust', (!?), lookup, chunksOf, cycleTaking, insertAt, deleteAt,
intersperse, foldMapWithIndex, and traverseWithIndex for
Data.Sequence.
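A few of the new sequence operations in action:

```haskell
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq

xs :: Seq Int
xs = Seq.fromList [10, 20, 30]

-- Safe indexing: out-of-range indices give Nothing instead of an error.
safeThird :: Maybe Int
safeThird = xs Seq.!? 2
-- Just 30

padded :: Seq Int
padded = Seq.intersperse 0 xs
-- fromList [10,0,20,0,30]

-- cycleTaking n xs is the first n elements of xs repeated forever.
cycled :: Seq Int
cycled = Seq.cycleTaking 7 xs
-- fromList [10,20,30,10,20,30,10]

chunks :: Seq (Seq Int)
chunks = Seq.chunksOf 2 xs
-- fromList [fromList [10,20], fromList [30]]
```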

Derive Generic and Generic1 for Data.Tree.Tree, Data.Sequence.ViewL,
and Data.Sequence.ViewR.

Add foldTree for Data.Tree. (Thanks, Daniel Wagner!)
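foldTree combines each node's label with the already-folded results of
its children, bottom-up. For instance:

```haskell
import Data.Tree (Tree (..), foldTree)

-- Count the nodes of a rose tree.
size :: Tree a -> Int
size = foldTree (\_ childSizes -> 1 + sum childSizes)

-- Length of the longest root-to-leaf path.
depth :: Tree a -> Int
depth = foldTree (\_ ds -> 1 + maximum (0 : ds))

t :: Tree Char
t = Node 'a' [Node 'b' [], Node 'c' [Node 'd' []]]
-- size t == 4, depth t == 3
```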

Semantic changes:

Make Data.Sequence.splitAt strict in its arguments. Previously, it
returned a lazy pair.

Fix completely erroneous definition of length for Data.Sequence.ViewR.

Make Data.Map.Strict.traverseWithKey force result values before
installing them in the new map.

Make Data.Tree.drawTree handle newlines better. (Thanks, recursion-ninja!)


All functions in Data.Map proper that have been documented as
deprecated since version 0.5 or before now have DEPRECATED pragmas and
will actually be removed after another cycle or two.

Tree printing functions in Data.Map intended for library debugging are
now deprecated. They will continue to be available for the foreseeable
future in an internal module.

Performance changes:

Substantially speed up splitAt, zipWith, take, drop, fromList,
partition, foldl', and foldr' for Data.Sequence. Special thanks to
Lennart Spitzner for digging into the performance problems with
previous versions of fromList and finding a way to make it really
fast. Slightly optimize replicateA. Stop traverse from performing many
unnecessary fmap operations.

Most operations in Data.Sequence advertised as taking logarithmic time
(including >< and adjust) now use their full allotted time to avoid
potentially building up chains of thunks in the tree. In general, the
only remaining operations that avoid doing more than they really need
are the particular bulk creation and transformation functions that
really benefit from the extra laziness. There are some situations
where this change may slow programs down, but I think having more
predictable and usually better performance more than makes up for it.

Add rewrite rules to fuse fmap with reverse for Data.Sequence.

Switch from hedge algorithms to divide-and-conquer algorithms for
union, intersection, difference, and merge in both Data.Map and
Data.Set. These algorithms are simpler, are known to be asymptotically
optimal, and are faster according to our benchmarks.

Speed up adjust for Data.Map. Allow map to inline, and define a custom
(<$). This considerably improves mapping with a constant function.

Remove non-essential laziness throughout the Data.Map.Lazy implementation.

Slightly speed up deletion and alteration functions for Data.IntMap.
