xml in fptools?

Graham Klyne GK at ninebynine.org
Thu Jun 1 16:02:38 EDT 2006


S. Alexander Jacobson wrote:
> Ok, but my original question is whether one XML tool makes sense.

I missed that bit... a fair question, but one that raises the further question of
what constitutes a "tool".

I would suggest that XML is sufficiently quirky and complex to parse that we (as
a community) probably don't want to invest effort in supporting more than one
XML *parser*.  But other tools may usefully be layered on top of that parser.

As for the use cases you offer, I think that's one way, but not the only way, to
slice the problem space:

> For example, if we are consuming XML, it seems like we would want
> something layered on top of Parsec or PArrows (so we can also parse the
> contents of CDATA etc).

HaXML is layered on something like that, viz. HMW parser combinators.  I suppose
it could be layered on Parsec, and if starting afresh that might be a good
option, but it doesn't seem to me to be a critical issue.
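For concreteness, here is a rough sketch of what such a parser-combinator layer
looks like, using tiny hand-rolled combinators over plain Strings rather than the
actual HMW or Parsec types (all names here are illustrative, not taken from HaXML):

```haskell
import Data.Char (isAlpha)

-- A toy combinator type, standing in for the HMW/Parsec machinery.
type Parser a = String -> Maybe (a, String)

char :: Char -> Parser Char
char c (x:xs) | x == c = Just (c, xs)
char _ _               = Nothing

many1 :: (Char -> Bool) -> Parser String
many1 p s = case span p s of
  ([], _)     -> Nothing
  (tok, rest) -> Just (tok, rest)

-- Parse a minimal start tag such as "<textarea>", yielding the element name.
startTag :: Parser String
startTag s0 = do
  (_, s1)    <- char '<' s0
  (name, s2) <- many1 isAlpha s1
  (_, s3)    <- char '>' s2
  Just (name, s3)
```

The real thing handles attributes, namespaces, entity references and so on, but
the layering is the same: a generic XML grammar expressed in whatever combinator
library one picks.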

As for parsing the contents of CDATA sections, I'd suggest that (except for very
specific applications with demanding performance requirements) it is something to
be tackled *after* the XML has been parsed.
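That is, the generic XML parser simply hands the application the CDATA content as
an ordinary String, and any further structure is recovered by a second,
application-level parse.  A trivial illustration (the CDATA text and its
comma-separated-integer format are invented for the example):

```haskell
-- Suppose the generic XML parser has already yielded this CDATA content:
cdataText :: String
cdataText = "10,20,30"

-- Second-stage parse, entirely outside the XML parser:
-- a comma-separated list of integers.
parseInts :: String -> [Int]
parseInts s = case break (== ',') s of
  (digits, [])     -> [read digits]
  (digits, _:rest) -> read digits : parseInts rest
```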

> And, if we are producing XML, then we just need some data type that
> represents the XML infoset and a function for presenting that infoset as
> XML.

I see *producing* XML as being a different, albeit related, problem to that of
*parsing* XML.
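To illustrate: producing XML needs little more than a tree type and a
pretty-printer, and parsing is not involved at all.  A minimal sketch (the
datatype and escaping rules are deliberately simplified, and attribute values
are not escaped here):

```haskell
-- An infoset-like tree for *producing* XML.
data XML = Elem String [(String, String)] [XML]  -- name, attributes, children
         | Text String                           -- character data

render :: XML -> String
render (Text s) = concatMap escape s
  where escape '<' = "&lt;"
        escape '>' = "&gt;"
        escape '&' = "&amp;"
        escape c   = [c]
render (Elem name attrs children) =
    "<" ++ name ++ concatMap attr attrs ++ close
  where
    attr (k, v) = " " ++ k ++ "=\"" ++ v ++ "\""
    close | null children = "/>"
          | otherwise     = ">" ++ concatMap render children
                            ++ "</" ++ name ++ ">"
```

(Note that this renderer emits <textarea/> for an empty element, which is
exactly the point of contention further down the thread.)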

> And if we are transforming XML, then perhaps the HaXML approach makes
> the most sense.  Note: I am using a wrapper around HaXML for producing
> XML in HAppS.

So, here, you use a common (one of 3?) underlying XML *parser*.  I don't see how
one can, in general, transform XML without first parsing it.

> And if we are *transacting* XML, then a tool like Haifa or HWSProxyGen
> or perhaps DTDToHaskell seems to make the most sense.

Hmmm... you lost me there.  In this context, I'm not sure what you mean by
"transacting".  Does this avoid the need to parse it in the first place?

> All of these seem like different needs/tools.  What were your use-cases?

I assume that's rhetorical?  (As I said earlier, mine was parsing RDF/XML to
yield something that was easily processed in accordance with the RDF abstract
syntax specification.  A generic XML parser yielding something close to XML
infoset was exactly what I wanted for this.)

...

So, in summary, I do see value in having a common XML parser, yielding a data
structure that is easy to process as an abstraction of the XML data model (like
the XML infoset), upon which other tools can be built.

It seems to me that the other use-cases for consuming XML, those that don't call
for a generic XML parser, are really specific applications that don't need the
generality of full XML parsing.  I'm ambivalent about the appropriateness of
such an approach, but I note that Tim Bray (the XML pioneer) has argued quite
forcefully against deploying XML subsets for specific applications (I don't have
a specific reference to hand, but this came up some time ago in IETF discussions
of XML-based protocols; maybe Jabber or Beep or XmlConf).

#g
--

> On Wed, 31 May 2006, Graham Klyne wrote:
> 
>> Well, part of my point was that, AFAICT, your approach doesn't serve the
>> use-cases I envisage and did development for.
>>
>> It seems to me that a good basic XML parser would be a prerequisite to
>> supporting the use-case you describe, and the Haskell type-conversion
>> could be layered on top.  As I understand it, that's how HaXML is
>> constructed.
>>
>> As for the <textarea/> case you raise, this could be an area where HTML
>> and XML give rise to differing requirements.  Personally, I'd prefer an
>> *XML* parser to stick to XML specifications.
>>
>> #g
>> -- 
>>
>> S. Alexander Jacobson wrote:
>>> Again, my point is that it depends on the use cases we want to target.
>>>
>>> My bias is that we should be targeting conversion between XML and
>>> application-specific Haskell data types.  Speculatively, I imagine a
>>> tool that generates Haskell datatypes and a parser from a RelaxNG
>>> specification, and another that generates a RelaxNG spec from a Haskell
>>> datatype.  But that is just my hope.  My immediate need is probably to
>>> adapt HWSProxyGen or Haifa to talk SOAP to PayPal's API.
>>>
>>> Other people may have other needs.
>>>
>>> -Alex-
>>>
>>> ______________________________________________________________
>>> S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com
>>>
>>>
>>>
>>>
>>> On Tue, 30 May 2006, Udo Stenzel wrote:
>>>
>>>> S. Alexander Jacobson wrote:
>>>>>
>>>>> The problem with the infoset is that <textarea></textarea> and
>>>>> <textarea/> mean different things for some web browsers.
>>>>
>>>> So do <textarea/> and <textarea />.  What's the point of pointing out
>>>> that some browsers are broken?  (Actually most are somehow broken when
>>>> it comes to application/xml, but who's counting?)
>>>>
>>>>
>>>> Udo.
>>>> -- 
>>>> "There are three ways to make money.  You can inherit it.  You can
>>>> marry
>>>> it.  You can steal it."
>>>>     -- conventional wisdom in Italy
>>>>
>>>
>>
>> -- 
>> Graham Klyne
>> For email:
>> http://www.ninebynine.org/#Contact
>>
>> _______________________________________________
>> Libraries mailing list
>> Libraries at haskell.org
>> http://www.haskell.org/mailman/listinfo/libraries
>>
> 
> 

-- 
Graham Klyne
For email:
http://www.ninebynine.org/#Contact


