Binary XML is one of those concepts that has been bandied about for the last couple of years, though without a great deal of concrete action. The principle here is simple – with XML as it exists now, text-based content can readily be parsed on any platform into an internal binary representation (a DOM), but both parsing the file and the subsequent reserialization of the content to external entities are expensive operations. Time-critical and high-volume transactions, such as those within financial applications, are especially sensitive to this problem, which may in fact account for the less than enthusiastic adoption of web services within that sector.
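To make that cost concrete, here is a rough Python sketch of the toll paid per message; the payload and repetition count are invented purely for illustration, and real financial messages would be far larger:

```python
import timeit
import xml.etree.ElementTree as ET

# A small hypothetical message; real trade documents are much bigger.
doc = "<trade><symbol>XYZ</symbol><qty>100</qty><price>42.50</price></trade>"

# Cost of parsing text into an in-memory tree, paid once per incoming message.
parse_cost = timeit.timeit(lambda: ET.fromstring(doc), number=10_000)

# Cost of serializing the tree back out to text, paid once per outgoing message.
tree = ET.fromstring(doc)
serialize_cost = timeit.timeit(lambda: ET.tostring(tree), number=10_000)

print(f"parse: {parse_cost:.3f}s  serialize: {serialize_cost:.3f}s")
```

A shared binary representation would let both ends skip these two steps entirely.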
Not surprisingly, there has been a move afoot, especially within the financial sector, to bypass these two processes by agreeing upon a standardized binary XML DOM object. Such an object would, at a minimum, contain the schema-unaware DOM (i.e., one where the information contained within a given nodal value has not been rectified into an explicit datatype). This type of binary object still runs into some problems, as it would require agreement on some form of marshalling technology to reconcile the fact that an object defined within one language will almost certainly have a different class identifier model than one in a different language. Ultimately, this goes back to the old dilemma of COM vs. CORBA, though obviously in this day and age it would much more likely be CLI vs. CORBA/RMI. I’d call this the weak-binary model.
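A minimal sketch of what such a weak-binary encoding might look like: a schema-unaware tree flattened to length-prefixed bytes, with all values left as uninterpreted strings. The layout here is invented for illustration and is not any proposed standard:

```python
import struct
import xml.etree.ElementTree as ET

def encode(elem) -> bytes:
    """Flatten one element: length-prefixed tag, text, then children.
    Lengths use "!H" (16-bit, network order), capping each field at 65535 bytes."""
    tag = elem.tag.encode("utf-8")
    text = (elem.text or "").encode("utf-8")  # schema-unaware: values stay strings
    out = struct.pack("!H", len(tag)) + tag
    out += struct.pack("!H", len(text)) + text
    children = list(elem)
    out += struct.pack("!H", len(children))
    for child in children:
        out += encode(child)
    return out

def decode(buf: bytes, pos: int = 0):
    """Inverse of encode(); returns (element, next_position)."""
    (n,) = struct.unpack_from("!H", buf, pos); pos += 2
    tag = buf[pos:pos + n].decode("utf-8"); pos += n
    (n,) = struct.unpack_from("!H", buf, pos); pos += 2
    text = buf[pos:pos + n].decode("utf-8"); pos += n
    (count,) = struct.unpack_from("!H", buf, pos); pos += 2
    elem = ET.Element(tag)
    elem.text = text or None
    for _ in range(count):
        child, pos = decode(buf, pos)
        elem.append(child)
    return elem, pos

tree = ET.fromstring("<trade><symbol>XYZ</symbol><qty>100</qty></trade>")
blob = encode(tree)
```

Both ends could exchange `blob` directly; the remaining (and hard) part is the cross-language marshalling agreement the paragraph above describes.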
The task in turn becomes more complicated when dealing with datatypes. XML differs from other languages in that it abstracts the type definition from the nodal value. This in turn means that if you wished to create a consistent binary format, you would also have to rectify these datatypes. This process is perhaps a little easier now than it was when COM and CORBA were first proposed, as a great deal of standardization has occurred on the IETF front concerning the specific definitions of different types, and moreover most commodity computers have now converged on a single (“little-endian”) byte order, resolving one of the cruder barriers to intercommunication. Still, while the situation has improved, there are incompatibilities between different languages that would tend to handicap one or another, as secondary marshalling would need to take place. Such binary XML would be considered a strong-binary model.
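By contrast, a strong-binary model would commit each nodal value to an agreed type and an agreed byte order. Python's struct module illustrates the idea; the two-field layout is purely hypothetical:

```python
import struct

# Hypothetical typed encoding for a node whose schema types are known.
# The "!" prefix forces network (big-endian) byte order regardless of the
# host CPU, so a little-endian x86 node and a big-endian SPARC node agree.
price = 42.50
qty = 100

packed = struct.pack("!dI", price, qty)  # IEEE-754 double + unsigned 32-bit int
unpacked_price, unpacked_qty = struct.unpack("!dI", packed)
```

The byte-order problem is thus solvable by fiat; the harder residue is that not every language shares the same notion of, say, an unsigned 32-bit integer, which is where the secondary marshalling comes in.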
Additionally, one of the most frequently cited arguments against BinXML has been the fact that you can’t just open it up and look at it, but this to me is something of a red herring. When you do a “View Source” in a browser, you are in many cases not seeing the actual source. Instead, you are seeing a serialization of that source, converting an internal representation back into some text format. This is becoming increasingly the case as XHTML replaces HTML and SVG becomes more commonplace.
In September 2003, the W3C convened a Binary XML workshop to address these issues, and to gauge the degree to which work on Binary XML should take place within the W3C. The debate was rancorous, as expected, but interestingly enough, especially in light of comments made by XML architects at companies such as Sun and Microsoft earlier in 2003, there was surprisingly little immediate desire on the part of software vendors to grapple with the issues of binary XML, at least at that time (though IBM, Sun, Adobe, Microsoft and others did provide very clear analyses of their respective positions). Financial community participants were generally much more favorably disposed towards it, but even there the emphasis was on compatibility; while they would prefer to see both, many technical financial analysts indicated that Binary XML without universality just brings them back to the bad old days of COM/CORBA, or even EDI.
In the end, there was general agreement to continue to study the issue, but not yet to create a formal working group that would handle Binary XML. Much of the activity of the last year has consequently been on the back burner, with most of the large vendors going back to focus on other aspects of the XML space. However, there are several intriguing developments on other fronts that make me suspect that 2005 may actually see significant activity in the Binary XML space.
One of the first has come from the emergence of grid computing, and the scientific community in particular. Grid computing presents a number of interesting constraints, not least being the fact that individual nodes within the grid are as likely as not to be heterogeneous – Unix, Linux, Solaris, Windows and so forth. Moreover, deploying a grid across TCP/IP also tends to put a premium on protocols optimized for that particular foundation, and XML fits a very useful niche as a generalized data exchange protocol. However, the parse/serialize bottleneck (and secondarily the type marshalling layer) becomes in some respects more important here, as the cost of running parser/serializer processes for each message transaction becomes downright prohibitive in a massively parallel system.
The scientific community oddly enough brings another player to the table, one that has been, until recently, conspicuous in its absence. That player is government. Grid computing has a fair amount of commercial application, to be sure, but governments, especially the G7(8?9??), typically tend to have the greatest need for large, massively parallel systems. The W3C, for all its open nature, is still largely a commercially driven entity, and there are times that the natural reticence of companies to cooperate, for fear of losing competitive advantage in the marketplace, becomes a significant roadblock.
If, as I predict, you do see increased pressure and/or funding from one or more governments on this front, this barrier to cooperation will be overcome, and some form of Binary XML will emerge.
The implications of Binary XML in grid computing are enormous. One of the first will be that most computer languages will develop special classes that wrap these binary XLets, quite probably to a common API, and any time an application needs to be “grid-aware”, these XLets will likely end up replacing the existing binary objects. Whether this will result in the decoupling of class imperative structures (methods and handlers) from code representation remains to be seen, though I suspect it will have a strong effect in that direction – in essence, the operations that can act upon an XLet will exist separately from the XML bundle, raising the possibility of whole new programming models (as I’ll discuss later in this series).
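A toy sketch of that decoupling, with the “XLet” name borrowed from the paragraph above and everything else invented: the data bundle carries no methods of its own, and the operations that can act on it are registered separately, looked up by name at call time:

```python
from dataclasses import dataclass, field

@dataclass
class XLet:
    """Hypothetical grid-transportable data bundle: structure only, no behavior."""
    tag: str
    attrs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

_operations = {}  # name -> function acting on an XLet

def operation(name):
    """Register a function as a named operation, kept outside the data bundle."""
    def register(fn):
        _operations[name] = fn
        return fn
    return register

@operation("count_nodes")
def count_nodes(x: XLet) -> int:
    return 1 + sum(count_nodes(c) for c in x.children)

def apply(name: str, xlet: XLet):
    return _operations[name](xlet)

doc = XLet("trade", children=[XLet("symbol"), XLet("qty")])
print(apply("count_nodes", doc))  # prints 3
```

Because the operation registry travels independently of the XLet, any node in the grid can ship the bundle onward and let the receiver decide which operations to apply.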
Additionally, it will tend to be a synergistic win for both grid (which faces the hurdles of interoperability between nodes) and XML. It will accelerate a process which is already underway (especially in the file-swapping ghettos) – the shift of personal computers away from being unintelligent clients toward a model in which a computer is simply a node, equally capable of transmitting and receiving content. Given the typical processing power that most computers have nowadays, treating computers like “dumb” terminals continues to make less and less sense, and this shift in mindset will be accompanied by a corresponding rise in the power of the “client” to handle much of the autonomous operations that are now handled in a much more costly manner on the server.
Look to see the greatest developments in this area come from vendors such as IBM, HP, Microsoft and Oracle. IBM’s grid initiatives are perhaps best known, and they have been the most proactive in exploring the ramifications of XML in this area, though the obvious benefit of coming up with an XML-“powered” grid solution has most of the major software vendors exploring this technology. This is also increasingly positioning XML as a messaging transport resource as much as a data store, to the extent that by early 2008 or so, it is likely that most other messaging protocols, such as e-mail, will have been replaced with XML versions. By reducing at a minimum (and quite likely eliminating at a maximum) the impedance due to the forced parsing/serialization processes, Binary XML and grid computing together could make such things as the filtering of email, the processing of financial transactions, and the creation of massively parallel applications much simpler to do.