Recently gave a presentation on the relationship between JSON and Xml technologies. I'd set it in the context of "friend or foe," since lots of people frame the relationship between these two as a zero-sum competition or a Darwinian death match. On the one hand, Xml is the incumbent trying to fend off the nipping upstart, insisting that JSON simply isn't a king killer. On the other, the insurgent JSON is poised to topple the bloated, over-the-hill, yesterday's technology, wresting the title from reluctant dinosaurs.
Having worked in both the data integration and content management spaces, I've seen both camps and how they react to Xml and JSON. I think the former are very hot on the JSON track, and rightly so. With cloud applications, bandwidth is now an issue again. And then there are mobile applications. Lightweight, simple data structures for not-overly-complex data can make a huge beneficial difference. So JSON will continue to have an increasing role there.
The content folks see value but are a little less keen on the JSON value proposition. One example of the skepticism involves mixed content (elements intermingling with text). This is a big, bright line that differentiates the two technologies. Having tried several methods to work with this myself, I find that Xml's inherent support for mixed content is a real relief. And content management folks tend to run into mixed content more frequently than data integration specialists do. Still, content folks see some value in JSON for sure. They don't like Xml's bloat any more than anyone else.
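To make the mixed-content distinction concrete, here is a small sketch in Python (the sentence is made up for illustration). Xml carries the interleaved text naturally, while a naive JSON mapping has nowhere obvious to put it:

```python
import xml.etree.ElementTree as ET

# Mixed content: text interleaved with elements -- natural in Xml.
snippet = "<p>Call me <b>Ishmael</b>, said the narrator.</p>"
p = ET.fromstring(snippet)

# ElementTree exposes the interleaving via .text (text before the first
# child) and .tail (text following each child element).
print(repr(p.text))     # 'Call me '
print(repr(p[0].text))  # 'Ishmael'
print(repr(p[0].tail))  # ', said the narrator.'

# A naive JSON mapping has no single obvious home for the interleaved text:
naive = {"p": {"b": "Ishmael"}}  # 'Call me ' and ', said...' are lost
```

There are conventions for round-tripping mixed content through JSON (arrays of alternating strings and objects, for instance), but they all feel bolted on compared to Xml's native handling.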
Ultimately, however, this isn't a death match. The Darwin analogy doesn't mean there can be only one survivor; it means an array of creatures, each with their own strengths and weaknesses. Like programming languages or Galapagos island animals, there is room for many. I like JSON and find it very useful and fast for development. I've developed JSON applications and experimented with JSON Schema, in fact. (More to come on this in another post.) And when I come into contact with complex content structures or mixed content of any kind, I'm glad Xml is still in the toolbox.
Reading quite a bit recently about the technology that is popularly known as Bitcoin. To put it bluntly: the use of computing power to solve mathematical problems on the block chain in return for money. Article after article spoke to how this can be a transformative technology.
Fair enough. Time to investigate and see how it is supposed to make people money. I thought of 2 angles to try out. First, the easiest is to think of it technologically, and use computing power to test out how things work and how useful it proves. It seems the things to do are setting up a wallet (after all, I need a wallet to store all my major bucks, right?) and then "mining" the math to create my way to wealth. I installed the Bitcoin Core wallet for Windows. It installed fine and seems to be about what I expected and read about. Next I installed GUI Miner, which is a client that does the computing. So if I'm mining for bitcoin gold, where do I land my first shovel? In order to find a place to squat and stake a claim, it's best to follow someone who knows.
So enter Slush Pool. Pools are ways of aggregating computing power with a shared reward. Slush's Pool claims to be the world's first mining pool. It's at least a place to start. Soon I'm set up with a wallet and I'm using GUI Miner to mine coins in Slush's pool. So I sit back and let my computer make me money, right?! Seeing the early returns, it's clear that it will take a very long time to make any money this way. Can I reduce my overhead to maximize my margins?
Researching pools, one quickly gets into issues of governance. The competition to attract miners leads to claims of transparency and low-cost pool providers. (An interesting view that money creates government instead of the other way around. :) ) Being an advocate of a Vanguard investment philosophy, I favor the strategy of keeping overhead low: I'll beat the higher-cost guys most of the time without even trying. But the global nature of this setting quickly becomes apparent. I'm not only choosing a pool that may claim to have low overhead fees; my mining efforts are also competing against third-world cost structures. Calculators spring up to tell you how your costs affect your mining potential.
In fact, in mining, the calculations of benefit soon become a discussion of your electricity rates. Since coins are minted using computing power, one needs to factor in electricity costs in your potential profit margins. But since "1"s and "0"s do not recognize borders, my first world electricity costs quickly mean I'm competing against inherently cheaper places around the world. This means I'm starting to sour on raw mining for profit. The margins simply aren't there unless I can employ an armada of machines at third world electricity rates.
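To see why, it helps to run the numbers. Here is a minimal sketch of that back-of-the-envelope calculation; all the figures (coin output, coin price, power draw, electricity rates) are hypothetical, not measurements from my rig:

```python
# Hypothetical figures for illustration only -- not measured values.
def daily_mining_margin(coins_per_day, coin_price_usd,
                        power_watts, usd_per_kwh):
    """Daily profit = coin revenue minus electricity cost."""
    revenue = coins_per_day * coin_price_usd
    kwh_per_day = power_watts / 1000.0 * 24  # watts -> kWh per day
    electricity = kwh_per_day * usd_per_kwh
    return revenue - electricity

# Same rig, two electricity rates: a first-world rate vs a low-cost region.
rig = dict(coins_per_day=0.0005, coin_price_usd=600.0, power_watts=300)
print(daily_mining_margin(**rig, usd_per_kwh=0.15))  # negative: loses money
print(daily_mining_margin(**rig, usd_per_kwh=0.04))  # barely breaks even
```

With identical hardware, the electricity rate alone flips the margin from negative to barely positive, which is exactly the squeeze described above.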
So I've learned about the technology that makes it work, and I've learned that the mechanics of mining mean one will never get rich that way; the task is better left to low-overhead miners. What about a more philosophical or entrepreneurial view? (Meaning I want to own my own pool.) Where is the opportunity to put this technology to something different or in line with my goals? Can it be leveraged to solve a bigger problem? I'd like to see this applied to something useful like fighting malaria or some other important goal. Pools that simply make money are obvious and already exist. Another one won't stand out. Creating a pool that attracts investors (miners) for some motivation other than simply making money might do the trick. Indeed, some of the pools are motivated by philosophies that attract a certain motivated miner. This remains my landing point in this story.
I'm left intrigued with Bitcoin (and the underlying block chain technology) even if I've not found a path to riches nor used it to solve a bigger problem. The fact that it is making some inroads into mainstream usage and acceptance means it isn't a fad. The technology is interesting and I can understand the attraction. So there is some "there" there. I'm just not sure where this fits into my strategies as yet. Perhaps you'll find me next announcing a new mining pool that will plow all profits into fighting malaria.
I've been in car maintenance mode, as you can probably tell. This time, it's been a long-standing issue. Quite frankly, these brakes have been squeaking ever since I got the car. Very annoying and actually embarrassing when driving friends. I was told when I got it that the brakes were not that old, and because of the squeaking the mechanic had put in the exact OEM pads for this car.
So are they just worn out? Are the slider pins needing lubrication? Something else? As it turns out I think the problem was the pads. They were not worn down all the way. But they are metallic pads. I switched to ceramic and this made all the difference. Here is how I did it.
Here is another item in the "anything else" category. Recently had some car trouble and made a short video of a repair I did. I have a 1997 Toyota Camry and I was getting P0115 error codes from my OBD-II reader. As I was about to replace the ECT (engine coolant temperature) sensor, the radiator turned out to be leaking. So I ended up replacing the radiator. This video shows how I did it.
Ran into a rather maddening problem last week. I was working on a front end to a tool and was planning on using JSP within a Tomcat environment. I'd downloaded the latest Tomcat (8.0.9 to be exact). It installed ok. Most of my app is xml based and I needed to use XSLT 2. So I grabbed Saxon 9 (I tried two different 9.x J releases and saw the same behaviour in both), added it to my lib directory, and with an environment variable update, presto - I was able to perform transformations. (Just needed a property set in the JSP.)
So far a happy story, right? The issue came up around relative and absolute paths. The collection() function was throwing errors if I tried to use a relative path. It was annoying but not the end of the world, as I could supply the full path behind the scenes. Maddeningly, the doc() function was throwing an error if I used an absolute path. So I had 2 functions, each doing its own thing at different places in the tool: one required a full path and one a relative path. No exceptions. I could work around this, but it seemed silly to have to.
I wasn't sure if the problem was my code, java, tomcat, or saxon (can you guess which?). I found that it wasn't anything to do with encoding, so I could rule that out. I started doing research and found some interesting (though dated) discussions here, here, and here. The issue was apparently around the URIResolver. Potential workarounds/solutions here, here, here, and here.
I also called document-uri(.) on the loaded documents. It reflected the problem, returning a path that started with "jstl:/../" instead of "file:///c:/" or even "http://localhost". So the resolver was definitely the problem.
Just as I was about to contemplate writing a custom URIResolver, I did some more digging in my JSTL tagging. And it hit me that I might have outsmarted myself. It turns out that @xsltSystemId not only provides the path for the XSLT, but also serves as the basis for all relative URIs used in the XSLT. So things like imports, doc(), collection(), etc. all are based on that. So my solution was a humbling, simple attribute on my JSTL:
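Something along these lines, assuming the standard JSTL x:transform tag (the path and variable names here are illustrative, not the tool's actual values):

```jsp
<%-- Without xsltSystemId, relative URIs in the stylesheet resolve
     against an opaque jstl: base URI and fail. --%>
<x:transform doc="${inputDoc}" xslt="${stylesheet}"
             xsltSystemId="file:///c:/myapp/xslt/main.xslt"/>
```

With the system ID supplied, imports, doc(), and collection() all resolve relative to that base.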
When a known and correct relative path was used, the collection() function resulted in this error (snipped for brevity):
HTTP Status 500 - javax.servlet.ServletException: javax.servlet.jsp.JspException: net.sf.saxon.trans.XPathException: Cannot resolve relative URI: Invalid base URI: Expected scheme-specific part at index 5: jstl:: jstl:
Meanwhile when the full path is given, and the collection() function works correctly, later on in the process, the same full path in doc() function returns this error (also snipped):
HTTP Status 500 - java.lang.IllegalArgumentException: Expected scheme-specific part at index 5: jstl:
java.lang.IllegalArgumentException: Expected scheme-specific part at index 5: jstl:
    java.net.URI.create(Unknown Source)
    java.net.URI.resolve(Unknown Source)
    net.sf.saxon.functions.ResolveURI.tryToExpand(ResolveURI.java:115)
    net.sf.saxon.StandardURIResolver.resolve(StandardURIResolver.java:165)
This goes into the "anything else" category. One of my other passions is rock and roll music. And many people who hang around the worlds of hard rock and heavy metal are aware of a VH1 Classic TV show called "That Metal Show" (@ThatMetalShow #TMS). Hosted by Eddie Trunk, Don Jamieson, and Jim Florentine, the show has become a focal point of discussion, awareness, and fun around this music genre.
The show has numerous segments, but the one that struck me is the "Top 5" lists. Host-selected topics are debated and a "final" list is determined. This blog post shows just how diligent (or perhaps crazy) an interest I've taken in these lists. I researched the shows and found that nowhere was there a definitive list of Top 5 lists. So I created one myself!
I want to give a shout out to Priscilla Walmsley's list of XSLT functions. I have used it many times and find the site easy to read and consume. Especially when I'm wrapping my head around some nasty namespace management issues, I find myself coming back to the site over and over.
I am working on updating the code of the #SchemaLightener (which also flattens schemas and wsdl files) to use XSLT 2.0, along with other enhancements. Having offered this tool for years, I've accumulated many use cases that I can test with. And of course, I use many consortia standard schemas and wsdls. However, I want your worst use cases! So I can make this tool the best it can be. Don't worry - I won't redistribute them, so you are free to send me your ugliest cases!
Simply email me and I'll work to incorporate these use cases into the testing of this new version.
And thank you.
I've played around with @OASISopen CAM in the past, but mostly from a learning and experimentation perspective. I thought it an interesting technology and always wanted to find a reason to use it in real life. But for some time that opportunity wasn't at hand.
The most powerful aspect of this is to allow data model and constraints to travel and live together.
Schematron, (xml schema 1.1), and xslt
We were working with industry standards in the form of xml schema data models. These were the starting point. There was a need to add additional constraints, either within the standard (i.e. co-occurrence constraints, which can't be put in xml schema) or outside the standard, where businesses take the standard as a base and build their own additional constraints onto it.
Schematron to the rescue?
The most logical technology to use at the time was of course schematron (and it still is, of course). The problem I was trying to solve was that schematron was understood by xml geeks like myself, but the people who knew the business rules were speaking an entirely different language. So the only choices were either to have the xml geeks be the translators or to create a translation tool for business people to use. In one sense I did both. I created an interface to simplified schematron specifically for business analysts. With a simple interface, business people could input simple rules without a problem. No middle man. But any more complex rules could only be roughed in, and the xml geek would then need to step in and translate the business rules into schematron patterns.
The interface started out as absolute simplicity. It was "if ... then" at its core. If this business condition exists, then some other rule applies. At its simplest, 2 element names were the minimum input needed. And it was put in a simple HTML web form, an interface BAs were very familiar with. The web form would take the input, generate the schematron assertions, and then use the schematron skeleton template to create an XSLT that would validate xml against these assertions. At that time the skeleton was used widely because no ANT task or native schematron processor was in place.
The simplicity of this approach was both its biggest strength and, of course, its biggest weakness. With only a rudimentary knowledge of xml, a business analyst could create schematron-compliant rules by simply identifying the 2 components of this "if ... then" assertion. A schematron file and an XSLT validator came out the other end automagically. But of course complex assertions defied this method, and so the xml expert had to intervene and help the BA formulate the rules.
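For a flavor of what came out the other end, a generated pattern for a rule like "if an order has a shipping address, it must also have a shipping method" would look roughly like this (the element names are hypothetical, and the actual tool targeted the skeleton of that era rather than today's ISO namespace):

```xml
<!-- "If ShippingAddress exists, then ShippingMethod is required" -->
<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:rule context="Order[ShippingAddress]">
    <sch:assert test="ShippingMethod">
      An Order with a ShippingAddress must also have a ShippingMethod.
    </sch:assert>
  </sch:rule>
</sch:pattern>
```

The two element names the BA supplied become the rule context and the assert test; everything else is boilerplate the form filled in.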
This worked for its limited aims. But we still had the problem of separate technologies for validation. It would be best to have schematron-style rules embedded directly in the schema. Indeed, this is what xml schema 1.1 would eventually enable.
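For illustration, in xml schema 1.1 the same hypothetical co-occurrence rule can live inside the type definition itself via xsd:assert (element names are again made up):

```xml
<!-- xml schema 1.1: the co-occurrence rule travels with the content model -->
<xsd:complexType name="OrderType">
  <xsd:sequence>
    <xsd:element name="ShippingAddress" minOccurs="0"/>
    <xsd:element name="ShippingMethod" minOccurs="0"/>
  </xsd:sequence>
  <xsd:assert test="not(ShippingAddress) or ShippingMethod"/>
</xsd:complexType>
```

One artifact, one validation pass, instead of a schema plus a separate schematron/XSLT step.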
Enter @OASISopen CAM
Working with CAM on a consulting gig moved me from seeing it as merely an interesting technology to one that I like. One big benefit is that all validation rules can be put in one place. Content model, data typing, co-occurrence, or any other constraint can travel together.
Secondly, there was a CAM processor that provided a relatively easy-to-use interface for creating assertions. Not quite as simple as my earlier effort, but not as limiting either. So business analysts can do some of the work in creating assertions, although xml knowledge is of course essential.
While I'm still working with it and learning its warts, I've come to appreciate CAM. And of course one can't mention CAM without a shout out to David Webber.
One of the interesting aspects of this release is around extensions. The 2 main extension methods you've seen in previous releases are still there. The <UserArea> element is still ubiquitous. And it is by design the last element in a content model in all cases except where there is a type derivation. This handy element has been one of the mainstays of working with a standard in the real world of sometimes messy and custom data. Secondly, the elements in release X are still almost all globally scoped. This enables one to use the substitutionGroup extension method that is also common and goes back to version 8 of OAGIS.
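As a reminder of how the substitutionGroup method works, here is a sketch with made-up names (oa:PartyId and the custom type are illustrative, not from the actual release; the custom type must derive from the standard element's type):

```xml
<!-- A custom type derived from the standard element's type -->
<xsd:complexType name="MyPartyIdType">
  <xsd:simpleContent>
    <xsd:extension base="oa:PartyIdType">
      <xsd:attribute name="region" type="xsd:string"/>
    </xsd:extension>
  </xsd:simpleContent>
</xsd:complexType>

<!-- The custom element may now appear wherever oa:PartyId is allowed -->
<xsd:element name="MyPartyId" type="MyPartyIdType"
             substitutionGroup="oa:PartyId"/>
```

Because the standard's elements are globally scoped, this substitution works throughout the library without touching the standard schemas themselves.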
What is different about this release is that it makes management of extensions easier rather than employing some new engineering gadget. Very practical. To begin, there is an Extensions.xsd file which centralizes your extension management. This file is the link between custom and standard content models. It is where the UserArea global element is defined. But there are changes from there. The UserArea is defined as "AllUserAreaType". This type is yet another convenience, as it extends the "OpenUserAreaType" (see below) with a sequence that ships empty. So one can simply add elements to this sequence and instantly have widely applied additions to the UserArea content model. Nice.
<xsd:sequence><!-- easy to insert extensions here --></xsd:sequence>
Next, as mentioned, there is the OpenUserAreaType. This takes the most commonly used extension elements and puts them explicitly in the UserArea. Things like name value pairings, codes, IDs, and text can all be used out of the box here. They will look familiar to folks working with CCTS Representation Terms and the UN/CEFACT Core Components Data Type Catalogue. In all my years of experience, I've found these kinds of elements in the OpenUserAreaType to be the most often used in a pinch. So again the management is easy.
Lastly, there is the AnyUserAreaType which is an xsd:any with strict process contents. This is how the UserArea used to be defined in previous releases. In this release, it is a type that can be employed as needed. In fact the UserAreaType is one of these "any" definitions. However it is important to note that the UserArea global element is not defined as "UserAreaType" but as "AllUserAreaType" which extends "OpenUserAreaType". So be sure to keep that straight and you'll have lots of possibilities for managing extensions at your fingertips.
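Putting the three types together, the shape described above looks roughly like this (simplified from memory, not copied from the release):

```xml
<!-- The global element uses the convenience type... -->
<xsd:element name="UserArea" type="AllUserAreaType"/>

<!-- ...which extends the out-of-the-box open type with an empty sequence -->
<xsd:complexType name="AllUserAreaType">
  <xsd:complexContent>
    <xsd:extension base="OpenUserAreaType">
      <xsd:sequence><!-- add your custom elements here --></xsd:sequence>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>

<!-- The old-style wildcard is still available as its own type -->
<xsd:complexType name="AnyUserAreaType">
  <xsd:sequence>
    <xsd:any processContents="strict" minOccurs="0" maxOccurs="unbounded"/>
  </xsd:sequence>
</xsd:complexType>
```

So the common cases ship ready-made in OpenUserAreaType, site-wide additions go in the AllUserAreaType sequence, and the fully open wildcard remains available where you need it.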
The first part that is crucial to understand is that the semantics remain intact. Version X is indeed a non-backwardly-compatible major release, so one might have concerns about changing data models. However, at the high level, there was not any kind of large-scale re-modeling of the data models on the nouns, bods, or verbs. This will be comforting to those who have invested in previous versions of OAGIS and who may be worried about what has been done with a new release. In fact, I was able to create a valid xml instance document from a valid 9.4 version document without much work.
When comparing instance documents, the first area for change you'll notice is in attributes. Specifically around codes and identifiers. For example, the LogicalID element in version 9.4 looks like this:
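The snippet below is a reconstruction from memory rather than a copy from the spec, but a 9.4-style LogicalID carried a cluster of scheme attributes along these lines, while the version X form drops them:

```xml
<!-- 9.4 style: identifier metadata rides along as attributes -->
<LogicalID schemeID="..." schemeVersionID="..."
           schemeAgencyID="..." schemeAgencyName="...">XYZ123</LogicalID>

<!-- version X style: just the value; scheme metadata lives elsewhere -->
<LogicalID>XYZ123</LogicalID>
```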
This effectively pushes values for agency name and such into what is basically a lookup table. Not an unreasonable approach. Generally I've not seen those attributes being used anyway, so this simplification is good. Certainly when generating sample documents or dealing with the schema as a data model, the existence of all these extra attributes was often cumbersome. And similar to this ID, the same streamlining was done on codes as well. Score one for simplification.
The second thing you might notice is that the base namespace is the same. "http://www.openapplications.org/oagis/9" is still the default namespace in BODs. I haven't checked with the folks at OAGi about whether this should be "10" instead of "9", but since this is a candidate release, I'll assume that by the time the bundle reaches the production version this will be changed. In the short term, this makes working between 9.4 and X that much easier (or harder, in some cases). I've kept them separate and so don't have collision problems, but if they must coexist you'll need to manage that carefully.
I've got more to say so I'll return and update you on this important new release of OAGIS.
(see part 2)
The initial release of the SchemaLightener was four years ago this month. And I've been getting requests for it every week since. When I started, I didn't think it would last this long, but it's taken on a life of its own. I simply wanted an easier way to work with libraries of schemas. I'd been frustrated by the inability of tools to lighten or flatten them. So I made my own. Four years on, I'm still happy to be getting requests and great feedback. It's being used in over 19 countries and is still free.
It was a simple design, built with XSLT 1.0. And now it's been updated and includes a nice GUI, sample data, batch files, and an ANT task as well. The most important development of course has been the expansion into three tools in one. The Lightener surely does lighten schemas, providing sample-based profiling of a larger data model. But the addition of the SchemaFlattener makes working with schemas much easier as well. It takes a tangled web of includes and condenses them down into the smallest number of files necessary (one per namespace). The WSDLFlattener, a third tool, does the same thing for WSDL files, merging files to the minimum so they can be managed more easily.
The tool has been tested with many consortium libraries, including:

- OAGi – Open Application Group – http://www.oagi.org
- Swift – Society for Worldwide Interbank Financial Telecommunication – http://www.swift.com
- NIEM – National Information Exchange Model – http://www.niem.gov
- HR-XML – Human Resources XML – http://www.hr-xml.org
- OTA – Open Travel Alliance – http://www.opentravel.org
- ACORD – Association for Cooperative Operations Research and Development – http://www.acord.org
- STAR – Standards for Technology in Automotive Retail – http://www.starstandard.org

Plus countless company libraries and extensions.
So if you're interested in getting a copy, just send me an email. No spam or mailing list risk - so don't worry about getting hit with unwanted mail. And thanks for all the coffee and support over the years!
I've gotten some recent offline comments about the article I wrote for xml.com called "Profiling Xml Schema". It's been quite a while since I wrote the piece. At the time, I was attempting to cull a "best practices" guide on how schema developers were actually using the technology, having run into too many tool bugs with the more advanced aspects of the spec. Looking back, the article holds up pretty well. Most of the recommended practices are still the norm today.
When Roger Waters left Pink Floyd, sides were drawn. I was a huge admirer of David Gilmour's playing. He had the ability to transform, transport, and transcend via auditory travels. His hypnotic playing had captured me for a long time. And I'd moved with him and the band.
I'd also admired the lyrical prowess of the band, interestingly most attributable to Roger Waters. But as Waters and Gilmour split their working relationship, I chose my side according to my musical muse. Gilmour was the shaman, and he was still in the band called Pink Floyd. Anyone leaving the band must have been the problem. Well, there was no need for the jury to retire. Waters was guilty, and hence the band was "saved" by his exit. Judgment passed and all was again well in the land.
But a nagging disconnect existed. My admiration of the written, lyrical word was always strong and only continued to grow over the years. The lyrics that I most loved in Pink Floyd's music were mostly written by Waters. How could I disavow the contribution of the lyrical poet? Like a pebble in my shoe, I walked on in my life with uncomfortable steps not knowing how to accept this disconnection. Each step accumulating its nagging bite.
Specifically, I can talk of the pinnacle of Pink's work, The Wall. (While I worship at the altar of numerous works by the band, one has to admit that this is the seminal work they produced.) An anthem for the young male psyche, its words and music match and meld. And the movie added yet another layer of infallibility to the work of art. There is neither an imperfect sound nor word in the piece, I held. I knew every eighth note and syllable. And I knew each small difference between what was on the record (vinyl at first) and what was in the movie.
So given the esteem with which I had elevated this combination of word, note, and image, how could I disavow its most responsible architect? The pebble grew into a thorn.
Fast forward to 1990. The Berlin Wall had recently come down and Roger Waters was producing a concert in Berlin performing The Wall. While admittedly historic, I had chosen my path long ago and I was not open to receive it. It wasn't Pink Floyd, it was Roger Waters. It wasn't David Gilmour, Richard Wright and Nick Mason. So it must be imperfect. I did not watch it nor get the CD or DVD of the performance. Out of a sense of loyalty I believed.
But an impostor? ... the man most responsible for the work? The thorn had been affecting my reasoning.
Forward again to 2011. I was invited to a friend's house to partake in beer and The Wall movie. I'd lost count as to how many times I'd seen it. Dozens for sure. But this time was different. After the movie, my friend talked about the Berlin show and put the DVD into the player. Skeptically, I placated him by sitting through a taste, believing it couldn't be the perfection I'd known for so long. I only saw a portion, but took it home to view in full, which I did the next week.
The DVD shined the brightest light on my aching foot. Each song. The show. The performance. It was all amazing. It repeatedly brought chills to my skin. I am after all a history major, and the significance of this piece at the Berlin Wall in 1990 was another window into this work. Inspired casting (i.e. Thomas Dolby as the Professor). It was ... well ... outstanding. How can this be? Waters was supposed to be the problem. And yet, this show was incredible. Had "I" been guilty all this time? This will not do.
I finally removed the obstruction and opened up to a new ending to the story. Pink's story. Waters really is an architect of The Wall's greatness. He isn't an impostor or the problem. He is as worthy as Gilmour to the immortal legacy of Pink Floyd's music. (Not just The Wall, but all of it.)
And so I offer Roger Waters a heartfelt and belated apology. An apology for eschewing your work and contributions all these intervening years. An apology for judging. And for letting that judgment interfere with the appreciation of your wonderful art. The art that has touched my life and made me feel like few other things. I hereby apologize.
The bleeding hearts and artists make their stand.