Bernard commented on my thoughts on schema evolution.
I do not see why schema evolution is harder to handle in Prevayler than it is with a relational database. Changing relations is always a problem, whatever storage strategy you use.
Hmm, let’s see. Maybe I was using very traditional thinking. My thought was that it is easier to transform bare data, since there is less to transform. You take the old data and load it into the new code, which is all you need because you don’t want the old implementation anyway.
But you’re saying that with Skaringa I transform the Java object graph to XML, transform that XML with XSL into a new format matching the new code, and deserialize it. Whoomp, there it is? Sounds very nice.
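For the curious, here is a minimal sketch of that round trip, assuming the object graph has already been serialized to XML (with Skaringa or any other Java-to-XML serializer). The file names old-snapshot.xml, new-snapshot.xml and migrate.xsl are invented for the example; only the XSL transformation step, which uses the standard JAXP API, is shown in full:

```java
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SchemaMigration {

    public static void main(String[] args) throws Exception {
        // 1. Beforehand: serialize the old object graph to XML
        //    (Skaringa or any other Java-to-XML serializer).
        File oldData = new File("old-snapshot.xml");   // hypothetical file name
        File newData = new File("new-snapshot.xml");   // hypothetical file name

        // 2. Transform the old XML into the format expected by the new classes,
        //    using a hand-written XSL stylesheet and plain JAXP.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("migrate.xsl")));
        transformer.transform(new StreamSource(oldData), new StreamResult(newData));

        // 3. Afterwards: deserialize new-snapshot.xml into the new classes
        //    and feed the resulting object graph back to Prevayler.
    }
}
```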
Next, do you truly believe SQL is the best way to query your object model?
No, not my object model, not at all. However, we may have to provide a tabular data model for analysis purposes outside the scope of our application. The standard way to access this is SQL. That is the only reason I want it.
I also have this nagging feeling that many customers will want their data in an RDBMS just because it has always been there. What would we do there? Delete and insert for every snapshot?
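To make that question concrete, the naive "delete and insert per snapshot" export could look roughly like the plain JDBC sketch below. The CUSTOMER table, its columns and the Customer class are all hypothetical, invented only for this illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class SnapshotExporter {

    /** Hypothetical domain class used only for this sketch. */
    public static class Customer {
        final long id;
        final String name;
        Customer(long id, String name) { this.id = id; this.name = name; }
    }

    /**
     * Naive export: wipe the table and re-insert every object
     * from the current in-memory snapshot, all in one transaction.
     */
    public void export(Connection connection, List<Customer> customers) throws Exception {
        connection.setAutoCommit(false);
        try (PreparedStatement delete = connection.prepareStatement("DELETE FROM CUSTOMER");
             PreparedStatement insert = connection.prepareStatement(
                     "INSERT INTO CUSTOMER (ID, NAME) VALUES (?, ?)")) {
            delete.executeUpdate();
            for (Customer customer : customers) {
                insert.setLong(1, customer.id);
                insert.setString(2, customer.name);
                insert.addBatch();
            }
            insert.executeBatch();
            connection.commit();
        } catch (Exception e) {
            connection.rollback();
            throw e;
        }
    }
}
```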
This morning Jon had a good way of formulating what I wrote on the two persistence needs: “RDBMS do two things: OnLine Transaction Processing and OnLine Analytical Processing. After Prevayler we only need OLAP.”
Well, stay tuned, or, better, help the people working on the XML export for Prevayler. As said before, SQL is in my opinion not the best thing, because it requires mappings between your objects and your relational database. An ODMG API would probably be better. Carlos Villela wrote an XPath query demo.
We will definitely work with Prevayler to enhance this functionality. As for JXPath, already there :-).
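For anyone who has not tried it, querying a live object graph with Commons JXPath looks roughly like this. The Library and Book classes are invented for the example; the point is that there is no relational mapping anywhere:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import org.apache.commons.jxpath.JXPathContext;

public class JXPathDemo {

    /** Hypothetical domain classes used only for this sketch. */
    public static class Book {
        private final String title;
        private final int year;
        public Book(String title, int year) { this.title = title; this.year = year; }
        public String getTitle() { return title; }
        public int getYear() { return year; }
    }

    public static class Library {
        private final List<Book> books;
        public Library(List<Book> books) { this.books = books; }
        public List<Book> getBooks() { return books; }
    }

    public static void main(String[] args) {
        Library library = new Library(Arrays.asList(
                new Book("Old Book", 1995),
                new Book("New Book", 2002)));

        // Query the in-memory object graph with an XPath-like expression.
        JXPathContext context = JXPathContext.newContext(library);
        Iterator<?> titles = context.iterate("books[year > 2000]/title");
        while (titles.hasNext()) {
            System.out.println(titles.next());   // prints "New Book"
        }
    }
}
```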