A recent U2 forum thread documented difficulties with moving data directly between SQL Server and Unidata. That thread is like many others there and in other forums: the question is always about problems with direct connections between environments. I posted a blog here in August 2009 where the question was about TAPI and Universe, and my answer was an explanation of how to connect MV to Anything. Folks, you’re missing the forest for the trees. The issue isn’t with TAPI or SQL Server, nor with Unidata, Universe, or any other MV DBMS. You need to rethink your approach to this entire class of problems. I’ll make another attempt to describe the approach here, describe a solution I created for a client, and follow up with a Part 2 giving more specifics about that solution.
The urge to do everything directly within the MV DBMS is in all of us, but we really need to think outside of the box. Exchanges between U2 and SQL Server should not be “exchanges between U2 and SQL Server”. The way to approach this is to export from one environment and import into the other. When you separate the tasks, everything suddenly gets much easier. The world of APIs and mashups evolving around us is evidence that people are getting away from directly connecting any two environments and are tending instead toward creating solid end-points from which developers can build their own X-to-Y solutions. This turns your import/export projects into ETL projects, where each DBMS uses familiar tools that are independent of one another, and where the middle tier of transformation is almost always simple grunt code that moves Name to Name, Address to Address, and so on.
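To make the “grunt code” point concrete, here is a minimal sketch of that middle tier in Python. Everything in it is hypothetical: the field mapping, the `extract_rows` iterable, and the `load_row` callable stand in for whichever end-point tools each DBMS provides.

```python
# Middle-tier "grunt code": extract rows from one end-point, rename fields,
# hand them to the other end-point. No direct DB-to-DB connection exists here.

FIELD_MAP = {            # source column -> target field (illustrative names)
    "Name": "NAME",
    "Address": "ADDRESS",
    "City": "CITY",
}

def transform(row: dict) -> dict:
    """Move each source column to its target field; nothing clever."""
    return {dst: row.get(src, "") for src, dst in FIELD_MAP.items()}

def etl(extract_rows, load_row) -> int:
    """extract_rows yields dicts from the source; load_row sends one
    transformed dict to the target. Returns the number of rows moved."""
    count = 0
    for row in extract_rows:
        load_row(transform(row))
        count += 1
    return count
```

Because the extract and load sides are just callables, either one can be swapped (CSV today, a direct API tomorrow) without touching the transform.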
As a real-world example, I have a client who wanted to do pretty much what was described in the recent U2 thread, only more complex, and I created a solution for them:
- They are running SS2005 now, maybe 2008 later. Hardcoding exchanges between environments would require some modification later.
- They are generating CSV files from MV for import into SS but may want a more direct method later. This is different from their export from SS to MV, which is using a different mechanism.
- The SQL Server tables and views are subject to change, and we wanted to avoid having to make manual exchange interface changes every time something small changed in SS.
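Given the CSV requirement in the second bullet, the import side can stay DB-agnostic by turning each CSV row into a parameterized INSERT rather than hardcoding any one SQL dialect's quoting rules. A minimal sketch, with an illustrative table name and the `?` placeholder style used by ODBC-flavored drivers:

```python
import csv
import io

def csv_to_inserts(csv_text: str, table: str):
    """Yield (sql, params) pairs, one parameterized INSERT per CSV row.
    Parameterized statements avoid platform-specific SQL quoting entirely."""
    reader = csv.DictReader(io.StringIO(csv_text))
    cols = reader.fieldnames
    sql = "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(cols), ", ".join("?" * len(cols)))
    for row in reader:
        yield sql, tuple(row[c] for c in cols)
```

Each pair could then be handed to whatever driver the current SQL Server (or Oracle, or MySQL) release requires.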
To solve the problem I created a utility that “discovers” the SQL Server tables and views and generates code for data import and export. The generated code handles the import from CSV as well as export through mv.NET. UO.NET could have been used here as well, and other tools like the D3 Class Library or QMClient could be employed, but I use mv.NET because it’s platform independent and allows the solution to be ported to any MV environment. I’m using the strongly typed ADO.NET Entity Framework and generated LINQ code to avoid platform-specific SQL syntax. This lets the client move between SS releases and lets the utility support Oracle, MySQL, etc. All of the code is compiled into a DLL, which is then used by a generic transfer utility that runs in the background.

The generated code also includes MV BASIC EQUATE definitions, so that data exported into MV can be manipulated without worrying about which attributes hold which data. That code can be copy/pasted into MV for immediate processing of new data. Other sundry components were added: for example, the client’s SQL Server tables don’t define primary keys (go figure), so I had to write a utility that zapped the tables with primary keys (single and multi-part) whenever the tables were modified or replaced by another vendor.
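The actual generator emits C# and BASIC, but the EQUATE step can be sketched in a few lines. Assume the column list has already been “discovered” (in real life, from a query against the database's INFORMATION_SCHEMA.COLUMNS view); the table and column names below are illustrative.

```python
def generate_equates(table: str, columns: list) -> str:
    """Emit MV BASIC EQUATE lines mapping each discovered column to an
    attribute position, so exported data can be addressed by name rather
    than by attribute number."""
    lines = [f"* EQUATEs for {table} (generated)"]
    for attr_no, col in enumerate(columns, start=1):
        lines.append(f"EQUATE {table}.{col.upper()} TO REC<{attr_no}>")
    return "\n".join(lines)
```

Regenerating this whenever the schema changes is what keeps small SS-side changes from turning into manual interface maintenance.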
The main points here are that there is not a single SQL query, no ODBC, no direct connection between the databases, and no raw data pushed directly into MV application files. The “ETL” export from SS Extracts data using a LINQ to Entities statement, Transforms the data by simply moving it from a strong class into an mvItem object, and Loads the data to MV by running a BASIC program. Given this process, the data can come from anywhere and go anywhere. I have no idea what the client wants to do with that data, so the BASIC code is entirely theirs. Again, this separates the tiers so that they can operate independently. Compare this to hardcoding a direct interface between SQL Server 2000 and Unidata v7.1 using ODBC, or similar immutable details.
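The Transform and Load steps above can be sketched as follows. This is not the mv.NET API, just a hedged illustration of the shape of the hand-off: a strongly typed record becomes an item ID plus a dynamic array of attributes, and a caller-supplied routine stands in for invoking the server-side BASIC program.

```python
AM = chr(254)  # MV attribute mark

class Customer:
    """Stand-in for a strongly typed entity class produced by the ORM."""
    def __init__(self, cid: str, name: str, address: str):
        self.cid, self.name, self.address = cid, name, address

def to_mv_item(c: Customer):
    """Transform: move fields from the strong class into an item ID plus a
    dynamic array of attributes, the shape an mvItem-style object carries."""
    return c.cid, AM.join([c.name, c.address])

def load(items, call_basic) -> None:
    """Load: hand each item to a server-side BASIC routine. What the BASIC
    code does with the data is entirely the application's business."""
    for item_id, body in items:
        call_basic(item_id, body)
```

Because `call_basic` is injected, the load tier knows nothing about the application files on the MV side, which is exactly the separation being argued for.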
This solution requires more refinement to become a truly general-purpose black-box utility, but it is already processing production data, and the client loves it.
I hope people get some insight from the approach of separating tiers, even if they choose not to write their own code generator for a general-purpose solution. 🙂
Now, if you are interested in moving data between your MV DBMS and a relational DBMS, please let me know; I might have something for ya! The working name for this utility, perhaps not surprisingly, is “Magic”.