Cross-MV development made easy…er

I just finished setting up the latest mv.NET v3.5.1.2 and used the Data Manager to create simultaneous connections to multiple MV environments. I dunno if other people get as much of a kick out of this as I do, but just the ability to do this tickles me.

Simultaneous connection to D3, jBASE, QM, Unidata, Universe

I could add Reality, mvBase, mvEnterprise, and even Advanced Pick – I just don’t have them loaded here. I would want to add Caché, but mv.NET hasn’t been ported there yet, though I could easily use native Caché tools for .NET.

I have code to exchange data between MV and SQL Server and I was thinking of writing a little "universal propagation" tool that would take data updates from anywhere and post them everywhere else – sort of a cross-platform hot-backup. The funny thing is that the audience for this would be extremely limited.
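
Here is roughly the shape of it, as a quick Python sketch rather than real mv.NET plumbing: the Connection class and its write method are hypothetical stand-ins for whatever actually carries an update into each environment, and the only point being made is the fan-out of one update to every other platform.

```python
class Connection:
    """Hypothetical stand-in for one MV environment's session (e.g. an mv.NET connection)."""
    def __init__(self, name):
        self.name = name

    def write(self, filename, item_id, record):
        # A real implementation would write through mv.NET or a native API;
        # printing keeps the sketch self-contained and runnable.
        print(f"[{self.name}] WRITE {filename} {item_id} ({len(record)} attributes)")


def propagate(update, connections):
    """Post one captured update to every environment except the one it came from."""
    source, filename, item_id, record = update
    for conn in connections:
        if conn.name != source:
            conn.write(filename, item_id, record)


# Example: an item saved on Unidata gets re-posted to the other four platforms.
environments = [Connection(n) for n in ("D3", "jBASE", "QM", "Unidata", "Universe")]
propagate(("Unidata", "CUSTOMERS", "1001", ["ACME", "NY"]), environments)
```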

However, for those of us who write cross-platform code there is a neat possibility here. Let’s say you code in Unidata and you want to make sure the code update you just made will compile on the other platforms. Your editor just needs to be rigged to write data to a communications file as well as to the program file – or you can put a trigger on your program files to do the copy, so that you can use any editor. The code gets pulled from the file by this propagation hub and pushed into the other environments, where phantoms simultaneously move the new code into test program files and attempt to compile it. The results of each compilation are sent back through the communications file. On your development system a phantom is running to extract the results and format the output into a consistent item structure. Within seconds of saving your code on any system you can run a report to see where it failed to compile and why, make another tweak, and try again. If you’re working in the normal Pick editors (ED, AE, SED, WED, etc.) all you need to do is save; you don’t even have to exit the editor.
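
To make the round trip concrete, here is a minimal Python sketch of the idea (not the MV BASIC or mv.NET code that would actually do the work): compile_on is a hypothetical stub for what the phantom does on each platform, with a simulated failure on QM just so the report has something to show, and the useful bit is the consistent result item per platform that the report is built from.

```python
def compile_on(platform, program, source):
    """Hypothetical stub: a real phantom would drop the item into that
    platform's test program file, run the compiler, and capture its output."""
    # source would be handed to the compiler; it is unused in this stub.
    # Simulated failure on QM just so the report below has something to show.
    if platform == "QM":
        return {"platform": platform, "program": program,
                "status": "FAILED", "detail": "line 12: syntax error"}
    return {"platform": platform, "program": program, "status": "OK", "detail": ""}


def check_everywhere(program, source, platforms):
    """Return one result item per platform, ready for the compile report."""
    return [compile_on(p, program, source) for p in platforms]


for result in check_everywhere("UPD.CUSTOMER", "CRT 'HELLO'",
                               ["D3", "jBASE", "QM", "Unidata", "Universe"]):
    print(f"{result['platform']:10} {result['status']:7} {result['detail']}")
```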

Are you contemplating a migration where you need to ensure your code compiles for both environments, so that you don’t end up with two code sets? This is one way to help with that. In one migration I was working on we were simply using the host file system as a shared resource: a DIR file in each environment pointed to the same host directory, so any updates on one system were de facto updates to the other – but compilation was still a manual process.

Sure, code may compile, but does it run? That remains to be seen in conjunction with other programs and in the context of actual application usage. You can’t replace QA, though you can automate it with some tools. For me and others I know, it’s a real pain just to manually move code from one environment to another, compile, go back and make a little change, move it again, and so on. This eliminates all of that – you just need an editor up and another place to run your report. The less time we spend moving code from here to there, the more time we can spend actually developing code.

Hmmm, if the report is run by a middle tier then you can have a tray icon that polls for results via a web service and displays them in Windows – exactly the way we see alerts for inbound mail, Skype calls, IM chats, and the rest. You get a quick red light for failure and a green one for success next to each platform. You can probably get line numbers, or maybe just get failure info and assume the rest compiled successfully. This is open to evolution. Once again, I already have some of this code.
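
The client side really is that small. Assuming a hypothetical middle-tier endpoint that hands back the latest compile results as JSON (the URL and payload shape below are made up for illustration), the polling loop is just this:

```python
import json
import time
import urllib.request

# Hypothetical middle-tier service; a tray application would run the same poll
# on a timer and swap its icon instead of printing.
STATUS_URL = "http://localhost:8080/compile-status"

def poll_once():
    """Fetch the latest per-platform compile status and print a red/green line for each."""
    with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
        results = json.load(resp)   # expected: [{"platform": "D3", "status": "OK", "detail": ""}, ...]
    for r in results:
        light = "GREEN" if r["status"] == "OK" else "RED"
        print(f"{r['platform']:10} {light:6} {r.get('detail', '')}")

if __name__ == "__main__":
    while True:
        try:
            poll_once()
        except OSError as err:
            print("poll failed:", err)
        time.sleep(30)   # check every 30 seconds
```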

If we want to take it to the next step, the same code can be used at end-user sites, so that when you post an update package on any system it gets sent out to them. All they need to do is run some routine that you provide to incorporate the mods into their systems – this program can be part of the update package itself. We see this all the time with automatic updates for Linux and Windows, virus and malware updates, and so on. You just need to decide when you want users to get the new packages. Do they get build releases? Do they only get minor point releases which result from several builds? Does their license even allow them to get updates? Should they just get notifications and not the releases themselves?
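
That routine doesn’t need to be anything exotic either. As a toy Python sketch, with an invented manifest layout and write_item/compile_item hooks standing in for the real install and compile steps:

```python
# A toy "incorporate the mods" routine of the kind that could ship inside the
# package itself. The manifest layout (release, file, id, source) is invented
# for illustration; the real format would be whatever the propagation hub emits.

def apply_package(package, write_item, compile_item):
    """Install each item in the package manifest, then compile it."""
    for entry in package["items"]:
        write_item(entry["file"], entry["id"], entry["source"])
        compile_item(entry["file"], entry["id"])
    print("applied release", package["release"])


package = {"release": "1.0.3",
           "items": [{"file": "BP", "id": "UPD.CUSTOMER", "source": "CRT 'HELLO'"}]}

# write_item/compile_item would be thin wrappers over the local account;
# print stand-ins keep the sketch runnable.
apply_package(package,
              write_item=lambda f, i, s: print("write", f, i),
              compile_item=lambda f, i: print("compile", f, i))
```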

I was going to do this a few years ago but haven’t had the time to implement it. I think now is a good time, assuming I can find the time… In the bigger picture I wasn’t just going to post code from developers; I was also going to generate statistics on end-user systems and pump them over to the developer. As usual, a lot of that code is already complete; I just need, errr, proper motivation to finish it. Now that I have all of the environments up and an easy way to test my code, I guess this resource for support providers will naturally follow.

Yup, a lot can be done with this, and fairly quickly too, given that all of the code already exists in various places on my systems here. So much is possible but there’s so little time… Anyone out there want to help prioritize (=fund) this?
