What about large commands, performance, and elegance?
For performance it can be painful to do a unique EXECUTE for every query. Not only does that open a new connection to the remote DBMS, but shelling out from MV has a penalty of its own.
Find out how your target RDBMS handles bulk queries. Assemble the larger command line, maybe in a loop, then pass the entire request in a single EXECUTE. This is a good technique to use when you need Transaction Bracketing, where a rollback should be performed if any part of the query fails.
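As a sketch of the idea (the ORDERS file, column layout, and the rdbms_cmd utility here are hypothetical placeholders for whatever your RDBMS provides):

```basic
* Build one bulk request, bracketed as a transaction, then send it
* in a single EXECUTE rather than one EXECUTE per statement.
OPEN "ORDERS" TO ORDERS.FILE ELSE STOP "Cannot open ORDERS"
SELECT ORDERS.FILE
SQL = "BEGIN;"
LOOP
   READNEXT ID ELSE EXIT
   READ REC FROM ORDERS.FILE, ID THEN
      SQL := " INSERT INTO ORDERS VALUES ('":ID:"','":REC<1>:"');"
   END
REPEAT
SQL := " COMMIT;"
* If any statement fails, the RDBMS rolls the whole batch back.
EXECUTE '!rdbms_cmd -q -e "':SQL:'"' CAPTURING OUT
```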
You may have a problem passing a massive command through EXECUTE. If so, or if (as with me) your sense of elegance draws you in this direction, open the local file system, write the data to an OS file, then EXECUTE a short command that essentially says "run the RDBMS command line and take the query from 'this' file." Depending on your RDBMS, it might look something like this:
EXECUTE "!rdbms_cmd -q -f filename" CAPTURING OUT
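One way to get the data into that file from BASIC might look like the following (sequential-I/O syntax varies a bit by platform; the path, the STMTS list of SQL statements, and the rdbms_cmd flags are assumptions):

```basic
* Write the bulk request to an OS file, then hand the file to the
* RDBMS command-line client in one EXECUTE.
OPENSEQ "/tmp/bulk.sql" TO FH ELSE CREATE FH ELSE STOP "Cannot create file"
WRITESEQ "BEGIN;" ON FH ELSE STOP
FOR I = 1 TO DCOUNT(STMTS, @AM)
   WRITESEQ STMTS<I> ON FH ELSE STOP   ;* one SQL statement per line
NEXT I
WRITESEQ "COMMIT;" ON FH ELSE STOP
CLOSESEQ FH
EXECUTE "!rdbms_cmd -q -f /tmp/bulk.sql" CAPTURING OUT
```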
You could also create a PHP or Bash script which does a massive update. I do this with NebulaMail, where I use BASIC to write a PHP script that could easily be ported anywhere. Then from BASIC I simply do something like this:
EXECUTE "php myscript.php" CAPTURING OUT
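A minimal sketch of that pattern, with a hypothetical script body (the mysqli calls, credentials, and file paths are illustrative only):

```basic
* Generate a small PHP script from BASIC, then shell out to run it.
PHP = "<?php"
PHP<-1> = '$db = new mysqli("dbhost", "user", "pass", "mydb");'
PHP<-1> = '$db->query(file_get_contents("/tmp/bulk.sql"));'
OPENSEQ "/tmp/myscript.php" TO FH ELSE CREATE FH ELSE STOP
FOR I = 1 TO DCOUNT(PHP, @AM)
   WRITESEQ PHP<I> ON FH ELSE STOP
NEXT I
CLOSESEQ FH
EXECUTE "!php /tmp/myscript.php" CAPTURING OUT
```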
That may not always be practical. Sometimes you don't know the second command until you get the response from the first. The solution to this would indeed be to link connectivity with the MV DBMS process, so that you can open a connection, do multiple exchanges, then close the connection. The following paragraph is more technical than the rest of this, only because this specific topic is the optional "next 20%" that one might wish to take on after getting through the easy 80% from above – and as we all know, that last 20% can consume 80% of our time or cost. (If you skip the next paragraph it's OK, and you can continue with the rest.)
By linking connectivity code directly with the MV DBMS, we can open a connection, perform multiple operations, and then close the connection, all from BASIC. Most (all?) of the MV DBMS platforms allow external libraries to be linked with the primary DBMS executable/service (sometimes called the Monitor). Just to be clear, I'm not talking about external connectivity libraries like QMClient.dll, UniObjects.dll, or MVSP for D3 and mvBase. I mean the DBMS itself allows C (primarily) functions to be linked in so that they become a wired-in part of the package. In short, there are two kinds of programs: "main" programs, and libraries that get linked with those programs. The libraries provide additional functionality, and they can be changed without modifying the main executable. Without going too far afield, this is possible because they can be linked into the main code dynamically. In MV BASIC terms, an Include item is like a statically linked library, while a CALLable subroutine is like a dynamically linked library (often called a DLL). So, after all of that, the goal would be to create a DLL which allows for basic open, execute, and close transactions on a remote RDBMS; this DLL can then be linked with the MV executable.
The idea is to be able to call from BASIC with something like a new %RDBMS function to be used as follows:
(D3 functions begin with % but most others do not.)
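A hedged sketch of how that might read in BASIC (the function name, action verbs, and connect-string format are all hypothetical):

```basic
CONN = %RDBMS("open", "host=dbhost;user=me;db=mydb")
IF CONN = -1 THEN STOP "Connect failed"
RESULT = %RDBMS("execute", CMD, CONN)
RESULT = %RDBMS("close", "", CONN)
```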
That RDBMS() function could be written by anyone with a clue about the specific MV DBMS. It's not rocket science (pun intended for our U2 friends). With a little cooperation we could even standardize on the same three-parameter interface for all MV DBMS platforms. Personally I think that's better than waiting some number of years for the DBMS provider to do it on their own.
Honestly the interface would have to be just a little more complex than that. We might have one interface to support queries on MySQL, SQL Server, Oracle, Postgres, and others. The function might not be RDBMS at all – we could have %MYSQL, %ORACLE, and others. We could even have a %MV function that allows exchanges with other MV environments. I think DLLs could be the new “crack cocaine” of the MV industry, enabling BASIC to do all kinds of things from interacting with Web Services to direct access into MS Office applications and more. But that’s just an aside. The take home here is that performance can be improved by switching from an out-of-process operation like EXECUTE “!”:CMD to an in-process function like %RDBMS(“execute”,CMD).
And what was that about elegance?
Well, rather than using EXECUTE to go outside of the MV environment, a lot of people like things to be "built-in". That's why they ask the MV DBMS vendors to come up with new functionality rather than looking to the third-party market (waving arms wildly here). Is it really that aesthetically displeasing to use EXECUTE? If I can create a function like %RDBMS(), is it really that much better if your DBMS vendor gives you a function without that % sign? What if you wait some number of years and they give you %RDBMS() with the % sign? You won't complain, right? It does come from the MV DBMS vendor, after all. Have you seen some of the syntax that actually does come from these MV DBMS companies?! I welcome you to peruse the Caché documentation as one example; if you can find the well-hidden docs for this (they're there, trust me), I'm guessing you would be horrified at the syntax required to support a basic SQL query.
What if your MV DBMS vendor provides you with a series of CALLable subroutines, as they often do? Would the above be elegant if we did the following?
CALL RDBMS("open", CONNSTR, CONN)
IF CONN = -1 THEN STOP
CALL RDBMS("execute", CMD, CONN)
CALL RDBMS("close", "", CONN)
Heck, if you don't want to write the above yourself, I could do that for you today! How would I implement it? Initially I wouldn't create a DLL. I'd store your connection data, indexed by a unique ID returned as CONN, and when you passed in an Execute request I would pass that connection data into a single EXECUTE "!":CMD statement. To make it a little more elegant I'd probably add constants like R.OPEN, R.EXECUTE, R.MYSQL, and R.ORACLE to help eliminate errors from hard-coded parameters.
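A sketch of that interim implementation (all names here are hypothetical; the connection pool lives in named common so it survives across calls):

```basic
SUBROUTINE RDBMS(ACTION, DATA, CONN)
* Interim wrapper: no DLL yet. Connection strings are stored in a
* pool in named common, indexed by the handle returned from "open";
* "execute" simply delegates to a single shell-out EXECUTE.
COMMON /RDBMS.POOL/ POOL
BEGIN CASE
   CASE ACTION = "open"
      POOL<-1> = DATA              ;* remember the connect string
      CONN = DCOUNT(POOL, @AM)     ;* handle = position in the pool
   CASE ACTION = "execute"
      CONNSTR = POOL<CONN>
      EXECUTE '!rdbms_cmd ':CONNSTR:' -e "':DATA:'"' CAPTURING OUT
   CASE ACTION = "close"
      POOL<CONN> = ""              ;* release the slot
END CASE
RETURN
END
```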