Connecting with MV

Sockets

A socket interface is one level lower than Telnet, which is itself a protocol that runs over sockets. With a raw socket interface, you need to create your own transport protocol. In other words, you need to make sure that, whatever the data looks like, it is faithfully moved in both directions – this is part of what protocols like Telnet, FTP, NNTP, SMTP, and hundreds more do for us. Obviously it’s a lot more work to support a custom socket protocol plus your own data protocol, which again may make use of delimiters, XML, etc.
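To make the "create your own transport protocol" point concrete, here is a minimal sketch, written in Python for the middle-tier side, of length-prefixed framing over a raw TCP socket. The host name, port, and command string are invented for illustration; the BASIC program on the MV side would have to implement the same framing using its own platform's socket functions.

```python
# Minimal sketch of "roll your own" framing over a raw TCP socket.
# The host, port, and 4-byte length prefix are illustrative assumptions,
# not part of any MV vendor's protocol.
import socket
import struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # Prefix every message with its length so the receiver knows
    # where one message ends and the next begins.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, count: int) -> bytes:
    # TCP is a byte stream; a single recv() may return less than requested.
    buf = b""
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

if __name__ == "__main__":
    # Hypothetical host, port, and request for illustration only.
    with socket.create_connection(("mvhost.example.com", 4000)) as sock:
        send_message(sock, b"CALL GET.CUSTOMER 1001")
        print(recv_message(sock))
```

Even this much only moves bytes reliably; the data protocol on top of it (delimiters, XML, etc.) is still yours to design and keep in sync on both ends.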

Most MV platforms support a socket interface. Unfortunately, the BASIC support for sockets is different on every MV platform, and reliability varies from one release to another. This is because sockets aren’t used extensively and thus aren’t tested as rigorously as other MV components like BASIC, dictionary codes, and fundamental file IO. So when people discuss MV DBMS connectivity, sockets are often on the table only until cross-platform support is mentioned, and stability becomes a concern even when the discussion focuses on a single platform.

UniObjects, QMClient, D3 Class Library, etc…

All of these interfaces consist of a server process which accepts connections, a vendor-provided client library (API), and a proprietary protocol in the middle. The benefit of these libraries is that developers like us can send data into the pipe and pull it out without having to know exactly how that data is being transferred, and without having to find, buy, or create a wrapper around Telnet. You can use any of these standard tools to get started with web development and you’ll probably do fine. Your next decision as a developer might be which language(s) you will use in the middle-tier code that is called by the web server. The answer depends on the platform(s) supported by the client API. For D3, for example, the D3 Class Library has no *nix-based client, so your only connectivity option is from a Windows system to your Windows- or *nix-based DBMS server. With UniObjects, you can use UniObjects for Java, which means you can interface from *nix-based Apache directly into your *nix-based DBMS.

mv.NET, PDP.NET/ON.NET, RedBack, and other "super libraries"

These are products that sit one step above the vendor-provided APIs. They expose a single, consistent developer API, regardless of the actual transport they’re using to get into your DBMS of choice. For example, mv.NET makes use of any of these (UO, QMClient, D3CL, or Telnet), depending on the platform and developer preferences. These products also provide mechanisms that allow multiple external entities to maximize the use of DBMS processes. Because of this, the cost of the tools can be entirely offset by savings on DBMS licenses.

Do these products help to cheat the DBMS companies? I think VARs will all agree that lower TCO allows them to sell more systems, so overall revenue for the DBMS companies balances or appreciates. Conversely, if it costs too much to support a small number of web users, VARs will simply be unable to sell their software in our modern economy. So the short-sighted urge to increase revenue by limiting what we can do with our DBMS licenses might just decrease revenue (for everyone) in the long term.

Comparisons

OK, we’ve looked at a typical solution proposed by many developers – Telnet. Then at a slightly lower-level solution – sockets. Then at higher-level solutions like vendor-provided libraries and third-party wrappers to those libraries.

We see that Telnet is much easier than raw sockets because issues like reliability are already handled by the protocol, so all you need to do is put your data on the wire from either side and you can trust that you’ll receive it. However, you still need to wrap that data so that you can make use of it on either end. If you use XML, you’ll need to wrap your data in XML on each end (BASIC on one side and ?? on the other), and of course parse it on "the other" side. You can also delimit your data, but you always need to be careful about using delimiters that might be found in the data, and your delimiters can’t be characters that the protocol filters out. For example, Telnet doesn’t support the transport of high-byte characters like xFE, xFD, xFC, so you can’t pass dynamic arrays across the wire without modification. You could delimit records with a tilde "~" and fields with a comma – as long as your data doesn’t contain those characters. As you can imagine, both your client and server components need to perform the exact same packing and parsing operation, and this occurs separately from the actual processing of the data.
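As a sketch of that symmetric pack/parse step, here is what the middle-tier half might look like in Python, using the tilde and comma delimiters mentioned above. The MV BASIC side would need the mirror image of these two routines, and the delimiter choice is purely illustrative.

```python
# Illustrative pack/parse pair for the middle tier; the BASIC side must
# implement the exact same scheme. The "~" and "," delimiters are the
# example choices from the text, not a standard.
RECORD_DELIM = "~"
FIELD_DELIM = ","

def pack(records: list[list[str]]) -> str:
    for rec in records:
        for field in rec:
            if RECORD_DELIM in field or FIELD_DELIM in field:
                # The caveat from the text: the delimiters must never occur in the data.
                raise ValueError(f"field contains a reserved delimiter: {field!r}")
    return RECORD_DELIM.join(FIELD_DELIM.join(rec) for rec in records)

def parse(wire: str) -> list[list[str]]:
    return [rec.split(FIELD_DELIM) for rec in wire.split(RECORD_DELIM)]

# Round trip: pack on one side of the wire, parse on the other.
wire = pack([["1001", "Acme Ltd", "1500.00"], ["1002", "Widgets Inc", "75.50"]])
assert parse(wire) == [["1001", "Acme Ltd", "1500.00"], ["1002", "Widgets Inc", "75.50"]]
```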

And what happens to data after it’s unpacked? In the MV code you need to identify the data coming from users, and in the middle-tier code you need to reformat data from the MV server into HTML or something else that’s meaningful to the client. Why not create HTML in MV? You certainly can, but not only does that make your server code HTML-specific; going back to XML or delimited fields, you also need to be very careful about how you transport that markup. Some developers will simply not want to deal with all of these concerns, and may opt to move on to one of the vendor libraries.
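As an example of keeping the MV code presentation-agnostic, here is a hypothetical middle-tier helper in Python that takes the already-parsed fields from the server and renders them as HTML. The function name and column headings are assumptions for illustration only.

```python
# The MV server returns plain delimited data; the HTML lives only in this layer,
# so the BASIC code never needs to know what the client will render.
import html

def rows_to_html_table(rows: list[list[str]], headers: list[str]) -> str:
    head = "".join(f"<th>{html.escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(col)}</td>" for col in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

# e.g. rows_to_html_table(parse(reply_from_server), ["ID", "NAME", "BALANCE"])
```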

It’s a very "easy" decision to use a vendor API. It’s usually free. It’s supported (to some extent). You know a lot of other people are using it, so there is some community comfort. Again, you may do well to use one of these products. The problem is portability, and it’s not a problem inherent in the tools; it’s the mindset of the developers. When tools are free and easy to use, most developers don’t follow Best Practices in their code development. This leads to both client and server code that is heavily intertwined with the connectivity tools. For example, you’ll find many variables beginning with "D3" in code that makes use of the D3 Class Library, and "UO.something" variables and method calls in code that makes use of UO/UO.NET. If a site wants to migrate their DBMS, developers will spend a great deal of time looking through all of their code to see just how much of it will need to be rewritten.

You can use the vendor-specific products without making your code vendor-specific! Again, according to Best Practices, code should abstract the various components as much as possible. Your user interface code should never include a reference to the database, and the database should rarely, if ever, include a reference to the UI. So how do you code within these limitations? Your UI code calls a Data Access Layer which is structured to provide a well-defined set of functions. Those functions then call DBMS-specific code which fulfills those requests. You should be able to substitute another Data Access Layer component without a single change to your user interface. On the back-end, the vendor-specific API connects into the MV DBMS in whatever way the vendor directs. From that entry point, you should extract data from the API-specific structures, and then pass that data to application code which is completely independent of the comms interface. Again, you should be able to swap out the comms API without a single change to the application code that handles requests.
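Here is a rough Python sketch of that layering. The names MvConnection, UniObjectsConnection, and D3Connection are invented for illustration, and the vendor-specific bodies are left as stubs rather than guessing at any vendor's actual API.

```python
# Data Access Layer sketch: the UI and application code see only MvConnection.
# A UniObjects- or D3-specific class can be swapped in behind it without
# touching anything above this interface.
from abc import ABC, abstractmethod

class MvConnection(ABC):
    """The only interface the UI/application code is allowed to see."""

    @abstractmethod
    def read(self, file_name: str, record_id: str) -> list[str]: ...

    @abstractmethod
    def call(self, subroutine: str, *args: str) -> list[str]: ...

class UniObjectsConnection(MvConnection):
    # Vendor-specific details (session handling, vendor data types) stay in here.
    def read(self, file_name, record_id):
        raise NotImplementedError("wrap the UniObjects client calls here")

    def call(self, subroutine, *args):
        raise NotImplementedError("wrap the UniObjects subroutine call here")

class D3Connection(MvConnection):
    # A D3 Class Library wrapper would go here; same interface, different plumbing.
    def read(self, file_name, record_id):
        raise NotImplementedError

    def call(self, subroutine, *args):
        raise NotImplementedError

def customer_page(db: MvConnection, customer_id: str) -> list[str]:
    # Application code depends only on the abstract interface, so swapping
    # UniObjectsConnection for D3Connection requires no change here.
    return db.read("CUSTOMERS", customer_id)
```

The same discipline applies on the MV side: the entry-point program unpacks whatever the API hands it, then calls ordinary application subroutines that know nothing about the comms interface.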

What do third-party vendors provide that isn’t already in the vendor APIs? There is usually an extra set of helper functions that make the developer’s job a little easier. If you’re paying for tools, you’re really paying someone else to save you some time. You can write your own client and server libraries to save yourself some money, but then you’re writing application code and tools. If you have time for this, have fun and stick to the DBMS APIs. If you don’t want to deal with process management, thread management, or learning the nuances of creating sub-components to add to your code, then you should at least take a look at the various third-party tools.
