Access to the OS file system from D3 is possible through the OSFI – the Open System File Interface. Over time I’ve noticed some confusion in how our colleagues use and under-use this feature, so I thought I’d clarify a couple of points here.
BIN, UNIX, and DOS are prefixes, though they are often thought of as the path to the OS. By themselves they access the current default OS volume. For DOS, this is C: because that’s generally where apps like D3 are installed. So DOS:C: “can be” redundant, depending on some nuances below. But if D3 is installed to D: then (I believe) DOS: by itself will access D:. In that case DOS:C: specifies a non-default volume, so the C: is required if that’s not where D3 is installed. (I’m hoping someone who installed D3 to D: can confirm or correct this.)
You can use UNIX: in Windows to create LF-delimited records, and you can use DOS: in Unix to create CRLF-delimited records. That is their real function: they specify the EOL delimiter, not the file location. Similarly, the BIN driver specifies no EOL conversion, or any other conversion, of the data in either direction. This is used when you really do want to see the CRLF or LF delimiters (and all other characters) on inbound data, or when you really do want MV delimiters and other characters to be saved in OS-level files on outbound data.
The difference between the C: driver and the DOS: driver is that C: translates 4 spaces to a single tab in data going in both directions. If you do want this, use C:. If you do not want this, use DOS:, or prefix any drive reference with DOS:. For example: DOS:D: or DOS:E:.
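To make the EOL and tab behavior concrete, here’s a Pick BASIC sketch. The directory and item names are hypothetical, and the exact driver/path syntax should be checked against your release – treat this as an illustration under those assumptions, not a definitive recipe:

```
* Write the same two-attribute record through different OSFI drivers.
REC = 'line one' : @AM : 'line two'

OPEN 'DOS:/temp' TO F.DOS ELSE STOP   ;* CRLF line endings, no tab translation
WRITE REC ON F.DOS, 'crlf.txt'

OPEN 'UNIX:/temp' TO F.UNIX ELSE STOP ;* LF line endings
WRITE REC ON F.UNIX, 'lf.txt'

OPEN 'C:/temp' TO F.TAB ELSE STOP     ;* CRLF line endings, with tab translation
WRITE REC ON F.TAB, 'tab.txt'
```

The record is identical in all three cases; only the OS-level representation of the attribute marks (and any tab/space translation) differs per driver.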
As seen above, C, D, and E are default drivers. So you can “list D:” and “list E:”, but not “list F:”. These are items in the Hosts file, so:
CT HOSTS C D E DOS
You’ll see that the “drive” drivers include “t4” in a2, but the DOS driver does not. You can copy C to F, G, etc to create a new driver that does tab translation, or copy the DOS driver to F, G, etc to create a default driver which does not do translation. Modify a2 per documentation to point to the right drive. I’d recommend being consistent and using DOS:G: as an override for a default G driver with translation.
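For example, creating a G driver with no translation might look like this at TCL. This is a sketch – the COPY prompt sequence and the exact a2 format should be verified against the documentation before you rely on it:

```
COPY HOSTS DOS
TO: G
U HOSTS G
   (edit attribute 2 per the documentation to reference the G: drive)
```

After that, “list G:” should behave like a DOS-style (untranslated) reference to that drive.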
The forward slash ‘/’ should be used for both Windows and Unix to delimit paths. The backslash ‘\’ is an MV delimiter, equivalent to single or double quotes. So “list \” returns an error while “list /” returns a file listing from the root of the drive. Being consistent avoids all related issues, removes any need to remember which delimiter to use on which system, and lets you use the same code on any platform without special handling for the delimiter (for this specific usage).
The forward slash is specifically a Unix driver, though this and all other drivers can be used against any file system. That is, it treats whatever data it references as LF-delimited, regardless of the actual OS. It’s the equivalent of the DOS driver, which treats data as CRLF-delimited. There is no item ‘/’ in the Hosts file because this is the default – DOS only exists as a non-default option. For example, “list /temp” works in Windows, and “list /tmp” works in Unix even without the “unix:” driver reference. When you use the slash from D3 Windows, it creates LF-delimited lines. So “ed / test.txt” will convert attributes to LF in the OS file, whether this is on Windows or Unix. To ensure you get CRLF in Windows, use one of the other drivers: C, D, E (with tab translation), or DOS (without tab translation). Another way to say this is that the slash is the default file system access character, that it’s equivalent to UNIX: even when used in Windows, and that UNIX: and DOS: are used in addition to the slash to define or refine the translation of the file’s characters – these modifiers do not indicate where the file is. This reinforces the point made in the first note above.
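In other words, on a Windows D3 the bare slash behaves like the unix: driver. A quick sketch, with a hypothetical path:

```
OPEN '/temp' TO F.SLASH ELSE STOP  ;* slash = unix: even on Windows -> LF endings
WRITE 'one' : @AM : 'two' ON F.SLASH, 'test.txt'
* For CRLF on Windows, open 'DOS:/temp' instead
* (or C:, D:, E: if you also want tab translation).
```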
To see more of these host drivers, just “SORT HOSTS”.
In there, you’ll see the PEQS host which points to “some location”, we don’t really care where, and that’s where the system stores spooler jobs. You can access the data for a print job using “CT PEQS:”. Yes, D3 does have a convenient q-pointer to this location, but the data isn’t really in a “file”, it’s in some location which is referenced by this OSFI host. This is why spooler jobs aren’t saved by default in a file-save.
The HDR host is also fascinating for time-stamping data updates. You can find documentation for that here.
Or look at the “VAR” host… Try “list var:” and you probably won’t see any items. Execute “set test=md” from TCL. Now “list var:” again. You’ll see a new ‘test’ item. To use this, try “list @test”. Yup, these are the dynamic references which can be changed at any time from code or the command line. I documented how to use these in this blog and another one before that (see link in that blog). How is this valuable? What if you want to access different data with the same code, dependent on some external condition? For example, wouldn’t it be cool if you could use the same file name to always refer to the current year, even when your files are named SALES.2012, SALES.2013, SALES.2014, etc? Sure, you can create q-pointers, but do you really want to do this for every file where you need this? And a single q-pointer applies to all users. With a VAR entry each user gets access to different resources with the same file reference! So “set THIS.YEAR.SALES=SALES.2014”, and then “LIST @THIS.YEAR.SALES”. Or use that in your code as “OPEN ‘@THIS.YEAR.SALES’ to f.sales…”, then you don’t need to change code or create q-pointers, and every user can get access to different data as they need it! The common thread here is that these @ references are data managed by the VAR host. Like PEQS or HDR, we don’t know or care where VAR stores its data, but we can get some great use from it.
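Pulling those pieces together in Pick BASIC – the SET issued via EXECUTE, then the @ reference in an OPEN. The file and item names here are hypothetical:

```
* Point this session's reference at the current year's file,
* then open it through the VAR host.
EXECUTE 'set THIS.YEAR.SALES=SALES.2014'
OPEN '@THIS.YEAR.SALES' TO F.SALES ELSE STOP 201, '@THIS.YEAR.SALES'
READ SALES.REC FROM F.SALES, 'INV1001' ELSE SALES.REC = ''
```

Next year you change only what the reference points to – the OPEN and everything downstream stays the same.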
What about that bit about different users having different data? Open two telnet windows and execute a SET command in one of them. Then LIST VAR: in both windows. You’ll see that these settings are session-specific, that each user/port has its own unique view of the VAR: space. If you log off, the values are cleared. I use logon macros/programs to set values on entry, and to reconfigure an environment during a given session – this doesn’t affect other users.
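A minimal logon-program sketch along those lines – the program and file names are hypothetical, and you’d compute the year rather than hard-code it:

```
* Run from the user's logon macro: each session gets its own VAR entries.
EXECUTE 'set THIS.YEAR.SALES=SALES.2014'
EXECUTE 'set WORK.DIR=DOS:/work'
* Other ports are unaffected, and these entries vanish at logoff.
```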
Compare the data in these Hosts items with the documentation and you might decide you want to create your own Hosts entries that provide access to some specific data resource. For example, this is where you’d store a reference to another D3 system, so that you can access data across systems with a simple q-pointer. That’s the D3 equivalent of using Samba or other network sharing to access a remote file as though it were local. If you do customize these items, be sure to restore them after a system upgrade, as the D3 installer loads a default set of Hosts items which will not include your enhancements.
Unfortunately this “Open” interface is not “Open”. The API for OSFI is not published, despite me asking for it for years. And as we see above for PEQS, HDR, and VAR, it’s also not limited to “File Interfaces”. So Open System File Interface seems to have been a very poor name for some very cool functionality that’s not used as much as it could be, and it’s not even given due credit by its own developers.
But what else might we want to do with this?
Consider an “EXCEL:” driver where “LIST EXCEL:MyDoc.xlsx” shows rows as items, with columns exposed as attributes.
Or consider a “WS:” driver which invokes web service functionality, so that “LIST WS:MyHost” would list the web services exposed by some server, and “CT WS:MyHost FOO” would invoke the FOO web service in that server to return whatever data is attainable by a GET query.
If you’re using the OPENDB feature in D3 to access an RDBMS from D3, you’re already using functionality like that. Why shouldn’t we be able to create our own drivers for MySQL, Hadoop, or Google Docs – and access them from Pick BASIC like we do anything else?
If Pick Systems / Raining Data / TigerLogic / Rocket Software published the specs for OSFI, we could have these things and so much more from this under-used feature that’s been built-in for almost 20 years.
But now with the info above, you should at least be able to get more from the functionality that we have now – and you should also have a better understanding of what’s really possible.