And now on to D3…
As we’ve seen, BASIC object code isn’t really compiled down into machine code. The D3/Pick Virtual Machine Environment (VME) is written at a high level in a proprietary dialect of assembler, and the BASIC runtime is part of the VME. Object code tokens are taken one by one into the runtime engine and executed through other assembler instructions that manage the file system and other services. When access to the lower-level OS is required, requests are made of a lower layer called the Generic Monitor, GM, or just "the monitor". This is the platform-specific code that does memory management, disk I/O, process/thread management, etc. For D3NT this monitor level is written in C. For Linux and other *nix platforms, part of the monitor is written in yet another, lower-level form of assembler, and (I believe) part is in C. Many years ago Raining Data attempted to merge the mvEnterprise platform with D3, as each platform included features from which the other would benefit. This project wasn’t completed, in part due to the complexities of the differences between the assembler GM in D3 and the C monitor in mvEnterprise.
A little history
FlashBASIC was intended to allow BASIC code to execute outside of the slower interpreted mode of the traditional runtime engine. The code is more optimized, a little closer "to the metal" than standard BASIC object code. The FlashBASIC object code itself is really an additional module of bytes appended to the bottom of the normal PCode interpreted object code. When it was first introduced, there were up to 9 levels of optimization. In theory, the higher the level, the more optimized (and larger) the object code got. The tradeoff was longer compilation time to produce more optimized modules. There were two kinds of Flash: Platform Dependent and Platform Independent. The FlashBASIC compiler got its optimizations by translating the source into C, then compiling that code with the platform-specific C compiler down into assembler for the host OS. In practice, higher optimization levels didn’t yield significantly better performance, and for some code the higher levels actually ran a little slower than non-flashed code, in part because so much C/assembler was generated in the attempt to make the process run more efficiently. The decision was made to do away with optimization levels and platform-specific modules. Flashed modules are now platform-independent, "well optimized" though not as close to the metal as before, and there is no longer a requirement to have a C compiler for FlashBASIC.
Note, however, that at least D3 Linux does still require the C Developer package to be present at installation time. So… since we need this anyway, why is there such a big deal about not needing it for Flash?
One thing that has bothered me for years is that Pick Systems / Raining Data Marketing at some point decided to "re-brand" Pick BASIC, so that from then on all BASIC in D3 would be known as FlashBASIC. Well, that wasn’t too bright, because it created a need for new terms: "Non-Flashed FlashBASIC" and "Flashed FlashBASIC". This only served to confuse the developer base: on one hand developers were told they were using FlashBASIC, while on the other, Support still needed to know whether a site’s FlashBASIC was actually Flashed.
Crying in my QA soup
It’s worth noting also that before FlashBASIC, Pick Systems just had Pick BASIC. When Flash came on board, the testing cycles at least tripled in complexity. All tests had to be run on each platform in three separate modes: Non-Flashed (NF), Platform Independent Flash (PIF), and Platform Dependent Flash (PDF). Further, because PIF was/is supposed to run on all platforms, all tests needed to be compiled in both NF and PIF modes and ported to all other D3 platforms to ensure the code ran. This was done for NF anyway, but the load doubled for PIF. Around that time, D3 was supported on at least 7 operating systems, and for each OS there were sometimes multiple releases that needed to be checked. It’s fair to say that there could be up to 21 instances of the FlashBASIC testing cycle for any one release on any one D3 platform. Multiply that by anywhere from 3 to 5 cycles, because the software went from Engineering to QA and back to Engineering, sometimes several times. And over 2000 tests were run on each system in each instance. That’s a lot of testing, and it’s no wonder that it sometimes took months to get a release out the door.
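To put the arithmetic above in perspective, here is a back-of-the-envelope sketch using the post’s own rough figures (3 modes, roughly 7 releases to check, 3 to 5 Engineering/QA round trips, and 2000+ tests per instance). All of these numbers are estimates from the text, not exact counts:

```python
# Back-of-the-envelope QA-load arithmetic using the post's rough figures.
# None of these numbers are exact; they are the estimates quoted above.
modes = 3                        # NF, PIF, PDF
releases_to_check = 7            # roughly 7 releases needing verification
instances = modes * releases_to_check   # "up to 21 instances" of the cycle
tests_per_instance = 2000        # "over 2000 tests" per system per instance

for cycles in (3, 5):            # 3 to 5 Engineering/QA round trips
    total = instances * cycles * tests_per_instance
    print(f"{cycles} cycles: {total:,} test executions")
```

Even at the low end that is well over a hundred thousand test executions per release, which goes some way toward explaining the months-long release timelines.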