Can technology make a better life for us all? Does improving and developing new systems, writing new code, really bring a positive improvement to our lives? It seems a trite argument – of course technology is good – but sometimes it’s difficult to balance the improvements against the problems it causes.
Don’t get me wrong: (transparently) improving the performance of software does have its benefits. By allowing more work to be wrung out of computers, it means they don’t need to be upgraded as often. Increased performance will also ease the acceptance of virtual machines (VMs) into popular use. VMs enhance security, simplify development and debugging, and sometimes even allow applications to run under foreign OSs. VMs can do all this because they insulate OSs and programs from each other.
Better performance also gives higher-level languages some slack to work with, thus easing their adoption by programmers. JIT compilers have helped Java be taken more seriously for server-side development. Criticisms of functional languages for being slow now have less legitimacy. Programmers of imperative languages can be freed from the subtler tuning issues and instead focus on writing maintainable code. Issues such as array striding and conditionals within loops can be left to the code-on-the-fly (CF) subsystem.
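The array-striding point can be made concrete. Here is a minimal Java sketch (class and method names are mine, purely illustrative): the same reduction written with a cache-friendly and a cache-hostile loop order. Both compute the identical result; only the memory-access pattern differs, and that is exactly the kind of machine-specific detail a CF subsystem can tune to the actual hardware instead of the programmer hand-optimizing it.

```java
public class StrideDemo {
    // Inner loop walks contiguous elements of each row: unit stride.
    static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];
        return s;
    }

    // Inner loop hops from row to row: large stride, cache-hostile
    // on big arrays, yet semantically identical to the version above.
    static long sumColumnMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];
        return s;
    }

    public static void main(String[] args) {
        int[][] m = new int[512][512];
        for (int i = 0; i < 512; i++)
            for (int j = 0; j < 512; j++)
                m[i][j] = i + j;
        // Same answer either way; only performance differs.
        System.out.println(sumRowMajor(m) == sumColumnMajor(m));
    }
}
```

A runtime compiler that knows the cache geometry of the machine it is actually running on can, in principle, interchange such loops itself.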
Code on the fly also allows compiled binaries to become independent of the processor without a significant tradeoff in performance. Compiled applications are increasingly being moved between different architectures. Any Java class file can run on both a SPARC and a Pentium. Tao’s Elate RTOS runs the same executable on any processor. However, the ability to “write once, run anywhere” only works among environments that provide the same set of APIs. Since computers are bought for the applications they run, there will be less concern for which processor is used, as long as the entire computer system is affordable and fast enough.
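The “write once, run anywhere” claim is easy to demonstrate: the identical compiled .class file below runs unmodified on any JVM, and only the runtime’s answers differ from machine to machine (`os.arch` and `os.name` are standard Java system properties).

```java
public class WhereAmI {
    public static void main(String[] args) {
        // The same bytecode runs on a SPARC, a Pentium, or anything
        // else with a JVM; only these answers change per machine.
        System.out.println("arch: " + System.getProperty("os.arch"));
        System.out.println("os:   " + System.getProperty("os.name"));
    }
}
```

The catch the article notes applies here too: the program is portable only because every conforming JVM provides the same `System.getProperty` API.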
Compilers tend not to be released in a timely enough manner for software vendors to take advantage of the latest whizbang processor. Even when the compilers eventually do become available, developers are too lazy to recompile all their applications for each architecture (no flames please – I’m a developer myself!). They like to use “deployment complexities” and “user friendliness” as excuses. Well-designed CF systems will be able to address these concerns.
The expected growth in digital appliances will bring with it diverse heat, speed, cost and form-factor constraints. Clearly, no single company or processor will be able to meet this challenge alone. However, portability of compiled software between competing appliances is a challenge that CF can handle.
Another obstacle this code can hurdle is making the instruction set orthogonal to the processor’s architecture. The Pentium, Crusoe and Elbrus 2000 are based on RISC and VLIW designs, yet are able to execute what is essentially a CISC instruction set by using CF. This flexibility is becoming increasingly important in today’s market, which places a premium on compatibility with the x86 even though chip designers wish to be rid of its restrictions.
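As a rough sketch of how a CISC instruction can ride on a RISC-style core, here is a toy “cracking” step in Java – the names and micro-op format are entirely hypothetical, not any real chip’s encoding. A CISC read-modify-write instruction (add a register into a memory cell) is split into three simple load/add/store micro-ops, which is conceptually what a Pentium decoder or a Crusoe-style translation layer does.

```java
import java.util.ArrayList;
import java.util.List;

public class MicroOpDemo {
    // Toy machine state: a few registers and a small memory.
    static long[] reg = new long[4];
    static long[] mem = new long[16];

    // CISC-style semantics: mem[addr] += reg[r], "cracked" into
    // RISC-like micro-ops. The strings are illustrative only.
    static List<String> crackAddMemReg(int addr, int r) {
        List<String> uops = new ArrayList<>();
        uops.add("LOAD  t0, mem[" + addr + "]");
        uops.add("ADD   t0, t0, r" + r);
        uops.add("STORE mem[" + addr + "], t0");
        // Carry out the same semantics directly for the demo.
        mem[addr] = mem[addr] + reg[r];
        return uops;
    }

    public static void main(String[] args) {
        reg[1] = 5;
        mem[3] = 10;
        crackAddMemReg(3, 1).forEach(System.out::println);
        System.out.println("mem[3] = " + mem[3]);
    }
}
```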
There are also difficulties caused by the different development speeds of software and hardware. Even trivial tasks can become difficult when you don’t have the hardware to support your fancy new programs. My trusty eight-year-old Samsung laptop performs perfectly well in everyday use, but ask it to stream video across an encrypted VPN tunnel and it suddenly looks useless. Taken into new areas, existing hardware is often insufficient!
More importantly, code on the fly frees VLIW processors from the burden of backward compatibility. Without CF, VLIW instruction sets are inherently dependent on the number and latencies of functional units in a specific processor. CF should ease the adoption of VLIW processors by extending their expected lifetime. Contemporary corporate research is betting heavily on the VLIW approach: witness Intel’s IA-64, Sun Microsystems’s MAJC, Transmeta’s Crusoe and the Elbrus 2000. VLIW processors forego hardware-assisted optimization and instead shift all the responsibility (and blame) to the compiler. Perhaps there is a role in this hardware-software spectrum for CF to play.
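To see why a VLIW binary is tied to one machine’s issue width, here is a toy greedy bundle packer in Java (a deliberately simplified sketch: names are mine, functional-unit latencies are ignored, and the dependence graph is assumed acyclic). Each cycle it issues as many ready operations as the width allows; change the width and the schedule – and hence a real VLIW encoding – changes with it, which is exactly what a CF layer would redo per processor.

```java
import java.util.ArrayList;
import java.util.List;

public class VliwPack {
    // An operation: a name plus indices of the ops it depends on.
    record Op(String name, int... deps) {}

    // Greedy list scheduling: per cycle, issue up to `width` ops whose
    // dependencies completed in earlier cycles. Assumes a DAG.
    static List<List<String>> pack(Op[] ops, int width) {
        List<List<String>> bundles = new ArrayList<>();
        boolean[] done = new boolean[ops.length];
        int remaining = ops.length;
        while (remaining > 0) {
            List<String> bundle = new ArrayList<>();
            List<Integer> issued = new ArrayList<>();
            for (int i = 0; i < ops.length && bundle.size() < width; i++) {
                if (done[i]) continue;
                boolean ready = true;
                for (int d : ops[i].deps()) if (!done[d]) ready = false;
                if (ready) { bundle.add(ops[i].name()); issued.add(i); }
            }
            for (int i : issued) done[i] = true;  // visible next cycle only
            remaining -= issued.size();
            bundles.add(bundle);
        }
        return bundles;
    }

    public static void main(String[] args) {
        Op[] ops = {
            new Op("load a"),           // 0
            new Op("load b"),           // 1
            new Op("add c=a+b", 0, 1),  // 2
            new Op("store c", 2),       // 3
        };
        pack(ops, 4).forEach(System.out::println);
    }
}
```

With width 4 the loads share one bundle while the add and store each wait a cycle; a different width (or different latencies) would yield a different packing, so a statically scheduled binary fits only the machine it was scheduled for.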
Code-on-the-fly technology is software-based, which gives processor designers more elbowroom to work with. They are no longer restricted to hardware-only options when making tradeoffs between a chip’s speed, size, cost, heat dissipation and power requirements. But the best is yet to come: CF is a path around patent restrictions. Now companies can compete with instruction-compatible processors on a level playing field, beyond the grimy reach of patent lawyers. Coup de grâce! Shifts in the competitive landscape due to CF are bound to occur. The extent of the change, however, remains to be seen.
Despite all the benefits it brings, developing code on the fly still raises a fuss. What are the legal implications of running code that is different from what the software vendor delivered? Whose tech support do you call when correct code becomes faulty during optimization? (I’d hate to be the guy who has to debug that.) Will the emergence of dynamic compiling end the market dominance of the x86 instruction set architecture (ISA)? If so, will it be replaced by a RISC-like ISA such as the SPARC’s? Or a bytecode like Java’s? Something more hierarchical, like the high-level Slim Binaries? Or maybe even something resembling a 4-way VLIW ISA? Perhaps a multitude of instruction formats will emerge to serve different niches.