
Is it possible to have hard real-time with lexical scope?

I was reading this paper about the funarg problem, which is really the problem of maintaining the environments of lexical closures. It's an old paper and I'm not sure if the author's conclusions still hold, but he strongly implies that, in order to have lexical rather than dynamic scope, you have to abandon the traditional C-style stack, and instead have a tree structure of environments, allocated from the heap.

Does this make it impossible to have lexically scoped closures in any hard real-time system? In real-time embedded systems, where latencies are measured in microseconds, heap allocation is typically forbidden because of the non-deterministic latency it introduces.

This has been an idle curiosity of mine, because I make my bread mostly as a firmware developer where C is the de facto language, and for a while now it seems I've been using my brain power to figure out how to force C to let me do things that come for free in more sophisticated languages. Consequently, I've begun to wonder whether you could implement a micro-lisp compiler specifically for hard-real-time embedded microcontroller-based systems.

As a side note: I've lately gained great insights into deep topics like how closures and objects are equivalent and so forth, and it gives me greater awe of guys like Stallman, Rich Hickey, and Paul Graham. Implementing Lisp from the ground up seems like a daunting task to me. It's hard to know where to start. (Maybe with PG's implementation of McCarthy's original eval function, IDK.) Anyway, I digress.

Johnny asked Oct 11 '22 04:10


1 Answer

Lexical scope is obviously possible with "hard real-time" -- after all, you say that you're using C for real-time applications, and C is a lexically scoped language. My guess is that you're actually concerned about first-class functions, not lexical scope. Assuming that, there's a whole bunch of known compilation techniques for dealing with first-class functions efficiently.

First of all, what you'll see in textbooks etc. almost always uses the usual tree-shaped environment, but in practice that's not needed at all if functions are not used as values. Practically every decent functional language compiler will identify such code and will use the stack instead, so there's zero overhead for allocation. (Note that at this point this means that if you limit yourself to the kind of things that you write in C, then no allocation is needed.) Then there's a bunch of additional ways to reduce allocations -- for example, consider something like (lambda (x) (* x 9)) -- this function doesn't close over any free identifiers, and therefore compilers will lift it to the top so there's a single copy of the function and, again, no allocation. (Related note: with this optimization you already get more than what C gives you, and still no allocation.)
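To make the two cases above concrete, here is a rough sketch in C of what a closure compiler effectively produces; the names (`times9`, `struct closure`, `use_adder`) are illustrative, not from any particular compiler:

```c
#include <stddef.h>

/* A function with no free variables, like (lambda (x) (* x 9)),
   can be lambda-lifted to one top-level function: no allocation at all. */
int times9(int x) { return x * 9; }

/* A closure that does capture a variable is compiled by "closure
   conversion" into a function pointer plus an explicit environment. */
struct closure {
    int (*fn)(struct closure *self, int arg);
    int captured;               /* the closed-over variable */
};

static int add_captured(struct closure *self, int arg) {
    return arg + self->captured;
}

/* When the closure never escapes its creator, the environment can
   live on the stack like any C local -- still no heap involved. */
int use_adder(int n) {
    struct closure add_n = { add_captured, n };  /* stack-allocated env */
    return add_n.fn(&add_n, 10);
}
```

Here `use_adder(5)` builds a closure over `n` entirely on the stack and applies it to 10, giving 15; a real compiler does the same transformation automatically whenever it can prove the closure doesn't outlive its frame.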

There's a whole bunch of additional optimizations that avoid allocation, but of course there are cases where you really do need to allocate a new closure. These places are statically identifiable, though, and if you really care about such allocations then it shouldn't be difficult to hack up a compiler that warns you about them. There have been a number of such languages; see for example GOAL, or the very low-level Pre-Scheme. But IME most people quickly get the hang of things, and it becomes obvious where allocations happen -- so you get the benefit of a high-level language, and when you get to some code that you want to optimize, it's easy to see where allocation happens and easy to avoid it in the usual ways. (And of course, all of this is separate from optimizing the allocations themselves.)
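The unavoidable case is a closure that escapes its creator, and that is exactly the statically identifiable allocation site. A sketch in C, with hypothetical names, of both the escaping case and one of the "usual ways" to avoid the heap:

```c
#include <stdlib.h>

/* Closure representation as before: function pointer plus environment. */
struct closure {
    int (*fn)(struct closure *self, int arg);
    int captured;
};

static int add_captured(struct closure *self, int arg) {
    return arg + self->captured;
}

/* This closure escapes (it is returned), so its environment must
   outlive the stack frame: this is the allocation site a compiler
   could warn about. */
struct closure *make_adder(int n) {
    struct closure *c = malloc(sizeof *c);   /* the heap allocation */
    if (c) {
        c->fn = add_captured;
        c->captured = n;
    }
    return c;
}

/* One usual way to avoid it in hard-real-time code: have the caller
   supply the storage (a static buffer, a stack slot, or a pre-sized
   pool), so no non-deterministic allocation happens at run time. */
void make_adder_into(struct closure *out, int n) {
    out->fn = add_captured;
    out->captured = n;
}
```

The second form trades a little convenience for a deterministic latency bound, which is the same trade-off firmware code in C makes everywhere else.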

Eli Barzilay answered Nov 02 '22 03:11