
Is there a performance hit in SSJS when using @Functions?

Tags:

xpages

If I want to parse a text field in SSJS, there are two main tools: the built-in JavaScript string methods and the newly converted @Functions. Are the @Functions slower than using pure JavaScript, or is there no real difference?

viewScope.put("length", tmpStr.length)

vs.

viewScope.put("length:, @Length(tmpStr))
asked Nov 30 '22 by David Leedy

1 Answer

All SSJS is parsed into an AST (abstract syntax tree) at runtime. In other words, your code just remains a String until the exact moment that it is executed, at which point a parser examines that String to syntactically identify what the code contains: which characters denote variables, which are operators, functions, etc. Once that parsing is complete, the runtime engine is able to run Java code that is a rough approximation of what the JavaScript code was designed to do.

This is why SSJS is always slower than the directly equivalent Java: if you just write your code in Java to begin with, then it's compiled into bytecode the moment you build your project, but perhaps more importantly, at runtime it doesn't have to "guess" what code to run by parsing a String... it just runs the Java code you already defined.

On the other hand, there's nothing about this process that significantly distinguishes the SSJS implementation of various @Functions from "native" JavaScript; since @Length(tmpStr) is just a wrapper for tmpStr.length, it doesn't surprise me that Sven is seeing a difference in execution time over enough iterations. But if your goal is optimization, you'll gain far more improvement by moving all code from SSJS blocks to bean methods than you will by eschewing the convenience of @Functions in favor of native JavaScript, because even native JavaScript has to be parsed into an AST. In that sense, there is no fundamental difference between the two.
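
To make that concrete, here is a minimal sketch of the bean-method approach, assuming a serializable managed bean registered (for example, in faces-config.xml) under the name textBean; the package, class, and property names are illustrative, not something from the original post:

package com.example;

import java.io.Serializable;

// A simple managed bean. The logic below is compiled to bytecode when the
// project is built, so nothing has to be parsed from a String at request time.
public class TextBean implements Serializable {
    private static final long serialVersionUID = 1L;

    private String text;

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    // Java equivalent of the SSJS tmpStr.length / @Length(tmpStr) examples above.
    public int getLength() {
        return text == null ? 0 : text.length();
    }
}

An XPage could then read the value through a plain EL binding such as #{textBean.length}, leaving no SSJS expression to parse at all, or call textBean.getLength() from a one-line SSJS expression if EL isn't an option in that spot.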

UPDATE: there's a slight caveat to the AST parsing mentioned at the beginning of this answer. By default, the XPages runtime caches up to 400 unique SSJS expressions (you can override this limit via the ibm.jscript.cachesize property in the server's xsp.properties file). So if an expression is encountered that matches exactly (including whitespace) one that is already cached, Domino doesn't have to construct a new AST for that expression; it just references the tree already in the cache. This is an MRU ("most recently used") cache, so the more frequently the same expression is encountered, the more likely it is to remain in the cache.

Regardless of whether the AST is cached, it still has to be evaluated against the current context, and some of the JavaScript wrapper objects do have additional overhead compared to what you'd likely use if you were coding directly in Java (for instance, {} becomes an ObjectObject, which is similar to a HashMap but has additional features that support closures, features that are simply wasted if you're not using closures anyway).

But the primary performance implication of this AST cache is that, unlike in most development contexts, duplication of code can actually be a good thing, if only in the sense that using the exact same expression over and over again allows all but the first instance to skip the language parsing and jump straight to invocation.
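
For reference, overriding that limit is just a one-line key=value entry in the server's xsp.properties file; the value shown here is an arbitrary example, not a recommendation:

ibm.jscript.cachesize=1000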

answered Feb 24 '23 by Tim Tripcony