We have been running into the problem of template errors occasionally sneaking into our production site. If there were a tool or strategy to catch these problems ahead of time, it could be added to the delivery pipeline.
eval. This built-in evaluates a string as an FTL expression. For example, "1+2"?eval returns the number 3. (To render a template that's stored in a string, use the interpret built-in instead.)
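As an illustration, here is a minimal, self-contained Java sketch of ?eval in action (assuming FreeMarker 2.3.x; the inline template text is purely illustrative):

import freemarker.template.*;
import java.io.*;
import java.util.*;

public class EvalExample {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        // "1+2" is a string; ?eval evaluates it as an FTL expression, yielding the number 3.
        Template t = new Template("inline", new StringReader("${'1+2'?eval}"), cfg);
        StringWriter out = new StringWriter();
        t.process(new HashMap<String, Object>(), out);
        System.out.println(out); // prints: 3
    }
}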
Apache FreeMarker is a free Java-based template engine, originally focusing on dynamic web page generation with MVC software architecture. However, it is a general purpose template engine, with no dependency on servlets or HTTP or HTML, and is thus often used for generating source code, configuration files or e-mails.
The most common method of detecting empty or null values uses the has_content built-in and the trim built-in. has_content is often used together with another built-in FreeMarker function to detect null values. Check the FreeMarker documentation for more information on has_content.
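A sketch of that pattern, combining has_content and trim with the default-value operator (the variable name "name" and the greeting text are illustrative assumptions):

import freemarker.template.*;
import java.io.*;
import java.util.*;

public class HasContentExample {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        // has_content treats missing/null values and empty strings as "no content";
        // adding ?trim also catches whitespace-only strings.
        Template t = new Template("inline",
                new StringReader("<#if (name!'')?trim?has_content>Hello ${name}<#else>Hello guest</#if>"),
                cfg);
        Map<String, Object> model = new HashMap<>();
        // model.put("name", "Ada");   // -> "Hello Ada"
        // model.put("name", "   ");   // -> "Hello guest"
        StringWriter out = new StringWriter();
        t.process(model, out);         // with no "name" in the model -> "Hello guest"
        System.out.println(out);
    }
}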
Special variables are variables defined by the FreeMarker engine itself. To access them, you use the .variable_name syntax.
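For example (a small, self-contained sketch; the surrounding setup is an assumption, but .now, .version and .locale are standard special variables):

import freemarker.template.*;
import java.io.*;
import java.util.*;

public class SpecialVariablesExample {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        // .now, .version and .locale are provided by the engine, not by the data model.
        Template t = new Template("inline",
                new StringReader("Rendered at ${.now} with FreeMarker ${.version} (locale: ${.locale})"),
                cfg);
        StringWriter out = new StringWriter();
        t.process(Collections.emptyMap(), out);
        System.out.println(out);
    }
}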
From my experience, Freemarker really only has two classes of errors when trying to render a template (ignoring configuration): syntax errors in the template itself, and runtime errors raised while processing the data model (typically missing or null values).
Although linting tools typically do find errors in code, a lint tool does not replace the need for basic testing, which is a far better solution for what you're experiencing here since you are seeing exceptions in production code.
For better or for worse, Freemarker assumes that the people working with the data and the people working with the markup are different people with different skillsets. Assuming this is the case (and you have Java engineering resources to spare), you have two ways of approaching this problem from a test-process point of view (although to be truly rigorous, you want both).
At my previous job, we used this approach by itself. Basically, the front-end engineers would hack on templates through a special web frontend, where the edited templates sat directly on the configuration path. The frontend contained two listings: the available templates, and the data-model 'versions' to render them against.
The 'versions' were essentially a two-tiered set of hardcoded Java objects, where Tier 1 was consistent across all templates and Tier 2 held template-specific variations. Because most of our emails were account-level notifications, we got a lot of mileage and reuse out of just having the global data model, and we rarely had to dig into the small stuff.
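A rough sketch of what such a two-tiered, hard-coded test data model could look like (all names and values here are hypothetical, not from the original setup):

import java.util.*;

public class TestDataModels {

    // Tier 1: account-level values that every template can rely on.
    public static Map<String, Object> globalModel() {
        Map<String, Object> model = new HashMap<>();
        model.put("accountId", "ACC-0001");
        model.put("accountName", "Test Account");
        model.put("supportEmail", "support@example.com");
        return model;
    }

    // Tier 2: template-specific variations layered on top of the global model.
    public static Map<String, Object> forTemplate(String templateName) {
        Map<String, Object> model = globalModel();
        if ("passwordReset.ftl".equals(templateName)) {
            model.put("resetLink", "https://example.com/reset?token=TEST");
        }
        return model;
    }
}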
Benefits
Not-So-Good
The other alternative is to write unit tests for each template as it is created. Because the exceptions in Freemarker all surface when the template is parsed and processed, you only need to do the following (after you do the initial setup):
Template temp = cfg.getTemplate("myTestedTemplate.ftl"); // cfg is a configured freemarker.template.Configuration
temp.process(myTestDataModel, myIgnoredOutput);          // e.g. a discarded StringWriter; no exceptions -> OK for us
Note that you don't care about the result in this case; you just need it to process without throwing. This is important because you can easily get bogged down debugging the rendered output in Java and not solve the problem at hand. As before, you will in all likelihood also want the same two tiers of test data in these unit tests.
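A minimal JUnit 5-style sketch of such a per-template test, assuming the templates are loaded from the classpath under /templates; the template name and the data-model entries are placeholders:

import freemarker.template.*;
import org.junit.jupiter.api.Test;
import java.io.StringWriter;
import java.util.*;

public class TemplateSmokeTest {

    private Configuration buildConfiguration() {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setClassForTemplateLoading(TemplateSmokeTest.class, "/templates");
        cfg.setDefaultEncoding("UTF-8");
        // Fail the test instead of writing error messages into the output.
        cfg.setTemplateExceptionHandler(TemplateExceptionHandler.RETHROW_HANDLER);
        return cfg;
    }

    @Test
    public void myTestedTemplateRendersWithoutExceptions() throws Exception {
        Configuration cfg = buildConfiguration();
        Template template = cfg.getTemplate("myTestedTemplate.ftl");

        Map<String, Object> model = new HashMap<>();
        model.put("accountName", "test-account");  // hypothetical Tier 1 (global) data
        model.put("notification", "test-message"); // hypothetical Tier 2 (template-specific) data

        // We only care that processing completes; the rendered output is discarded.
        template.process(model, new StringWriter());
    }
}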
Benefits
Not-So-Good
If you have the time to do both, I would recommend using unit tests to handle the edge cases and other concerns you find as they pop up, and the web frontend for development so that working on a page doesn't require a recompile. The ability to version templates on the frontend is extremely beneficial, but the primary goal is to keep errors out of production, so wire the unit tests into the build process first. Launch, then optimize, and all that.
If you mean runtime output errors and not template syntax errors, you could use a (brittle) set of "expect-got" integration tests. How appropriate this is depends on the complexity of your templates and how dynamic they are. If you just want some simple "smoke" tests, this is probably a good solution.
For each template, create at least one test, using concrete/static/pre-specified input data and concrete/static/pre-specified output results. This output has to be manually generated and verified the first time and after any change, but from then on it can be saved and the testing can be scripted. If the template pulls in its own data (like dates, etc.) that can't be set as fixed input, mask or remove it from the expected output. Each automated test should render the template with the fixed input and compare the result against the saved expected output.
Exact output equality is the easiest to implement and to verify. If needed, have multiple tests per template. I wouldn't try to be clever; just let the computer do the boring and repetitive work. On a first pass, I would ignore the parts of the template that need to be masked (some testing is better than none), and write explicit tests for them later, once you decide it improves reliability enough to be worth the effort (or for any that have been gotten wrong in the past).
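A sketch of such an "expect-got" test (the template name, expected-output file path, data model and date-masking regex are all illustrative assumptions):

import freemarker.template.*;
import org.junit.jupiter.api.Test;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class ExpectGotTemplateTest {

    @Test
    public void welcomeEmailMatchesExpectedOutput() throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setClassForTemplateLoading(ExpectGotTemplateTest.class, "/templates");
        cfg.setDefaultEncoding("UTF-8");

        Map<String, Object> model = new HashMap<>();
        model.put("userName", "Jane Doe"); // concrete, pre-specified input

        StringWriter out = new StringWriter();
        cfg.getTemplate("welcomeEmail.ftl").process(model, out);

        String expected = new String(
                Files.readAllBytes(Paths.get("src/test/resources/expected/welcomeEmail.txt")),
                StandardCharsets.UTF_8);

        // Mask data the template pulls in on its own (e.g. the current date) before comparing.
        String got = out.toString().replaceAll("\\d{4}-\\d{2}-\\d{2}", "<DATE>");
        assertEquals(expected, got);
    }
}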
This solution has the following caveats.
The scope is too large. Any change to anything in the template or in the data model can require updating the test. Using "diff" might help with manual verification and determining how to modify tests when data models and/or templates change.
Code reuse and modularity cause testing problems. With good, modular code, one code change can affect the data in all templates, but static tests then require changing and reverifying all the tests independently. Not much can be done to fix that, so the better and more modular your code, the more work this causes :(
Sophisticated templates are hard to test. It might be problematic to get good template coverage using only a few sets of static data. That could mean the templates are doing too much "processing" and are not really being used just as templates, which is probably not a good idea anyway.