This is a great question! Here's a first cut.
Be able to log at multiple levels (ex: debug, warning, etc.).
hslogger is easily the most popular logging framework.
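For instance, a minimal hslogger sketch (the logger name "Main.startup" and the messages are just made up for illustration):
import System.Log.Logger (Priority (DEBUG), debugM, rootLoggerName,
                          setLevel, updateGlobalLogger, warningM)

main :: IO ()
main = do
  -- The root logger defaults to WARNING and above; lower the threshold
  -- so debug messages reach the console as well.
  updateGlobalLogger rootLoggerName (setLevel DEBUG)
  debugM   "Main.startup" "reading configuration"
  warningM "Main.startup" "no config file found, using defaults"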
Be able to collect and share metrics/statistics about the types of work the program is doing and how long that work is taking. Ideally, the collected metrics are available in a format that's compatible with commonly-used monitoring tools like Ganglia, or can be so munged.
I'm not aware of any standardized reporting tools; however, extracting reports from +RTS -s output (or via the profiling output flags) is something I've done in the past.
$ ./A +RTS -s
64,952 bytes allocated in the heap
1 MB total memory in use
%GC time 0.0% (6.1% elapsed)
Productivity 100.0% of total user, 0.0% of total elapsed
You can get this in machine-readable format too:
$ ./A +RTS -t --machine-readable
[("bytes allocated", "64952")
,("num_GCs", "1")
,("average_bytes_used", "43784")
,("max_bytes_used", "43784")
,("num_byte_usage_samples", "1")
,("peak_megabytes_allocated", "1")
,("init_cpu_seconds", "0.00")
,("init_wall_seconds", "0.00")
,("mutator_cpu_seconds", "0.00")
,("mutator_wall_seconds", "0.00")
,("GC_cpu_seconds", "0.00")
,("GC_wall_seconds", "0.00")
]
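A nice property of that format is that it is itself valid Haskell syntax, so you can read it straight into an association list. A minimal sketch, assuming the stats were written to a file named stats.txt (e.g. with +RTS -tstats.txt --machine-readable, or by redirecting stderr):
import Data.List  (isPrefixOf)
import Data.Maybe (fromMaybe)

main :: IO ()
main = do
  raw <- readFile "stats.txt"
  -- Skip anything before the opening bracket of the association list.
  let body  = dropWhile (not . ("[" `isPrefixOf`)) (lines raw)
      stats = read (unlines body) :: [(String, String)]
  putStrLn ("GCs run:      " ++ fromMaybe "?" (lookup "num_GCs" stats))
  putStrLn ("Peak MB used: " ++ fromMaybe "?" (lookup "peak_megabytes_allocated" stats))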
Ideally you could attach to a running GHC runtime over a socket and look at these GC stats interactively, but currently that's not super easy (it needs FFI bindings to the "rts/Stats.h" interface). You can also use ThreadScope to monitor a process's GC and threading behavior, by recording and viewing its event log.
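Roughly, that workflow looks like this (exact flags depend a little on your GHC version):
$ ghc -threaded -eventlog -rtsopts --make A.hs
$ ./A +RTS -N2 -ls        # writes A.eventlog
$ threadscope A.eventlog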
Similar flags are available for incremental, logged time and space profiling, which can be used for monitoring (e.g. such profiling graphs can be built incrementally).
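A typical heap-profiling run looks something like the following (older GHCs use -auto-all; newer ones spell it -fprof-auto):
$ ghc -prof -auto-all -rtsopts --make A.hs
$ ./A +RTS -hc -p         # writes A.hp and A.prof
$ hp2ps -c A.hp           # renders the heap profile as A.ps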
hpc collects a lot of statistics about program execution via its Tix type, and people have written tools to log, by time slice, what code is executing.
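The usual hpc workflow, roughly:
$ ghc -fhpc --make A.hs
$ ./A                     # writes A.tix on exit
$ hpc report A.tix        # per-module coverage summary
$ hpc markup A.tix        # HTML showing exactly which expressions ran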
Be configurable, ideally via a system that allows configured properties in running programs to be updated without restarting said programs.
Several tools are available for this: you can do xmonad-style state reloading, or move up to code hotswapping via the plugins packages or hint. Some of these are more experimental than others.
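As a taste of the hint end of that spectrum, here's a minimal sketch that evaluates a Haskell expression at runtime; the expression string is just an example, but in practice it might come from a config file you edit while the service is running:
import Language.Haskell.Interpreter (as, interpret, runInterpreter, setImports)

main :: IO ()
main = do
  result <- runInterpreter $ do
    setImports ["Prelude"]
    interpret "map (*2) [1 .. 5 :: Int]" (as :: [Int])
  case result of
    Left err -> putStrLn ("interpreter error: " ++ show err)
    Right xs -> print xs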
Reproducible deployments
Galois recently released cabal-dev, a tool for doing reproducible builds (i.e. dependencies are scoped and controlled).
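Usage is roughly (details from memory, so treat them as approximate):
$ cd myproject
$ cabal-dev install       # builds the package and its deps into ./cabal-dev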
An example configuration file for the ConfigFile package:
# Default options
[DEFAULT]
hostname: localhost
# Options for the first file
[file1]
location: /usr/local
user: Fred
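And a minimal sketch of reading it with Data.ConfigFile (the file name "app.cfg" is assumed; the pattern follows the package's documented readfile/get usage):
import Control.Monad (join)
import Control.Monad.Except (runExceptT)
import Control.Monad.IO.Class (liftIO)
import Data.ConfigFile

main :: IO ()
main = do
  result <- runExceptT $ do
    -- Parse the file, then pull individual options out of its sections.
    cp       <- join $ liftIO $ readfile emptyCP "app.cfg"
    hostname <- get cp "DEFAULT" "hostname"
    user     <- get cp "file1" "user"
    return (hostname, user)
  case result of
    Left err               -> putStrLn ("config error: " ++ show err)
    Right (hostname, user) -> putStrLn (hostname ++ " / " ++ user)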
I would echo everything Don said and add a few general bits of advice.
For example, two additional tools and libraries you might want to consider, both targeted at code quality:
GHC's -Wall flag
As a coding practice, avoid Lazy IO. If you need streaming IO, then go with one of the iteratee libraries such as enumerator. If you look on Hackage you'll see libraries like http-enumerator that use an enumerator style for http requests.
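To illustrate the simplest version of that advice (this uses plain strict ByteString IO rather than the enumerator API itself, and "input.txt" is just a placeholder): Prelude.readFile returns the contents lazily and keeps the handle open until they're forced, whereas the strict variant reads everything and closes the handle immediately.
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
  contents <- B.readFile "input.txt"   -- strict: whole file read, handle closed
  print (B.length contents)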
As for picking libraries on Hackage, it can sometimes help to look at how many packages depend on something; there is a website mirroring Hackage that makes a package's reverse dependencies easy to see.
If your application ends up doing tight loops, like a web server handling many requests, laziness can be an issue in the form of space leaks. Often this is a matter of adding strictness annotations in the right places. Profiling, experience, and reading core are the main techniques I know of for combating this sort of thing. The best profiling reference I know of is Chapter 25 of Real-World Haskell.
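A toy example of the kind of leak and fix I mean (not from the original answer): summing a big list with a lazy accumulator versus a strict one.
{-# LANGUAGE BangPatterns #-}

-- Leaky (at least without optimization): foldl builds a chain of
-- unevaluated (+) thunks as long as the list before anything is added up.
sumLazy :: [Double] -> Double
sumLazy = foldl (+) 0

-- Fixed: a bang pattern forces the accumulator at each step, so the loop
-- runs in constant space. Data.List.foldl' does the same job.
sumStrict :: [Double] -> Double
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 1e7])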