We're still using old Classic ASP and want to log whenever a user does something in our application. We'll write a generic subroutine to take in the details we want to log.
Should we log this to, say, a txt file using FileSystemObject, or log it to an MS SQL database?
If we go with the database, should we add a new table to our one existing database, or should we use a separate database?
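For illustration, the kind of generic subroutine we have in mind is something like this (the name and parameters are just placeholders):

    ' Rough sketch only - the parameters stand in for whatever details we decide to log
    Sub LogActivity(sUserId, sAction, sDetails)
        ' Option 1: append a line to a txt file with FileSystemObject
        ' Option 2: insert a row into a table in our MS SQL database
    End Sub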
I don't think storing logs in the database is a good idea. The advantage of storing logs in a database rather than in files is that you can analyse them much more easily with the power of SQL; the downside is that you have to spend much more time on database maintenance.
The advantage of the file system over a database management system is that, for small data sets of arbitrary, probably unrelated data, a file is more efficient than a database. For simple operations such as reading and writing, file operations are faster and simpler. You can find any number of other differences on the internet.
As a general rule, databases are slower than files. If you require indexing of your files, a hand-coded access path over customised indexing structures will always have the potential to be faster, provided you do it correctly.
Edit
In hindsight, a better answer is to log to BOTH the file system (first, immediately) and then to a centralized database (even if delayed). Most modern logging frameworks follow a publish-subscribe model (often called logging sources and sinks), which will allow multiple logging sinks (targets) to be defined.
The rationale behind writing to the file system first is that if an external infrastructure dependency such as the network, the database, or a security issue prevents you from writing remotely, you at least have a fallback and can recover the data from the server's hard disk (something akin to a black box in the airline industry). Log data written to the file system can be deleted as soon as the central database is confirmed to have recorded it, so file system retention sizes or rotation times generally need not be large.
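As a rough sketch of that pattern in Classic ASP (the file path, connection string, table name and column names here are assumptions to adapt to your environment):

    ' Sketch: local file first, central database second.
    ' File path, connection string, table and column names are assumptions.
    Sub LogEvent(sUser, sAction, sDetails)
        Dim sLine, fso, ts, cmd
        sLine = Now() & vbTab & sUser & vbTab & sAction & vbTab & sDetails

        ' 1. Local file first (the "black box") - no external dependency
        Set fso = Server.CreateObject("Scripting.FileSystemObject")
        Set ts = fso.OpenTextFile("D:\Logs\app.log", 8, True)   ' 8 = ForAppending
        ts.WriteLine sLine
        ts.Close

        ' 2. Then the central database; if this fails, the file copy survives
        On Error Resume Next
        Set cmd = Server.CreateObject("ADODB.Command")
        cmd.ActiveConnection = "Provider=SQLOLEDB;Data Source=...;Initial Catalog=...;..."
        cmd.CommandText = "INSERT INTO Audits (LoggedAt, UserName, ActionType, Details) " & _
                          "VALUES (GETDATE(), ?, ?, ?)"
        cmd.Parameters.Append cmd.CreateParameter("u", 200, 1, 100, sUser)     ' 200 = adVarChar, 1 = adParamInput
        cmd.Parameters.Append cmd.CreateParameter("a", 200, 1, 100, sAction)
        cmd.Parameters.Append cmd.CreateParameter("d", 200, 1, 4000, sDetails)
        cmd.Execute
        On Error GoTo 0
    End Sub

A file written one line per event like this is also easy for a log shipper to tail later.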
Enterprise log managers like Splunk can be configured to scrape your local server log files (e.g. as written by log4net, the EntLib Logging Application Block, et al.) and then centralize them in a searchable database, where the logged data can be mined, graphed, shown on dashboards, etc.
But from an operational perspective, where you will likely have a farm or cluster of servers, and assuming that both the local file system and remote database logging mechanisms are working, the 99% use case for actually finding anything in the logs will still be the central database (ideally with a decent front end that lets you query, aggregate, graph and build triggers or notifications from log data).
Original Answer
If you have the database in place, I would recommend using this for audit records instead of the filesystem.
Rationale:
- The audit data is structured (severity, action type, user, date, ...)
- Querying is easy (select ... from Audits where ...) vs grep over log files
- Purging old records is easy (Delete from Audits where Date = ...)

The decision to use the existing db or a new one depends: if you have multiple applications (each with their own databases) and want to log / audit all actions in all apps centrally, then a centralized db might make sense.
Since you say you want to audit user activity, it may make sense to audit in the same db as your users table / definition (if applicable).
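The "easy querying" point above becomes concrete on, say, an admin page; a minimal sketch, where the connection string, table and column names are assumptions:

    ' Sketch: pull a user's recent audit trail from an admin page.
    ' Connection string, table and column names are assumptions.
    Dim rs
    Set rs = Server.CreateObject("ADODB.Recordset")
    rs.Open "SELECT TOP 100 LoggedAt, ActionType, Details " & _
            "FROM Audits WHERE UserName = 'jsmith' ORDER BY LoggedAt DESC", _
            "Provider=SQLOLEDB;Data Source=...;Initial Catalog=...;..."
    Do While Not rs.EOF
        Response.Write rs("LoggedAt") & " " & rs("ActionType") & " " & rs("Details") & "<br>"
        rs.MoveNext
    Loop
    rs.Close

The equivalent over plain text files means collecting and grepping log files from every server in the farm.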
I agree with the above, with the perhaps obvious exception of logging database failures, which would make logging to the database problematic. This has come up for me in the past when I was dealing with infrequent but regular network failovers.