We currently use an Azure VM scale set (many VMs in one resource group behind a load balancer, with one availability set). We have been using NLog to log our web app's actions and errors, but we are now required to use Elasticsearch and to centralize the logs of all Azure VM instances instead of keeping a separate file on each instance.
I am new to the Elasticsearch and Logstash concepts. Do I need to replace NLog with something else? And how do I move to Elasticsearch and unify all logs in one place (I am thinking of having NLog write to an Azure Storage table as the unified store, or do I need to use Logstash, or would you suggest something else)?
Which logging approach has the best support for a .NET Core app running on multiple Azure VMs as described above?
Any help is appreciated.
Both NLog and Serilog logged all the data without missing any entries, and the run took approximately 13 seconds; Serilog was slightly faster than NLog.
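For context, a number like that can come from a small console benchmark. The sketch below is only an illustration of how such a comparison might be set up; the message count, file paths, and synchronous file targets are assumptions, not the original test harness.

```csharp
// Hypothetical micro-benchmark: times how long NLog and Serilog take to write
// the same number of entries to local files. Not the original measurement.
using System;
using System.Diagnostics;
using NLog.Config;
using NLog.Targets;
using Serilog;

class LoggerBenchmark
{
    const int MessageCount = 1_000_000;   // assumed volume; the original post does not state one

    static void Main()
    {
        // NLog: programmatic configuration with a plain file target.
        var nlogConfig = new LoggingConfiguration();
        nlogConfig.AddRule(NLog.LogLevel.Info, NLog.LogLevel.Fatal,
                           new FileTarget("file") { FileName = "nlog-bench.log" });
        NLog.LogManager.Configuration = nlogConfig;
        var nlog = NLog.LogManager.GetCurrentClassLogger();

        // Serilog: equivalent file sink (requires the Serilog.Sinks.File package).
        var serilog = new LoggerConfiguration()
            .WriteTo.File("serilog-bench.log")
            .CreateLogger();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < MessageCount; i++)
            nlog.Info("message {0}", i);
        NLog.LogManager.Shutdown();          // flush NLog before reading the clock
        Console.WriteLine($"NLog:    {sw.Elapsed}");

        sw.Restart();
        for (int i = 0; i < MessageCount; i++)
            serilog.Information("message {Index}", i);
        serilog.Dispose();                   // flush Serilog before reading the clock
        Console.WriteLine($"Serilog: {sw.Elapsed}");
    }
}
```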
You need to install Filebeat first; it collects the logs from all the web servers. Then ship the logs from Filebeat -> Logstash. In Logstash you can format them and drop unwanted entries based on Grok patterns, and finally forward the logs from Logstash -> Elasticsearch for storage and indexing.
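On the .NET side, NLog only has to keep writing local files for Filebeat to pick up. A minimal sketch, assuming a pipe-delimited layout that a Logstash Grok filter could split into fields (the path, delimiter, and pattern are illustrative, not a required format):

```csharp
using NLog;
using NLog.Config;
using NLog.Targets;

public static class GrokFriendlyLogging
{
    // Hypothetical NLog setup: writes pipe-delimited lines that a Logstash grok filter like
    //   %{TIMESTAMP_ISO8601:timestamp}\|%{LOGLEVEL:level}\|%{DATA:logger}\|%{GREEDYDATA:message}
    // could parse into separate fields.
    public static Logger Configure()
    {
        var fileTarget = new FileTarget("weblog")
        {
            FileName = "C:/logs/webapp.log",   // assumed path; Filebeat tails this file on every VM
            Layout = "${longdate}|${level:uppercase=true}|${logger}|${message}${exception:format=tostring}"
        };

        var config = new LoggingConfiguration();
        config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);
        LogManager.Configuration = config;
        return LogManager.GetCurrentClassLogger();
    }
}
```

The Filebeat and Logstash configuration files live outside the application and are not shown here.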
Elasticsearch's own logging levels can be adjusted by setting the corresponding logger.{name}.level to the desired level. Each logger accepts Log4j 2's built-in log levels, from least to most verbose: OFF, FATAL, ERROR, WARN, INFO, DEBUG, and TRACE.
Elasticsearch uses Log4j 2 for logging, and Log4j 2 is configured through the log4j2.properties file.
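As an illustration, a logger level is raised by adding a pair of lines to log4j2.properties; the discovery module below is just an example of a logger name:

```
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
```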
Many people recommend that the application should not write directly to Elasticsearch, but should just write to local files.
Then have a service (e.g. Filebeat) upload the contents of the log files into Elasticsearch.
This optimizes network traffic to the Elasticsearch instance (bulk uploads), and ensures no logging is lost if there are network problems or the Elasticsearch instance is restarted for maintenance.
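A minimal sketch of that file-based approach with NLog, assuming a JSON-lines layout that also records the machine name so entries from different scale-set instances remain distinguishable after Filebeat ships them (paths, attribute names, and archive sizes are assumptions):

```csharp
using NLog;
using NLog.Config;
using NLog.Layouts;
using NLog.Targets;

public static class LocalFileLogging
{
    // Hypothetical NLog setup: the app only writes local JSON-lines files;
    // Filebeat tails them and forwards the contents to Logstash/Elasticsearch.
    public static Logger Configure()
    {
        var jsonLayout = new JsonLayout();
        jsonLayout.Attributes.Add(new JsonAttribute("time", "${longdate}"));
        jsonLayout.Attributes.Add(new JsonAttribute("level", "${level:uppercase=true}"));
        jsonLayout.Attributes.Add(new JsonAttribute("host", "${machinename}"));   // identifies the VM instance
        jsonLayout.Attributes.Add(new JsonAttribute("logger", "${logger}"));
        jsonLayout.Attributes.Add(new JsonAttribute("message", "${message}"));
        jsonLayout.Attributes.Add(new JsonAttribute("exception", "${exception:format=tostring}"));

        var fileTarget = new FileTarget("json-file")
        {
            FileName = "C:/logs/webapp.json",     // assumed path; Filebeat watches the same pattern
            Layout = jsonLayout,
            ArchiveAboveSize = 50 * 1024 * 1024,  // roll files so the local disk does not fill up
            MaxArchiveFiles = 10
        };

        var config = new LoggingConfiguration();
        config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);
        LogManager.Configuration = config;
        return LogManager.GetCurrentClassLogger();
    }
}
```

Under this layout NLog itself does not need to be replaced: it keeps writing local files, while Filebeat, Logstash, and Elasticsearch handle the shipping, parsing, and indexing.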