Would you say this is the optimal way of doing simple, traditional logging in an Azure-deployed application?
It feels like a lot of work to actually get to the log files, etc ...
What's worked best for you?
Logging is the process of recording events in software as they happen, along with other information such as infrastructure details, time taken to execute, etc. Logging is an essential part of any software application.
Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources.
Logging output from dotnet run and Visual Studio Logs that begin with "Microsoft" categories are from ASP.NET Core framework code. ASP.NET Core and application code use the same logging API and providers.
.NET can log errors to the Windows Event Viewer, a database, or a file using libraries already included in the .NET Framework. Each solution has its pros and cons.
We use the built-in diagnostics that write to Azure Table Storage. Any time we need a message written to the log, it's just a Trace.WriteLine(...) call.
Since the logs are written to Azure Table Storage, we have a process that downloads the log messages and then removes them from table storage. This works well for us, but I think it depends on the application.
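For illustration, here's roughly what that looks like in code. This is a sketch, not our exact code: the class and messages are hypothetical, and it assumes the classic Azure Diagnostics trace listener (DiagnosticMonitorTraceListener) is registered in web.config so that Trace output ends up in the WADLogsTable of the configured storage account.

```csharp
using System.Diagnostics;

// Hypothetical class for illustration only.
public class OrderService
{
    public void PlaceOrder(int orderId)
    {
        // With the diagnostics listener registered, this line is
        // buffered locally and periodically transferred to table storage.
        Trace.WriteLine(string.Format("Placing order {0}", orderId), "Information");

        // ... business logic ...

        // Leveled variants are also available (see below).
        Trace.TraceWarning("Order {0} took longer than expected", orderId);
    }
}
```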
http://msdn.microsoft.com/en-us/library/gg433048.aspx
Hope it helps!
[Update]
public void GetLogs() {
    int cnt = 0;
    var entities = context.LogTable;
    while (true) {
        bool foundRows = false;
        // The query is re-enumerated on each pass, picking up rows
        // that remain after the previous round of deletes.
        foreach (var en in entities) {
            foundRows = true;
            processLogRow(en);
            context.DeleteObject(en);
            cnt++;
            // Flush deletes in batches of 100 to stay within
            // table-storage batch limits.
            if (cnt % 100 == 0) {
                try {
                    context.SaveChanges(SaveChangesOptions.Batch);
                } catch (Exception ex) {
                    Console.WriteLine("Exception deleting batch. {0}", ex.Message);
                }
            }
        }
        if (!foundRows)
            break;
        // Flush any partial batch left over from this pass.
        context.SaveChanges(SaveChangesOptions.Batch);
    }
    Console.WriteLine("Done! Total Deleted: {0}", cnt);
}
Adding a bit to Brosto's answer: it takes only a few lines of code to configure Azure Diagnostics. You decide what level you want to capture (verbose, informational, etc.) and how frequently you want to push locally-cached log messages to Azure storage (I usually go with something like 15-minute intervals). Log messages from all of your instances are then aggregated into the same table and are easily queryable (or downloadable), with properties identifying the role and instance.
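As a sketch of those "few lines", here is roughly how the classic Azure Diagnostics configuration looks in a role's OnStart. The type and member names follow the Microsoft.WindowsAzure.Diagnostics assembly of that SDK generation; check against the SDK version you actually have installed.

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Capture Information and above; push cached messages to
        // table storage every 15 minutes.
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(15.0);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
            config);

        return base.OnStart();
    }
}
```

The connection-string setting name above is the conventional one added by the Visual Studio tooling; if your service definition uses a different setting name, pass that instead.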
There are additional trace methods as well, such as Trace.TraceError(), Trace.TraceWarning(), etc.
You can even create a trace listener and watch your log output in near real time on your local machine. The Azure AppFabric SDK samples zip contains a sample (under \ServiceBus\Scenarios\CloudTrace) for doing this.