
Differences between .NET application servers vs. Java application servers

I'd like to better understand the reasons for .NET's application server model compared to that used by most Java application servers.

In most cases I've seen with ASP.NET web applications, business logic is hosted in the web server's ASP.NET worker process. Another common approach is to have a physically or logically separate tier that hosts your business objects, which are then exposed as web services or accessed via mechanisms like WCF. The latter approach typically, but not always, seems to be used when higher scale is required. In the days of COM objects I saw Microsoft Transaction Server (MTS), and later COM+ hosting, used to host COM objects containing business logic, with MTS (theoretically) managing object lifetime, transactions, concurrency, yada yada. This model largely seems to have disappeared in ASP.NET land.

In the Java world you might have Apache with Tomcat as the servlet container and your business objects hosted in Tomcat. In this case, Tomcat provides similar functionality to what MTS provided in the .NET world.

Several questions:

  1. Why the fundamental difference in the Microsoft vs. Java approaches to application servers? This must have been an architecture/design choice when these frameworks were created.
  2. What are the pros and cons of each approach?
  3. Why did Microsoft move away from the MTS-hosting model (which is similar to the Tomcat servlet hosting model) to the more common current approach, which is just to have business objects as part of the web server's ASP.NET process?
  4. If you wanted to implement the MTS-type or Tomcat-type approach in ASP.NET applications today, I assume a common pattern would be to host business objects in some IIS process (possibly on a different physical or logical tier) and access them via WCF (or standard asmx web services, whatever). Is this a correct assumption? A rough sketch of what I mean follows this list.
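To make the assumption concrete, here is a minimal sketch of what I have in mind; the `IOrderService`/`OrderService` names are just placeholders, not anything from a real codebase:

```csharp
using System.ServiceModel;

// Contract for the business-object tier, exposed over WCF.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    decimal GetOrderTotal(int orderId);
}

// The business logic itself. This class could be hosted in IIS via an
// .svc file (or self-hosted) on a separate physical or logical tier.
public class OrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId)
    {
        // ...delegate to the actual business/domain layer here...
        return 0m;
    }
}
```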
asked Jul 24 '10 by Howiecamp


1 Answer

To my way of thinking, the primary difference is in the "open" approach vs. the "integrated stack" approach. Microsoft likes to provide everything as an integrated stack that all shares a common flavor and approach. Java is more friendly to the "bring your own x" model, where you may want to plug in your favorite application server, transaction manager, etc. Both technology stacks allow in-process invocation as well as remote invocation with varying levels of transaction support.

Really, WCF is not a new technology stack, but a reorganization and rebranding of existing elements of the .NET stack. Specifically, WCF took on the functions of .NET Remoting, Web Services, and distributed transactions.
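To make that concrete, here is a small self-hosting sketch (reusing the placeholder `IOrderService`/`OrderService` contract from the question) that exposes the same business object both over HTTP, roughly the role the old .asmx web services filled, and over TCP, the fast intranet scenario .NET Remoting used to cover:

```csharp
using System;
using System.ServiceModel;

class ServiceHostProgram
{
    static void Main()
    {
        // One service type, two transports, a single programming model.
        var host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8080/orders"),
            new Uri("net.tcp://localhost:8081/orders"));

        // SOAP over HTTP, comparable to what an .asmx web service offered.
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "");

        // Binary over TCP, the intranet case .NET Remoting used to handle.
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(), "");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```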

You reference "the more common current approach which is just to have business objects as part of the web server's ASP.NET process." That is only common for non-distributed apps. It is a simple model that works well when all of your objects will reside on the same server. This follows Microsoft's "Scale Out" deployment model. Rather than segregating object tiers across servers, put everything but the database together on the web servers and then incrementally add identical, redundant servers to scale out the web-server layer.

Microsoft has been pushing hard lately on Service Oriented Architecture, which relies more heavily on WCF and remote invocation. This is seen as a key to the cloud strategy that would have people moving parts or all of their applications to flexible resources in the cloud (which MS would like to host with Azure and the like).
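In that distributed picture, the web tier reaches the business tier through a WCF channel rather than an in-process call. A minimal client sketch (the address here is made up for the example) looks like this:

```csharp
using System;
using System.ServiceModel;

class ClientProgram
{
    static void Main()
    {
        // The binding and address are configuration decisions; moving the
        // business tier to another server (or to the cloud) does not change
        // the calling code.
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://app-tier.example.com/orders"));

        IOrderService orders = factory.CreateChannel();
        Console.WriteLine(orders.GetOrderTotal(42));

        ((IClientChannel)orders).Close();
        factory.Close();
    }
}
```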

answered Oct 05 '22 by Jason