Azure – Deploying Multiple Web Roles and Worker Roles on a Single Azure Cloud Service, azure, azure-web-roles, azure-worker-roles, deployment

This may not be new, but I hope someone can put me on the right track, as it's a bit confusing during Azure deployment. I'm in the process of planning for deployment on Azure. This is what I have:

  1. A public-facing ASP.NET MVC app (web role) + a WCF service (web role) accessible only to this app + a WCF service (worker role), again accessible to 1. over a message queue
  2. A custom STS, i.e. an ASP.NET MVC app (web role) acting as an identity provider (for 1., which is a relying party) + a WCF service (web role) to expose some STS functionality to RPs such as 1.
  3. SQL Azure: accessed by 1. and 2.
    Note: 1. will eventually grow to become a portal, with multiple WCF services hosted on web and worker roles for both internal and external access.

My question is: if 1. is going to be the app exposed to the public, and 2. exists so that 1. can federate security (internal), how should I plan my deployment on Azure, keeping in mind that 1. will require scale-out some time later along with the two WCF services? Do I publish to one cloud service, or how?
My understanding is that a cloud service is a logical container for n web/worker roles.
But when you have 2 web roles, as in this case with both apps, which one becomes the default one?

Best Regards

Best Solution

By default, all web roles in the solution are public. You can change this by going into the service definition and removing HTTP endpoints if you wish; you can also define internal HTTP endpoints that are reachable only by other roles within the same cloud service, with nothing exposed through the load balancer. The advantage of having all web roles in the same project is that it's easy to dynamically inspect the RoleEnvironment and each web role -- in other words, all roles in a solution are "aware" of the other roles and their available ports. It's also easy to deploy one package.
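As a rough sketch of what that looks like in the service definition, here is a hypothetical ServiceDefinition.csdef fragment (the service and role names are placeholders, not from the question): the MVC app keeps a public `InputEndpoint`, while the internal WCF web role uses an `InternalEndpoint` instead, so it never reaches the load balancer.

```xml
<!-- Hypothetical ServiceDefinition.csdef sketch; names are placeholders -->
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="PublicMvcApp">
    <Endpoints>
      <!-- Public endpoint: exposed through the load balancer on port 80 -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
  <WebRole name="InternalWcfService">
    <Endpoints>
      <!-- Internal endpoint: reachable only by other roles in this cloud service,
           never exposed through the load balancer -->
      <InternalEndpoint name="InternalHttpIn" protocol="http" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```

Other roles can then discover the internal endpoint's address at runtime via `RoleEnvironment.Roles["InternalWcfService"].Instances`, rather than hard-coding it.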

All roles share the same DNS name (however, you could use host headers to differentiate), but they are typically exposed via different ports through the load balancer on your service. You can see this when the service is running in the cloud: the portal shows links to each role that has a public HTTP endpoint, specifying the port. The one on port 80 (as defined by the external HTTP endpoint) is the "default" site.
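To illustrate how the "default" site falls out of the port assignments, here is a hypothetical endpoints fragment for two public web roles in one cloud service (role names, ports, and the DNS name are illustrative):

```xml
<!-- Two public web roles; the one on port 80 is the "default" site -->
<WebRole name="PublicMvcApp">
  <Endpoints>
    <!-- Reached at e.g. http://myservice.cloudapp.net/ -->
    <InputEndpoint name="HttpIn" protocol="http" port="80" />
  </Endpoints>
</WebRole>
<WebRole name="StsApp">
  <Endpoints>
    <!-- Reached at e.g. http://myservice.cloudapp.net:8080/ -->
    <InputEndpoint name="HttpIn" protocol="http" port="8080" />
  </Endpoints>
</WebRole>
```

Both roles answer on the same DNS name; only the load-balancer port distinguishes them.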

You could also create multiple cloud projects and deploy them separately. In this case, each would have its own DNS name, and each would be managed separately. Whether this is a good thing or not depends on how tightly coupled the applications are, and on whether you will typically deploy the entire solution or just update individual roles within it. But there's no cost or scalability difference either way.

If you are going to frequently redeploy only one of the roles, I'd favor breaking them out.