Web Q&A
Caching and Expiration, Connection Pools, and More
Edited by Nancy Michell


Q Is there a trick to using .js and cascading style sheets (CSS) includes so they're cached instead of fetched when a request is made?

A To accomplish that kind of caching you can enable content expiration using the HTTP Headers property sheet in the IIS administrative snap-in by selecting the Enable Content Expiration checkbox. This will tag the HTTP response using an Expires header (see Enabling Content Expiration). You can also use the Cache-Control header to achieve the same effect if you're using ASP, ASP.NET, PHP, and so forth.
Microsoft® Internet Explorer supports the META tag, which can be used for a similar purpose, but the HTTP response header is still the preferred way to go. With it, no HTML content needs to be updated and the setting can be applied globally on a Site, Virtual Root, or file basis in IIS.
For an example of a site that behaves well with regard to object caching, use an HTTP debugging proxy such as Fiddler, which logs all HTTP traffic between the client and the Web server, and visit office.microsoft.com. When you look at your traces in Fiddler, you'll see in the EXPIRES column that most objects have an expiration set and are not re-requested. Figure 1 shows the list view on the left-hand side of Fiddler. You'll note that Office Online sends either an HTTP/1.0 EXPIRES date (for example, Mon, 01 Nov 2004) or an HTTP/1.1 Cache-Control header (for example, max-age=86400) on transactions.
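If your scripts and style sheets are served through ASP.NET rather than directly by IIS as static files, you can emit the same headers from code. Here is a minimal sketch in C#; the handler class and the file name site.js are hypothetical, not something from this column:

using System;
using System.Web;

// Hypothetical HTTP handler that serves a script file with explicit caching headers.
public class CachedScriptHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // HTTP/1.0-style absolute expiration: one day from now.
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(1));
        // HTTP/1.1 Cache-Control: public, max-age=86400.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromSeconds(86400));

        context.Response.ContentType = "application/x-javascript";
        // "site.js" is a placeholder for whatever script you are serving.
        context.Response.WriteFile(context.Server.MapPath("site.js"));
    }

    public bool IsReusable { get { return true; } }
}

For static .js and .css files, though, the IIS content expiration setting described above remains the simpler choice.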


Q Is there a recommended size limit for view state data that will help me avoid significant performance degradation?

A When you consider the user's experience of downloading large amounts of view state data, you realize that the user's pain threshold is really the limiting factor, and that it's crossed long before the server has problems. See the View State section in Chapter 6 — Improving ASP.NET Performance.
It is very difficult to say what a large view state is without considering the server specs, network architecture, and the client. The blog post "Don't let the BinaryFormatter get at it!" shows how the LosFormatter and the BinaryFormatter are used for view state and how the type being serialized affects the view state size.
The follow-up post, "Whidbey will bring us a shorter viewstate, guaranteed!", shows how this changes in the .NET Framework 2.0.
Information on the new and improved view state can also be found in an article from the October 2004 issue of MSDN® Magazine, "Speed Up Your Site with the Improved View State in ASP.NET 2.0".
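To put a concrete number on a particular object graph, you can run it through LosFormatter yourself, since that is the serializer used for view state. A minimal sketch; the payload here is just an illustration:

using System;
using System.IO;
using System.Web.UI;

class ViewStateSizeCheck
{
    static void Main()
    {
        // Placeholder payload: the kind of data a page might put into view state.
        string[] payload = { "one", "two", "three" };

        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, payload);

        // The Base64 text written here is what ends up in the __VIEWSTATE hidden field.
        Console.WriteLine("Serialized view state: {0} characters", writer.ToString().Length);
    }
}

Comparing the output for different types (strings versus DataSets, for example) makes it easy to see how much the serialized representation, and therefore the download, grows.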

Q I had an issue recently where all machines in a layer (about 18 of them) recycled W3wp (the worker process) more or less simultaneously. I am wondering how the timer was in sync across all these different machines. I'm using Application Center and performed a content push recently.

A The timer starts when the app pool is started. It's common practice to recycle app pools after a significant change to the application (the content push), and this would likely have been automated across a cluster, thus all pools were started at about the same time.

Q How many connection pools can a process create in ADO.NET? I know that if I have n connection strings, I'll have n pools. But is there a maximum value for n? Also, MSDN documentation states that once created, connection pools are not destroyed until the active process ends. Maintenance of inactive or empty pools involves minimal system overhead. Will the inactive pool be pruned?

A If you're asking how many pools may be created, the answer is one pool per connection string used per app domain. Sub-pools are created for integrated security connections based upon the number of different users. If you use n connection strings in a given app domain, you will have n pools. Connection pools are created per app domain, not per process. Pools are created on demand by user connection requests. There is additional logic to prune the pools and even delete the pools after periods of inactivity if the user does not specify otherwise.
There is no upper limit on how many pools can be created. The MSDN comment you mentioned is correct for the Microsoft .NET Framework 1.0 and 1.1. The pools will remain, though they will be empty after inactivity if you did not specify a minimum pool size. In the .NET Framework 2.0, after a period of inactivity the Framework will delete the pool.
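Because pooling keys on the exact connection string text, two strings that describe the same database but differ in keyword order (or even whitespace) will produce two separate pools. A minimal sketch, assuming a local SQL Server and a placeholder database name:

using System;
using System.Data.SqlClient;

class PoolPerConnectionString
{
    static void Main()
    {
        // Logically identical, but textually different, so each gets its own pool.
        string first  = "Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI";
        string second = "Initial Catalog=MyDatabase;Data Source=.;Integrated Security=SSPI";

        using (SqlConnection c1 = new SqlConnection(first))
        using (SqlConnection c2 = new SqlConnection(second))
        {
            c1.Open();  // creates (or draws from) pool #1
            c2.Open();  // creates (or draws from) pool #2
        }
    }
}

Keeping connection strings in a single shared constant or configuration entry avoids accidentally multiplying pools this way.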

Q I have a question concerning the expiration date of HTML pages (in Windows® XP with Microsoft Internet Explorer 6.x). In Expire1.htm, I specified that the page expires immediately:
<html>
<head>
<META HTTP-EQUIV="Pragma" CONTENT="no-cache"> 
<META HTTP-EQUIV="expires" CONTENT="-1"> 
</head><body> 
<a href="http://HtmlPage1.htm">HtmlPage1.htm</a>
</body>
</html>
Using Internet Explorer, if I browse to Expire1.htm (over HTTP, not HTTPS) and click on the link, I am sent to HtmlPage1.htm. Now if I change the content of Expire1.htm and then click the Back button, I get the old content of Expire1.htm rather than the new content. In Internet Explorer I have selected the option to check for newer versions of stored pages on every visit to the page.

A This behavior is by design. By using the META header you are relying on MSHTML to decide whether or not to remove the content from the cache, and the removal of the content from the cache is only performed by MSHTML when the content is downloaded over Secure Sockets Layer (SSL). Since you are using the Forward and Back buttons, you are not really navigating; you are simply displaying the content that was previously downloaded. If the connection is not an SSL connection, then MSHTML simply sets the Expiration header for the content that is stored in the cache.
Those actions rely on whether or not WININET cached the files on the download, which you already discovered by setting the headers on the entire site. If you control the IIS Server or the Web server that is hosting the HTML files, then you can set the Expires Header on a Site, Directory, or File basis, so you could go down to the file level to achieve appropriate caching behavior.
After you navigate from Expire1.htm to HtmlPage1.htm, but before you click the Forward and Back buttons, look at Tools | Internet Options | Settings | View Files. In the display, find Expire1.htm and look at the Expiration column. Is there a date/time that reflects when you accessed the file? If so, then the rules shown in Figure 2 apply to the cached content when you use the Forward and Back buttons. You can see that with the Forward and Back buttons, synchronization only happens if the content has not been downloaded during the current Internet Explorer session.

Syncmode    | No Expiration Date | Expiration Before Last-Accessed Time | Expiration After Last-Accessed Time
Always      | CS                 | DS                                   | CS
Never       | DS                 | DS                                   | CS
Per-Session | CS                 | DS                                   | CS
Automatic   | SURL               | DS                                   |
DS = Don't synchronize the content.
CS = Conditional synchronize if Last Checked time was before start of the current Internet Explorer session.
SURL = Don't synchronize if URL is marked static; otherwise fall back to per-session.
Now, to get the behavior you intended, instead of using the Forward and Back buttons to access Expire1.htm after loading HtmlPage1.htm, click in the Address Bar and manually enter the full path to Expire1.htm. The new content should then be displayed, because the caching rules are now those for a Hyperlink Click or Address Bar Navigation, and in that case the expiration of the content determines whether the content is downloaded again.
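If you control the server and the page is generated by ASP.NET, you can also send the immediate expiration as genuine HTTP response headers rather than relying on META tags, which puts the decision in WININET's hands at download time. A minimal sketch; the page class name is hypothetical, and a static .htm file would instead rely on the IIS settings described above:

using System;
using System.Web;
using System.Web.UI;

public class ExpireImmediatelyPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Equivalent of the META tags, but as real response headers:
        // no-cache plus an Expires date already in the past.
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
    }
}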

Q If an app makes 50 simultaneous connections to SQL Server, there will be 50 connections in the connection pool. When are they removed from the pool? If their ConnectionLifetime is 0, which the documentation describes as using the maximum connection lifetime, how long is this?

A The logic behind removing connections from the pool is undocumented, but this has not prevented people from trying to discover it and write articles about it. The best one is probably SqlClient Connection Pooling Exposed - Reflection allows .NET developers to peer into the internals of SqlClient. You can also check out the following blog: ADO.NET The misunderstood "Connection Lifetime" managed pooler connection string keyword.
The basic idea is that connections left idle in the pool will be cleared at a random time between approximately four minutes and eight minutes, depending on implementation details. There is nothing that you can do short of using reflection, as shown in the article mentioned previously, to change this default.
The actual behavior for a connection with a minimum pool size set is the same as for a connection without it. When the connection is closed, its lifetime is checked and, if the lifetime has been exceeded, the connection is disposed of. The managed pooler will notice that you have gone under the minimum pool size and will open a new connection on a background thread. Connection Lifetime has nothing to do with the connection pool's idle time. Setting it to 0 is equivalent to the maximum value, Int32.MaxValue, or 2,147,483,647 seconds.
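For clarity, here is a minimal sketch showing where the keyword goes (the server and database names are placeholders); the lifetime check happens only when the connection is closed and returned to the pool:

using System;
using System.Data.SqlClient;

class ConnectionLifetimeDemo
{
    static void Main()
    {
        // Connections older than 120 seconds are destroyed when they are closed
        // rather than being returned to the pool.
        string connString =
            "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=SSPI;" +
            "Min Pool Size=5;Connection Lifetime=120";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            // ... use the connection ...
        }   // Close/Dispose triggers the lifetime check
    }
}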

Q Does ADO.NET do anything to secure the connection string parameter passed to the SQLConnection object? I'm concerned about a memory dump exposing a connection string. When a comparison of connection strings is performed to determine if a connection from the pool can be used, is the actual string passed?

A First, it should be noted that the concern about exposing connection strings is not unique to ADO.NET; this is a valid concern for all database access technologies. There is no real way to protect the password from a user who has administrator access to the ADO.NET application. He can get the password from the .exe file, from a memory dump, by packet sniffing it from the network, by spoofing the server information, and so forth.
A good solution is to use integrated security. This means your program can do anything the authenticated user could already do without your program, and nothing more. With integrated security, no passwords appear in the process at all.
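As a point of reference, an integrated security connection string contains no password, so there is no secret in it for a memory dump to reveal. A minimal sketch with placeholder server and database names:

using System;
using System.Data.SqlClient;

class IntegratedSecurityConnect
{
    static void Main()
    {
        // No User ID or Password keywords; the Windows identity of the process is used.
        string connString =
            "Data Source=myServer;Initial Catalog=MyDatabase;Integrated Security=SSPI";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            Console.WriteLine("Connected as the current Windows user.");
        }
    }
}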
As an example of the dangers of using passwords in connection strings, take this scenario. First, you call SqlConnection.Open with a connection string containing a password. Second, SqlConnection encrypts the password internally as soon as possible. However, even with this design, a memory dump of the process before a garbage collection occurs will show you the string object (connection string) that you used when calling SqlConnection.Open. So no matter what SqlConnection does, it cannot hide the contents of this connection string. SqlConnection cannot even erase the data in the incoming connection string, as strings are immutable in .NET. For more info on securing strings in .NET-based apps, see Security Briefs in this issue of MSDN Magazine, specifically the section on System.Security and the new SecureString class.
A second option is to completely hide the database from the user (using a Web service solution, for example) so that the database can only be accessed by the Web server.

Q I would like to know how to prevent the menu items in my page from appearing behind the map I'm displaying. The map is a flash object held inside a <DIV> tag. Should I use a z-index? When I move the mouse over the main menu, the menu items should appear, but some of them are hidden behind the map.

A Setting the z-index attribute on your <DIV> element won't help because the Flash control is a windowed element and will not allow other DHTML content to be layered on top of it. You may be able to change your menu to use createPopup instead, which can then sit on top of another windowed object. However, take care that accessibility features will still be able to use the menu options. For more information on how to do this, see createPopup Method.

Got a question? Send questions and comments to  webqa@microsoft.com.

Thanks to the following Microsoft developers for their technical expertise: Paul Bates, Todd Carter, Blaine Dockter, Kirk Evans, Simon Hoare, Philo Janus, Jon Langdon, Eric Lawrence, Drew Leaumont, Mauricio Lorenzon, Dave Massy, Joseph Minckler, Matt Neerincx, Philip Reilly, Markus Rheker, Angel Saenz-Badillos, Bill Safcik, Sanjeeb Kumar Sarangi, Doug Shattuck, Chad Sheffield, Nuno Silva, Rob Smith, and Kevin Yu.

