Advanced Basics: Namespaces, Cursors, ADO.NET, ...

From the January 2002 issue of MSDN Magazine
Namespaces, Cursors, ADO.NET, Web Services, Inheritance, and More
Ken Spencer
During the course of the many seminars I give each year, I get a lot of great questions from the audience. From time to time I'll be sharing some of the most interesting answers with you. This month, I'll discuss namespaces, data cursors, ADO.NET, Web Services and inheritance, and serialization.
      Namespaces are nothing new. They have been around for years, implemented in programming languages either explicitly or implicitly. In past versions of Visual Basic®, for instance, you could consider projects, forms, classes, and procedures to be namespaces because each of them lets you define the same elements, control how those elements are accessed, and provide a scope for them. For example, if Form1 has a property named Customer, you can access it from outside of Form1 by using Form1.Customer; Form1 is the namespace. Classes work the same way as forms, except that they expose their members outside of the class and possibly outside of the project.
      Now, let's consider Visual Basic .NET. In the .NET world, namespaces are used explicitly. For instance, when you create a Visual Basic project in Visual Studio® .NET, a namespace is automatically created for you with the same name as the project. You can change this namespace by opening the project properties page and entering a new name, as shown in Figure 1.

Figure 1 Changing the Namespace

      The project's namespace, which comes first in a fully qualified name, is known as the root namespace. For instance, if there were a second namespace called ClientStuff in the WinTester project, you could access its Customer property by using this syntax: WinTester.ClientStuff.Customer.
      You can create a namespace called ClientStuff with the Namespace keyword. For instance, the following code shows the creation of several namespaces:
Namespace ClientStuff
    Namespace YourStuff
        Module Module1
            Public Customer As String
        End Module
    End Namespace

    Namespace KensStuff
        Module Module2
            Public Customer As String
        End Module
    End Namespace
End Namespace
In this example, three namespaces have been created. ClientStuff is the major namespace in this code, and the YourStuff and KensStuff namespaces are nested inside it. Along with the root namespace, these three namespaces form a namespace hierarchy. Inside the project, you can use this syntax to fully qualify a member of the namespace. For instance, to fully qualify the path to Customer in Module1, use this statement:
  ClientStuff.YourStuff.Customer = "Ken"
This allows you to create namespaces and use them to separate elements of your code. The entire path for the YourStuff namespace looks like this:
  WinTester.ClientStuff.YourStuff.Customer = "Ken"
      Now, let's consider other namespaces. To access members of a namespace, you can fully qualify them (as I've just shown), use an Imports statement to access them without qualifying them, or add the Imports statement to the project to globally access the members in the project. If you have a project called WinTester, you might want to use ADO.NET to access a data source. When I created the WinTester project in Visual Studio .NET, several import statements were added to the project automatically. You can see these imports by opening the project properties page, as shown in Figure 2.

Figure 2 Adding Imports

      The Imports tab can be used to add or remove project-level imports. These project-level imports are stored in the project file and passed to the compiler; the command-line compiler supports an /imports option for adding global imports in the same manner. Of course, you can also use the properties page in Figure 2 to add your own custom imports. For instance, if you are building an application that uses XML in different forms, you might add a global import for System.Xml instead of adding an Imports statement to each file that needs XML features.
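      For reference, the file-level alternative is an Imports statement at the top of a source file. The following is a minimal sketch assuming the WinTester project and the ClientStuff namespaces shown earlier; the module and procedure names are purely illustrative:
Imports System.Xml
Imports WinTester.ClientStuff.YourStuff

Module ImportsSample
    Public Sub Test()
        ' XmlDocument instead of System.Xml.XmlDocument
        Dim oDoc As New XmlDocument()
        ' Customer instead of WinTester.ClientStuff.YourStuff.Customer
        Customer = "Ken"
    End Sub
End Module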
      As you can see, namespaces are quite powerful, and the .NET Framework provides you with a number of different options. Now let's take a look at data access.

No Cursors in ADO.NET

      There is no cursor support in ADO.NET. Cursors are a database-specific technology and thus are tied to features of a particular database such as SQL Server™ or Oracle. Cursors are also resource-intensive. The resources used depend on what type of cursor you select and where the cursor resides (client or server) and, of course, on the amount of data the cursor handles, the performance of the servers, and so on. Cursors are handy, but can really cause performance problems in applications.
      Early on, the ADO.NET team decided that performance would be a major consideration throughout the design of ADO.NET. Out-of-the-box default performance was also important. As a result, ADO.NET does not support cursors. If your application absolutely needs cursor support, you can use a previous version of ADO. Just be aware that your application will not perform optimally, because you'll be using COM Interop to access ADO from your .NET code. (This is in addition to the performance hit caused by the cursors themselves.)
      I personally don't see the loss of cursors as a huge problem, since developers rarely need to resort to them. A good alternative is to simply refresh the ADO.NET data on demand. If you are using a DataSet, you can use the Merge method to pull in changes to the underlying data and merge them with the data in an existing table.
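      Here is a minimal sketch of that refresh-and-merge pattern; it assumes an existing DataSet (oDS) and a configured DataAdapter (SqlDataAdapter1), and the names are placeholders:
Dim oFresh As New DataSet()
' Pull the current data into a scratch DataSet
SqlDataAdapter1.Fill(oFresh, "Categories")
' Merge the refreshed rows into the existing DataSet
oDS.Merge(oFresh)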
      For extremely high-performance read-only access, you can use the DataReader, which will allow you to pull data in a stream (like a fire-hose cursor) and manipulate it. In this case, there is no data to refresh. Just grab what you need and use it.
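      A minimal DataReader sketch looks like this; the connection string and query are placeholders, and an Imports System.Data.SqlClient statement is assumed:
Dim oConn As New SqlConnection("server=(local);database=Northwind;integrated security=SSPI")
Dim oCmd As New SqlCommand("SELECT CategoryName FROM Categories", oConn)
oConn.Open()
Dim oReader As SqlDataReader = oCmd.ExecuteReader()
While oReader.Read()
    ' Forward-only, read-only access to each row as it streams in
    Console.WriteLine(oReader("CategoryName"))
End While
oReader.Close()
oConn.Close()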

Data Providers

      To connect to a data source in the .NET Framework, you have a choice of two providers. The SQL Server .NET Data Provider is implemented in the System.Data.SqlClient namespace. The OLE DB .NET Data Provider is in the System.Data.OleDb namespace and can be used to connect to many different databases, including Microsoft® Access and Oracle.
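      As a minimal sketch, here is a connection through each provider; the connection strings are placeholders:
Imports System.Data.SqlClient
Imports System.Data.OleDb

Module ProviderSample
    Public Sub Connect()
        ' SQL Server .NET Data Provider
        Dim oSqlConn As New SqlConnection( _
            "server=(local);database=Northwind;integrated security=SSPI")
        ' OLE DB .NET Data Provider, here pointed at an Access database
        Dim oOleConn As New OleDbConnection( _
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\Northwind.mdb")
        oSqlConn.Open()
        oOleConn.Open()
        oOleConn.Close()
        oSqlConn.Close()
    End Sub
End Module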

Updating in ADO.NET

      When you want to update a table using ADO, you can create an updateable recordset and simply change the data. The changes are then applied to the underlying database in a variety of ways. How can you do this in ADO.NET?
      The easiest way to handle updates is to use a DataSet. The DataSet and DataAdapter provide you with a set of technologies to make updates easy. First, let's look at the DataAdapter, which has properties you can set to handle the updating automatically. But instead of worrying about those properties and how they work, you can let Visual Studio .NET set up the DataAdapter for you. The easiest way to do this is to drag the appropriate DataAdapter (SQLClient or OLE DB) from the Toolbox's Data tab and drop it onto a Windows® Form, an ASP.NET Web Form, or a class (implemented as a Component), and then follow the steps in the wizard. It will walk you through the process of connecting to a database (for either SQL or OLE DB) and selecting and updating data, and then will create the code for you.
      The coolest thing about the DataAdapter is that you can use a SELECT statement, use existing stored procedures, or have it create new stored procedures for you. Basically, you can either set up everything yourself or let the wizard do the work. When you use the wizard, you supply a SELECT statement and the wizard creates the stored procedures for you (see Figure 3). You can also use Query Builder to build the SELECT statement, making it easy to create whatever queries you need.

Figure 3 Creating the Stored Procedures

      When you use the wizard to add the DataAdapter to your project, code is generated in a couple of places. First, the declarations that create the variable references look like this (for SqlClient):
Friend WithEvents SqlDataAdapter1 As System.Data.SqlClient.SqlDataAdapter
Friend WithEvents SqlSelectCommand1 As System.Data.SqlClient.SqlCommand
Friend WithEvents SqlInsertCommand1 As System.Data.SqlClient.SqlCommand
Friend WithEvents SqlUpdateCommand1 As System.Data.SqlClient.SqlCommand
Friend WithEvents SqlDeleteCommand1 As System.Data.SqlClient.SqlCommand
Friend WithEvents SqlConnection1 As System.Data.SqlClient.SqlConnection
      Then, in the InitializeComponent procedure, more code is generated to connect everything. The first block of related code creates an instance of the DataAdapter and then an instance of SqlCommand for each command. Finally, the SqlConnection is created. These steps are shown here:
Me.SqlDataAdapter1 = New System.Data.SqlClient.SqlDataAdapter()
Me.SqlSelectCommand1 = New System.Data.SqlClient.SqlCommand()
Me.SqlInsertCommand1 = New System.Data.SqlClient.SqlCommand()
Me.SqlUpdateCommand1 = New System.Data.SqlClient.SqlCommand()
Me.SqlDeleteCommand1 = New System.Data.SqlClient.SqlCommand()
Me.SqlConnection1 = New System.Data.SqlClient.SqlConnection()
Next, the code to set up the DataAdapter is generated (see Figure 4). The first three lines assign the Delete, Insert, and Select commands. Then the table mappings are created to define how the source table maps to the DataSet table. Finally, the Update command is assigned.
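      In outline, the generated setup follows this pattern (a simplified sketch rather than the exact Figure 4 listing, with the column mappings abbreviated):
Me.SqlDataAdapter1.DeleteCommand = Me.SqlDeleteCommand1
Me.SqlDataAdapter1.InsertCommand = Me.SqlInsertCommand1
Me.SqlDataAdapter1.SelectCommand = Me.SqlSelectCommand1
Me.SqlDataAdapter1.TableMappings.Add( _
    New System.Data.Common.DataTableMapping("Table", "Categories", _
    New System.Data.Common.DataColumnMapping() { _
        New System.Data.Common.DataColumnMapping("CategoryID", "CategoryID"), _
        New System.Data.Common.DataColumnMapping("CategoryName", "CategoryName")}))
Me.SqlDataAdapter1.UpdateCommand = Me.SqlUpdateCommand1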
      Next, the code that sets up each command is generated. For instance, the code that creates the Insert command is shown in Figure 5. You can see that even though there is quite a bit of code, it is fairly uncomplicated. The SqlCommand object is set up with the stored procedure name and type, and then each of the parameters is set. There is also code to set up the other SqlCommands (Select, Delete, and Update) in the same manner.
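      Each command setup follows a pattern roughly like this sketch (not the exact Figure 5 listing; the stored procedure and parameter names are illustrative):
Me.SqlInsertCommand1.CommandText = "NewCategoriesInsert"
Me.SqlInsertCommand1.CommandType = System.Data.CommandType.StoredProcedure
Me.SqlInsertCommand1.Connection = Me.SqlConnection1
Me.SqlInsertCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@CategoryName", System.Data.SqlDbType.NVarChar, 15, "CategoryName"))
Me.SqlInsertCommand1.Parameters.Add(New System.Data.SqlClient.SqlParameter( _
    "@Description", System.Data.SqlDbType.NVarChar, 100, "Description"))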
      Now, let's look at the rest of the problem. To demonstrate this approach, I used a Windows-based application with a DataGrid and two command buttons (Load and Update). First, I created a new DataSet:
Dim oDS As New DataSet()
Then I created the cmdLoad_Click event handler, which uses the Fill method of the DataAdapter to load the DataSet and then sets the grid's DataSource property so the grid is loaded with the data:
Private Sub cmdLoad_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles cmdLoad.Click
    SqlDataAdapter1.Fill(oDS, "Categories")
    dgrdCategories.DataSource = oDS
End Sub
      Look at the code for the Update process (see Figure 6). First, two variables are created specifically for error-handling purposes. The entire update process is wrapped in a Try/Catch block to catch any errors. The first line of code in the Try block calls the Update method of the DataAdapter to actually process the updates. This one method executes the Insert, Update, or Delete command for each inserted, updated, or deleted row in the specified DataSet and table, in this case the oDS DataSet and its Categories table. The code in the Catch block loops through the DataSet and shows an error message for each row with an error.
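      In outline, the handler looks something like this sketch (not the exact Figure 6 listing; the control and variable names are illustrative):
Private Sub cmdUpdate_Click(ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles cmdUpdate.Click
    Dim oRow As DataRow
    Dim sErrors As String = ""
    Try
        ' Push the inserted, updated, and deleted rows back to the database
        SqlDataAdapter1.Update(oDS, "Categories")
    Catch ex As Exception
        ' Report each row that failed to update
        For Each oRow In oDS.Tables("Categories").Rows
            If oRow.HasErrors Then
                sErrors = sErrors & oRow.RowError & vbCrLf
            End If
        Next
        MessageBox.Show(sErrors, "Update Errors")
    End Try
End Sub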
      You can see that this didn't require much code. The wizard did the rest for me. This is one of the things that makes the .NET Framework and Visual Studio .NET so compelling—the framework lets you do things with very little code and Visual Studio .NET automates much of what's left.
      I don't miss updateable recordsets. Even with them, you had to program around errors that might occur in data entry, and handle them in your code. Using DataSets and DataAdapters makes it simple to build updateable applications where you can easily control the error handling and the update process.

Transactions?

      You can do transactions in ADO.NET in at least two ways. First, you can perform transactions in ADO.NET and let the database handle the transaction for you. For instance, in most applications, you are dealing with one database and need to handle transactions in that single database.
      I took the code in Figure 7 from an SDK sample and modified it slightly. This code inserts two records into the Region table of the Northwind sample database. The inserts are done within a transaction context so you can back out the inserts if anything fails. The transaction is started on the SqlConnection object by calling the BeginTransaction method. After the insert statements are executed, you can call the Commit method on the transaction object to commit the inserts or call Rollback to roll back those changes. The Try/Catch block traps any errors, as shown in Figure 7.
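      The pattern looks roughly like this sketch (not the exact Figure 7 listing; the connection string and inserted values are placeholders, and an Imports System.Data.SqlClient statement is assumed):
Dim oConn As New SqlConnection("server=(local);database=Northwind;integrated security=SSPI")
oConn.Open()
Dim oTran As SqlTransaction = oConn.BeginTransaction()
Dim oCmd As New SqlCommand()
oCmd.Connection = oConn
oCmd.Transaction = oTran
Try
    oCmd.CommandText = "INSERT INTO Region (RegionID, RegionDescription) VALUES (100, 'Region 100')"
    oCmd.ExecuteNonQuery()
    oCmd.CommandText = "INSERT INTO Region (RegionID, RegionDescription) VALUES (101, 'Region 101')"
    oCmd.ExecuteNonQuery()
    ' Both inserts succeed or fail together
    oTran.Commit()
Catch ex As Exception
    oTran.Rollback()
Finally
    oConn.Close()
End Try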
      The second option is to use COM+. COM+ provides a transaction system that automatically handles the transactions for your application. Many MSDN® Magazine authors have talked about using COM+, so I don't want to repeat all of their reasons here. Just consider one thing—when you access COM+ from .NET, you are using unmanaged code, which means extra overhead.
      What does this mean for transactions? When you need COM+, use it; when you don't need it, don't use it. In most applications, only one database is used, so the ADO.NET transactions that I've shown will work fine. If you are updating two databases in one transaction, then use COM+ to take advantage of the distributed transaction coordinator (DTC). Likewise, if you need object pooling or other features, use COM+.
      In all cases, you must test the application under a realistic load to make sure it works and performs as you planned.

Web Services and Inheritance

      Now, here's an interesting question. Can you inherit from a Web Service? The answer is "yes." After all, a Web Service is a class, and you can inherit from other classes, so why not from Web Services? The other question is whether you should inherit from a Web Service. Well, that depends. You might want to if the Web Service has methods or properties that provide part of the framework of your Web Service architecture.
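      Here is a minimal sketch of what such inheritance might look like; the class and method names are purely illustrative:
Imports System.Web.Services

' A base service that supplies common plumbing
Public Class BaseService
    Inherits WebService

    <WebMethod()> Public Function Ping() As String
        Return "alive"
    End Function
End Class

' A derived service picks up Ping along with its own methods
Public Class OrderService
    Inherits BaseService

    <WebMethod()> Public Function GetOrderCount() As Integer
        Return 42   ' placeholder value
    End Function
End Class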
      This brings up a design point. I recently suggested to clients that they treat Web Services as part of either the user interface or the facade layer of their applications. The idea is to keep the business logic in components that your other applications can use, and then call those components from both the Web Services and the applications. The result is no duplicate code and applications that are easier to create: Web applications, Windows-based applications, and Web Services can all share the same code, which saves a great deal of time. Using components this way reduces the need to inherit from Web Services, but you can still use inheritance in many other places.

Serialization

      Many applications use serialization in their components. Serialization lets a class serialize (stream or dump) its data into some type of stream and then deserialize itself later from the same type of stream, essentially reloading the class from its data. This is handy in many scenarios when you need to either persist the data from a class or stream the data. For instance, perhaps you need to save the data in a class to disk and reload it later. Or you need to stream the data from a class, send it over the wire using Web Services, remoting, or Microsoft Message Queuing (MSMQ), then deserialize the class from that stream on the other end.
      The .NET Framework includes features to let classes automatically serialize and deserialize themselves. But can a class serialize itself into binary data instead of XML? Yes, it can. The BinaryFormatter class (in the System.Runtime.Serialization.Formatters.Binary namespace) serializes an object into a binary stream. In addition, the BinaryMessageFormatter (in System.Messaging) can serialize a class into a binary message to be sent over MSMQ. Both of these binary formatters produce a compact stream of serialized data that can be created and read quickly.
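      Here is a minimal sketch of binary serialization with the BinaryFormatter; the Customer class and file name are illustrative:
Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary

<Serializable()> Public Class Customer
    Public Name As String
End Class

Module SerializationSample
    Public Sub SaveAndLoad()
        Dim oCust As New Customer()
        oCust.Name = "Ken"

        Dim oFormatter As New BinaryFormatter()

        ' Serialize the object to a binary file
        Dim oOut As New FileStream("customer.bin", FileMode.Create)
        oFormatter.Serialize(oOut, oCust)
        oOut.Close()

        ' Later, deserialize it back into an object
        Dim oIn As New FileStream("customer.bin", FileMode.Open)
        Dim oCopy As Customer = CType(oFormatter.Deserialize(oIn), Customer)
        oIn.Close()
    End Sub
End Module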

Conclusion

      Like other features of the .NET Framework, the features I discussed this month allow you to build powerful, high-performance applications very quickly. I am continually amazed at what I find when I crack open the SDK or browse through the files in Visual Studio .NET. I'm sure you'll be hearing lots more about the goodies you can find in .NET.

Send questions and comments for Ken to basics@microsoft.com.
Ken Spencer works for the 32X Tech Corporation (http://www.32X.com), which produces a line of high-quality developer courseware. Ken also spends much of his time consulting or teaching private courses.
