Using ODBC with Microsoft SQL Server


Amrish Kumar and Alan Brewer
Microsoft Corporation

September 1997
Updated May 5, 2010


An Application Programming Interface (API) is a definition of the set of functions an application can use to access a system resource. ODBC is a database API based on the Call Level Interface (CLI) API definition published by the standards organizations X/Open (as a CAE specification) and ISO/IEC. ODBC applications can access data in many of today's leading database management systems (DBMSs) by using ODBC drivers written to access those DBMSs. The application calls the ODBC API functions using standard ODBC SQL syntax; the ODBC driver makes any necessary translations, sends the statement to the DBMS, and presents the results back to the application.

This paper describes how application programmers using the ODBC API can optimize access to Microsoft® SQL Server® when using the Microsoft SQL Server ODBC driver. The paper also discusses issues commonly raised by customers who have called Microsoft Support for help with the SQL Server ODBC driver. This paper is not a tutorial on ODBC programming in general, nor is it a comprehensive discussion about performance tuning on SQL Server. It assumes the reader is already familiar with ODBC programming and the use of SQL Server. For more information about ODBC, see the Microsoft ODBC 2.0 Programmer's Reference and SDK Guide available on MSDN and from Microsoft Press®, and Inside ODBC by Kyle Geiger, also available from Microsoft Press. For more information about SQL Server, see the SQL Server documentation.

Except where noted, users should assume that this paper is discussing the operation of Microsoft SQL Server version 6.5 and its associated version 2.65 ODBC driver. This paper uses the ODBC version 2.5 API functions because version 2.5 is the version used by most existing applications and is also the version of the ODBC SDK that ships with Microsoft SQL Server Workstation version 6.5. Programmers writing ODBC 3.0 applications should refer to the Microsoft ODBC 3.0 Software Development Kit and Programmer's Reference.

Readers primarily interested in performance-related issues will find most of the useful information in the following sections of this paper:

  • "General Good Practices"
  • "Choosing a Cursor Option"
  • "SQLExecDirect vs. SQLPrepare/SQLExecute"
  • "Batching Procedure Calls"
  • "Text and Image Data"


The Microsoft SQL Server ODBC driver uses the standard SQL Server components for communicating from a client application to the database server. Rather than being implemented as a new layer over SQL Server's older native API, DB-Library, the ODBC driver writes directly to the same Network-Library (Net-Library) layer used by DB-Library. The ODBC driver is implemented as a native API to SQL Server and is a functional replacement of the DB-Library DLL. The components involved in accessing a SQL Server from an ODBC application are described in the following sections.


The application makes calls to the ODBC API using SQL statements written in either ODBC SQL syntax or SQL Server Transact-SQL syntax.

ODBC Driver Manager

The ODBC driver manager is a very thin layer that manages the communications between the application and any ODBC drivers with which the application works. The driver manager primarily loads the modules comprising the driver and then passes all ODBC requests to the driver. There are Win32® and Win16 application programming interface versions of the driver manager. The Win32 driver manager is Odbc32.dll; the Win16 driver manager is Odbc.dll.

SQL Server ODBC Driver

The SQL Server ODBC driver is a single DLL that responds to all calls the application makes to the ODBC API. If the SQL statements from the application contain ANSI or ODBC SQL syntax that is not supported by SQL Server, the driver translates the statements into Transact-SQL syntax (the amount of translation is usually minimal) and then passes the statement to the server. The driver also presents all results back to the application. The Win32 SQL Server ODBC driver is Sqlsrv32.dll; the Win16 driver is Sqlsrvr.dll.

SQL Server Client Network Library

The driver communicates with the server through the SQL Server Net-Libraries using the SQL Server application-level protocol called Tabular Data Stream (TDS). The SQL Server TDS protocol is a half-duplex protocol with self-contained result sets (that contain both metadata and data) optimized for database access.

There is a different Net-Library for each protocol SQL Server supports. The job of the Net-Library is to process TDS packets from the driver while insulating the driver from details of the underlying protocol stack. A SQL Server Net-Library accesses a network protocol by calling a network API supported by the protocol stack. The Net-Libraries supplied by SQL Server for use by SQL Server client applications are listed in the following table.

Net-Library               Win32 DLL      Win16 DLL
TCP/IP Windows Sockets    Dbmssocn.dll   Dbmssoc3.dll
Named pipes               Dbnmpntw.dll   Dbnmp3.dll
Novell SPX/IPX            Dbmsspxn.dll   Dbmsspx3.dll
Banyan Vines              Dbmsvinn.dll   Dbmsvin3.dll

Network Protocol Stack

The network protocol stack transports the TDS packets between the client and the server. The protocol stack has components on both the client and the server.

Server Net-Library

The server Net-Libraries work on the server, passing TDS packets back and forth between SQL Server and its clients. Each SQL Server can work simultaneously with any of the server Net-Libraries installed on the server.

Open Data Services

Open Data Services supports an API defined for writing server applications. An Open Data Services application can either be a server that accepts connections and processes queries (such as SQL Server or a gateway to another DBMS), or it can be an extended stored procedure that allows DLLs written to the Open Data Services API to be run as stored procedures within SQL Server. Open Data Services receives the TDS packets from the underlying Net-Libraries and then passes the information to SQL Server by calling specific Open Data Services callback functions implemented in the SQL Server code. It also encapsulates the results coming back from the server in TDS packets that the Net-Library then sends back to the client.

SQL Server

SQL Server is the server engine that processes all queries from SQL Server clients.

Overall ODBC and SQL Server Architecture

The following illustration shows the overall ODBC and SQL Server architecture. It shows both a Win16 client using TCP/IP and a Win32 client using Novell connecting to the same server.


Performance of ODBC as a Native API

One of the persistent rumors about ODBC is that it is inherently slower than a native DBMS API. This reasoning is based on the assumption that ODBC drivers must be implemented as an extra layer over a native DBMS API, translating the ODBC statements coming from the application into the native DBMS API functions and SQL syntax. This translation effort adds extra processing compared with having the application call directly to the native API. This assumption is true for some ODBC drivers implemented over a native DBMS API, but the Microsoft SQL Server ODBC driver is not implemented this way.

The Microsoft SQL Server ODBC driver is a functional replacement of DB-Library. The SQL Server ODBC driver works with the underlying Net-Libraries in exactly the same manner as the DB-Library DLL. The Microsoft SQL Server ODBC driver has no dependence on the DB-Library DLL, and the driver will function correctly if DB-Library is not even present on the client.

Microsoft's testing has shown that the performance of ODBC-based and DB-Library–based SQL Server applications is roughly equal.

The following illustration compares the ODBC and DB-Library implementations.


Driver and SQL Server Versions

The following table shows which versions of the Microsoft SQL Server ODBC driver shipped with recent versions and service packs (SP) of Microsoft SQL Server. It also lists the operating system versions under which the drivers are certified to run and the versions of SQL Server against which they are certified to work.

Newer drivers recognize the capabilities of older databases and adjust to work with the features that exist in the older server. For example, if a user connects a version 2.65 driver to a version 4.21a server, the driver does not attempt to use ANSI or other options that did not exist in SQL Server 4.21a. Conversely, older drivers do not use the features available in newer servers.

For example, if a version 2.50 driver connects to a version 6.5 server, the driver has no code to use any new features or options introduced in the 6.5 server.

Driver version  Driver date  Shipped with SQL Server version  SQL Server  Operating systems supported
2.65.0252       06/16/97     6.5 SP3                          6.5         Windows NT 3.5, 3.51, 4.0; Windows 95; Windows for Workgroups 3.11; Windows 3.1
2.65.0240       12/30/96     6.5 SP2                          6.5         Windows NT 3.5, 3.51, 4.0; Windows 95; Windows for Workgroups 3.11; Windows 3.1
2.65.0213       07/30/96     6.5 SP1                          6.5         Windows NT 3.5, 3.51, 4.0; Windows 95; Windows for Workgroups 3.11; Windows 3.1
2.65.0201                    6.5                              6.5         Windows NT 3.5, 3.51, 4.0; Windows 95; Windows for Workgroups 3.11; Windows 3.1
2.50.0126       08/17/95     6.0 SP3, 6.0 SP2, 6.0 SP1        6.0         Windows NT 3.5, 3.51; Windows 95; Windows for Workgroups 3.11; Windows 3.1

Note: None of the Microsoft SQL Server ODBC drivers listed is certified to work with Sybase SQL Servers. Applications needing to connect to Sybase SQL Servers must get an ODBC driver certified for use with Sybase from either Sybase or a third-party ODBC driver vendor.

For more information about versions and Instcat.sql, see "Instcat.sql."

Setup and Connecting

An ODBC application has two methods of giving an ODBC driver the information the driver needs to connect to the proper server and database. Either the application can connect using an existing ODBC data source containing this information, or it can call SQLDriverConnect or SQLBrowseConnect, providing the information in the connection string parameter.
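When connecting through an existing data source, the application typically calls SQLConnect. A sketch, in outline rather than as a complete program (the data source name, login ID, and password are placeholder values):

```
/* Sketch: connect using a previously defined ODBC data source.
   "my65dsn", "sa", and "password" are placeholders. */
SQLAllocEnv(&henv);
SQLAllocConnect(henv, &hdbc);
retcode = SQLConnect(hdbc,
   (UCHAR *)"my65dsn", SQL_NTS,    /* data source name */
   (UCHAR *)"sa", SQL_NTS,         /* login ID */
   (UCHAR *)"password", SQL_NTS);  /* password */
```

The connection-string alternative, SQLDriverConnect, is shown later in "Driver-specific SQLDriverConnect Keywords."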

Setting up a Data Source

ODBC data sources contain information that tells a driver how to connect to a database. ODBC data sources can be created by using the ODBC Administrator application in Control Panel or by an application calling the ODBC SQLConfigDataSource function.

Data source definitions are stored in C:\Windows\System\Odbc.ini for the Microsoft Windows® version 3.x and Windows for Workgroups version 3.x operating systems.

Win32 data sources fall into one of two categories (for details, see Microsoft Knowledge Base article Q136481):

  • Windows NT user-specific data sources and Windows 95 data sources

    On the Microsoft Windows NT® operating system, user data sources are specific to the Windows NT account under which they were defined. User-specific data sources are not always visible to applications running as Windows NT services. User-specific data sources and Windows 95 data sources are stored in the following registry key:

    HKEY_CURRENT_USER\Software\ODBC\Odbc.ini

  • Windows NT system data sources

    On Windows NT, system data sources are visible to all Windows NT accounts on the computer. System data sources are always visible to applications running as Windows NT services. The ODBC driver manager that ships with Microsoft Office 97 also supports system data sources on Windows 95 clients. Windows NT system data sources are stored in the following registry key:

    HKEY_LOCAL_MACHINE\Software\ODBC\Odbc.ini

Information about the drivers installed on a client is stored in C:\Windows\System\Odbcinst.ini in Windows 3.x or Windows for Workgroups 3.x and in HKEY_LOCAL_MACHINE\Software\ODBC\Odbcinst.ini in Windows NT and Windows 95.

Each driver needs to store driver-specific information in its data sources. When a user adds a data source using ODBC Administrator, the driver displays a dialog box, where the user specifies data source information. When a data source is defined with SQLConfigDataSource, the function accepts an attribute string parameter that can contain driver-specific keywords. All of the SQLConfigDataSource driver-specific keywords for the SQL Server ODBC driver have counterparts in the dialog box that displays when using ODBC Administrator.

Here's an example SQLConfigDataSource call that sets up a SQL Server data source referencing a server using DHCP on TCP/IP:

RETCODE   retcode;
UCHAR   *szDriver = "SQL Server";
UCHAR   *szAttributes =
   "DSN=my65dsn\0DESCRIPTION=SQLConfigDSN Sample\0"
   "SERVER=my65server\0NETWORK=dbmssocn\0DATABASE=pubs\0";
   /* my65server and pubs are example values */
retcode = SQLConfigDataSource(NULL, ODBC_ADD_DSN,
   szDriver, szAttributes);

Driver-specific SQLConfigDataSource Keywords

The following sections describe the driver-specific keywords supported by the Microsoft SQL Server ODBC driver.


SERVER, NETWORK, and ADDRESS

The SERVER, NETWORK, and ADDRESS parameters associate a data source with a specific instance of SQL Server on the network. These parameters are directly related to the advanced entries created with the SQL Server Client Configuration Utility:

  • The SERVER parameter specifies a name or label for the connection entry.
  • The NETWORK parameter is the name of the Net-Library module to use, without the .dll suffix (for example, Dbmssocn, not Dbmssocn.dll).
  • The ADDRESS parameter is the network address of the Windows NT server running SQL Server.

If ADDRESS is present, it is always used as the network address for the connection. If ADDRESS is not present, then SERVER is used as the network address for the connection.

Here's an example entry to make a named pipes connection to a server (my65server is a placeholder name):

SERVER=my65server,NETWORK=dbnmpntw,ADDRESS=\\my65server\pipe\sql\query

The following entry evaluates to the same network address, because the named pipes Net-Library derives the pipe name from the SERVER value when no ADDRESS is given:

SERVER=my65server,NETWORK=dbnmpntw

Here's an example entry to make a sockets connection to the same computer:

SERVER=my65server,NETWORK=dbmssocn,ADDRESS=my65server
There are two special cases to consider:

  • Connecting to a SQL Server running on the same computer as the client.

    The ODBC data source for this case is specified as:

    SERVER=(local)

    When using this data source, the driver attempts to connect to a SQL Server on the same computer using Windows NT local named pipes instead of a network implementation of named pipes.

  • Setting up a data source that connects to a server using whatever Net-Library is currently set as the default on the client.

    An example of an entry for this case is (my65server is a placeholder name):

    SERVER=my65server

    The default Net-Library is set using the SQL Server Client Configuration Utility.

The SERVER, NETWORK, and ADDRESS parameters specified on SQL Server ODBC driver data sources operate the same way as the Server, DLL, and Connection String parameters specified for advanced entries made with the SQL Server Client Configuration Utility. For more information about the advanced-entry parameters, see the Microsoft SQL Server Administrator's Companion. The same parameters can be specified in the data source creation dialog box displayed in ODBC Administrator.

The relationship between the parameters is illustrated in the following table.

SQLConfigDataSource   ODBC Administrator   SQL Client Configuration Utility
NETWORK               Network Library      DLL
ADDRESS               Network Address      Connection String

If a data source is defined with the SERVER, NETWORK, and ADDRESS parameters, a SQL Server advanced connection entry is made in the registry, and can be viewed using the SQL Client Configuration Utility.


DATABASE

This parameter specifies the default database for the ODBC data source.


LANGUAGE

This parameter specifies the default national language to use.


OEMTOANSI

This parameter specifies whether to convert extended characters to OEM values.

SQL Server is usually run with one of three code pages:

  • 437 code page.

    The default code page for U.S. MS-DOS computers.

  • 850 code page.

    The code page typically used by UNIX systems.

  • ISO 8859-1 (Latin1 or ANSI) code page.

    The code page defined as a standard by the ANSI and ISO standards organizations. The default code page for U.S. Windows computers. Sometimes called the 1252 code page.

The 437 and 850 code pages are sometimes collectively referred to as the OEM code pages.

All three code pages define 256 different values to use in representing characters. The values from 0 through 127 represent the same characters in all three code pages. The values from 128 through 255, which are known as the extended characters, represent different characters in the three code pages.

Because ODBC applications are Windows applications, they generally use ANSI code page 1252. If they are communicating with a SQL Server also running ANSI code page 1252, there is no need for character-set conversion. If they connect to a server running a 437 or 850 code page, however, the driver must be informed that it should convert extended characters from their 1252 values to 437 or 850 values before sending them to the server. In this case, the data source should have OEMTOANSI=YES. For a more in-depth discussion of SQL Server code pages, see Microsoft Knowledge Base article Q153449.


TranslationDLL

This parameter specifies the name of the ODBC translation DLL to use with the data source.


TranslationName

This parameter specifies the name of the translator to use with the data source.


TranslationOption

This parameter specifies whether translation should be done on the data going to SQL Server. YES specifies translation; NO specifies no translation. For more information about ODBC translation, see the ODBC 2.0 Programmer's Reference.


UseProcForPrepare

This parameter specifies whether the driver generates stored procedures to support the ODBC SQLPrepare function. For more information, see "SQLExecDirect vs. SQLPrepare/SQLExecute."

The following driver-specific SQLConfigDataSource keywords are new in SQL Server 6.5 SP2.


QuotedId

This parameter specifies whether the driver should issue a SET QUOTED_IDENTIFIER ON option when connecting to a SQL Server version 6.0 or later database. YES specifies QUOTED_IDENTIFIER is ON; NO specifies the option is OFF. For more information, see "SET Options Used by the Driver."


AnsiNPW

This parameter specifies whether the driver should SET ON the ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS options when connecting to a SQL Server version 6.5 or later database. YES specifies the options are ON; NO specifies they are OFF. For more information, see "SET Options Used by the Driver."

The following driver-specific SQLConfigDataSource keywords are new in SQL Server 6.5.


QueryLogFile

This parameter specifies the file name the driver should use to log long-running queries. Include the full path name for the file. For more information, see "ODBC Driver Profiling Features."


QueryLog_On

This parameter specifies whether the data source should do query profiling. 1 specifies profiling is done; omitting the parameter specifies no profiling. For more information, see "ODBC Driver Profiling Features."


QueryLogTime

This parameter specifies the interval that defines long-running queries. The interval is specified in milliseconds. If a query is outstanding for a period exceeding the QueryLogTime interval, it is written to the QueryLogFile. For more information, see "ODBC Driver Profiling Features."


StatsLogFile

This parameter specifies the file name the driver should use to log performance statistics. Include the full path name for the file. For more information, see "ODBC Driver Profiling Features."


StatsLog_On

This parameter specifies whether the data source should log performance statistics. 1 specifies profiling is done; omitting the parameter specifies no profiling. For more information, see "ODBC Driver Profiling Features."


Trusted_Connection

This parameter specifies whether the data source should use trusted connections when connecting to SQL Server. 1 specifies trusted connections; omitting the parameter specifies no trusted connections. For more information, see "Integrated and Standard Security."

Creating Data Sources in ODBC Administrator

When you add, modify, or double-click a SQL Server data source in ODBC Administrator, the SQL Server ODBC driver displays the ODBC SQL Server Setup dialog box. The parameters in this dialog box control the same features that are controlled by the SQLConfigDataSource keywords earlier in this paper, although they have slightly different names. Many of the options are in the dialog box that displays when you click Options. To specify the query and performance profiling options, click Options, and then click Profiling.

Driver-specific SQLDriverConnect Keywords

An ODBC application can connect to a SQL Server without referencing a data source:

RETCODE   retcode;
UCHAR     szConnStrOut[MAXBUFLEN];
UCHAR    *szConnStrIn =
   "DRIVER={SQL Server};SERVER=MyServer;"
   "UID=sa;PWD=password;DATABASE=pubs;";
SWORD     swStrLen;
retcode = SQLDriverConnect(hdbc1, NULL,
   szConnStrIn, SQL_NTS, szConnStrOut, MAXBUFLEN,
   &swStrLen, SQL_DRIVER_NOPROMPT);

The SQL Server ODBC driver supports three classes of keywords on SQLDriverConnect:

  • The standard ODBC keywords

    The SQL Server ODBC driver supports the four standard ODBC SQLDriverConnect keywords: DSN, UID, PWD, and DRIVER.

  • The driver-specific SQLConfigDataSource keywords

    On SQLDriverConnect the SQL Server ODBC driver supports all of the driver-specific keywords it supports for SQLConfigDataSource. See the list earlier in this paper for a description of these driver-specific keywords.

  • The driver-specific keywords APP and WSID

    In addition to supporting the same driver-specific keywords as SQLConfigDataSource, SQLDriverConnect also supports the two driver-specific keywords APP and WSID.


APP

This keyword specifies the application name to be recorded in the program_name column in master.dbo.sysprocesses. APP is equivalent to a DB-Library application calling the DBSETLAPP function in C or the SQLSetLApp function in the Visual Basic® programming system.


WSID

This keyword specifies the workstation name to be recorded in the hostname column in master.dbo.sysprocesses. WSID is equivalent to a DB-Library application calling the DBSETLHOST function in C or the SQLSetLHost function in Visual Basic.

Connection Messages

The SQL Server ODBC driver returns SQL_SUCCESS_WITH_INFO on a successful SQLConnect, SQLDriverConnect, or SQLBrowseConnect. When an ODBC application calls SQLError after getting SQL_SUCCESS_WITH_INFO, it can receive the following messages:

  • 5701 indicates that SQL Server initially set the user's context to the default database defined at the server for the login ID used in the connection.
  • 5703 indicates the language being used on the server.
  • If the ODBC data source has a default database specified, or the application specified the DATABASE keyword on SQLDriverConnect or SQLBrowseConnect, there is a second 5701 message indicating that the user's context has been switched to the requested database.

The following example shows these messages being returned on a successful connect by the System Administrator (SA) login. The SA login has its default database at the server defined as the master database, the server is running US English, and the connect used an ODBC data source that specified pubs as the default database.

Full Connect:
   szSqlState = "01000", *pfNativeError = 5701,
   szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
          Changed database context to 'master'."
   szSqlState = "01000", *pfNativeError = 5703,
   szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
          Changed language setting to 'us_english'."
   szSqlState = "01000", *pfNativeError = 5701,
   szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
          Changed database context to 'pubs'."
Successfully connected to DSN 'my60server'.

Applications can ignore these 5701 and 5703 messages; they are informational only. Applications cannot, however, ignore a return code of SQL_SUCCESS_WITH_INFO on SQLConnect, SQLDriverConnect, or SQLBrowseConnect, because messages other than 5701 and 5703 that do require action may be returned. For example, if a driver connects to a SQL Server with outdated system stored procedures, one of the messages returned through SQLError is:

SqlState:   01000
pfNative:   0
szErrorMsg: "[Microsoft][ODBC SQL Server Driver]The ODBC
            catalog stored procedures installed on server
            my421server are version 02.00.4127; version 06.00.0115
            or later is required to ensure proper operation.
            Please contact your system administrator."

An application's error handling routine for SQL Server connections should call SQLError until it returns SQL_NO_DATA_FOUND and act on any messages whose pfNative code is not 5701 or 5703.

Integrated and Standard Security

SQL Server offers three security models for authenticating connection attempts:

  • Standard security

    The SA defines SQL Server logins with passwords in SQL Server and then associates the logins with users in individual databases. With older versions of SQL Server, all connection attempts must specify a valid login and password. SQL Server version 6.0 or 6.5 also allows trusted connections to a server running standard security. SQL Server logins are separate from Windows NT user IDs.

  • Integrated security

    The SA defines logins for those Windows NT user accounts that are allowed to connect to SQL Server. Users do not have to specify a separate login and password when they connect to SQL Server after logging on to the Windows NT network. When they attempt to connect, the Net-Library attempts a trusted connection to SQL Server. If the user's Windows NT account is one that the SA specified to SQL Server, the connection succeeds.

  • Mixed security

    The SA defines both SQL Server logins and Windows NT accounts as SQL Server logins. Users with validated Windows NT accounts can connect using trusted connections; other users can connect using standard security with the SQL Server logins.

The SQL Server ODBC driver always uses a trusted connection when connecting to a server running integrated security. The driver can also be instructed to open trusted connections when connecting to a server that is running with standard or mixed security. Only the named pipes or multiprotocol Net-Libraries support integrated security and trusted connections.

There are two ways to tell the driver to use trusted connections:

  • Driver-specific data source options

    When defining a data source using the ODBC Administrator, you can select Use Trusted Connection. When defining a data source using SQLConfigDataSource, an application can specify Trusted_Connection=1.

  • Driver-specific connection options

    Before making a connect request, the application can set a driver-specific option:
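A sketch of what that call looks like; the option and value constants come from the driver-specific header, and their exact names vary by driver version, so treat them as assumptions:

```
/* Sketch only: request a trusted connection before connecting.
   SQL_INTEGRATED_SECURITY and SQL_IS_ON are driver-specific
   constants (assumed names); verify against the odbcss.h header
   shipped with your driver version. */
SQLSetConnectOption(hdbc, SQL_INTEGRATED_SECURITY, SQL_IS_ON);
```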


Integrated security offers several benefits:

  • Passwords do not need to be stored in the application.
  • Passwords are never present in the SQL Server TDS packets.
  • Integrated security is easy to administer because the SA can use the SQL Security Manager utility to create SQL Server logins from existing Windows NT accounts.

Protocol Considerations

Integrated security is available only when using the named pipes or multiprotocol Net-Libraries. When using the multiprotocol Net-Library, the SA can also configure the server to encrypt packets sent across the network, so that even users of network sniffers cannot see the data. The named pipes and multiprotocol Net-Libraries can work with a TCP/IP, SPX/IPX, or NetBEUI protocol stack. This means a client running only a TCP/IP protocol stack can use the Windows Sockets, named pipes, or multiprotocol Net-Libraries. The Windows Sockets (TCP/IP), SPX/IPX, AppleTalk, DECnet, and Banyan Vines Net-Libraries work only with their single associated protocol stack.

Due to their added functionality, such as the encryption feature, the multiprotocol Net-Libraries are somewhat slower than the others. Testing at Microsoft has found that the TCP/IP Net-Libraries are somewhat faster than the other Net-Libraries. Other considerations, however, such as database design, indexing, and the design of queries and applications, usually have a greater impact on performance than the choice of a Net-Library.

Applications running against SQL Server 6.0 or 6.5 can sometimes improve their performance by resetting the TDS network packet size. The default packet size is set at the server and is 4K, which generally gives the best performance. Applications can set the packet size themselves if testing shows that they perform better with a different packet size. ODBC applications can do this by calling SQLSetConnectOption with the SQL_PACKET_SIZE option before connecting. Some applications may perform better with a larger packet size, but performance improvements are generally minimal for packet sizes larger than 8K.
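In outline, the call sequence is as follows, using the ODBC 2.x function SQLSetConnectOption; 8192 is only an example value, and the data source name, login, and password are placeholders:

```
/* Sketch: set the TDS packet size before connecting.
   8192 is an example value; test to find the best size
   for a given workload. */
SQLSetConnectOption(hdbc, SQL_PACKET_SIZE, (UDWORD)8192);
retcode = SQLConnect(hdbc, (UCHAR *)"my65dsn", SQL_NTS,
   (UCHAR *)"sa", SQL_NTS, (UCHAR *)"password", SQL_NTS);
```

The option must be set before the connection is opened; once connected, the packet size for that connection is fixed.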

Verifying and Testing Data Sources

The Odbcping.exe utility can be used to check whether an ODBC connection can be made between a client and a SQL Server. The command syntax to use the utility is:

odbcping {/Sservername | /Ddatasource} /Ulogin_id /Ppassword


/Sservername
   Is the network name of the server running SQL Server.

/Ddatasource
   Is the name of an ODBC data source.

/Ulogin_id
   Is the SQL Server login ID.

/Ppassword
   Is the login password.

You must specify either /S or /D, but not both. (The version of odbcping that ships with SQL Server 6.0 will not accept the /D parameter, only /S, /U, and /P.)

When odbcping makes a successful connection, it displays a message indicating the connection was successful and the versions of the driver and server. For example:

ODBC SQL Server Driver Version: 02.65.0201
SQL Server Version: SQL Server for Windows NT 6.50 - 6.50.201 (Intel X86)
   Apr 3 1996 02:55:53
   Copyright (c) 1988-1997 Microsoft Corporation

If the connect attempt is not successful, odbcping displays the errors it receives. (The 6.0 version of odbcping does not display the Native Error code.) For example:

SQLState: 01000  Native Error: 2
Error Message: [Microsoft][ODBC SQL Server Driver][dbnmpntw]
                       ConnectionOpen (CreateFile()).
SQLState: 08001  Native Error: 6
Error Message: [Microsoft][ODBC SQL Server Driver][dbnmpntw]
                      Specified SQL Server not found.

The pfNative (or Native Error) code is important in diagnosing connection problems. For more information, see "pfNative Error Codes."

Processing Queries and Results

General Good Practices

The following sections discuss general practices that will increase the performance of SQL Server ODBC applications. Many of the concepts apply to database applications in general.

Columns in a Result Set

Applications should select only the columns needed to perform the task at hand. Not only does this reduce the amount of data sent across the network, it also reduces the impact of database changes on the application. If an application does not reference a column from a table, then the application is not affected by any changes made to that column.

Stored Procedures

Sites can realize performance gains by coding most of their SQL statements into stored procedures and having applications call the stored procedures rather than issuing the SQL statements themselves. This offers the following benefits:

  • Higher performance

    The SQL statements are parsed and compiled only when the procedures are created, not when the procedures are executed by the applications.

  • Reduced network overhead

    Having an application execute a procedure instead of sending sometimes complex queries across the network can reduce the traffic on the network. If an ODBC application uses the ODBC { CALL MyProcedure} syntax to execute a stored procedure, the ODBC driver makes additional optimizations that eliminate the need to convert parameter data (for more information, see "ODBC Call vs. Transact-SQL EXECUTE").

  • Better consistency

    The organization's business rules can be coded and debugged once in a stored procedure, and they will then be consistently applied by all of the applications. The site does not have to depend on all application programmers coding their SQL statements correctly in all the applications.

  • Better accuracy

    Most sites will have their best SQL programmers developing stored procedures. This means that the SQL statements in procedures tend to be more efficient and have fewer errors than when the code is developed multiple times by programmers of varying skill levels.

The Enterprise versions of the Microsoft Visual C++® development system and Microsoft Visual Basic® programming system also offer a new SQL debugger tool. With SQL Debugger, programmers can use the standard debugger facilities of their programming environment, such as setting break points and watching variables, to debug their SQL Server stored procedures.
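A sketch of executing a stored procedure through the ODBC CALL escape sequence mentioned above, with one bound input parameter; the procedure name and its integer parameter are hypothetical:

```
/* Sketch: execute a stored procedure using the ODBC CALL syntax.
   "MyProcedure" and its single integer parameter are hypothetical. */
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG,
   SQL_INTEGER, 0, 0, &lAuthorId, 0, NULL);
retcode = SQLExecDirect(hstmt,
   (UCHAR *)"{call MyProcedure(?)}", SQL_NTS);
```

Using the CALL syntax, rather than a Transact-SQL EXECUTE statement, lets the driver apply the parameter optimizations discussed in "ODBC Call vs. Transact-SQL EXECUTE."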

Batches

An application that builds several SQL statements to execute realizes better performance if it batches the statements together and sends them to the server all at once. This will reduce the number of network roundtrips the application uses to perform the same work. For example:

SQLExecDirect(hstmt, "select * from authors; select * from titles", SQL_NTS);

The application then uses SQLMoreResults to position on the next result set when it is finished with the current result set.
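
A sketch of this pattern, assuming hstmt is a valid statement handle and omitting binding and error handling:

```c
RETCODE retcode;

// Send two SELECT statements to the server in a single batch,
// using one network roundtrip instead of two.
retcode = SQLExecDirect(hstmt,
              "select * from authors; select * from titles", SQL_NTS);

do
{
    // Fetch all the rows of the current result set.
    while (SQLFetch(hstmt) == SQL_SUCCESS)
    {
        // ... process the current row ...
    }
} while (SQLMoreResults(hstmt) == SQL_SUCCESS);  // Position on the next result set
```
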

SQLBindCol and SQLGetData

Excessive use of SQLBindCol to bind a result set column to a program variable is expensive because SQLBindCol causes the ODBC driver to allocate memory. When an application binds a result column to a variable, that binding remains in effect until the application calls SQLFreeStmt with fOption set to SQL_DROP or SQL_UNBIND. The bindings are not automatically undone when the statement completes.

This logic allows applications to effectively deal with situations where they may execute the same SELECT statement several times with different parameters. Since the result set will keep the same structure, the application can bind the result set once, process all the different SELECT statements, then do a SQLFreeStmt with fOption set to SQL_UNBIND after the last execution. Applications should not call SQLBindCol to bind the columns in a result set without first calling SQLFreeStmt with fOption set to SQL_UNBIND to free any previous bindings.
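
The pattern above might be sketched as follows, assuming a parameterized SELECT returning one character column has already been prepared (the variable names are illustrative):

```c
UCHAR  szLastName[41];
SDWORD cbLastName;
int    i, cExecs = 10;

// Bind the result column once; the binding survives across executions.
SQLBindCol(hstmt, 1, SQL_C_CHAR, szLastName, sizeof(szLastName), &cbLastName);

for (i = 0; i < cExecs; i++)
{
    // Each execution returns a result set with the same structure,
    // so the existing binding is reused without rebinding.
    SQLExecute(hstmt);
    while (SQLFetch(hstmt) == SQL_SUCCESS)
    {
        // ... use szLastName ...
    }
}

// Free the column bindings once, after the last execution.
SQLFreeStmt(hstmt, SQL_UNBIND);
```
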

When using SQLBindCol, applications can either do row-wise or column-wise binding. Row-wise binding is somewhat faster than column-wise binding.

Applications can use SQLGetData to retrieve data on a column-by-column basis instead of binding the result set columns using SQLBindCol. If a result set contains only a few rows, using SQLGetData instead of SQLBindCol is faster; otherwise, SQLBindCol gives the best performance. If an application does not always put the data in the same set of variables, it should use SQLGetData instead of constantly rebinding. Applications can use SQLGetData only on columns that appear in the select list after all of the columns bound with SQLBindCol. The column must also appear after any columns on which the application has already used SQLGetData.

Data Conversion

The ODBC functions dealing with moving data into or out of program variables, such as SQLBindCol, SQLBindParameter, and SQLGetData, allow implicit conversion of data types. For example, an application that displays a numeric column can ask the driver to convert the data from numeric to character:

retcode = SQLBindCol(hstmt1,
                     1,                  // The first column, an integer
                     SQL_C_CHAR,         // Convert it to a character string
                     szCharVar,          // Program variable to hold the data
                     sizeof(szCharVar),
                     &cbCharVar);
retcode = SQLFetch(hstmt1);
printf("fetched row cola = %s\n", szCharVar);

Applications should minimize data conversions. Unless data conversion is a required part of the application, the application should bind columns to a program variable of the same data type as the column in the result set.

If the application needs to have the data converted, it is more efficient for the application to ask the driver to do the data conversion than for the application to do it.

The driver normally just transfers data directly from the network buffer to the application's variables. Requesting the driver to perform data translation forces the driver to buffer the data and use CPU cycles to perform the conversion.

Data Truncation

If an application attempts to retrieve data into a variable that is too small to hold it, the driver generates a warning. The driver must allocate memory for the warning messages and spend CPU resources on some error handling. This can all be avoided if the application allocates variables large enough to hold the data from the columns in the result set, or uses the SUBSTRING function in the select list to reduce the size of the columns in the result set.
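
A sketch of how an application might detect a truncation warning after a fetch (SQLState 01004, "Data truncated"); the buffer names are illustrative:

```c
UCHAR   szSqlState[6], szErrorMsg[SQL_MAX_MESSAGE_LENGTH];
SDWORD  fNativeError;
SWORD   cbErrorMsg;
RETCODE retcode;

retcode = SQLFetch(hstmt);
if (retcode == SQL_SUCCESS_WITH_INFO)
{
    // Retrieve the warning; 01004 indicates a bound buffer was too small.
    SQLError(SQL_NULL_HENV, SQL_NULL_HDBC, hstmt, szSqlState, &fNativeError,
             szErrorMsg, sizeof(szErrorMsg), &cbErrorMsg);
    if (strcmp((char *)szSqlState, "01004") == 0)
    {
        // Data was truncated; enlarge the variable or use SUBSTRING
        // in the select list.
    }
}
```
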

Query Options

Adjusting timeout intervals can keep an application from waiting indefinitely on slow connections or runaway queries. Also, having different settings for some ODBC statement and connection options among several open connection or statement handles can generate excess network traffic.

Calling SQLSetConnectOption with fOption set to SQL_LOGIN_TIMEOUT controls how long the driver waits for a connection attempt to complete before timing out (0 specifies an infinite wait). Sites with slow response times can set this value high to ensure connections have sufficient time to complete, but the interval should always be low enough to give the user a response in a reasonable amount of time if the driver cannot connect.

Calling SQLSetStmtOption with fOption set to SQL_QUERY_TIMEOUT sets a query timeout interval to protect the server and the user from long running queries.

Calling SQLSetStmtOption with fOption set to SQL_MAX_LENGTH limits the amount of text and image data that an individual statement can retrieve. Calling SQLSetStmtOption with fOption set to SQL_MAX_ROWS also limits a result set to the first n rows if that is all the application needs. Note that setting SQL_MAX_ROWS causes the driver to issue a SET ROWCOUNT statement to the server, which will affect all SQL statements, including triggers and updates.

Care should be used when setting these options, however. It is best if all statement handles on a connection handle have the same settings for SQL_MAX_LENGTH and SQL_MAX_ROWS. If the driver switches from a statement handle to another with different values for these options, the driver must generate the appropriate SET TEXTSIZE and SET ROWCOUNT statements to change the settings. The driver cannot put these statements in the same batch as the user SQL since the user SQL can contain a statement that must be the first statement in a batch, therefore the driver must send the SET TEXTSIZE and SET ROWCOUNT statements in a separate batch, which automatically generates an extra roundtrip to the server.
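
The options above can be set as in the following sketch; the interval and size values are illustrative only:

```c
// Connection option: give up a connection attempt after 15 seconds.
SQLSetConnectOption(hdbc, SQL_LOGIN_TIMEOUT, 15);

// Statement options: cancel long-running queries after 30 seconds,
// cap text/image data at 64K (issues SET TEXTSIZE), and limit the
// result set to the first 100 rows (issues SET ROWCOUNT).
SQLSetStmtOption(hstmt, SQL_QUERY_TIMEOUT, 30);
SQLSetStmtOption(hstmt, SQL_MAX_LENGTH, 65536);
SQLSetStmtOption(hstmt, SQL_MAX_ROWS, 100);
```
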

SET NOCOUNT ON

Applications can execute the Transact-SQL statement SET NOCOUNT ON. When this option is on, SQL Server does not return an indication of how many rows were affected by data-modification statements, or by any statements within stored procedures. When SET NOCOUNT is ON, the driver does not receive the information it needs to return the number of rows affected if the application calls SQLRowCount after a data-modification statement.

All statements executed in a stored procedure, including SELECT statements, generate an "x rows affected" message. Issuing a SET NOCOUNT ON at the start of a large stored procedure can significantly reduce the network traffic between the server and client and improve performance by eliminating these messages. These messages are typically not needed by the application when it is executing a stored procedure.

Cursors

Starting with SQL Server 6.0, the SQL Server ODBC driver supports the ODBC cursor options by using server cursors.

Cursor Types

The ODBC standard assumes that a cursor is automatically opened on each result set and, therefore, does not make a distinction between a result set and a cursor. SQL Server SELECT statements, however, always return a result set. A SQL Server cursor is a separate entity created when the application needs to perform cursor operations such as scrolling and positioned updates.

In the ODBC model, all SQL statements return a result set within a cursor, and an application retrieves rows through the cursor using either SQLFetch or SQLExtendedFetch. Before executing an SQL statement, an ODBC application can call SQLSetStmtOption to set statement options that control the cursor's behavior. These are the default settings for the cursor options:

  • SQL_CURSOR_TYPE = SQL_CURSOR_FORWARD_ONLY
  • SQL_CONCURRENCY = SQL_CONCUR_READ_ONLY
  • SQL_ROWSET_SIZE = 1

When running with these default settings, the application can only use SQLFetch to fetch through the result set one row at a time from the start of the result set to the end. When running with these default settings, the SQL Server ODBC driver requests a default result set from the server. In a default result set, SQL Server sends the results back to the client in a very efficient, continuous stream. The calls to SQLFetch retrieve the rows out of the network buffers on the client.

It is possible to execute a query with these default settings and then change SQL_ROWSET_SIZE after the SQLExecDirect or SQLExecute completes. In this case, SQL Server still uses a default result set to efficiently send the results to the client, but the application can also use SQLExtendedFetch to retrieve multiple rows at a time from the network buffers.

An ODBC application can change the SQL_CURSOR_TYPE to request different cursor behaviors from the result set. The types of cursors that can be set are:

  • Static cursors

    In a static cursor, the complete result set is built when the cursor is opened. The cursor does not reflect any changes made in the database that affect either the rows in the result set, or the values in the columns of those rows. In other words, static cursors always display the result set as it was when the cursor was opened. If new rows have been inserted that satisfy the conditions of the cursor's SELECT statement, they do not appear in the cursor. If rows in the result set have been updated, the new data values do not appear in the cursor. Rows appear in the result set even if they have been deleted from the database. No UPDATE, INSERT, or DELETE operations are reflected in a static cursor (unless the cursor is closed and reopened), not even modifications made by the same user who opened the cursor. Static cursors are read-only.

  • Dynamic cursors

    Dynamic cursors are the opposite of static cursors; they reflect all changes made to the rows in their result set as the user scrolls around the cursor. In other words, the data values and membership of rows in the cursor can change dynamically on each FETCH. The cursor shows all DELETE, INSERT, and UPDATE statements either made by the user who opened the cursor or committed by other users. Dynamic cursors do not support FETCH ABSOLUTE because the size of the result set and the position of rows within the result set are not constant. The row that starts out as the tenth row in the result set may be the seventh row the next time a FETCH is performed.

  • Forward-only cursors

    This cursor is similar to a dynamic cursor, but it only supports fetching the rows serially in sequence from the start to the end of the cursor.

  • Keyset-driven cursor

    With a keyset-driven cursor, the membership of rows in the result set and their order are fixed when the cursor is opened. Keyset-driven cursors are controlled through a set of unique identifiers (keys) known as the keyset. The keys are built from a set of columns that uniquely identify the rows. The keyset is the set of key values from all the rows that were in the result set when the cursor was opened. Changes to data values in nonkeyset columns (made by the current user or committed by other users) are reflected in the rows as the user scrolls through the cursor. Inserts are not reflected unless the cursor is closed and reopened. Deletes generate an "invalid cursor position" error (SQLState S1109) if the application attempts to fetch the missing row. An update to a key-column value operates like a delete of the old key value followed by an insert of the new key value: attempts to fetch the old key value generate the S1109 error, and the new key value is not visible to the cursor.

  • Mixed cursors

    SQL Server does not support mixed cursors.

All ODBC cursors support the concept of a rowset, which is the number of rows returned on an individual SQLExtendedFetch. For example, if an application is presenting a 10-row grid to the user, the cursor can be defined with a rowset size of 10 to simplify mapping data into the grid.
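
For example, an application feeding a 10-row grid might request a keyset-driven cursor with a rowset size of 10 before executing its SELECT. This is a sketch with binding and error handling omitted:

```c
UDWORD  crow;
UWORD   rgfRowStatus[10];
RETCODE retcode;

// Request a keyset-driven server cursor with a rowset of 10 rows.
// The options must be set before the statement is executed.
SQLSetStmtOption(hstmt, SQL_CURSOR_TYPE, SQL_CURSOR_KEYSET_DRIVEN);
SQLSetStmtOption(hstmt, SQL_ROWSET_SIZE, 10);

retcode = SQLExecDirect(hstmt, "select au_id, au_lname from authors", SQL_NTS);

// Each call retrieves the next 10 rows into the bound arrays.
retcode = SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 1, &crow, rgfRowStatus);
```
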

Concurrency Option Overview

In addition to the cursor types, cursor operations are also affected by the concurrency options set by the application:

  • SQL_CONCUR_READ_ONLY

    With this option set, the cursor does not support UPDATE, INSERT, or DELETE statements. Locks are not held on the underlying rows that make up the result set.

  • SQL_CONCUR_VALUES

    This option offers optimistic concurrency control. Optimistic concurrency control is a standard part of transaction control theory and is discussed in most papers and books on the subject. The application uses optimistic control when it is "optimistic" that there is only a slight chance that anyone else will have updated the row in the interval between when the row is fetched and when the row is updated. When the cursor is opened in this mode, no locks are held on the underlying rows, which maximizes throughput. If the user attempts an UPDATE, the current values in the row are compared with the values retrieved when the row was fetched. If any of the values have changed, SQL Server returns an error. If the values are the same, the cursor engine performs the UPDATE.

    Selecting this option means the application must deal with an occasional error indicating that another user updated the row and changed the values. A typical action taken by an application that receives this error is to refresh the cursor to get the new values, and then let the user or application decide whether the UPDATE should still be performed. Note that text and image columns are not used for concurrency comparisons.

  • SQL_CONCUR_ROWVER

    This optimistic concurrency control option is based on row versioning. With row versioning, the underlying table must have a version identifier of some type that the cursor engine can use to determine whether the row has been changed since it was read into the cursor. In SQL Server, this is the facility offered by the timestamp data type. SQL Server timestamps are binary numbers that indicate the relative sequence of modifications in a database. Each database has a global current timestamp value, @@dbts, which is incremented with every change in the database. If a table has a timestamp column, its timestamp column is updated with the current @@dbts value every time the row is updated. The cursor engine can then compare a row's current timestamp value with the timestamp value that was first retrieved into the cursor to determine whether the row has been updated. The engine does not have to compare the values in all columns, only the timestamp value. If an application requests SQL_CONCUR_ROWVER on a table that does not have a timestamp column, the cursor defaults to the values-based optimistic concurrency control, SQL_CONCUR_VALUES.

  • SQL_CONCUR_LOCK

    This option implements pessimistic concurrency control, in which the application attempts to lock the underlying database rows at the time they are read into the cursor result set. For server cursors, an update intent lock is placed on the page holding a row when it is read into the cursor. If the cursor is opened within a transaction, these update intent locks are held until the transaction is committed or rolled back. If the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, applications wanting full pessimistic concurrency control should typically open the cursor within a transaction. An update intent lock prevents any other task from acquiring an update intent or exclusive lock, which prevents any other task from updating the row. An update intent lock, however, does not block a shared lock, so it does not prevent other tasks from reading the row unless the second task is also requesting a read with an update intent lock.

In all of these concurrency options, when any row in the cursor is updated, SQL Server locks it with an exclusive lock. If the update has been done within a transaction, the exclusive lock is held until the transaction is terminated. If the update has been done outside of a transaction, the update is automatically committed when it is completed and the exclusive lock is freed. Because SQL Server must acquire an exclusive lock before it updates the row, positioned updates done through a cursor (just like standard updates) can be blocked by other connections holding a shared lock on the row.

Isolation Levels

The full locking behavior of cursors is based on an interaction between the concurrency options discussed above and the transaction isolation level set by the client. ODBC clients set the transaction isolation level by setting the connection option SQL_TXN_ISOLATION. Users should combine the locking behaviors of the concurrency and transaction isolation level options to determine the full locking behavior of a specific cursor environment.

  • READ COMMITTED (The default for both SQL Server and ODBC)

    SQL Server acquires a shared lock while reading a row into a cursor but frees the lock immediately after reading the row. Because a shared lock request is blocked by an exclusive lock, a cursor is prevented from reading a row that another task has updated but not yet committed.

  • READ UNCOMMITTED

    SQL Server requests no locks while reading a row into a cursor and honors no exclusive locks. This means that cursors can be populated with values that have already been updated but not yet committed. The user is bypassing all of SQL Server's locking-based transaction control mechanisms.

  • REPEATABLE READ or SERIALIZABLE

    SQL Server still requests a shared lock on each row as it is read into the cursor, as in READ COMMITTED, but if the cursor is opened within a transaction, the shared locks are held until the end of the transaction instead of being freed after the row is read. This has the same effect as specifying HOLDLOCK on a SELECT statement.

Note that the ODBC API specifies additional transaction isolation levels, but these are not supported by SQL Server or the Microsoft SQL Server ODBC driver.

Server Cursors

Prior to version 6.0, SQL Server sent result sets back to clients using only one type of result set, the default result set. While the default result set is efficient at sending results back to clients, it only supports the characteristics of the default ODBC result set: forward-only, read-only, and a rowset size of one. Because of this, the Microsoft SQL Server ODBC drivers that shipped with SQL Server version 4.2x only supported the default ODBC settings.

When using a default result set, there is only one roundtrip between the client and server; this occurs at the time the SQL statement is executed. After the statement is executed, the server sends the packets containing the results back to the client until all of the results have been sent back or the client has cancelled the remainder of the results by calling SQLCancel. Calls to SQLFetch or SQLExtendedFetch do not generate roundtrips to the server; they just pull data from the client network buffers into the application.

SQL Server 6.0 introduced cursors that are implemented on the server (server cursors). There are two types of server cursors:

  • Transact-SQL cursors

    This type of cursor is based on the ANSI syntax for cursors and is meant to be used in Transact-SQL batches, primarily in triggers and stored procedures. Transact-SQL cursors are not intended to be used in client applications.

  • API server cursors

    This type of cursor is created by either the DB-Library or ODBC APIs. The SQL Server ODBC driver that shipped with SQL Server 6.0 uses API server cursors to support the ODBC cursor options.

Users access the functionality of API server cursors through either ODBC or DB-Library. If an ODBC application executes a statement with the default cursor settings, the SQL Server ODBC driver requests a default result set from SQL Server. If the application sets the ODBC cursor type options to anything other than the defaults, however, then the SQL Server ODBC driver instead requests the server to implement a server cursor with the same options requested by the application. Since the cursor is implemented on the server, the driver does not have to use memory on the client to build a client-based cursor. Server cursors can also reduce network traffic in cases where a user decides they do not need to retrieve an entire result set. For example, if a user opens a cursor with 1,000 rows but then finds what they were looking for in the first 100 rows they scroll through, the other 900 rows are never sent across the network.

When using server cursors, each call to SQLFetch, SQLExtendedFetch, or SQLSetPos causes a network roundtrip from the client to the server. All cursor statements must be transmitted to the server because the cursor is actually implemented on the server.

One potential drawback of server cursors is that they currently do not support all SQL statements. Server cursors do not support any SQL statements that generate multiple result sets, therefore they cannot be used when the application executes either a stored procedure or a batch containing more than one select. If the application has set options that cause the driver to request an API server cursor, and then it executes a statement that server cursors do not support, the application gets an error:

SQLState: 37000
pfNative: 16937
szErrorMsg: [Microsoft][ODBC SQL Server Driver][SQL Server]
            Cannot open a cursor on a stored procedure that
            has anything other than a single select statement in it.


SQLState: 37000
pfNative: 16938
szErrorMsg: [Microsoft][ODBC SQL Server Driver][SQL Server]
            sp_cursoropen.  The statement parameter can only
            be a single select or a single stored procedure.

ODBC applications getting either of these errors when attempting to use server cursors with multiple statement batches or stored procedures should switch to using the ODBC default cursor options.

Multiple Active Statements per Connection

After SQL Server has received a statement, the tabular data stream (TDS) protocol used by SQL Server does not allow the server to accept any other statements from that connection until one of the following occurs:

  • The client application processes the entire result set.
  • The client sends a statement telling the server it can close the remainder of the result set.

This means that when an ODBC application is using a default result set, SQL Server does not support multiple active statement handles on a connection handle and only one statement can be actively processed at any point in time.

When an ODBC application is using API server cursors, however, the driver can support multiple active statements on a connection. When the rowset for each cursor command has been received back at the client, SQL Server considers the statement to have completed, and it accepts another statement from another statement handle over that connection handle.

For example, an application can do the following to initiate processing on two statement handles:

SQLAllocConnect(henv, &hdbc);
SQLAllocStmt(hdbc, &hstmt1);
SQLAllocStmt(hdbc, &hstmt2);
SQLSetStmtOption(hstmt1, SQL_ROWSET_SIZE, 5);
SQLSetStmtOption(hstmt2, SQL_ROWSET_SIZE, 5);
SQLExecDirect(hstmt1, "select * from authors", SQL_NTS);

When the SQLExecDirect on hstmt1 is executed, the SQL Server ODBC driver issues a cursor open request. When SQL Server completes the cursor open, it considers the statement to be finished and allows the application to then issue a statement on another hstmt:

SQLExecDirect(hstmt2, "select * from titles", SQL_NTS);

Once again, after the server has finished with the cursor open request issued by the client, it considers the statement to be completed. If at this point the ODBC application makes a fetch request as follows, the SQL Server ODBC driver sends SQL Server a cursor fetch for the first five rows of the result set:

SQLExtendedFetch(hstmt1, SQL_FETCH_NEXT, 1, ...);

After the server has transferred the five rows to the driver, it considers the fetch processing completed and accepts new requests. The application could then do a fetch on the cursor opened for the other statement handle:

SQLExtendedFetch(hstmt2, SQL_FETCH_NEXT, 1, ...);

SQL Server accepts this second statement on the connection handle because, as far as it is concerned, it has completed the last statement on the connection handle, which was the fetch of the first five rows of the result set for hstmt1.

Choosing a Cursor Option

The choice of cursor type depends on several variables, including:

  • Size of the result set.
  • Percentage of the data likely to be needed.
  • Performance of the cursor open.
  • Need for cursor operations like scrolling or positioned updates.
  • Desired level of visibility to data modifications made by other users.

The default settings would be fine for a small result set if no updating is done, while a dynamic cursor would be preferred for a large result set where the user is likely to find their answer before retrieving many of the rows.

Some simple rules to follow in choosing a cursor type are:

  • Use default settings for singleton selects (returns one row), or other small result sets. It is more efficient to cache a small result set on the client and scroll through the cache.
  • Use the default settings when fetching an entire result set to the client, such as when producing a report. After SQLExecute or SQLExecDirect, the application can increase the rowset size to retrieve multiple rows at a time using SQLExtendedFetch.
  • The default settings cannot be used if the application is using positioned updates.
  • The default settings cannot be used if the application is using multiple active statements.
  • The default settings must be used for any SQL statement or batch of SQL statements that will generate multiple result sets.
  • Dynamic cursors open faster than static or keyset-driven cursors. Internal temporary work tables must be built when static and keyset-driven cursors are opened but are not required for dynamic cursors.
  • Use keyset-driven or static cursors if SQL_FETCH_ABSOLUTE is used.
  • Static and keyset-driven cursors increase the usage of tempdb. Static server cursors build the entire cursor in tempdb; keyset-driven cursors build the keyset in tempdb.

Each call to SQLFetch or SQLExtendedFetch causes a roundtrip to the server when using server cursors. Applications should minimize these roundtrips by using a reasonably large rowset size and by using SQLExtendedFetch instead of SQLFetch whenever possible.

Implicit Cursor Conversions

Applications can request a cursor type through SQLSetStmtOption and then execute an SQL statement that is not supported by server cursors of the type requested. A call to SQLExecute or SQLExecDirect returns SQL_SUCCESS_WITH_INFO and SQLError returns:

szSqlState = "01S02", *pfNativeError = 0,
szErrorMsg="[Microsoft][ODBC SQL Server Driver]Cursor type changed"

The application can determine what type of cursor is now being used by calling SQLGetStmtOption with fOption set to SQL_CURSOR_TYPE. The cursor type conversion applies to only one statement. The next SQLExecDirect or SQLExecute will be done using the original statement cursor settings.
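A sketch of checking for an implicit conversion; szSqlStatement is a placeholder for the application's SQL:

```c
UDWORD  fCursorType;
RETCODE retcode;

retcode = SQLExecDirect(hstmt, szSqlStatement, SQL_NTS);
if (retcode == SQL_SUCCESS_WITH_INFO)
{
    // SQLError would report SQLState 01S02, "Cursor type changed".
    // Ask the driver what cursor type is actually in effect.
    SQLGetStmtOption(hstmt, SQL_CURSOR_TYPE, &fCursorType);
    if (fCursorType == SQL_CURSOR_STATIC)
    {
        // The requested cursor type was implicitly converted to static.
    }
}
```
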

Both SQL Server 6.0 and 6.5 have the following restrictions:

  • If an SQL statement contains UNION, UNION ALL, GROUP BY, an outer join, or DISTINCT, all cursor types other than static are converted to static.
  • If a keyset-driven cursor is requested and there is at least one table that does not have a unique index, the cursor is converted to a static cursor.

SQL Server 6.0 has the following additional restrictions:

  • If a dynamic cursor is requested and there is at least one table that does not have a unique index, the cursor is converted to a static cursor.
  • If a dynamic cursor is requested and the SQL statement contains an ORDER BY that does not match a unique index or subquery, the cursor is converted to a static cursor.

SQLExecDirect vs. SQLPrepare/SQLExecute

This section discusses when SQLExecDirect or SQLPrepare/SQLExecute should be used.

Driver Implementation Overview

ODBC offers two options for executing a statement. If a statement is only executed once or twice, the application can use SQLExecDirect to execute the statement. The ODBC definition of SQLExecDirect states that the database engine parses the SQL statement, compiles an execution plan, executes the plan, and then returns results to the application.

If an application executes the same statement many times, then the overhead of having the engine compile the plan every time degrades performance. An application in this case can call SQLPrepare once and then call SQLExecute each time it executes the statement. The ODBC definition of SQLPrepare states that the database engine just parses the statement and compiles an execution plan, then returns control to the application. On SQLExecute, the engine simply executes the precompiled execution plan and returns the results to the client, thereby saving the overhead of parsing and recompiling the execution plan.

SQL Server itself does not directly support the SQLPrepare/SQLExecute model, but the SQL Server ODBC driver can use stored procedures to emulate this behavior. On a SQLPrepare, the driver asks the server to create a stored procedure that contains the SQL statement from the application. On SQLExecute, the driver executes the created stored procedure. The ODBC driver uses stored procedures to support SQLPrepare/SQLExecute when the option is enabled either in the data source or the SQLDriverConnect keywords. For example, if an application calls:

SQLPrepare(hstmt, "select * from authors", SQL_NTS);

The driver sends a statement to the server that creates a temporary stored procedure containing the query:

CREATE PROCEDURE #ODBC#nnnnnnnn AS SELECT * FROM authors

When the application then does:

SQLExecute(hstmt);

The driver sends a remote stored procedure call to have the server run the #ODBC#nnnnnnnn procedure.

Because a CREATE PROCEDURE statement essentially compiles an SQL statement into an execution plan, and an EXECUTE statement simply executes the precompiled plan, this meets the criteria for the SQLPrepare/SQLExecute model.

Excess or inappropriate use of SQLPrepare/SQLExecute degrades an application's performance. SQL Server applications should only use SQLPrepare/SQLExecute if they plan to execute a statement more than 3 to 5 times. If an application needs to execute a statement only once, using SQLPrepare/SQLExecute generates two roundtrips to the server: one to create the stored procedure and another to execute it. SQLExecDirect would only use one roundtrip and would also save the overhead of creating and storing a stored procedure. Excess use of SQLPrepare can also cause locking contention in the system tables in tempdb as concurrent users continually try to create the stored procedures to support SQLPrepare.

You may think that applications must use SQLPrepare/SQLExecute to use parameter markers, even if the application will only execute the statement once or twice. This is not true; applications can use parameter markers with SQLExecDirect by calling SQLBindParameter before SQLExecDirect.
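
For example, a one-time parameterized statement might be executed as in this sketch (the query and variables are illustrative):

```c
UCHAR   szState[3] = "CA";
SDWORD  cbState    = SQL_NTS;
RETCODE retcode;

// Bind the parameter marker, then execute directly: one roundtrip to
// the server, and no temporary stored procedure is created.
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR,
                 2, 0, szState, sizeof(szState), &cbState);

retcode = SQLExecDirect(hstmt,
              "select au_lname from authors where state = ?", SQL_NTS);
```
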

If an application will be run by many concurrent users who all use the same SQL statement, the best approach is to create the SQL statement as a permanent, parameterized stored procedure and execute it with SQLExecDirect. Having many users concurrently issue SQLPrepare commands can create a concurrency problem on the system tables in tempdb. Even if each user is executing exactly the same statement, the SQL Server ODBC driver on each client is creating its own copy of a temporary stored procedure in tempdb. If the SQL statement is created as a parameterized stored procedure, however, the procedure is created only once. Each ODBC application does not have to create a new procedure for its exclusive use; it simply uses a copy of the permanent procedure's execution plan from the procedure cache.

When used in the appropriate circumstances (to execute a single statement several times), SQLPrepare/SQLExecute can provide significant performance savings.

Impact on Tempdb

SQL Server 6.0 introduced temporary stored procedures, which are identified by having a number sign (#) as the first character in the name. These procedures operate like temporary tables and are automatically dropped by the server if the connection is broken. The SQL Server ODBC driver now creates the procedures that support SQLPrepare as temporary procedures. This makes it impossible for the ODBC-related stored procedures to build up as a result of broken network connections or client computer failures. However, the temporary stored procedures are always created in tempdb. This means that sites running SQL Server 6.0 or 6.5 with ODBC applications that use SQLPrepare must ensure that tempdb is large enough to hold the temporary procedures generated to support SQLPrepare.

There is another factor to consider in relation to how many stored procedures exist in tempdb. ODBC applications call SQLSetConnectOption with fOption set to the driver-specific value SQL_USE_PROCEDURE_FOR_PREPARE and vParam set to SQL_UP_OFF, SQL_UP_ON, or SQL_UP_ON_DROP to control the generation of temporary procedures.

  • SQL_UP_OFF means that the driver does not generate stored procedures.
  • SQL_UP_ON_DROP means that the driver generates stored procedures, and that they are dropped when the application does a SQLDisconnect, a SQLFreeStmt with fOption set to SQL_DROP, or the next time the application issues SQLPrepare on the same statement handle.
  • SQL_UP_ON means that temporary procedures are created, but they are only dropped on a SQLDisconnect.

SQL_UP_ON is the default setting. The driver can reuse procedures if an application re-prepares the same SQL statement, and most applications see a performance boost because the driver does not have to continually drop stored procedures. However, applications that never disconnect, or that make heavy use of SQLPrepare, can cause a buildup of #ODBC procedures in tempdb. These applications should set SQL_UP_ON_DROP by calling SQLSetConnectOption. Starting with the driver that shipped in SQL Server 6.5 SP2, SQL_UP_ON_DROP can also be specified as an option on data sources for the SQL Server ODBC driver.

Other Considerations of SQLPrepare

To keep from having to hold locks on the tempdb system tables for the length of a user transaction, the SQL Server ODBC driver does not generate a stored procedure for SQLPrepare if it is called within a transaction. The exception to this is when the SQLPrepare is the first statement in the transaction. In this case, the driver generates a stored procedure but then immediately commits the CREATE PROCEDURE statement.

The driver does not generate a stored procedure for a SQLPrepare that uses the ODBC CALL escape clause to call a stored procedure. On SQLExecute, the driver executes the called stored procedure (there is no need to create a temporary stored procedure).

Calling either SQLDescribeCol or SQLDescribeParam before calling SQLExecute generates an extra roundtrip to the server. On SQLDescribeCol, the driver removes the WHERE clause from the query and sends it to the server with SET FMTONLY ON to get the description of the columns in the first result set returned by the query. On SQLDescribeParam, the driver calls the server to get a description of the columns in the tables referenced by the query. This method also has some restrictions, such as not being able to resolve parameters in subqueries.

Stored Procedures

This section discusses issues related to executing stored procedures using the SQL Server ODBC driver.

ODBC Call vs. Transact-SQL EXECUTE

Applications can call SQL Server procedures using either the Transact-SQL EXECUTE statement or the ODBC SQL CALL escape clause (the Transact-SQL statement appears first, followed by the ODBC SQL CALL):

SQLExecDirect(hstmt, "EXECUTE sp_helpdb 'pubs' ", SQL_NTS);
SQLExecDirect(hstmt, "{ call sp_helpdb ('pubs') }", SQL_NTS);

Using the ODBC syntax is recommended. The ODBC syntax, in addition to being more portable, offers improved features and performance over the EXECUTE statement.

The SQL Server TDS protocol provides two methods of sending a procedure to the server: the procedure can be sent to the server as a regular SQL statement, or it can be sent as a TDS Remote Procedure Call (RPC).

The TDS RPC syntax was originally defined for use by servers when one server is asked to execute a remote stored procedure on another server, but it can also be used by applications. Using the TDS RPC syntax means neither the driver nor the server need to perform any parameter conversions. This improves performance, especially for image parameters. The SQL Server ODBC driver uses the TDS RPC syntax if the application uses the ODBC CALL escape clause; it uses the regular SQL statement syntax if the application uses the Transact-SQL EXECUTE statement.

Using the ODBC CALL escape clause also allows the application to retrieve output parameters and return codes from a stored procedure. Output parameter and return code processing is discussed below.

Output Parameters and Return Codes

SQL Server stored procedures can return both output parameters and return codes to an application:

CREATE PROCEDURE odbcproc @oparm int OUTPUT AS
SELECT name FROM sysusers WHERE uid < 2
SELECT @oparm = 88
RETURN 99

The parameters and return codes can be bound to program variables in an ODBC application where the application can reference them. For example, to execute the procedure above using the ODBC CALL escape clause and bind the return code and output parameters to program variables:

DWORD    ProcRet = 0, OParm = 0;
long     cbProcRet = 0, cbOParm = 0;
// Bind the return code.
rcd = SQLBindParameter(hstmt, 1, SQL_PARAM_OUTPUT,
      SQL_C_SLONG, SQL_INTEGER, 0, 0, &ProcRet, 0, &cbProcRet);
// Bind the output parameter.
rcd = SQLBindParameter(hstmt, 2, SQL_PARAM_OUTPUT,
      SQL_C_SLONG, SQL_INTEGER, 0, 0, &OParm, 0, &cbOParm);
// First ? marks the return code,
// second ? marks the output parameter.
rcd = SQLExecDirect(hstmt, "{? = call odbcproc(?)}", SQL_NTS);

SQL Server does not send back the values for the return code or output parameters until the end of all result sets for the procedure. The program variables ProcRet and OParm do not hold the output values of 99 and 88 until SQLMoreResults returns SQL_NO_DATA_FOUND.

Text and Image Data

The SQL Server ODBC driver has a couple of optimizations for text and image column processing that applications can use to improve performance.

Bound vs. Unbound Text and Image Columns

When using server cursors (see "Cursors"), the driver is optimized to not transmit the data for unbound text or image columns at the time the row is fetched. The text or image data is not actually retrieved from the server until the application issues SQLGetData for the column.

An application can take advantage of this optimization by not displaying text or image data while the user scrolls up and down a cursor. After the user selects a row, the application can call SQLGetData to retrieve the text or image data. This saves transmitting the text or image data for the rows the user never selects, which can be a very large amount of data.

Logged vs. Nonlogged

An application can request that the driver not log text and image modifications by calling SQLSetConnectOption with fOption set to the driver-specific value SQL_TEXTPTR_LOGGING and vParam set to SQL_TL_OFF.

This option should only be used for situations where the text or image data is not critical, and the data owners are willing to trade data recovery for higher performance.

Data-At-Execution and Text and Image Columns

ODBC Data-At-Execution allows applications to work with extremely large amounts of data on bound columns or parameters. When retrieving very large text or image columns, an application cannot simply allocate a huge buffer, bind the column into the buffer, and fetch the row. When updating very large text or image columns, the application cannot simply allocate a huge buffer, bind it to a parameter marker in an SQL statement, and then execute the statement. Whenever the size of the text or image data exceeds 400K (64K with SQL Server 4.21a), the application must use SQLGetData or SQLPutData with their Data-At-Execution options. Applications should always use Data-At-Execution if there is any possibility that the size of the data will exceed these limits.

Data-At-Execution is described in the ODBC 2.0 Programmer's Reference; however, it remains one of the hardest parts of the ODBC API for an application programmer to learn. The Appendix of this paper contains the source code of two Win32 console applications, Getimage.c and Putimage.c, that illustrate using Data-At-Execution to read and write large amounts of image data. Text columns use similar calls; the only difference is binding with SQL_C_CHAR and SQL_LONGVARCHAR instead of SQL_C_BINARY and SQL_LONGVARBINARY. Programmers interested in working with text or image columns should look up the Data-At-Execution index entries in the ODBC 2.0 Programmer's Reference, then search for "text" and "image" in Microsoft SQL Server Programming ODBC for SQL Server.

Querying Metadata

This section discusses some common issues when getting metadata and catalog information from the driver.


Both the SQL Server system catalog stored procedures and the ODBC API catalog functions address the need of applications to retrieve catalog information from a database. Because there is a high correlation between the ODBC catalog functions and the SQL Server catalog stored procedures, the SQL Server ODBC driver implements many of the ODBC API catalog functions as calls to a corresponding SQL Server catalog procedure. The driver is therefore dependent on the catalog stored procedures in any SQL Server to which it connects.

Each version of the SQL Server ODBC driver is developed in conjunction with a specific version of SQL Server. The proper operation of each driver version requires the versions of the catalog stored procedures associated with the version of SQL Server with which the driver was developed, or a later version of the procedures. For example, the 2.50.0121 driver was developed in conjunction with Microsoft SQL Server version 6.0 and requires the versions of the system catalog stored procedures released with SQL Server 6.0 or later, such as those in 6.5. The driver does not work properly with older versions of the catalog stored procedures, such as those in SQL Server version 4.21a.

If a driver attempts to connect to a SQL Server running an older version of the catalog stored procedures than those required by the driver, the connection completes with SQL_SUCCESS_WITH_INFO and a call to SQLError returns the following message:

SqlState:   01000
pfNative:   0
szErrorMsg: "[Microsoft][ODBC SQL Server Driver]The ODBC
            catalog stored procedures installed on server
            My421Server are version 02.00.4127; version 06.00.0115
            or later is required to ensure proper operation.
            Please contact your system administrator."

Although the connection is successful, the application may later encounter errors on calls to the ODBC API catalog functions.

Sites running multiple versions of the driver against a server need to ensure that the server is running with at least the version of Instcat.sql associated with the newest ODBC driver that will connect to it. For example, a site running multiple version 6.0 servers could buy SQL Server version 6.5 and upgrade some clients to use the new 2.65.0201 driver that comes with version 6.5. The site would also need to run the 6.5 version of Instcat.sql against the 6.0 servers before the new driver can connect to them.

Installing a newer version of Instcat.sql into an older server does not break any existing applications connecting to that server, even ones still using the old drivers. It simply allows the applications using the new driver to operate correctly.

Sites should run the Instcat.sql script at the server command prompt by using the isql utility.

C:\>cd \Mssql\Install
isql /Usa /Ppassword /Sservername /iInstcat.sql /oInstcat.rpt

For more information about determining the version of Instcat.sql currently applied to a server, see Microsoft Knowledge Base article Q137636. For more information about the isql utility, see the Microsoft SQL Server Transact-SQL Reference.

Multiple Active Statements per Connection

Starting with SQL Server 6.5 and its associated driver, users can have multiple outstanding calls for metadata. In SQL Server 6.5, the catalog procedures underlying the ODBC catalog API implementations can be called by the ODBC driver while it is using static server cursors. This allows applications to concurrently process multiple calls to the ODBC catalog functions.

Caching Metadata

If an application uses a particular set of metadata more than once, it will probably benefit by caching the information in private variables when it is first obtained. This eliminates the overhead of later calls to the ODBC catalog functions for the same information (which forces the driver to make roundtrips to the server).

Updates and Transactions

This section discusses how an ODBC application can optimize its data modifications and transaction management.


If an ODBC application needs to know how many rows were affected by a data modification (UPDATE, INSERT, or DELETE), it can call the SQLRowCount function after the modification completes. SQLRowCount generally does not return a meaningful value after a SELECT statement, although it may if the application is using server cursors. For more information, see Microsoft SQL Server Programming ODBC for Microsoft SQL Server.

Batching Procedure Calls

SQLParamOptions can be used to efficiently call a stored procedure multiple times with different parameters. SQLBindParameter normally binds a single variable to a parameter, and SQLParamOptions is used to extend this binding so that it binds an array of variables to a parameter.

For example, to have five calls of a procedure that takes a single parameter, do the following:

  1. Allocate an array of five variables.
  2. Use SQLBindParameter to bind the parameter to the lead element of the array.
  3. Use SQLParamOptions to tell the driver that the parameter is bound to an array with five elements.

When you issue SQLExecDirect, the driver builds a single batch calling the procedure five times, with a different element from the array associated with each procedure call. This is more efficient than sending five separate batches to the server.

This process also works with procedures that take multiple parameters. Allocate an array for each parameter with the same number of elements in each array, then call SQLParamOptions specifying the number of elements.

Autocommit vs. ANSI Transaction Management

ODBC offers applications two ways to manage transactions. The application controls the autocommit mode by calling SQLSetConnectOption with fOption set to SQL_AUTOCOMMIT and vParam set to SQL_AUTOCOMMIT_ON or SQL_AUTOCOMMIT_OFF.

When autocommit is on, each statement is a separate transaction and is automatically committed when it completes successfully.

When autocommit is turned off, the next statement sent to the database starts a transaction. The transaction remains in effect until the application calls SQLTransact with either the SQL_COMMIT or SQL_ROLLBACK options. The statement sent to the database after SQLTransact starts the next transaction.

ODBC applications should not mix managing transactions through the ODBC autocommit options with calling the Transact-SQL transaction statements. Doing so can produce unpredictable results. The application should manage transactions in one of the following ways:

  • Use SQLSetConnectOption to set the ODBC autocommit modes.
  • Use Transact-SQL statements, such as BEGIN TRANSACTION. (The SQLSetConnectOption should be left at its default setting of autocommit on.)

Applications should keep transactions as short as possible by not requiring user input while in a transaction. User input can take a long time, and all that time, the application is holding locks that may adversely impact other tasks needing the same data.

An application should do all required queries and user interaction needed to determine the scope of the updates before starting the transaction. The application should then begin the transaction, do the updates, and immediately commit or roll back the transaction without user interaction.

Using Transactions to Optimize Logging

Applications doing several data modifications (INSERT, UPDATE, or DELETE) at one time should do these within one transaction (autocommit off). When autocommit is on, each individual statement is committed by the server. Commits cause the server to flush out the modified log records. To improve performance, do all updates within one transaction and issue a single commit when all the changes have been made. Care must be taken to not include too many updates within one transaction, however. Performing many updates causes the transaction to be open longer and more pages to be locked with exclusive locks, which increases the probability that other users will be blocked by the transaction. Grouping modifications into a single transaction must be done in a way that balances multiuser concurrency with single-user performance.

For applications that do not require a high degree of data accuracy, consider using the SQL_TXN_READ_UNCOMMITTED transaction isolation level to minimize the locking overhead on the server.

SQL Server-specific Features

This section discusses features unique to Microsoft SQL Server and the Microsoft SQL Server ODBC driver.

Processing COMPUTE BY and COMPUTE Statements

The COMPUTE BY clause generates subtotals within a result set, and the COMPUTE clause generates a total at the end of the result set. The SQL Server ODBC driver presents these totals and subtotals back to the calling application by generating multiple result sets for each SELECT.

The following example uses COMPUTE BY to generate subtotals and COMPUTE to generate a total:

SELECT title = CONVERT(char(20), title), type, price, advance
FROM titles
WHERE type LIKE '%cook%'
ORDER BY type DESC
COMPUTE AVG(price), SUM(advance) BY type
COMPUTE SUM(price), SUM(advance)

This statement causes a subtotal calculation of the average price and sum of advances for each book type, followed by a final total of both the price and advance columns. The following ODBCTest GetDataAll output shows how the ODBC driver presents these subtotals and totals back to the calling application as separate result sets intermixed with the primary result set:

"title", "type", "price", "advance"
"Onions, Leeks, and G", "trad_cook   ", 20.9500, 7000.0000
"Fifty Years in Bucki", "trad_cook   ", 11.9500, 4000.0000
"Sushi, Anyone?      ", "trad_cook   ", 14.9900, 8000.0000
3 rows fetched from 4 columns.
"AVG", "SUM"
15.9633, 19000.0000
1 row fetched from 2 columns.
"title", "type", "price", "advance"
"Silicon Valley Gastr", "mod_cook    ", 19.9900, .0000
"The Gourmet Microwav", "mod_cook    ", 2.9900, 15000.0000
2 rows fetched from 4 columns.
"AVG", "SUM"
11.4900, 15000.0000
1 row fetched from 2 columns.
"SUM", "SUM"
70.8700, 34000.0000
1 row fetched from 2 columns.

You can see from the output above that the driver presents the first result set for the rows from books having the first book type. It then produces a second result set with the two COMPUTE BY columns for the AVG(price) and SUM(advance) for this first set of books. Then it produces a third result set for the next group of books, and a fourth result set with the COMPUTE BY subtotals for that group. The driver keeps interleaving these result sets until the end, when it produces the final result set with the total for the COMPUTE SUM(price), SUM(advance) clause.

Applications running SQL Server statements with COMPUTE BY and COMPUTE clauses must be coded to handle the multiple result sets returned by the driver.

The Microsoft SQL Server ODBC driver only supports COMPUTE BY or COMPUTE with the default forward_only, read_only cursors with a rowset size of one. The driver implements all other cursor types (dynamic, static, or keyset-driven) using server cursors, which do not support COMPUTE BY or COMPUTE.

Distributed Transactions

The Microsoft Distributed Transaction Coordinator (MS DTC) allows applications to distribute transactions across two or more SQL Servers. It also allows applications to participate in transactions managed by transaction managers that comply with the X/Open DTP XA standard. (For more information, see What's New in SQL Server 6.5 and the Guide to Microsoft Distributed Transaction Coordinator in the SQL Server 6.5 manuals.) ODBC applications using the driver that ships with SQL Server 6.5 can participate in MS DTC transactions.

Normally, all transaction management commands go through the ODBC driver to the server (see "Autocommit vs. ANSI Transaction Management"). The application starts a transaction by calling SQLSetConnectOption with fOption set to SQL_AUTOCOMMIT and vParam set to SQL_AUTOCOMMIT_OFF.

The application then performs the updates comprising the transaction and calls SQLTransact with either the SQL_COMMIT or SQL_ROLLBACK option.

When using MS DTC, however, MS DTC is the transaction manager and the application no longer uses SQLTransact. The application:

  • Connects to MS DTC using the MS DTC DtcGetTransactionManager function.
  • Calls SQLDriverConnect once for each connection.
  • Calls the MS DTC ITransactionDispenser::BeginTransaction function to begin the MS DTC transaction and get a transaction object that represents the transaction.
  • Enlists each ODBC connection in the MS DTC transaction by calling SQLSetConnectOption with fOption set to the driver-specific value SQL_COPT_SS_ENLIST_IN_DTC and vParam set to pTransaction, where pTransaction is a pointer to the transaction object returned by BeginTransaction.

  • Performs all updates that make up the transaction.
  • Calls the MS DTC function ITransaction::Commit or ITransaction::Rollback to commit or roll back the transaction.

For information about MS DTC, see the Guide to Microsoft Distributed Transaction Coordinator, which includes a sample ODBC SQL Server MS DTC application.

Fallback Connections

SQL Server 6.5 introduced fallback support, in which one server is defined as the fallback server for another, primary, server. If the primary server fails, applications can switch to the fallback server. This feature depends on special hardware and operating system support. For more information, see What's New in SQL Server 6.5.

ODBC applications can take advantage of the SQL Server fallback feature by calling SQLSetConnectOption before connecting, with fOption set to the driver-specific value SQL_COPT_SS_FALLBACK_CONNECT and vParam set to SQL_FB_ON.

Then, when the driver connects to the primary server, it retrieves from the primary server all the information it needs to connect to the fallback server and stores the information in the client's registry. If the application loses its connection to the primary server, it should clean up its current transaction and attempt to reconnect to the primary server. If the ODBC driver cannot reconnect to the primary server, it uses the registry information to attempt connecting to the fallback (secondary) server.

Handling SQL Server Messages

The way SQL Server presents information from the Transact-SQL SET, DBCC, PRINT, and RAISERROR statements was originally designed for the architecture of DB-Library applications. DB-Library applications have separate callback functions for handling messages and errors. These separate callback functions are difficult to code in multithreaded applications, so the designers of ODBC chose the method of having the application call a SQLError function to receive error messages. This means the SQL Server ODBC driver maps errors and messages originally returned by the DB-Library callback functions to the ODBC model.

Note that the SQL Server ODBC drivers that ship with SQL Server 6.0 and 6.5 do not return the severity-level or state codes associated with messages from SQL Server.


Using SET Statements

The Transact-SQL SET statement options SHOWPLAN, STATISTICS TIME, and STATISTICS IO can be used to get information that aids in diagnosing long-running queries. An ODBC application can set these options by executing statements such as:

SQLExecDirect(hstmt, "SET SHOWPLAN ON", SQL_NTS);

When SET STATISTICS TIME or SET SHOWPLAN is ON, SQLExecute and SQLExecDirect return SQL_SUCCESS_WITH_INFO and, at that point, the application can retrieve the SHOWPLAN or STATISTICS TIME output by calling SQLError until it returns SQL_NO_DATA_FOUND. Each line of SHOWPLAN data comes back in the format:

szSqlState="01000", *pfNativeError=6223,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server] 
              Table Scan"

Each line of STATISTICS TIME comes back in the format:

szSqlState="01000", *pfNativeError= 3613,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
              SQL Server Parse and Compile Time: cpu time = 0 ms."

The output of SET STATISTICS IO is not available until the end of a result set. To get STATISTICS IO output, the application calls SQLError at the time SQLFetch or SQLExtendedFetch returns SQL_NO_DATA_FOUND. The output of STATISTICS IO comes back in the format:

szSqlState="01000", *pfNativeError= 3615,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
              Table: testshow  scan count 1,  logical reads: 1,
              physical reads: 0."

Using DBCC Statements

DBCC statements return data to an ODBC application in two ways:

  • Trace flags output

    An application can turn on various trace flags using the DBCC statement. No data is returned by the DBCC statement that turns on the trace flag, but the trace data is returned on subsequent SQL statements.

    For example, if the application turns on trace flag 3604 along with another flag or flags that return output, subsequent calls to SQLExecDirect or SQLExecute return SQL_SUCCESS_WITH_INFO, and the application can retrieve the trace flag output by calling SQLError until it returns SQL_NO_DATA_FOUND:

    SQLExecDirect(hstmt, "dbcc traceon(3604, 4032)", SQL_NTS);

    After the above SQLExecDirect completes, the following call to SQLExecDirect returns SQL_SUCCESS_WITH_INFO:

    SQLExecDirect(hstmt, "select * from authors", SQL_NTS);

    Calling SQLError returns:

    szSqlState = "01000", *pfNativeError = 0,
    szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
           96/02/02 11:08:45.26 10 LangExec: 'select * from authors'"
  • DBCC execution output

    All other DBCC statements return data when they are executed. SQLExecDirect or SQLExecute returns SQL_SUCCESS_WITH_INFO, and the application retrieves the output by calling SQLError until it returns SQL_NO_DATA_FOUND.

    For example, executing the following statement returns SQL_SUCCESS_WITH_INFO:

    SQLExecDirect(hstmt, "dbcc checktable(authors)", SQL_NTS);

    Calls to SQLError return:

    szSqlState = "01000", *pfNativeError = 2536,
    szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
            Checking authors"
    szSqlState = "01000", *pfNativeError = 2579,
    szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
            The total number of data pages in this table is 1."
    szSqlState = "01000", *pfNativeError = 7929,
    szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
            Table has 23 data rows."
    szSqlState = "01000", *pfNativeError = 2528,
    szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
            DBCC execution completed. If DBCC printed error messages,
            see your System Administrator."

Using PRINT and RAISERROR Statements

Transact-SQL PRINT and RAISERROR statements also return data through calling SQLError. PRINT statements cause the SQL statement execution to return SQL_SUCCESS_WITH_INFO, and a subsequent call to SQLError returns a SQLState of 01000. A RAISERROR with a severity of ten or lower behaves the same as PRINT. A RAISERROR with a severity of 11 or higher causes the execute to return SQL_ERROR, and a subsequent call to SQLError returns SQLState 37000. For example, the following statement returns SQL_SUCCESS_WITH_INFO:

SQLExecDirect(hstmt, "PRINT 'Some message'", SQL_NTS);

Calling SQLError then reports:

szSqlState = "01000", *pfNativeError = 0,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
                Some message"

The following statement returns SQL_SUCCESS_WITH_INFO:

SQLExecDirect(hstmt, "RAISERROR ('Sample error 1.', 10, -1)", SQL_NTS);

Calling SQLError then reports:

szSqlState = "01000", *pfNativeError = 50000,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
                Sample error 1."

The following statement returns SQL_ERROR:

SQLExecDirect(hstmt, "RAISERROR ('Sample error 2.', 11, -1)", SQL_NTS);

Calling SQLError then reports:

szSqlState = "37000", *pfNativeError = 50000,
szErrorMsg="[Microsoft][ODBC SQL Server Driver][SQL Server]
                Sample error 2."

The timing of calling SQLError is critical when output from PRINT or RAISERROR statements are included in a result set. The call to SQLError to retrieve the PRINT or RAISERROR output must be made immediately after the statement that receives SQL_ERROR or SQL_SUCCESS_WITH_INFO. This is straightforward when only a single SQL statement is executed, as in the examples above. In these cases, the call to SQLExecDirect or SQLExecute returns SQL_ERROR or SQL_SUCCESS_WITH_INFO and SQLError can then be called. It is less straightforward when coding loops to handle the output of a batch of SQL statements or when executing SQL Server stored procedures.

In this case, SQL Server returns a result set for every SELECT statement executed in a batch or stored procedure. If the batch or procedure contains PRINT or RAISERROR statements, the output for these is interleaved with the SELECT statement result sets. If the first statement in the batch or procedure is a PRINT or RAISERROR, the SQLExecute or SQLExecDirect returns SQL_SUCCESS_WITH_INFO or SQL_ERROR and the application needs to call SQLError until it returns SQL_NO_DATA_FOUND to retrieve the PRINT or RAISERROR information.

If the PRINT or RAISERROR statement comes after another SQL statement (such as a SELECT), the PRINT or RAISERROR information is returned when the SQLFetch or SQLExtendedFetch call for the preceding result set returns SQL_NO_DATA_FOUND.

For example, in the following procedure, the SQLExecute or SQLExecDirect call returns SQL_SUCCESS_WITH_INFO and a call to SQLError at that point returns the first print message. If the ODBC application then processes through the result set using SQLFetch, the application can get the second print statement by calling SQLError when SQLFetch returns SQL_NO_DATA_FOUND:

PRINT 'First PRINT Message.'
SELECT name FROM sysusers WHERE suid < 2
PRINT 'Second PRINT Message.'

Other Application Considerations

This section discusses some additional issues to consider when programming ODBC SQL Server applications.

Asynchronous Mode and SQLCancel

Some ODBC functions can operate either synchronously or asynchronously (see the ODBC 2.0 Programmer's Reference for the list of functions). The application can enable asynchronous operations for either a statement handle or a connection handle. If the option is set for a connection handle, it affects all statement handles on the connection handle. The application enables or disables asynchronous operations by calling SQLSetConnectOption or SQLSetStmtOption with fOption set to SQL_ASYNC_ENABLE and vParam set to SQL_ASYNC_ENABLE_ON or SQL_ASYNC_ENABLE_OFF.

When an application calls an ODBC function in synchronous mode, the driver does not return control to the application until it is notified that the server has completed the command.

When operating asynchronously, the driver immediately returns control to the application, even before sending the command to the server. The driver sets the return code to SQL_STILL_EXECUTING. The application is then free to perform other work.

To test for completion of the command, make the same function call with the same parameters to the driver. If the driver is still waiting for an answer from the server, it will again return SQL_STILL_EXECUTING. The application must keep testing the command periodically until it returns something other than SQL_STILL_EXECUTING. When the application gets some other return code, even SQL_ERROR, the command has completed.

Sometimes a command is outstanding for a long time. If the application needs to cancel the command without waiting for a reply, it can do so by calling SQLCancel with the same statement handle as the outstanding command. This is the only time SQLCancel should be used. Some programmers use SQLCancel when the application has processed part way through a result set and they want to cancel the rest of the result set. SQLMoreResults or SQLFreeStmt with fOption set to SQL_CLOSE should be used to cancel the remainder of an outstanding result set, not SQLCancel.

Multithread Applications

The SQL Server ODBC driver is a fully multithreaded driver. Writing a multithread application is an alternative to using asynchronous calls to have multiple ODBC calls outstanding. A thread can make a synchronous ODBC call, and other threads can process while the first thread is blocked waiting for the response to its call. This model is more efficient than making asynchronous calls because it eliminates the overhead of the repeated ODBC function calls testing for SQL_STILL_EXECUTING to see if the function has completed.

Asynchronous mode is still an effective method of processing. The performance improvements of a multithread model are not enough to justify rewriting asynchronous applications. If users are converting DB-Library applications that use the DB-Library asynchronous model, it is easier to convert them to the ODBC asynchronous model.

Multithread applications need to have coordinated calls to SQLError. After a message has been read from SQLError, it is no longer available to the next caller of SQLError. If a connection or statement handle is being shared between threads, one thread may read a message needed by the other thread.

SET Options Used by the Driver

The ODBC standard is closely matched to the ANSI SQL standard, and ODBC applications expect standard behavior from an ODBC driver. To make its behavior conform more closely with that defined in the ODBC standard, the SQL Server ODBC driver always uses any ANSI options available in the version of SQL Server to which it connects. The server exposes these ANSI options through the Transact-SQL SET statement. The driver also sets some other options to help it support the ODBC environment.

The SQL Server ODBC driver that ships with SQL Server 6.5 issues the following Transact-SQL SET statements:

  • Connect to SQL Server version 6.5:
    SET TEXTSIZE 2147483647
  • Connect to SQL Server version 6.0:
    SET TEXTSIZE 2147483647
  • Connect to SQL Server version 4.21a:
    SET TEXTSIZE 2147483647

The driver issues these statements itself; the ODBC application does nothing to request them. Setting these options allows ODBC applications using the driver to be more portable because the driver's behavior then matches the ANSI standard.

DB-Library based applications, including the SQL Server utilities, generally do not turn these options on. Sites observing different behavior between ODBC and DB-Library clients running the same SQL statement should not assume this points to a problem with the ODBC driver. They should first rerun the statement in the DB-Library environment (such as ISQL/w) with the same SET options as would be used by the SQL Server ODBC driver.

Because the SET options can be turned on and off at any time by users and applications, developers of stored procedures and triggers should also take care to test their procedures and triggers with the SET options listed above turned both on and off. This ensures that the procedures and triggers work correctly regardless of which options a particular connection has set on when it invokes the procedure or trigger.

The SET options used by the version 2.65 driver when connected to SQL Server 6.5 have the net effect of setting on three more ANSI options than in the 6.0 environment: ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS. These options can cause problems in existing stored procedures and triggers. The version 2.65.0240 driver that shipped with SQL Server 6.5 SP2 allows data sources and connection strings to turn these options off. For more information, see Microsoft Knowledge Base article Q149921.

The version 2.50 driver that shipped with SQL Server 6.0 also sets on the QUOTED_IDENTIFIER option. With this option set on, SQL statements should comply with the ANSI rule that character data strings be enclosed in single quotes and that only identifiers, such as table or column names, be enclosed in double quotes:

SELECT "au_fname"
FROM "authors"
WHERE "au_lname" = 'O''Brien'

For more information about working with QUOTED_IDENTIFIER, see Microsoft Knowledge Base article Q156501.

Like the ANSI options noted above, the version 2.65.0240 driver that shipped with SQL Server 6.5 SP2 allows SQLDriverConnect, SQLBrowseConnect, and data sources to specify that QUOTED_IDENTIFIER not be turned on.

ODBC applications should not use the Transact-SQL SET statement to turn these options on or off; they should set these options only in either the data source or the connection options. The logic in the driver depends on its correctly knowing the current state of the SET options. If the application issues the SET statements itself, the driver may generate incorrect SQL statements because it does not know that an option has been changed.

Diagnostic Messages

This section discusses how to interpret the error messages that are returned by the SQL Server ODBC driver. All ODBC functions have return codes. The ODBC header files have #define statements that equate the return codes to literals, such as SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, and SQL_ERROR. If a function returns SQL_SUCCESS_WITH_INFO, it means the function was successful but there is information available. If a function returns SQL_ERROR, it means the function failed and there is information available indicating the nature of the problem. To get these messages, the application can call SQLError. SQLError returns three parameters that have important information:

  • SQLState—a 5-byte character string with an ODBC error code.
  • pfNative—a signed doubleword holding whatever error code is returned by the native database.
  • szErrorMsg—a character string holding a header identifying the source of the error and the text of the error message.
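
For example, a sketch of draining all available diagnostics after a failure, assuming allocated henv, hdbc, and hstmt handles:

```c
UCHAR  szSQLState[6];
UCHAR  szErrorMsg[SQL_MAX_MESSAGE_LENGTH];
SDWORD pfNative;
SWORD  cbErrorMsg;

/* SQLError returns SQL_NO_DATA_FOUND once all messages are read. */
while (SQLError(henv, hdbc, hstmt, szSQLState, &pfNative,
                szErrorMsg, sizeof(szErrorMsg), &cbErrorMsg) == SQL_SUCCESS)
{
    printf("SQLState: %s, pfNative = %ld\n%s\n",
           szSQLState, (long)pfNative, szErrorMsg);
}
```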

Identifying the Source of an Error

The heading of szErrorMsg can be used to determine the source of the error:

[Microsoft][ODBC Driver Manager]

These are errors encountered by the ODBC Driver Manager.
[Microsoft][ODBC Cursor Library]

These are errors encountered by the ODBC Cursor Library.
[Microsoft][ODBC SQL Server Driver]

If there are no other nodes identifying other components, these are errors encountered by the driver.
[Microsoft][ODBC SQL Server Driver][Net-Libraryname]

These are errors encountered by the Net-Library, where Net-Libraryname is the name of a SQL Server Net-Library (see "Setup and Connecting" for a list of the names). This also includes errors raised from the underlying network protocol because these errors are reported to the driver from the Net-Library. In these errors, the pfNative code contains the actual error returned by the network. (For more information about pfNative codes, see "pfNative Error Codes," later in this paper.) The remainder of the message contains two parts: the Net-Library function called, and (within parentheses afterward) the underlying network API function called.
[Microsoft][ODBC SQL Server Driver][SQL Server]

These are errors encountered by SQL Server. In this case, the pfNative parameter is the SQL Server error code.

For example, when an application attempts to open a named-pipe connection to a server that is currently shut down, the error string returned is:

[Microsoft][ODBC SQL Server Driver][dbnmpntw]ConnectionOpen (CreateFile())

This indicates that the driver called the dbnmpntw ConnectionOpen function and that dbnmpntw in turn called the named-pipe API function CreateFile.

pfNative Error Codes

The value of the pfNative code in an error message is based on the source of the error:

  • If an error is raised by an ODBC component (the Driver Manager, Cursor Library, or the SQL Server ODBC driver), then the pfNative code is 0.
  • If an error is raised by the server, the pfNative code is the SQL Server error code. For more information about SQL Server errors, see chapters 25 and 26 in the Microsoft SQL Server Administrator's Companion.
  • If an error is raised by the Net-Library, the pfNative code is the error code returned to the Net-Library from the underlying network protocol stack.

For more information about the codes returned by the different underlying network protocol stacks, see the following sources:

  • dbnmpntw, dbnmp3

    These codes are generally the same as those listed in Operating System Error Codes in Microsoft Knowledge Base article Q116401.

  • dbmssocn, dbmssoc3

    These codes, returned by the Winsock API, are listed in "Appendix A, Error Codes," of the Windows Sockets Specification 1.1. The Windows Sockets Specification can be found on the MSDN Library compact disc.

  • dbmsspxn, dbmsspx3

    These codes, returned from Novell, are in Novell NetWare Client Protocol Transport API for C under the section for the API function listed in the szErrorMsg. For example, if the pfNative is 253, and szErrorMsg lists SPXListenForSequencedPacket as the function, the reference manual states a 0xFD (253) return from SPX Listen For Sequenced Packet is a Packet Overflow.

  • dbmsvinn, dbmsvin3

    These codes, returned from Banyan Vines, are listed in the Vines Client Developer's Guide.

  • dbmsrpcn, dbmsrpc3

    These codes, returned by the RPC API, are listed in the RPC section of Winerror.h.

Mapping SQLState Codes

The values for the SQLState code are listed in the Microsoft ODBC 2.0 Programmer's Reference and SDK Guide.

Diagnosing and Profiling Applications

Programmers can use several tools to trace the SQL statements received and generated by the SQL Server ODBC driver. They can also use the Windows NT Performance Monitor and SQL Server ODBC driver profiling features to analyze the performance of the driver.

Tracing SQL Statements

Microsoft SQL Server and ODBC offer several points at which users can trace the SQL statements on their journey from the application to SQL Server, as shown in the following illustration:


ODBC Driver Manager Trace

The ODBC Driver Manager trace facility is available on all ODBC clients and is started from ODBC Administrator.

To start trace from ODBC Administrator

  1. In the ODBC Administrator window, click Options.
  2. Click the trace options you want.

The ODBC trace facility traces all calls made to any ODBC data source on the client. It records ODBC calls immediately after they come into the Driver Manager from the application. This is helpful in debugging problems that the Driver Manager may have when connecting to a driver.

This is a fairly minimal trace, however, and is used only when the second tool, ODBCSpy, is not available.

Here's an example of an ODBC Driver Manager trace output:

SQLAllocEnv (phenv00145E08);
SQLAllocConnect (henv00145E08, phdbc00145878);
SQLDriverConnect (hdbc00145878, hwnd00870544, "DSN=ab60def;UID=sa;PWD=;", -3, szConnStrOut,0, pcbConnStrOut, 1);
SQLError(henv00000000,hdbc00145878,hstmt00000000, szSQLState, pfNativeError, szErrorMsg, 512, pcbErrorMsg);
SQLAllocStmt(hdbc00145878, phstmt0014A990);
SQLExecDirect(hstmt0014A990, "select * from discounts",    -3);

A lot of information is missing from this output. There is no indication of the return code for each function call. There is no way to tell if the SQLDriverConnect call was successful; however, the fact that the next call was to SQLError could indicate some problem. Since the trace does not show what szErrorMsg string or SQLState value was returned by SQLError, there is no way to tell what the problem might have been. The fact that the application went on to allocate a statement handle and execute a statement seems to show that no major problem was encountered.

When Driver Manager tracing is on, all calls to ODBC drivers on that client are traced. There is no way to trace only a specific data source.

ODBCSpy Trace

The ODBCSpy utility ships with the ODBC SDK and can be used to get an informative trace of all the ODBC calls made to a specific ODBC data source. ODBCSpy traces calls as they are passed from the Driver Manager to the ODBC driver. It shows all of the parameters passed for each call to the driver and the information returned from the driver. If an error is encountered, ODBCSpy calls SQLError for all error messages returned and logs all information about the errors in the trace.

Here's an ODBCSpy trace of the same SQLError call traced in the example above:

   [85][Microsoft][ODBC SQL Server Driver] 
      [SQL Server] Changed database context to 'master'

This trace output includes more useful information. It shows that the SQLError function itself returned SQL_SUCCESS. (The entry for SQLDriverConnect would have shown that it returned SQL_SUCCESS_WITH_INFO, not SQL_ERROR.) The trace also shows that SQLError returned a SQLState of 01000, a pfNative of 5701, and a szErrorMsg string that indicates SQL Server has changed the connection context to the master database.

There are also third-party ODBC tracing tools available.

SQL Trace

SQL Trace, a trace utility introduced in SQL Server 6.5, uses Open Data Services to intercept and trace all SQL statements coming in to SQL Server. SQL Trace is extremely valuable for determining if a problem is due to the Transact-SQL statements the driver generates to support the ODBC commands coming from the application. A programmer can use ODBCSpy to see exactly what comes from the application to the SQL Server ODBC driver, and then use SQL Trace to see what the driver actually sends to the server.

If an application does:

SQLExecDirect(hstmt, "exec sp_helpdb 'pubs' ", SQL_NTS);

SQL Trace shows:

-- 1/29/97 17:13:23.530 SQL (ID=3, SPID=12, User=sa(MyDomain\MyNTAccount), App='Microsoft ODBC SDK v2.0', Host='MyServer'(d3) )
exec sp_helpdb 'pubs'

SQL Trace can be used to dynamically trace statements coming in from different clients to a server. Sites that have servers earlier than SQL Server 6.5 can use an older, unsupported version of the utility called SQLEye. SQLEye is available on the Microsoft TechNet compact disc.

SQL Server Trace Flags

SQL Server has a DBCC trace flag (4032) that causes the server to trace incoming SQL statements. SQL Trace is much easier to use, so sites that have SQL Trace or SQLEye generally use those tools instead of the trace flags.

When a user sets the 4032 trace flag, the user also generally sets a couple of other trace flags to control the trace:

  • With the 3605 flag, SQL Server traces SQL statements to the SQL Server error log (C:\Mssql\Log\Errorlog).
  • With the 3604 flag, the trace output is returned to the application that set the flags.
  • With the -1 flag, SQL Server traces all SQL statements coming into the server, not just the ones from the connection that set the flags.

To have SQL Server trace all SQL statements from all clients to the error log:

SQLExecDirect(hstmt, "dbcc traceon(3605, 4032, -1)", SQL_NTS);

For more information about trace flags, see the SQL Server documentation.

Windows NT Performance Monitor

Windows NT Performance Monitor is a powerful tool for profiling the performance of SQL Server applications. SQL Server installs several counters in Performance Monitor (for more information, see the Microsoft SQL Server Administrator's Companion). In SQL Server 6.5, users can also add up to 10 user-defined counters (for more information, see What's New in SQL Server 6.5). To get a better idea of how your query impacts the operation of the server, use the SQL Server counters in Performance Monitor to track the resources used by your application.

ODBC Driver Profiling Features

The SQL Server ODBC driver version 2.65.0201 and later offers a couple of features that aid in analyzing performance of ODBC applications:

  • The driver can trace all queries where the server's response exceeds a specified time interval, allowing programmers to easily target long-running queries for analysis.
  • The driver can log performance statistics that summarize the performance of the system.

Logging Long-Running Queries

Applications can request that the driver write all queries whose response exceeds a specified time limit to a file for later analysis by the programmer and database administrator. The log can be turned on in two ways.

  • When an application connects using a data source that specifies long query profiling, the SQL Server ODBC driver logs long-running queries from the time the application connects until it disconnects. For more information, see "Setup and Connecting."
  • The application can call SQLSetConnectOption to turn logging on and off dynamically.

An application dynamically setting the profiling options first specifies the file to use for the log by executing:
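
For example, using the driver-specific SQL_COPT_SS_PERF_QUERY_LOG option defined in Odbcss.h (the file name here is hypothetical):

```c
/* Name the file that receives the long-running query log. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_QUERY_LOG,
                    (UDWORD)"C:\\Odbcqry.log");
```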


It then sets the interval by executing:
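
For example, assuming the SQL_COPT_SS_PERF_QUERY_INTERVAL option name from Odbcss.h:

```c
/* Log any query that does not return within 1 second. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_QUERY_INTERVAL, 1);
```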


The number specified is in seconds, so the call shown above causes all queries that do not return within one second to be logged.

Note: The query profiling interval in a data source is specified in units of milliseconds.

After these options are enabled, the application can turn logging on and off dynamically by executing:
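
A sketch using the SQL_PERF_START and SQL_PERF_STOP literals with the SQL_COPT_SS_PERF_QUERY option, as defined in Odbcss.h:

```c
/* Turn long-running query logging on ... */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_QUERY, SQL_PERF_START);

/* ... and later turn it back off. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_QUERY, SQL_PERF_STOP);
```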


Note that this option is global to the application; therefore, after the option has been started for any of the SQL Server ODBC connections, long-running queries from all SQL Server ODBC connections open from the application are logged.

Logging Performance Data

Applications can request that the driver log performance data for the driver. As with long-running query logging, the performance log can be turned on either by the application or by specifying performance logging in the data source using ODBC Administrator. For more information, see "Setup and Connecting."

When dynamically turning on performance logging by calling SQLSetConnectOption, applications can either write the performance data to a log file or read the data into the application using a sqlperf structure defined in the Odbcss.h header file.

The following commands start and stop performance-data gathering:
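
A sketch using the SQL_COPT_SS_PERF_DATA option defined in Odbcss.h:

```c
/* Start sampling performance data ... */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_DATA, SQL_PERF_START);

/* ... and later stop the sampling. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_DATA, SQL_PERF_STOP);
```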


Performance statistics are recorded in a data structure named sqlperf (for an explanation of the sqlperf variables, see the appendix). The statistics are global for all connections made through the driver by the application. For example, if the application starts the performance statistics and opens three connections, the statistics are global for all three connections.

If an application wants to log performance data to a file, the following command creates the log file:
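
For example, assuming the SQL_COPT_SS_PERF_DATA_LOG option name from Odbcss.h (the file name is hypothetical):

```c
/* Name the file that receives the performance statistics log. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_DATA_LOG,
                    (UDWORD)"C:\\Odbcperf.log");
```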


The log file is a tab-delimited text file that can be opened in Microsoft Excel (specify tab delimited in the wizard that appears). Most other spreadsheet products also support opening a tab-delimited text file.

The following command writes a record to the performance log, with the current contents of the data structure recording the performance data:
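
For example, assuming the SQL_COPT_SS_PERF_DATA_LOG_NOW option name from Odbcss.h (the vParam value is NULL for this option):

```c
/* Write one record with the current statistics to the log file. */
SQLSetConnectOption(hdbc, SQL_COPT_SS_PERF_DATA_LOG_NOW, NULL);
```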


The application does not need to set up a performance log; it could instead pull the performance data by using SQLGetConnectOption to get a pointer to the sqlperf structure. This structure is declared in a typedef in the Odbcss.h header file. The following statements are an example of pulling the statistics into the application:

SQLPERF *PerfPtr;

// Initialize PerfPtr with a pointer to the driver's performance data
// (assumes the SQLPERF typedef and the SQL_COPT_SS_PERF_DATA option
// defined in Odbcss.h).
SQLGetConnectOption(hdbc, SQL_COPT_SS_PERF_DATA, &PerfPtr);
printf("SQLSelects = %d, SQLSelectRows = %d\n",
       PerfPtr->SQLSelects, PerfPtr->SQLSelectRows);

If the application uses a data source that has the performance-statistics profiling option activated, the driver writes the statistics header information to the log file and starts accumulating the statistics in its internal data structure when the application makes its first connection using the driver. When the last connection to the SQL Server ODBC driver from the application is closed, the driver writes out the global, accumulated, performance statistics.

Profiling Considerations

The fact that profiling is global to the driver governs the behavior of the log files. When an application connects to a data source that specifies profiling, the driver starts a log file and begins logging information from all connections active from the application to the SQL Server ODBC driver from that point forward. Even connections to SQL Server data sources that do not specify profiling are recorded because the profiling is done globally for the driver.

If the application does a SQLFreeEnv, the ODBC Driver Manager unloads the driver. At this point, both the long-running query log and the performance statistics logs hold the information from the old connections. If the application then makes another connection to the data source that specifies profiling, the driver is reloaded, and it overwrites the old copy of the log file.

If an application connects to a data source that specifies profiling, and then a second application connects to the same data source, the second application does not get control of the log file and therefore is not able to log any performance statistics or long-running queries. If the second application makes the connection after the first application disconnects, the driver overwrites the first application's log file with the one for the second application.

Note that if an application connects to a data source that has either the long-running query or performance statistics enabled, the driver returns SQL_ERROR if the application calls SQLSetConnectOption to enable logging. A call to SQLError then returns the following message:

SQLState: 01000, pfNative = 0
szErrorMsg: [Microsoft][ODBC SQL Server Driver]
            An error has occurred during an attempt to access
            the log file, logging disabled.


Conclusion

This paper explains the interaction of Microsoft SQL Server and its associated ODBC driver. This knowledge helps you produce efficient ODBC applications that optimize the interaction between the driver and the server. It helps you avoid code that results in poor performance and describes how to take advantage of the unique features of Microsoft SQL Server and the ODBC driver. This paper also helps programmers and administrators to diagnose problems encountered by ODBC applications running on Microsoft SQL Server.


Finding More Information

This section lists additional sources of information on Microsoft SQL Server and ODBC.

SQL Server Documentation

The primary reference for using ODBC with SQL Server 6.5 is Programming ODBC for Microsoft SQL Server. Much of this material is also in the Help file for the driver, Drvssrvr.hlp; however, the latest version of Drvssrvr.hlp, shipped with both SQL Server 6.0 and 6.5, does not cover 6.5 features.

Programming ODBC for Microsoft SQL Server and Drvssrvr.hlp are included with SQL Server 6.5. Other products that ship the SQL Server ODBC driver also ship the corresponding driver Help file, Drvssrvr.hlp.

ODBC Programmer's Reference

The primary reference for ODBC is:

Microsoft ODBC 2.0 Programmer's Reference and SDK Guide
Microsoft Press®
ISBN 1-55615-658-8

Microsoft ODBC 3.0 Software Development Kit and Programmer's Reference
Microsoft Press
ISBN 1-57231-516-4

Other ODBC Books

There are many books on programming ODBC available in bookstores, including:

Inside ODBC
Kyle Geiger
Microsoft Press
ISBN 1-55615-815-7

Using ODBC 2
Robert Gryphon with Luc Charpentier, Jon Oelschlager, Andrew Shoemaker, Jim Cross, and Albert W. Lilley
Que Corporation
ISBN 0-7897-0015-8

The ODBC Solution
Robert Signore, John Creamer, Michael Stegman
ISBN 0-07-911880-1

Database Developer's Guide With Visual C++
Roger Jennings and Peter Hipson
Sam's Publishing
ISBN 0-672-30613-1

Teach Yourself ODBC Programming in 21 Days
Bill Whiting, Bryan Morgan, and Jeff Perkins
Sam's Publishing
ISBN 0-672-30609-3


Appendix

The appendix first defines all of the driver-specific options defined in Odbcss.h and then has two sample applications that illustrate processing text and image data.


Odbcss.h is a header file containing the definitions used for all of the driver-specific options in the SQL Server ODBC driver. Odbcss.h is distributed with SQL Server Workstation and with SQL Server 6.5 SP2. The version distributed with SP2 has a few extra connection options related to controlling the ANSI options used by the driver. The list below relates to the 6.5 SP2 version of Odbcss.h.


The following options can be set using SQLSetConnectOption. The bulleted literals are specified as the fOption parameter; the literals grouped under each bulleted fOption are specified as vParam.


(replaces SQL_REMOTE_PWD)

Controls generation of stored procedures on SQLPrepare.
vParam value      Description
SQL_UP_OFF        Do not generate procedures on SQLPrepare.
SQL_UP_ON         Generate procedures for SQLPrepare; do not drop until SQLDisconnect.
SQL_UP_ON_DROP    Generate procedures for SQLPrepare; drop on SQLFreeStmt (SQL_DROP), SQLDisconnect, or next SQLPrepare.

Controls use of integrated security when connecting to SQL Server.
vParam value      Description
SQL_IS_OFF        Integrated security is not used.
SQL_IS_ON         Integrated security is used.

Controls whether cursors are dropped at the end of a transaction.
vParam value      Description
SQL_PC_OFF        Cursors are closed on SQLTransact.
SQL_PC_ON         Cursors remain open on SQLTransact.

vParam value      Description
SQL_UD_NOTSET     No user-data pointer set.

Controls ANSI to OEM conversion of data.
SQL_AO_DEFAULT = SQL_AO_OFF, unless the DSN OEM/ANSI checkbox is selected.
vParam value      Description
SQL_AO_OFF        ANSI/OEM translation is not performed.
SQL_AO_ON         ANSI/OEM translation is performed.

Controls enlistment in a distributed transaction managed by the Microsoft Distributed Transaction Coordinator.
vParam value      Description
SQL_DTC_DONE      Delimits the end of a distributed transaction.

Enlists in a distributed transaction managed by a transaction manager that complies with the X/Open XA standard.
The vParam value is a pointer to a variable defined using the structure:
typedef struct SQLXaTranTAG {
    void FAR *transManager;
    void FAR *xaTransID;
    DWORD     dwErrorInfo;
} SQLXaTran;

Used to see if connection is still active.
vParam value      Description
SQL_CD_FALSE      Connection is open/available.
SQL_CD_TRUE       Connection is closed/dead.

Controls the use of SQL Server Fallback Connections.
vParam value      Description
SQL_FB_OFF        Fallback connections are disabled.
SQL_FB_ON         Fallback connections are enabled.

Controls the logging of driver performance data.
vParam value      Description
SQL_PERF_START    Starts the driver sampling performance data.
SQL_PERF_STOP     Stops the counters from sampling performance data.

Specifies the file in which to log performance data.
The vParam value is a pointer to a null-terminated string that contains the file name.

Specifies the interval for the trigger point to log a long-running query.
The vParam value is an integer specifying the interval in seconds.

Specifies the file in which to log long running queries.
The vParam value is a pointer to a null-terminated string that contains the file name.

Controls the logging of long running queries.
vParam value      Description
SQL_PERF_START    Starts the driver logging long-running queries.
SQL_PERF_STOP     Stops the counters from logging long-running queries.

Instructs the driver to write a performance statistics record to the log.
The vParam value is NULL.

Controls setting of QUOTED_IDENTIFIER (can only be set before connecting).
vParam value      Description
SQL_QI_OFF        Quoted identifiers are not supported.
SQL_QI_ON         Quoted identifiers are supported.

Controls setting of ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS (can only be set before connecting).
vParam value      Description
SQL_AD_OFF        The ANSI options are not set on.
SQL_AD_ON         The ANSI options are set on.


The following options can be set using SQLSetStmtOption. The bulleted literals are specified as the fOption parameter; the literals grouped under each bulleted fOption are specified as vParam.


Controls logging of text/image operations.
vParam value      Description
SQL_TL_OFF        No logging on text-pointer operations.
SQL_TL_ON         Logging occurs on text-pointer operations.

Expose FOR BROWSE hidden columns.
vParam value      Description
SQL_HC_OFF        BROWSE columns are hidden.
SQL_HC_ON         BROWSE columns are exposed.



If SQLGetStmtOption is called with fOption = SQL_SOPT_SS_CURRENT_COMMAND, the driver returns an integer to pvParam indicating which command in the batch is the one whose results are being processed. The first command in the batch is 1.
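
A sketch of that call, assuming an allocated hstmt that is processing a batch:

```c
SDWORD currentCommand;

/* Ask which command in the batch produced the current results. */
SQLGetStmtOption(hstmt, SQL_SOPT_SS_CURRENT_COMMAND, &currentCommand);
```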


The following values can be retrieved using SQLColAttributes. The bulleted literals are specified as the fDescType parameter; the literals grouped under each bulleted fDescType are the values returned as pfDesc.




pfDesc returns the number of columns in an ORDER BY clause.

The SELECT list column ID of a column that appears in the SQL statement ORDER BY clause.
pfDesc returns the column ID.

pfDesc is TRUE if the column's data can vary in length, otherwise FALSE.

pfDesc returns the number of compute clauses in the current result set.

pfDesc returns the compute ID of a compute row.

Returns a bylist: an array of the column IDs of the columns participating in a Transact-SQL COMPUTE BY clause.
pfDesc returns a pointer to the bylist for a compute row.

pfDesc returns the SELECT list column ID to which a COMPUTE BY aggregate refers.

Identifies the aggregate operation the COMPUTE BY applied to a column.
pfDesc returns:

The maximum length of data for a column.

The column is hidden. Applies only if the SELECT was FOR BROWSE and the driver-specific statement option SQL_SOPT_SS_HIDDEN_COLUMNS is set to SQL_HC_ON.

The column is a key column.

The base column name.


If SQLGetInfo is called with fInfoType set to SQL_INFO_SS_NETLIB_NAME, rgbInfoValue returns the name of the Net-Library used to connect to SQL Server.
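
A sketch of that call, assuming a connected hdbc (the buffer size is an arbitrary choice for illustration):

```c
UCHAR szNetlib[64];
SWORD cbNetlib;

/* Retrieve the name of the Net-Library used for this connection. */
SQLGetInfo(hdbc, SQL_INFO_SS_NETLIB_NAME, szNetlib,
           sizeof(szNetlib), &cbNetlib);
```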

SQLPerf Structure

The meanings of the variables defined in the sqlperf structure are given in this section. These descriptions also apply to the statistics recorded in the performance log file. For a description of how to gather these statistics, see "Diagnosing and Profiling Applications."

Application Profile Statistics

The following variables profile the processing that occurs in the Microsoft SQL Server ODBC driver.

Application profile statistics    Description

TimerResolution      The minimum resolution of the server's clock time in milliseconds. This is usually reported as 0 (zero). The only time this statistic should be considered is if the number reported is large. If the minimum resolution of the server clock is larger than the likely interval for some of the timer-based statistics, those statistics may be inflated.
SQLidu               The number of INSERT, DELETE, or UPDATE statements processed since SQL_PERF_START.
SQLiduRows           The number of rows affected by INSERT, DELETE, or UPDATE statements processed since SQL_PERF_START.
SQLSelects           The number of SELECT statements processed since SQL_PERF_START.
SQLSelectRows        The number of rows selected since SQL_PERF_START.
Transactions         The number of user transactions since SQL_PERF_START. For example, suppose an application had run the following statements:


SQLTransact(henv, hdbc, SQL_COMMIT);

SQLTransact(henv, hdbc, SQL_ROLLBACK);

This constitutes two user transactions. Even though the second transaction was rolled back, it still counted as a transaction. Also, when an ODBC application is running with SQL_AUTOCOMMIT_ON, each individual command is considered a transaction.

SQLPrepares          The number of SQLPrepare functions executed since SQL_PERF_START.
ExecDirects          The number of SQLExecDirect functions executed since SQL_PERF_START.
SQLExecutes          The number of SQLExecute functions executed since SQL_PERF_START.
CursorOpens          The number of times the driver has opened a server cursor since SQL_PERF_START.
CursorSize           The number of rows in the result sets opened by cursors since SQL_PERF_START.
CursorUsed           The number of rows actually retrieved through the driver from cursors since SQL_PERF_START.
PercentCursorUsed    Here is the equation used to figure the percentage of the cursor used:

PercentCursorUsed = 100 * (CursorUsed/CursorSize)

For example, if an application causes the driver to open a server cursor to do "select * from authors," 23 rows are in the result set for the select. If the application then fetches only three of these rows, CursorUsed/CursorSize is 3/23, so PercentCursorUsed is 13.043478.

AvgFetchTime         Here is the equation used to figure the average fetch time:

AvgFetchTime = SQLFetchTime/SQLFetchCount

AvgCursorSizeHere is the equation used to figure average cursor size:

AvgCursorSize = CursorSize/CursorOpens

AvgCursorUsedHere is the equation used to figure average number of cursors used:

AvgCursorUsed = CursorUsed/CursorOpens

SQLFetchTime: The cumulative amount of time it took fetches against server cursors to complete.
SQLFetchCount: The number of fetches done against server cursors since SQL_PERF_START.
CurrentStmtCount: The number of statement handles currently open on all connections open in the driver.
MaxOpenStmt: The maximum number of concurrently opened statement handles since SQL_PERF_START.
SumOpenStmt: The number of statement handles that have been opened since SQL_PERF_START.
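The derived statistics in this table are simple ratios of the raw counters. The sketch below shows the arithmetic in C; the function names are illustrative helpers, not part of the driver:

```c
#include <math.h>

/* Derived application-profile statistics, computed from the raw
   counters exactly as the equations in the table describe.
   Doubles are used to avoid integer division. */

double PercentCursorUsed(double CursorUsed, double CursorSize)
{
    /* Fraction of cursor rows actually fetched, as a percentage. */
    return (CursorUsed / CursorSize) * 100.0;
}

double AvgFetchTime(double SQLFetchTime, double SQLFetchCount)
{
    /* Average time per fetch against server cursors. */
    return SQLFetchTime / SQLFetchCount;
}

double AvgCursorSize(double CursorSize, double CursorOpens)
{
    /* Average number of result-set rows per cursor opened. */
    return CursorSize / CursorOpens;
}

double AvgCursorUsed(double CursorUsed, double CursorOpens)
{
    /* Average number of rows actually fetched per cursor opened. */
    return CursorUsed / CursorOpens;
}
```

With the 3-of-23-rows example above, PercentCursorUsed(3.0, 23.0) yields 13.043478.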

Connection Statistics

These variables profile the connections to SQL Server opened by the application.

CurrentConnectionCount: The current number of active connection handles the application has open to the server.
MaxConnectionsOpened: The maximum number of concurrent connection handles opened since SQL_PERF_START.
SumConnectionsOpened: The sum of the number of connection handles that have been opened since SQL_PERF_START.
SumConnectionTime: The sum of the amount of time for which all of the connections have been open since SQL_PERF_START. For example, if an application opened 10 connections and maintained each connection for 5 seconds, SumConnectionTime would be 50 seconds.
AvgTimeOpened: Here is the equation used to figure the average time connections are open:

AvgTimeOpened = SumConnectionTime / SumConnectionsOpened
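Worked through the SumConnectionTime example (10 connections held open 5 seconds each), the average comes out to 5 seconds per connection. A minimal sketch, with an illustrative helper name:

```c
/* Average time each connection stayed open, from the two cumulative
   counters in the connection-statistics log. */
double AvgTimeOpened(double SumConnectionTime, double SumConnectionsOpened)
{
    return SumConnectionTime / SumConnectionsOpened;
}
```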

Network Statistics

The network packet statistics reported by the driver relate to the TDS packets (for more information about TDS, see "Architecture"). The size of a TDS packet is either the server's default setting specified with sp_configure 'network packet size' or the size the ODBC client requests through:

SQLSetConnectOption(hdbc, SQL_PACKET_SIZE, NNNN) 

These packets may be larger than the size of the network packets actually sent by the underlying protocol stack (such as TCP/IP or SPX/IPX). The SQL Server Net-Library DLLs and the underlying protocol stack are the components that map the TDS packets onto the network packets, but this is hidden from both the SQL Server ODBC driver and the DB-Library DLL.

ServerRndTrips: The number of times the driver sent commands to the server and got a reply back.
BuffersSent: The number of TDS packets sent to SQL Server by the driver since SQL_PERF_START. Large commands can take multiple buffers, so if a large command sent to the server fills six packets, ServerRndTrips is incremented by one and BuffersSent by six.
BuffersRec: The number of TDS packets received by the driver from SQL Server since the application started using the driver.
BytesSent: The number of bytes of data sent to SQL Server in TDS packets since the application started using the driver.
BytesRec: The number of bytes of data in TDS packets received by the driver from SQL Server since the application started using the driver.
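The relationship between command size and BuffersSent is a ceiling division by the TDS packet size. The hypothetical helper below illustrates this; assuming for the example a 4,096-byte packet size (the actual size depends on the server configuration or SQL_PACKET_SIZE), a 24,000-byte command occupies six buffers but still counts as one round trip:

```c
/* Estimate how many TDS buffers a command of cbCommand bytes occupies
   at a given TDS packet size. ServerRndTrips still increments by one
   for the whole command, however many buffers it takes. */
long BuffersForCommand(long cbCommand, long cbPacketSize)
{
    return (cbCommand + cbPacketSize - 1) / cbPacketSize;  /* ceiling */
}
```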

Time Statistics

These are the time statistics.

MsExecutionTime: The cumulative amount of time the driver spent doing its processing since SQL_PERF_START, including the time it spent waiting for replies from the server.
MsNetworkServerTime: The cumulative amount of time the driver spent waiting for replies from the server.
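Because MsExecutionTime includes the time spent waiting on the server, the difference of the two counters approximates the time spent in the driver itself. A small sketch (the helper name is illustrative):

```c
/* Approximate milliseconds spent in driver processing alone, by
   subtracting server-wait time from total driver time. */
long DriverOverheadMs(long MsExecutionTime, long MsNetworkServerTime)
{
    return MsExecutionTime - MsNetworkServerTime;
}
```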

Putimage.c and Getimage.c

The following sample programs are discussed in "Data-At-Execution and Text and Image Columns." Both depend on a table having been created as follows:

CREATE TABLE emp2 (name CHAR(30), age FLOAT, 
                  birthday DATETIME, BigBin IMAGE)

If Putimage is compiled and run first, then Getimage can be used to read the data. To confirm that all 300,000 bytes of image data have been entered by Putimage, run the following from ISQL/w:

SELECT name, age, birthday, BinLen = datalength(BigBin)
FROM emp2

Note: Some of the error checking has been removed for clarity. Also, both programs use the same function, ProcessLogMessages, whose source code has been deleted from Getimage.c to save space.


// Sample application to write SQL_LONGVARBINARY data using SQLPutData.
// Assumes the DSN has the table:
//    CREATE TABLE emp2 (name CHAR(30), age FLOAT,
//                       birthday DATETIME, BigBin IMAGE)
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#define MAXDSN       25
#define MAXUID       25
#define MAXAUTHSTR   25
#define MAXBUFLEN    255
#define SIZEOFTEXT   300000
char   logstring[MAXBUFLEN] = "";
void   ProcessLogMessages(HENV plm_henv, HDBC plm_hdbc,
             HSTMT plm_hstmt, char *logstring);
int main()
{
   RETCODE retcode;
   HENV    henv;
   HDBC    hdbc1;
   HSTMT   hstmt1;
   UCHAR   szDSN[MAXDSN+1] = "ab65def",
      szUID[MAXUID+1] = "sa",
      szAuthStr[MAXAUTHSTR+1] = "password";
   // SQLBindParameter variables.
   SDWORD   cbTextSize, lbytes;
   // SQLParamData variable.
   PTR   pParmID;
   // SQLPutData variables. Data[] is a batch of sample bytes sent
   // repeatedly until SIZEOFTEXT bytes in total have been put.
   UCHAR   Data[] = "abcdefghijklmnopqrstuvwxyz0123456789";
   SDWORD   cbBatch = (SDWORD)sizeof(Data)-1;
   // Allocate the ODBC environment and save handle.
   retcode = SQLAllocEnv(&henv);
   // Allocate ODBC connection and connect.
   retcode = SQLAllocConnect(henv, &hdbc1);
   retcode = SQLConnect(hdbc1, szDSN, (SWORD)strlen(szDSN),
               szUID, (SWORD)strlen(szUID),
               szAuthStr, (SWORD)strlen(szAuthStr));
   // Print info messages returned.
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) ) {
      ProcessLogMessages(henv, hdbc1, SQL_NULL_HSTMT,
                "SQLConnect() Failed\n\n");
      return 1;
   }
   else {
      ProcessLogMessages(henv, hdbc1, SQL_NULL_HSTMT,
                "\nConnect Successful\n\n");
   }
   // Allocate a statement handle.
   retcode = SQLAllocStmt(hdbc1, &hstmt1);
   // Let ODBC know total length of data to send.
   lbytes = SIZEOFTEXT;
   cbTextSize = SQL_LEN_DATA_AT_EXEC(lbytes);

   // Bind the parameter.
   retcode = SQLBindParameter(hstmt1,   // hstmt
         1,                  // ipar
         SQL_PARAM_INPUT,    // fParamType
         SQL_C_BINARY,       // fCType
         SQL_LONGVARBINARY,  // fSqlType
         lbytes,             // cbColDef
         0,                  // ibScale
         (VOID *)1,          // rgbValue
         0,                  // cbValueMax
         &cbTextSize);       // pcbValue
   if ( (retcode != SQL_SUCCESS) &&
      (retcode != SQL_SUCCESS_WITH_INFO) ) {
         ProcessLogMessages(henv, hdbc1, hstmt1,
          "SQLBindParameter hstmt1 Failed\n\n");
   }
   // Execute an INSERT with a parameter marker for the image column.
   // (The non-image column values are sample data.)
   retcode = SQLExecDirect(hstmt1,
         "INSERT INTO emp2 VALUES ('Paul Borm', 46.0, '09-13-96', ?)",
         SQL_NTS);
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) &&
        (retcode != SQL_NEED_DATA) ) {
         ProcessLogMessages(henv, hdbc1, hstmt1,
                 "SQLExecDirect Failed\n\n");
   }
   // Get ID of parameter that needs data.
   retcode = SQLParamData(hstmt1, &pParmID);
   // If data is needed for the Data-At-Execution parameter:
   if (retcode == SQL_NEED_DATA) {
      // Send all but the final batch.
      while (lbytes > cbBatch) {
         SQLPutData(hstmt1, Data, cbBatch);
         lbytes -= cbBatch;
      }  // End while.
      // Put final batch.
      SQLPutData(hstmt1, Data, lbytes);
   }
   else { // If not SQL_NEED_DATA, is some error.
         ProcessLogMessages(henv, hdbc1, hstmt1,
                 "SQLParamData Failed\n\n");
   }  // end if
   // Make final SQLParamData call to signal end of data.
   retcode = SQLParamData(hstmt1, &pParmID);
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) &&
        (retcode != SQL_NEED_DATA) ) {
         ProcessLogMessages(henv, hdbc1, hstmt1,
                "SQLParamData Failed\n\n");
   }
   /* Clean up. */
   SQLFreeStmt(hstmt1, SQL_DROP);
   SQLFreeConnect(hdbc1);
   SQLFreeEnv(henv);
   return 0;
}

void ProcessLogMessages(HENV plm_henv, HDBC plm_hdbc,
                        HSTMT plm_hstmt, char *logstring)
{
   RETCODE   plm_retcode = SQL_SUCCESS;
   UCHAR   plm_szSqlState[MAXBUFLEN] = "",
               plm_szErrorMsg[MAXBUFLEN] = "";
   SDWORD   plm_pfNativeError = 0L;
   SWORD   plm_pcbErrorMsg = 0;

   printf("%s", logstring);
   while (plm_retcode != SQL_NO_DATA_FOUND) {
      plm_retcode = SQLError(plm_henv, plm_hdbc,
                plm_hstmt, plm_szSqlState,
                &plm_pfNativeError,
                plm_szErrorMsg, MAXBUFLEN - 1,
                &plm_pcbErrorMsg);
      if (plm_retcode != SQL_NO_DATA_FOUND) {
         printf("szSqlState = %s\n", plm_szSqlState);
         printf("pfNativeError = %ld\n", plm_pfNativeError);
         printf("szErrorMsg = %s\n", plm_szErrorMsg);
         printf("pcbErrorMsg = %d\n\n", plm_pcbErrorMsg);
      } //end if
   } // end while
}


// Sample reading SQL_LONGVARBINARY using SQLGetData.
// Tested with SQL Server 6.5 and 2.65 drivers.
// Assumes the DSN has the table:
//    CREATE TABLE emp2 (name CHAR(30), age FLOAT,
//                       birthday DATETIME, BigBin IMAGE)
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#define MAXDSN      25
#define MAXUID      25
#define MAXAUTHSTR  25
#define MAXBUFLEN   255
#define BUFFERSIZE  450
char   logstring[MAXBUFLEN] = "";
// ProcessLogMessages is identical to the version in Putimage.c;
// its source has been deleted here to save space.
void   ProcessLogMessages(HENV plm_henv, HDBC plm_hdbc,
             HSTMT plm_hstmt, char *logstring);
int main()
{
   RETCODE retcode;
   HENV    henv;
   HDBC    hdbc1;
   HSTMT   hstmt1;
   // Authorization strings.
   UCHAR   szDSN[MAXDSN+1] = "ab65def",
      szUID[MAXUID+1] = "sa",
      szAuthStr[MAXAUTHSTR+1] = "password";
   SWORD   cntr;
   // SQLGetData variables.
   UCHAR   Data[BUFFERSIZE];
   SDWORD   cbBatch = (SDWORD)sizeof(Data)-1;
   SDWORD   cbBinSize;
   // Clear data array.
   for (cntr = 0; cntr < BUFFERSIZE; cntr++)
      Data[cntr] = 0x00;
   // Allocate the ODBC environment and save handle.
   retcode = SQLAllocEnv(&henv);

   // Allocate ODBC connection and connect.
   retcode = SQLAllocConnect(henv, &hdbc1);
   // Make the connection, then print the information messages.
   retcode = SQLConnect(hdbc1, szDSN, (SWORD)strlen(szDSN),
               szUID, (SWORD)strlen(szUID),
               szAuthStr, (SWORD)strlen(szAuthStr));
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) ) {
      ProcessLogMessages(henv, hdbc1, SQL_NULL_HSTMT,
             "SQLConnect() Failed\n\n");
      return 1;
   }
   else {
      ProcessLogMessages(henv, hdbc1, SQL_NULL_HSTMT,
             "\nConnect Successful\n\n");
   }
   // Allocate the statement handle.
   retcode = SQLAllocStmt(hdbc1, &hstmt1);
   // Execute the SELECT statement.
   retcode = SQLExecDirect(hstmt1,
         "SELECT BigBin FROM emp2", SQL_NTS);
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) ) {
      ProcessLogMessages(henv, hdbc1, hstmt1,
             "SQLExecDirect hstmt1 Failed\n\n");
   }
   // Get first row.
   retcode = SQLFetch(hstmt1);
   if ( (retcode != SQL_SUCCESS) &&
        (retcode != SQL_SUCCESS_WITH_INFO) ) {
      ProcessLogMessages(henv, hdbc1, hstmt1,
             "SQLFetch hstmt1 Failed\n\n");
   }
   // Get the SQL_LONGVARBINARY column. cbBatch has the size of the
   // data chunk the buffer can handle. Call SQLGetData until
   // SQL_NO_DATA_FOUND. cbBinSize on each call has the
   // amount of data left to transfer.
   cntr = 1;
   do {
      retcode = SQLGetData(hstmt1,   // hstmt
         1,             // icol
         SQL_C_BINARY,  // fCType
         Data,          // rgbValue
         cbBatch,       // cbValueMax
         &cbBinSize);   // pcbValue
      printf("GetData iteration %d, pcbValue = %ld\n",
          cntr++, cbBinSize);
      if ( (retcode != SQL_SUCCESS) &&
           (retcode != SQL_SUCCESS_WITH_INFO) &&
           (retcode != SQL_NO_DATA_FOUND) ) {
            ProcessLogMessages(henv, hdbc1, hstmt1,
                   "SQLGetData hstmt1 Failed\n\n");
            break;
      }
   } while (retcode != SQL_NO_DATA_FOUND);
   /* Clean up. */
   SQLFreeStmt(hstmt1, SQL_DROP);
   SQLFreeConnect(hdbc1);
   SQLFreeEnv(henv);
   return 0;
}