CHAPTER 17


Data Services

Today’s systems are disparate, and large enterprises have widely heterogeneous environments, with Windows and non-Windows platforms for application development. Developers, whether they’re enterprise developers, web developers, independent software vendor (ISV) developers, or DBAs, have different needs and different ways of accessing the data that resides in SQL Server. For example, ISV developers look for stability in the platform, enterprise developers look for a rich development tooling experience and interoperability, and web developers want the latest rich development experience. Similarly, what a PHP developer needs is very different from what a .NET developer needs. To achieve a rich development experience, developers can choose from various data-access libraries, such as ADO.NET, SQL Server 2014 Native Client (SNAC), JDBC, ODBC, and PHP, based on the application’s requirements. SQL Server has supported interoperability with non-Windows environments since SQL Server 2000, which introduced support for Java development using JDBC drivers. PHP application development support was added with SQL Server 2005. Starting with SQL Server 2012, an ODBC driver for Linux is also available, which considerably simplifies PHP and other application development on Linux.

The model of choice to address distributed computing and heterogeneous environments today is the Service Oriented Architecture (SOA) paradigm. Over the SQL Server versions, there have been different ways to generate services from query results. Microsoft is now concentrating on a powerful and very flexible framework named Windows Communication Foundation (WCF). In this chapter, you see how to use WCF Data Services to provide services and trendy RESTful resources from your databases. Bear with us for the explanation of these concepts.

But first, the data access libraries support a powerful feature introduced with SQL Server 2012 Express and still present in SQL Server 2014, named the Local Database Runtime (LocalDB). Let’s look at this very interesting way to ship solutions with an embedded database.

SQL Server 2014 Express LocalDB

Developers always look for a simple way to install and embed SQL Server with third-party applications, or to use a small database engine to connect to diverse remote-data storage types. Prior to SQL Server 2012, the only option for meeting these requirements was SQL Server Express Edition. However, developers didn’t want to go through tons of screens to install SQL Server. On top of this, they had to worry about the security and management of the SQL Server instance they had just installed.

SQL Server 2012 simplified the experience for developers by introducing LocalDB, which was temporarily called Serverless SQL Server during SQL Server 2012 development. The goal of this feature is to simplify installation and provide a database as a file, without any administration overhead, while providing the same feature set as SQL Server Express Edition.

Note  Database as a file means LocalDB allows the use of SQL Server, a traditional client-server application, in a local context, more or less like local applications such as Microsoft Access and SQLite.

The installation of LocalDB is simplified to a great extent, with no prerequisites, no reboots, and no options to select. There is only one global installation, meaning only one set of binaries is installed per major version of SQL Server for all LocalDB instances; there is no constantly running service or agent in the box. The instance of LocalDB is started when the application connects to it and stopped when the application closes the connection.

You can download LocalDB from the same page as the old-fashioned SQL Server 2014 Express Edition, at  www.microsoft.com/en-us/download/details.aspx?id=42299. Two builds are available: ENU\x64\SqlLocalDB.msi for 64-bit systems and ENU\x86\SqlLocalDB.msi for 32-bit systems. MSI files are Microsoft Installer packages that you can run by double-clicking them or by calling them like any executable in a cmd or PowerShell session. MSI installations are usually graphical wizard-driven installations, but the LocalDB installation doesn’t require any user choice, so you can simply perform a silent install by using the following command:

SQLLocalDB.msi /Quiet

Once LocalDB is installed, you can create and manage the instances by using SQLLocalDB.exe, found in %ProgramFiles%\Microsoft SQL Server\120\Tools\Binn. From now on, each time we call SQLLocalDB.exe, assume it’s run from this directory. Because the directory isn’t in the path, you need to tell your shell where to find the tool.

Note  The LocalDB runtime, which is nothing more than a specific sqlservr.exe binary, can be found in %ProgramFiles%\Microsoft SQL Server\120\LocalDB\Binn.

You can use the following command to find out the details of the existing instances:

SQLLocalDB.exe info

To create a LocalDB instance, you can use SQLLocaldb.exe and specify the name of the instance and the version number with the create option. The commands listed next first create an SQL Server 2014 LocalDB instance named SQLSrvWebApp1 and then start the instance. Finally, you use the info command to list the existing instances. The results are shown in Figure 17-1.

SQLLocalDB.exe create SQLSrvWebApp1 12.0
SQLLocalDB.exe start SQLSrvWebApp1
SQLLocalDB.exe info


Figure 17-1. Query to create and start a LocalDB instance named SQLSrvWebApp1

You may have guessed that if you want to drop an instance, you can use the SQLLocalDB.exe delete command.
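
For example, to clean up the instance created earlier, you would stop it and then delete it (these two commands mirror the create and start commands shown previously):

SQLLocalDB.exe stop SQLSrvWebApp1
SQLLocalDB.exe delete SQLSrvWebApp1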

There are two types of LocalDB instances: automatic and named. Automatic instances are created by default. There can be only one automatic instance per major version of SQL Server. For SQL Server 2014, the automatic instance name is v12.0 (the internal version number of the SQL Server 2014 RTM release); the intent is for this instance to be public and shared by many applications. Named instances are created explicitly by the user and are managed by a single application. So, if you have a small web application that may later grow into an enterprise deployment, the better option is to create a named instance from the start, so you can isolate and manage the application.

To connect to a LocalDB instance with the SQL Server Native Client, OLEDB, or ODBC provider, you specify the (localdb) keyword in the connection string. Here are examples of connection strings that connect to an automatic instance (first example) and a named instance (second example):

New SQLConnection("Server=(localDB)v12.0;AttachDBFile=
    C:Program FilesMicrosoft SQL ServerData FilesAppDB1.mdf")'

New SQLConnection("Server=(localDB)WebApp1;AttachDBFile=
    C:Program FilesMicrosoft SQL ServerData FilesWebApp1DB.mdf")'

This code invokes LocalDB as a child process and connects to it. LocalDB runs as an application when you initiate a connection from the client, and if the database isn’t used by the client application for more than 5 minutes, LocalDB is shut down to save system resources.

LocalDB is supported by the ODBC, SQL Server Native Client, and OLEDB client providers. If these client providers encounter Server=(localdb)\<instancename>, they know to call the LocalDB instance if it already exists or to start the instance automatically as part of the connection attempt.

Likewise, you can connect to a LocalDB instance using SQL Server Management Studio (the Express or full version) or the sqlcmd command-line tool, by using the same (localdb) keyword as the server name, as shown in the following:

sqlcmd -S (localdb)\SQLSrvWebApp1

For this to work, you need to make sure the LocalDB instance is started. You can test it by using the info command along with the instance name, as shown next. The result of the command is shown in Figure 17-2. The instance’s state is visible on the State: line:

SQLLocalDB.exe info SQLSrvWebApp1


Figure 17-2. Results of the SQLLocalDB.exe info SQLSrvWebApp1 command

You can see in Figure 17-2 that the instance is running. If it had been stopped, you could start it using the start command (shown earlier), and then you would be able to connect to it.

Note  Connecting with the (localdb) keyword is supported in .NET version 4.0.2 onward. If you’re using an older .NET version, you can still connect to a LocalDB instance, but you need to use the named pipe address that is returned by the SQLLocalDB.exe info command. You can see that address in Figure 17-2. The server’s address in this case is np:\\.\pipe\LOCALDB#EC0F7CB5\tsql\query: that’s what you would need to enter in the Server address box for an SSMS connection, or after the -S parameter when calling sqlcmd.

The authentication and security model of LocalDB is simplified. The current user is sysadmin and is the owner of the databases attached to the instance. No other permissions are applied. Because the LocalDB processes run under a user’s account, this also implies that the database files you want to use on this instance must be in a directory where the user has read and write permissions. Also, whereas SQL Server hides the physical details of the database storage, LocalDB follows another approach: it gives access to a database file. A LocalDB connection string supports the AttachDbFileName property, which allows you to attach a database file during connection. The C# console application in Listing 17-1 illustrates how to use the database as a file approach with LocalDB.

Listing 17-1. Console Application to Connect to a LocalDB Instance

using System;
using System.Data.SqlClient;
using System.Text;

namespace localdbClient
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                SqlConnectionStringBuilder builder =
                    new SqlConnectionStringBuilder(
                        @"Server=(localdb)\SQLSrvWebApp1;Integrated Security=true");

                builder.AttachDBFilename =
                    @"C:\Users\Administrator\Documents\AdventureWorksLT2014_Data.mdf";

                Console.WriteLine("connection string = " + builder.ConnectionString);

                using (SqlConnection cn = new SqlConnection(builder.ConnectionString))
                {
                    cn.Open();
                    SqlCommand cmd = cn.CreateCommand();
                    cmd.CommandText = "SELECT Name FROM sys.tables;";
                    SqlDataReader rd = cmd.ExecuteReader();

                    while(rd.Read())
                    {
                        Console.WriteLine(rd.GetValue(0));
                    }
                    rd.Close();
                    cn.Close();
                }
                Console.WriteLine("Press any key to finish.");
                Console.ReadLine();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
                Console.WriteLine("Press any key to finish.");
                Console.ReadLine();
            }
        }
    }
}

The interesting element of the code in Listing 17-1 is the connection-string builder. You first create a SqlConnectionStringBuilder to connect to the (localdb)\SQLSrvWebApp1 LocalDB instance, and then you use the builder’s AttachDBFilename property to attach the AdventureWorksLT2014 data file to your LocalDB:

SqlConnectionStringBuilder builder =
    new SqlConnectionStringBuilder(
        @"Server=(localdb)\SQLSrvWebApp1;Integrated Security=true");
builder.AttachDBFilename = @"C:\Users\Administrator\Documents\AdventureWorksLT2014_Data.mdf";

The AdventureWorksLT2014_Data.mdf file is in the Documents directory, so you have full permissions over it. When connecting, you are automatically in the database’s context, as you can see by executing the code. A list of the first ten tables in the AdventureWorksLT database is returned, as shown in Figure 17-3. The generated connection string is also printed in the figure.


Figure 17-3. Results of the LocalDB client program execution

Databases attached to LocalDB can be thought of as personal databases—thus the database as a file approach. You can of course use all T-SQL DDL commands to create a database and the tables in it. You just need to specify a location for the database files where you have permissions. If you create a database without specifying a location, LocalDB chooses your user directory. For example, the following command

CREATE DATABASE ApressDb;

creates .mdf and .ldf files in your personal directory, as shown in Figure 17-4.


Figure 17-4. The ApressDb database files

You should obviously specify a dedicated location when you create a LocalDB database. The databases created on or attached to a LocalDB instance stay attached until you detach or remove them, even if you attached one during a connection by using the AttachDBFilename option. So, theoretically, you don’t need to attach the database every time you connect. However, if you use the AttachDBFilename option, the name of the database in LocalDB is the full path of the database file, as a query against sys.databases reveals:

SELECT name FROM sys.databases;

It’s easier to keep the AttachDBFilename option in the connection string: it attaches the database if it isn’t already attached, and it enters the database context at connection time, providing a smoother experience from the developer’s point of view.

Asynchronous Programming with ADO.NET 4.5

Let’s take a simple scenario: an application that must upload multiple files, or one that creates reports with pagination. In either scenario, using a synchronous model in the application can cause the client and server to slow down considerably and result in higher memory utilization due to I/O operations. In cases like this, writing the calls asynchronously instead of synchronously can improve the user experience; however, before .NET 4.5, asynchronous code was harder to manage and debug.

Starting with .NET 4.5, the new Async .NET pattern is extended to ADO.NET. Now connection operations, SqlDataReader, and SqlBulkCopy can take advantage of the asynchronous capabilities. For example, let’s take the simple case shown in Listing 17-2 that opens a connection and runs a stored procedure named dbo.GetProducts against a LocalDB instance.

Listing 17-2. ADO.NET Code to Run a Stored Procedure Synchronously

private void ExecuteSP()
{
    SqlConnectionStringBuilder cnString = new SqlConnectionStringBuilder();
    cnString.DataSource = @"(localdb)v12.0";
    cnString.IntegratedSecurity = true;

    using (SqlConnection cn = new SqlConnection(cnString.ConnectionString))
    {
        cn.Open();
        SqlCommand cmd = new SqlCommand("EXEC dbo.GetProducts", cn);
        cmd.ExecuteReader();
    }
}

This code opens the connection to the database synchronously and runs the stored procedure, waiting until the entire resultset is returned. Instead of waiting for the process to complete, it would be more efficient to perform this operation asynchronously. Listing 17-3 shows the code from Listing 17-2 modified for asynchronous execution; the changes are the method signature and the two await keywords.

Listing 17-3. ADO.NET Code to Run Stored Procedure Asynchronously

private async Task ExecuteSP()
{
    SqlConnectionStringBuilder cnString = new SqlConnectionStringBuilder();
    cnString.DataSource = @"(localdb)v12.0";
    cnString.IntegratedSecurity = true;

    using (SqlConnection cn = new SqlConnection(cnString.ConnectionString))
    {
        await cn.OpenAsync();
        SqlCommand cmd = new SqlCommand("EXEC dbo.GetProducts", cn);
        await cmd.ExecuteReaderAsync();
    }
}

If you compare the code in Listings 17-2 and 17-3, the structure has not changed; however, by including the await keyword and modifying a few other keywords, you retain readability and manageability while adding the asynchronous capability.
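
If you also want to consume the rows without blocking, SqlDataReader offers a ReadAsync method. The following is a minimal sketch under the same assumptions as Listing 17-3 (a dbo.GetProducts stored procedure exists and returns at least one column); it isn’t from the original listing, but every API used here is part of ADO.NET 4.5:

private async Task ExecuteSPAndReadAsync()
{
    SqlConnectionStringBuilder cnString = new SqlConnectionStringBuilder();
    cnString.DataSource = @"(localdb)\v12.0";
    cnString.IntegratedSecurity = true;

    using (SqlConnection cn = new SqlConnection(cnString.ConnectionString))
    {
        await cn.OpenAsync();
        SqlCommand cmd = new SqlCommand("EXEC dbo.GetProducts", cn);

        using (SqlDataReader rd = await cmd.ExecuteReaderAsync())
        {
            // ReadAsync yields control to the caller while the next row is fetched
            while (await rd.ReadAsync())
            {
                Console.WriteLine(rd.GetValue(0));
            }
        }
    }
}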

Every possibility for improving performance on the client side is interesting. Keep in mind, of course, that the best way to ensure optimal performance in database querying is to improve the structure and code on the server side.

ODBC for Linux

For many years, and over many SQL Server versions, developers who wanted to access SQL Server from non-Windows environments had only one option: using a free library named FreeTDS that was originally created to access Sybase servers.

Note  TDS stands for Tabular Data Stream and is the network-layer protocol used by Sybase and SQL Server to exchange packets between the database server and the client library. As you may know, SQL Server was in its early days a joint development between Sybase and Microsoft.

FreeTDS is fine and works well, but it doesn’t cover the newer data types and functionality SQL Server has to offer, like the XML, date, time, datetime2, and FILESTREAM data types, or features like Multiple Active Result Sets (MARS). So, Linux developers wanting to access SQL Server from PHP or any CGI application had to stick to a limited set of functionality. If you ever wrote PHP code to access SQL Server in a Linux environment, you may have used the integrated PHP MSSQL functions that call the php5-odbc library. It’s nothing more than a layer using FreeTDS behind the scenes.

In an effort to provide a wider range of possibilities for accessing SQL Server, Microsoft decided to change its data-access strategy, which was previously in favor of OLEDB, by aligning with ODBC for native access to SQL Server. Open Database Connectivity (ODBC) is an API first designed by Microsoft that became a kind of de facto standard for heterogeneous database access. It allows access to different data sources from many languages and environments.

Along with this change of strategy, Microsoft developed an ODBC driver for Linux that was released in March 2012. You can download it from www.microsoft.com/en-us/download/details.aspx?id=28160.

Linux is available through many distributions, which have their own core applications, distribution mechanisms, and directory organization. At the time of this writing, Microsoft offers 64-bit packages for the Red Hat Enterprise distribution only; a 32-bit version is planned. Red Hat Enterprise isn’t necessarily the most widespread distribution, and many companies use others, such as Debian, Ubuntu, and CentOS. The Microsoft ODBC driver can be installed on other distributions, provided you have a way to install the libraries the ODBC driver uses.

Caution  In the Linux world, most of the tools used are open source and can be compiled directly on the system, to link to the available version of the libraries used in the code. But the ODBC driver for SQL Server isn’t open source, and only the binaries are available to download. That’s why you need to ensure that the proper versions of the libraries used by the ODBC driver are installed on the Linux box.

Let’s look at a short example using Ubuntu Server. Ubuntu is a very popular distribution that is based on Debian, another widespread Linux distribution.

The driver you can download at the address previously mentioned is compressed in the tar.gz format, the common compression format in Linux. Once downloaded, you can extract it by opening a shell, going to the directory where the compressed file is, and executing the following command:

tar xvzf sqlncli-11.0.1790.0.tar.gz

The tar command extracts the archive into a new directory, named here sqlncli-11.0.1790.0 after the version of the ODBC driver.

Note  The xvzf set of options used with the tar command is commonly used to extract tar.gz archives. x means eXtract, and v means Verbose; these options allow the extraction’s details to be printed to the shell output. z tells tar that it needs to deal with a gzip archive; and f tells tar that the name of the file to extract will follow.

The archive is extracted into a directory. You enter it using the cd (change directory) command:

cd sqlncli-11.0.1790.0

The steps to install the driver on Ubuntu that follow are valid at the time of this writing, with the current driver release (sqlncli-11.0.1790.0 for Red Hat Enterprise 6) and the current Ubuntu version (12.04 Precise Pangolin). Linux minor and major version upgrades occur regularly, so the Microsoft driver may be out of date, or you may need a later version when a new one comes out. Although the process may vary in future releases, we can hopefully guide you in a general way.

According to its documentation, the unixodbc version needed to run the driver is 2.3.0. Using the apt-cache tool that manages the cache of Debian and Ubuntu packages, you can check the current unixodbc version on your system:

apt-cache show unixodbc

The show option returns details about a package, and on Debian and Ubuntu, the name of the package is simply unixodbc. The result is shown in Figure 17-5.


Figure 17-5. apt-cache command result

The current version on our Ubuntu is 2.2.14. The libsqlncli package downloaded from Microsoft includes a script that downloads and builds the required unixodbc version. So, you first uninstall the current unixodbc using the apt-get command, and then you install the newer unixodbc using the Microsoft script. You need to prefix the commands with the sudo instruction to execute them with super-user (su) privileges, as follows.

sudo apt-get remove unixodbc
sudo bash ./build_dm.sh

There is a catch here: at the time of this writing, the build_dm.sh script (as well as the install.sh script that you see shortly) has a flaw. If you open it in a text editor, you’ll see on its first line that it declares itself as a script written for the sh Linux shell, using what is called the shebang syntax, as follows:

#!/bin/sh

This allows the file to be executed without mentioning the interpreter on the command line. The shebang line is read, and the proper interpreter is called. The problem here is that the script is declared as being an sh script, whereas it is in fact a bash script. sh and bash are two different Linux shells. So, to make the script work, you need to run it explicitly with bash. A partial result of the build_dm.sh command is shown in Figure 17-6.


Figure 17-6. build_dm.sh command result

The unixODBC driver manager is built and copied to a directory in /tmp. The script tells you what to do next: go there and use the make install command to copy the binaries to the right place. What it doesn’t say is that you need administrative privileges for the make install step (the two commands are shown on the same line in Figure 17-6, separated by a semicolon). So, you need to run the commands as follows:

cd /tmp/unixODBC.22830.6255.24287/unixODBC-2.3.0
sudo make install

Now that the driver manager is installed, you can go to the next step: installing the Microsoft driver. The first thing to do is to check the versions of the libraries requested by the driver. You can use the ldd command, which returns the shared libraries dependencies of a binary, to check the libraries used by the driver:

ldd lib64/libsqlncli-11.0.so.1790.0

.so (shared object) is the common extension for shared libraries on Linux. On our system, the command returns the results shown in Figure 17-7.


Figure 17-7. Results of the ldd command

In Figure 17-7, you see that most of the libraries are found, except the SSL libraries libcrypto.so.10 and libssl.so.10. Here, 10 stands for the dynamic shared objects’ version number. You need to find out whether any versions of these libraries are available on your system. To do that, you use the find command as follows:

find / -name "libcrypto.so.*" -print

As you might have guessed, the find command searches for files. You ask it to start its search at the root of the file system (/), to search for libcrypto.so.*, and to print the result. We found this reference: /lib/x86_64-linux-gnu/libcrypto.so.1.0.0. That looks like what you need, but how do you allow the driver to see it? You create a symbolic link—you could call it a shortcut—with the name requested by the driver, which is a pointer to the installed library. The following commands do just that:

sudo ln -s /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /lib/x86_64-linux-gnu/libcrypto.so.10
sudo ln -s /lib/x86_64-linux-gnu/libssl.so.1.0.0 /lib/x86_64-linux-gnu/libssl.so.10

You use the ln command to create a link, and the –s option specifies that you create a symbolic link.

Now you can install the driver. In the driver’s directory, the install.sh shell script allows you to copy the files to the /opt/microsoft/sqlncli location and create the symbolic links in the path to let the driver and its tools be recognized on your system. The /opt directory is chosen as the install path because it’s where applications not installed with the distribution are supposed to go:

sudo bash ./install.sh install --force

Once again you use sudo to run the script under administrative privileges, and you use bash explicitly. The --force option is needed on this distribution to prevent dependency checks performed by the script from canceling the installation process.

The installation script runs quickly, and when it’s finished, you can test the ODBC driver by using the two tools installed with it: a Linux version of the bcp (Bulk Copy) tool, and a Linux version of the sqlcmd shell. Symbolic links are created by the installation script in the path, so you can use sqlcmd wherever you are in the file system. An example of starting sqlcmd follows:

sqlcmd -S SQL2014 -U apress -P '@press!'

This command connects to the SQL2014 server using the SQL login apress, with password @press!. If you receive an error saying that the library libcrypto.so.10 (or any library used by the ODBC driver) isn’t found, you may have to investigate and install the library or use the symbolic link technique described earlier.

Note that here you connect using an SQL login and not integrated security. That’s logical, you might think: you’re on Linux, not logged in to a Windows domain, so how could integrated security work? Well, it can—not fully, but it can. For that, your Linux box must have Kerberos properly configured, which is out of the scope of this book; please refer to this documentation entry for a high-level description of the requirements for it to work: http://msdn.microsoft.com/en-us/library/hh568450. Note that you can’t impersonate an account, and you’re limited to the Linux machine system account.

JDBC

To use the JDBC component, first download it from http://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx. The driver is a JDBC 4 driver that is available to download as a Windows self-extract executable or a tar.gz compressed file for non-Windows environments. Once the file is uncompressed, you have a directory with two jar files and other resources such as documentation. Put the sqljdbc4.jar file, which is the JDBC 4 driver, in your Java classpath. The classpath is the path where Java searches for classes to run or to import.

Java development is a broad subject, so we don’t give many details here, but let’s look at a short example of using the JDBC driver, mainly to illustrate the use of the connection string. A JDBC connection is made using a connection string, also called a connection URL. In the case of SQL Server, it’s very similar to the ADO.NET or ODBC connection string. The general form of the string is as follows:

jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]

Other methods, like setting properties of a Connection object, can be used; this example uses the connection-string method.

Listing 17-4 shows a short but complete example of a Java class that lets you connect to SQL Server and run a query. To make it more interesting, we assumed an environment using database mirroring, and we added the failoverPartner option in the connection string to allow reconnection to the mirror if the first server doesn’t respond.

Listing 17-4. Java Example Using the Microsoft JDBC Driver

import java.sql.*; 

public class ApressExample {

    public static void main(String[] args) {

        String connectionUrl = "jdbc:sqlserver://SQL2014;integratedSecurity=true;databaseName=AdventureWorks2014;failoverPartner=SQL2014B";
        Connection cn = null;
        String qry = "SELECT TOP 10 FirstName, LastName FROM Person.Contact";

        try {
            cn = DriverManager.getConnection(connectionUrl);
            runQuery(cn, qry);
        } catch (SQLException se) {
            try {
                System.out.println("Connection to principal server failed, trying the mirror server.");
                cn = DriverManager.getConnection(connectionUrl);
                runQuery(cn, qry);
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (cn != null) try { cn.close(); } catch(Exception e) { }
      }
   }

    private static void runQuery(Connection cn, String SQL) {
        Statement stmt = null;
        ResultSet rs = null;

        try {
            stmt = cn.createStatement();
            rs = stmt.executeQuery(SQL);

            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            rs.close();
            stmt.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (rs != null) try { rs.close(); } catch(Exception e) {}
            if (stmt != null) try { stmt.close(); } catch(Exception e) {}
        }
    }
}

Note  If your application accesses SQL Server through an AlwaysOn availability group that listens on multiple subnets with the JDBC driver, it’s important to set the keyword MultiSubnetFailover=True in the connection string. The reason is that JDBC drivers don’t iterate through multiple IP addresses; if the network name listens on multiple IP addresses, the JDBC driver spawns parallel connection attempts to the IP addresses and uses the first one that responds.
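
For example, a connection URL for an availability group listener might look like the following; AGListener2014 is a hypothetical listener name, not one defined elsewhere in this chapter:

jdbc:sqlserver://AGListener2014:1433;databaseName=AdventureWorks2014;integratedSecurity=true;multiSubnetFailover=true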

For this example to work, save it in a file named ApressExample.java, and compile it with the Java compiler (javac.exe on Windows) after making sure the sqljdbc4.jar file is in the Java classpath. You could also indicate the path of the driver in the javac command line, as shown in the following example:

javac.exe -classpath "C:\sqljdbc_4.0\enu\sqljdbc4.jar" c:\apress\ApressExample.java

The compilation results in an ApressExample.class file that you can run with java.exe. Once again, the JDBC driver must be in the classpath for it to work. The classpath is an environment variable, and an example of setting it for the session and running the java class in a cmd session on Windows is shown next. You must be in the directory where the ApressExample.class file is, for it to work:

set classpath=c:\sqljdbc_4.0\enu\sqljdbc4.jar;.;%classpath%
java ApressExample

The first line adds the path of the sqljdbc4.jar file and the current directory to the classpath environment variable, so it will find the JDBC driver and the ApressExample class. The second line runs the code example.

Now that you can run the example, let’s come back to its content. The first thing you do in the code is import the java.sql classes so you have the Connection, Statement, and other JDBC classes handy. In the main() method of the ApressExample class, you define the connection string and set the server’s address as well as the mirroring server’s address. We chose to be authenticated by Windows, using Integrated Security:

String connectionUrl = "jdbc:sqlserver://SQL2014;integratedSecurity=true;databaseName=AdventureWorks2014;failoverPartner=SQL2014B";

If you know JDBC, you may be surprised not to find a Class.forName() call, as shown in the following snippet:

Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");

The Class.forName() instruction is used to load the JDBC driver and register it to the JDBC DriverManager. This isn’t required anymore if you use JDBC 4, because in JDBC 4, drivers can be loaded magically just by being on the classpath.

The rest of the code is a pretty standard Java example. Let’s concentrate on the line that opens the connection:

cn = DriverManager.getConnection(connectionUrl);

It’s enclosed in a try catch block, in order to catch a connection failure. If such a failure happens, the catch block runs the exact same connection command. This is to allow automatic reconnection in case of a failover. At the second connection attempt, the JDBC driver—once again magically—tries with the address defined in the failoverpartner option. This second attempt must also be enclosed in a try catch block, in case the other server doesn’t answer either. Because you have to write the connection code twice, we chose here to move the code that uses the connection to run a query in a private method of the class, in order to call it from the main() method.

Service-Oriented Architecture and WCF Data Services

If you’re a die-hard T-SQL developer who doesn’t venture much into Microsoft client-side code and all the frameworks and libraries, you may crack a smile while reading the next few paragraphs. T-SQL developers are used to dealing with a stable and old-fashioned technology with no fancy names, which could give the impression that it’s so old and solid that it will never change. On the client side, however, things are constantly moving. A history of data-access methods and of what are today called data services, because of the Service Oriented Architecture (SOA) paradigm, could fill a book, and that book would be full of twists and turns. In the early days of SQL Server, the data-access libraries were the native dblib DLL and the ODBC API. This was superseded by OLEDB, then by the SQL Server Native Client. Today, we’re returning to ODBC to align with a de facto standard, as you saw in the “ODBC for Linux” section.

On the subject of data services, before the concept ever existed, developers talked about distributed applications: applications that are broken into components and that span multiple computers, allowing distant interoperability. The components exchanged information using a broker like the Distributed Component Object Model (DCOM) or the Common Object Request Broker Architecture (CORBA) and used a Remote Procedure Call (RPC) model. With the release of the .NET Framework, Microsoft developed a replacement for creating distributed .NET components, called .NET Remoting. But the distributed-components model had some shortcomings: mainly, the network protocols used were not tailored for the Web, and it was sometimes tricky to allow distant computers behind firewalls to work together. Also, you had to implement a unique technology, whether it was DCOM, CORBA, .NET Remoting, or others. Moreover, in the case of DCOM and .NET Remoting, you had to develop on Windows and run Microsoft operating systems and technologies on every end.

The SOA paradigm gained attention and popularity because it addressed these limitations. The goal of SOA was to use standard and widely used protocols like HTTP and SMTP to exchange information between the components of a distributed application—except that SOA uses different terminology. The components are services, a term that emphasizes their loosely coupled and independent nature; and the distributed application model is named Service Oriented Architecture. Using protocols like HTTP allows you to take advantage of existing and proven technologies and infrastructures available on all platforms and designed for the Internet. To ensure that the information exchanged is understood on every platform, text-based structures like XML and JavaScript Object Notation (JSON) are used to generate messages that are created and consumed by these services, which are called web services (WS) because of their use of the HTTP protocol. These messages are exchanged mostly using a protocol named SOAP (originally an acronym for Simple Object Access Protocol). SOAP is an envelope in which XML messages are enclosed; it defines a set of properties and functionalities for the message.

So far so good, but a new model started to gain popularity in the last decade: Representational State Transfer (REST). It’s a set of architectural principles for building services called resources. A REST resource is defined by an address, which is an Internet address in the form of a uniform resource identifier (URI—a more generic term for what is called a URL in the HTTP protocol). To call the resource, a REST client uses standard HTTP verbs like GET and PUT to send and receive messages. So, with REST, you use a model close to what a web browser would do to call resources; that makes it interesting mainly because it lets you use proven technologies on both sides, and it natively offers the scalability of web technologies. Because REST is more about offering resources than exchanging messages per se, this model is sometimes called Resource Oriented Architecture (ROA), and a system implementing this model is said to be RESTful.

With SOA quickly replacing distributed components, libraries or frameworks were needed in the Microsoft world to build web services. The first generation of these tools was called ASP.NET Web Services (ASMX) and was released with .NET 1.0. It was quickly complemented by Web Services Enhancements (WSE), which added some of the SOAP WS-* specifications. That was another programming model to learn, and it was still limited because it didn’t implement all the SOA possibilities, like the REST model. To build XML messages, you used the .NET XML libraries; or, using SQL Server 2000, you generated the XML directly using the FOR XML clause and enclosed it in a SOAP message using client code. In SQL Server, you could also use an ISAPI extension to provide XML responses directly from SQL Server through IIS, without using ASMX.

When SQL Server 2005 was released, the ISAPI extension was replaced by an integrated HTTP endpoint capability. SQL Server was then able to act natively as an HTTP server, to receive and send back SOAP messages. Today, this feature has been removed from SQL Server 2014, because it didn’t offer a complete enough environment to build web services. As a matter of fact, ASMX didn’t offer all of what was needed, either. So, Microsoft decided to build a complete and flexible framework to handle all interoperability technologies, which it now calls Connected Systems. That framework is named Windows Communication Foundation (WCF).

WCF is integrated into .NET and is the way to go when talking about web services, REST, distributed components, and message queuing in the Microsoft world. WCF offers several layers that provide everything needed to create connected systems. They’re schematized in Figure 17-8.


Figure 17-8. The WCF layers stack

The contracts layer consists of the contract (or interface) definition classes that allow services to publish and agree on the content of the information they exchange. You can define data contracts, message contracts, service contracts, and so on. The service runtime layer offers all the behaviors necessary to implement the services, like transaction behavior, parameter filtering, and so on. The messaging layer offers encoders and channels to handle the more physical and concrete exchange of messages. And finally, the activation and hosting layer lets you run the services as an EXE, a Windows service, a COM+ application, and so on.

WCF can be used to create services or remoting applications or to implement message queuing. Here, we of course concentrate on a specific feature of WCF that provides a very simple way to publish data as REST resources: WCF Data Services.

Note  Here again, the name of the technology has changed several times in a few years. In 2007, we heard about project Astoria, which aimed to deliver a framework for creating and consuming data services using SOA. When it was released in 2008 along with .NET 3.5, its final name was ADO.NET Data Services, which was later changed to WCF Data Services.

WCF Data Services supports the concept of REST for accessing data remotely. As we briefly said before, REST-style services provide simple URI-based querying, a simpler mechanism than the SOAP protocol. WCF Data Services translates regular HTTP requests into create, read, update, and delete (CRUD) operations against a data source, and it exchanges data by using the Open Data (OData) protocol, an open web protocol for querying and updating data. WCF Data Services uses an HTTP request-to-CRUD operation mapping, as shown in Table 17-1.

Table 17-1. HTTP Requests to WCF Data Services Operations

GET: Query the data source; retrieve data.
POST: Create a new entity and insert it into the data source.
PUT: Update an entity in the data source.
DELETE: Delete an entity from the data source.
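
In other words, a plain HTTP request is all a client needs to issue. As a sketch, the following GET request (reusing the service address and Products entity set built later in this chapter) would map to a read of a single Product entity:

GET /ProductPhotoDataService.svc/Products(749) HTTP/1.1
Host: localhost:59560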

Creating a WCF Data Service

As with a web service, the first step to creating a WCF data service is to create a new ASP.NET web application project, as shown in Figure 17-9.


Figure 17-9. Creating an ASP.NET web application in Visual Studio 2010

Defining the Data Source

Once you have created a web application project, you need to add a source for your data. The easiest way is to add an ADO.NET entity data model (EDM) by right-clicking the project in Solution Explorer, choosing Add a New Item in Visual Studio, and selecting the ADO.NET Entity Data Model template on the Data page of the Add New Item window, as shown in Figure 17-10. This launches the ADO.NET Entity Data Model Wizard.


Figure 17-10. Adding an ADO.NET EDM item to your web application

Chapter 15 covers the Entity Framework, so we don’t need to go into details here. You’re generating an EDM from tables in the AdventureWorks database. Include the Production.Product, Production.ProductPhoto, and Production.ProductProductPhoto tables of the database, as shown in Figure 17-11.


Figure 17-11. Adding tables to the EDM

Once you’ve added tables to your EDM, you can view them in the Entity Data Model Designer, as you have seen previously.

Creating the Data Service

The next step after you’ve defined your EDM is to add a WCF data service item to your project through the New Item menu option. The Add New Item window is shown in Figure 17-12 with the WCF Data Service template highlighted.


Figure 17-12. Adding a WCF data service

The WCF Data Service template automatically generates the data service landing page, named ProductPhotoDataService.svc in this example. This is the page you need to call to request the service. Its source file, named ProductPhotoDataService.svc.cs in this example, uses the System.Data.Services namespace and contains a class definition for the service that defines access rules for entity sets and service operations. The class defined in this file requires some modification by hand where you see the automatically generated TODO comments. You must define the data source class—the EF entities class—and at a minimum you must set the entity access rules as shown in Listing 17-5.

Listing 17-5. ProductPhotoDataService Class Definition

using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;
using System.Web;

namespace WCFDataServicesSample
{
    public class ProductPhotoDataService : DataService<AdventureWorksEntities>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
            config.SetEntitySetAccessRule("ProductPhotoes", EntitySetRights.AllRead);
            config.SetEntitySetAccessRule("ProductProductPhotoes", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

Caution  You can use the wildcard character (*) to set rights for all entities and service operations at once, but Microsoft strongly recommends against this. Although it’s useful for testing purposes, in a production environment this can lead to serious security problems.

Listing 17-5 mentions the entity set names that were pluralized by EF, which is why the code includes the faulty plural form Photoes. Feel free to correct it in the entity model source. You set the access rules to AllRead, meaning the service allows queries by key or queries for all contents of the entity set. The available rights are shown in Table 17-2.

Table 17-2. Service Entity and Operation Access Rights

All (applies to both entities and operations): Allows full read/write access to the entity and full read access to operations.
AllRead (applies to both): Allows full read access to the entity or operation. It’s shorthand for the ReadSingle and ReadMultiple access rights combined with a logical OR (|) operation.
AllWrite (applies to entities only): Allows full write access to the entity. It’s shorthand for the WriteAppend, WriteUpdate, and WriteDelete access rights combined with a logical OR (|) operation.
None (applies to both): Allows no read or write access, and doesn’t appear in the service’s metadata document.
ReadSingle (applies to both): Allows queries by key against an entity set.
ReadMultiple (applies to both): Allows queries for the entire contents of the set.
WriteAppend (applies to entities only): Allows new resources to be appended to the set.
WriteDelete (applies to entities only): Allows existing resources to be deleted from the set.
WriteUpdate (applies to entities only): Allows existing resources to be updated in the set.
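
Because these rights are flag values, you can combine them with the logical OR (|) operator mentioned in Table 17-2. As an illustration only (this variation isn’t part of the sample project), the following InitializeService body would keep Products read-only while also letting clients append new photo entities:

public static void InitializeService(DataServiceConfiguration config)
{
    // Read-only: equivalent to ReadSingle | ReadMultiple
    config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);

    // Full read access plus the right to insert new entities
    config.SetEntitySetAccessRule("ProductPhotoes",
        EntitySetRights.AllRead | EntitySetRights.WriteAppend);

    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}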

You can test your WCF data service by running it in Debug mode from Visual Studio. Visual Studio opens a browser window with the address set to the start page for your project. Change it to the address of the data service, which in our case is http://localhost:59560/ProductPhotoDataService.svc.

Note  You can also set your WCF data service page (.svc extension) as the project start page. In that case, you can delete the Default.aspx page in the project, because it’s not needed.

Your start address and port number will most likely be different. The WCF data service responds to your request with a listing of entities for which you have access, as shown in Figure 17-13.


Figure 17-13. Calling the page for the WCF data service

Tip  WCF Data Services supports two payload types. The payload type is the standard format for incoming request data and outgoing results data. WCF Data Services supports both JSON and the Atom Publishing Protocol for payloads. If you call the page for your WCF data service and the results look like a nonsensical syndication feed instead of standard XML, you need to turn off the feed-reading view in your browser. In Internet Explorer 7, you can uncheck the Tools > Internet Options > Content > Settings > Turn On Feed Reading View option.

Once you’ve confirmed that the WCF data service is up and running, you can query the service using a combination of path expression–style syntax in the URI to locate entities and query string parameters to further restrict and control output. The following are some examples of WCF data service queries:

  • http://localhost:59560/ProductPhotoDataService.svc/Products: This query retrieves all Product entities.
  • http://localhost:59560/ProductPhotoDataService.svc/Products(749): This query retrieves the Product entities with a primary key value of 749. The primary key of the Product entity is ProductID.
  • http://localhost:59560/ProductPhotoDataService.svc/Products?$skip=10&$top=10: This query skips the first ten Product entities and retrieves the following ten (items 11 through 20) in key order.
  • http://localhost:59560/ProductPhotoDataService.svc/Products?$top=20&$orderby=Name: This query retrieves the first 20 Product entities ordered (sorted) by the Name attribute.
  • http://localhost:59560/ProductPhotoDataService.svc/Products?$filter=ListPrice gt 1000&$expand=ProductProductPhotoes/ProductPhoto: This query retrieves all Product entities with a ListPrice attribute that is greater than 1,000. The results include related ProductProductPhoto and ProductPhoto entities expanded inline. Note that in the expanded option, you need to mention first the entity set and then the entities linked to the set, which is why you have ProductProductPhotoes and then ProductPhoto.

This is just a small sampling of the types of REST-style queries you can create using WCF Data Services. In fact, WCF Data Services supports several query string options, as shown in Table 17-3.

Table 17-3. Query String Options

$expand: Expands results to include one or more related entities inline in the results.
$filter: Restricts the results returned by applying an expression to the last entity set identified in the URI path. The $filter option supports a simple expression language that includes logical, arithmetic, and grouping operators, and an assortment of string, date, and math functions.
$orderby: Orders (sorts) results by the attributes specified. You can specify multiple attributes separated by commas, and each attribute can be followed by an optional asc or desc modifier indicating ascending or descending sort order, respectively.
$skip: Skips a given number of rows when returning results.
$top: Restricts the number of entities returned to the specified number.
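
You can combine several of these options in a single request by separating them with ampersands. For example, the following hypothetical query returns the five most expensive black products:

http://localhost:59560/ProductPhotoDataService.svc/Products?$filter=Color eq 'Black'&$orderby=ListPrice desc&$top=5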

Creating a WCF Data Service Consumer

Once you have a WCF data service up and running, creating a consumer application is relatively simple. For this example, you create a simple .NET application that calls the service to display the image and details of products selected from a drop-down list.

The first step in building a consumer application is to create classes based on your EDM. Instead of writing them manually, you can have them generated by using the Add Service Reference command in Visual Studio, which automatically creates C# or Visual Basic classes for use in client applications. For this example, we created an ASP.NET web application, right-clicked the project in Solution Explorer, and chose the Add Service Reference command. In the Add Service Reference window, we entered the WCF data service address and clicked Go. Visual Studio queried the service’s metadata. Figure 17-14 shows the result of this request.


Figure 17-14. Adding a service reference in Visual Studio 2010

Step two of the process is to create the Default.aspx page of the client application. This page performs the necessary calls to the service. You aren’t tied to a web application, however; you can just as easily call WCF data services from Windows applications, Silverlight applications, or any other platform that can initiate HTTP requests (although object deserialization on platforms that don’t support .NET classes could pose a bit of a challenge). For this client application, we simply added a drop-down list, an image control, and a table to the web form. Then we wired up the page-load and drop-down list selection-change events. The code is shown in Listing 17-6, with results shown in Figure 17-15.

Listing 17-6. ASP.NET Client Application Default.aspx Page

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using WCFdsClient.PhotoServiceReference;
using System.Data.Services.Client;

namespace WCFdsClient
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            PopulateDropDown();
        }

        private void PopulateDropDown()
        {
            AdventureWorksEntities ctx = new AdventureWorksEntities(
                new Uri ("http://localhost:59560/ProductPhotoDataService.svc")
                );

            var qry = from p in ctx.Products
                      where p.FinishedGoodsFlag
                      orderby p.Name
                      select p;

            foreach (Product p in qry) {
                ProductDropDown.Items.Add(new ListItem(p.Name, p.ProductID.ToString()));
            }

            string id = ProductDropDown.SelectedValue;
            UpdateImage(id);
        }

        private void UpdateImage(string id) {
            ProductImage.ImageUrl = string.Format("GetImage.aspx?id={0}", id);
        }

        protected void ProductDropDownlist_SelectedIndexChanged(object sender, EventArgs e)
        {
            string id = ProductDropDown.SelectedValue;

            AdventureWorksEntities ctx = new AdventureWorksEntities(
                new Uri("http://localhost:59560/ProductPhotoDataService.svc")
                );

            var qry = from p in ctx.Products
                      where p.ProductID == Convert.ToInt32(id)
                      select p;

            //DataServiceQuery<Product> qry = ctx.CreateQuery<Product>(string.Format("/Products({0})", id));

            foreach (Product p in qry)
            {
                TableProduct.Rows[0].Cells[1].Text = p.Class;
                TableProduct.Rows[1].Cells[1].Text = p.Color;
                TableProduct.Rows[2].Cells[1].Text = p.Size + " " + p.SizeUnitMeasureCode;
                TableProduct.Rows[3].Cells[1].Text = p.Weight + " " + p.WeightUnitMeasureCode;
                TableProduct.Rows[4].Cells[1].Text = p.ListPrice.ToString();
                TableProduct.Rows[5].Cells[1].Text = p.ProductNumber;
            }
            UpdateImage(id);
        }

    }

}


Figure 17-15. Calling the WCF data service from a consumer application

The first part of the code imports the necessary namespaces. The System.Data.Services.Client namespace is required to create WCF Data Services client queries. You need to add a reference to the System.Data.Services.Client component library to your project. The WCFdsClient.PhotoServiceReference namespace is a reference to the EDM classes’ namespace:

using WCFdsClient.PhotoServiceReference;
using System.Data.Services.Client;

The Page_Load event of the Default.aspx page calls a little function called PopulateDropDown that populates the drop-down list with the names and IDs of all “finished goods” products that AdventureWorks keeps in its database:

PopulateDropDown();

The PopulateDropDown function begins by creating an instance of the AdventureWorksEntities EDM data context that points to the URI of the WCF data service. You saw data contexts in Chapter 15. Here, in WCF Data Services, the equivalent object is a sibling class named DataServiceContext:

AdventureWorksEntities ctx = new AdventureWorksEntities(
    new Uri ("http://localhost:59560/ProductPhotoDataService.svc")
);

Next, this function uses a LINQ query on the AdventureWorksEntities DataServiceContext that returns a DataServiceQuery. The query filters the Product entities whose FinishedGoodsFlag attributes are set to true. Results are sorted by the Name attribute:

var qry = from p in ctx.Products
          where p.FinishedGoodsFlag
          orderby p.Name
          select p;

The query returns an IEnumerable result that can be iterated using foreach. In this example, the Name and ProductID attributes are iterated and added to the drop-down list:

foreach (Product p in qry) {
    ProductDropDown.Items.Add(new ListItem(p.Name, p.ProductID.ToString()));
}

Finally, the product image is updated based on the value selected in the drop-down list:

string id = ProductDropDown.SelectedValue;
UpdateImage(id);

You also wire the SelectedIndexChanged event of the drop-down list so that the image and other data being displayed are updated when the user selects a new product. The first thing this function does is retrieve the currently selected value from the drop-down list:

string id = ProductDropDown.SelectedValue;

Then, as with the PopulateDropDown function, this function queries the WCF data service to retrieve the product selected from the drop-down list:

AdventureWorksEntities ctx = new AdventureWorksEntities(
    new Uri("http://localhost:59560/ProductPhotoDataService.svc")
);

var qry = from p in ctx.Products
          where p.ProductID == Convert.ToInt32(id)
          select p;

Next, the function iterates the results and updates the display, including the summary information table and the product image:

foreach (Product p in qry)
{
    TableProduct.Rows[0].Cells[1].Text = p.Class;
    TableProduct.Rows[1].Cells[1].Text = p.Color;
    TableProduct.Rows[2].Cells[1].Text = p.Size + " " + p.SizeUnitMeasureCode;
    TableProduct.Rows[3].Cells[1].Text = p.Weight + " " + p.WeightUnitMeasureCode;
    TableProduct.Rows[4].Cells[1].Text = p.ListPrice.ToString();
    TableProduct.Rows[5].Cells[1].Text = p.ProductNumber;
}
UpdateImage(id);

The UpdateImage function, called by two of the event handlers in this example, consists of a single line that changes the URL of the product image:

ProductImage.ImageUrl = string.Format("GetImage.aspx?id={0}", id);

Image Note  In order to actually show the images on a web page, we had to resort to an old ASP.NET trick. Because the images are stored in the database, we had to create a second page in the project called GetImage.aspx to retrieve the appropriate image. This method calls the WCF data service and returns the binary product photo image as a JPEG image. We won’t go into the details here because they’re not essential to understanding WCF Data Services, but the source code is available in the downloadable sample files for the curious.

Now that you’ve seen how to create a basic WCF data service consumer, let’s review some of the SQL Server 2014 features supported in ADO.NET 4.5. ADO.NET 4.5 supports null bit compression with sparse columns. Imagine a table in which more than half the columns are nullable and have null values for all the rows. When you use null bit compression and a sparse column schema, you save on storage as well as optimize the amount of data transferred over the wire.

ADO.NET 4.5 also adds support for LocalDB. Remember that LocalDB needs to be started for your code to be able to access it.
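
As described in the LocalDB section earlier in this chapter, the SqlClient provider starts the instance automatically as part of the connection attempt, so a minimal sketch like the following is enough to verify that the automatic instance responds:

using (SqlConnection cn = new SqlConnection(@"Server=(localdb)\v12.0;Integrated Security=true"))
{
    cn.Open();   // the provider starts the LocalDB instance if it isn't already running
    Console.WriteLine(cn.ServerVersion);
}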

Summary

SQL Server 2012 introduced an addition to SQL Server Express named LocalDB that lets you use databases as files in applications and simplifies embedding database capabilities in local, easy-to-deploy applications. At the same time, the SQL Server data-access libraries keep improving, supporting heterogeneous environments that include Linux systems and Java code.

In SQL Server 2005, Microsoft introduced HTTP SOAP endpoints, which allowed developers to expose SPs and UDFs in the database as web service methods. Because it wasn’t a full-featured and solid-enough implementation, and also because Microsoft wants to focus on a unified framework for connected systems, HTTP endpoints have been removed from SQL Server 2014.

The chapter ended with an introduction to WCF Data Services. With built-in support for entity data models and the powerful ADO.NET EDM designer, REST-style querying, and both the JSON and Atom payload formats, WCF Data Services can provide a lightweight alternative to SOAP-based web services and is a good way to provide interoperability across systems.

EXERCISES

  1. [True/False] A LocalDB instance can be run as a Windows service.
  2. [True/False] You can’t access an XML data-type column if you access SQL Server from a Linux computer.
  3. [True/False] HTTP SOAP endpoints can be created in SQL Server 2014.
  4. [Fill in the blank] Visual Studio 2010 and 2012 provide a _________ project template to create new web services.
  5. [True/False] Visual Studio 2012 includes a graphical EDM designer.
  6. [Choose one] WCF Data Services accepts which type of query requests?
    1. SQL queries
    2. XSLT queries
    3. REST-style queries
    4. English language queries