Full Trust European Hosting

BLOG about Full Trust Hosting and Its Technology - Dedicated to European Windows Hosting Customer

European Visual Studio 2017 Hosting - HostForLIFE.eu :: Enabling Preview Features In Visual Studio 2019

April 21, 2021 08:38 by author Peter

By default, Visual Studio 2019 doesn't enable preview feature selection. Say you have .NET 5.0 installed on your machine and you are creating a new Console Application. After creating the application, you will notice that, although .NET 5.0 is installed, the application still picks up .NET Core 3.1 as the default framework. In fact, the application didn't ask for a framework selection either.

So, how can we select the framework while creating the application itself? To do this, we need to enable Preview Features in Visual Studio by going to the Options menu (Tools > Options > Environment > Preview Features), as shown below,



MySQL Hosting Europe - HostForLIFEASP.NET :: Character Sets And Collations in MySQL

April 16, 2021 08:23 by author Peter

In this tutorial, I am going to explain Character Sets and Collations in MySQL with examples. Let's see.

CHARACTER SETS
A character set is a set of symbols and their encodings, while a collation is a set of rules for comparing the characters in a character set. MySQL includes character set support that lets it store data using a variety of character sets and perform comparisons according to a variety of collations. Character sets and collations can be specified at the server, database, table, and column level. Each character set has one or more collations that define the character comparison rules within that character set.
 
Character sets have evolved through history, but that history is beyond the bounds of this article. Along the way, the Unicode standard emerged and subsumed most of the older, locale-specific character sets. It is important to be conscious of the difference between Unicode and local character sets.
MySQL supports various character sets that allow almost any character to be stored in a string. Below is the CHARACTER SET Statement, by which the user can get all available character sets in the MySQL database.
SHOW CHARACTER SET;

Every character set has at least one collation, and many have several.
 
In MySQL, you can use the "SHOW COLLATION" statement to get all the collations for a given character set.
 
Syntax
SHOW COLLATION LIKE 'character_set_name%';
 
Example

    SHOW COLLATION LIKE 'latin1%';

Setting Character Sets and Collations at Database Level
 
When you create a database, you can specify the default character set and collation for the database. But, if you don't specify them, MySQL will use its server-level defaults.
 
Syntax
CREATE DATABASE <database_name>
CHARACTER SET <character_set_name>
COLLATE <collation_name>;
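For example (the database name testdb here is just an illustration; latin1_swedish_ci is latin1's default collation):

    CREATE DATABASE testdb
    CHARACTER SET latin1
    COLLATE latin1_swedish_ci;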

Setting Character Sets and Collations at the Table Level
 
When you create a table, you can also specify the default character set and collation for the table. But, if you don't specify them, MySQL will use the database defaults.
 
Syntax
CREATE TABLE <table_name> (
<column_name1> datatype,
<column_name2> datatype, …
<column_nameN> datatype
)
CHARACTER SET <character_set_name>
COLLATE <collation_name>;
 

For Example
 
1) Defining a table with a character set and collation
 
Example - In the following example, we define a table with a default character set and collation.

CREATE TABLE test(  
  C_ID INT,  
  C_Name VARCHAR(50)  
)  
DEFAULT CHARACTER SET LATIN1  
COLLATE latin1_general_ci;  
  
DESCRIBE test; 

 

2) Setting the Collation and Character Set at the Table Level
 
Example - Here we give the table its own character set and collation. Create a table "RACEWINNER" with R_ID and First_name columns and set the collation.

    Create table RACEWINNER(  
      R_ID INT,  
      First_name varchar(30)  
    ) CHARACTER SET latin1  
    COLLATE latin1_danish_ci;  
      
    DESCRIBE RACEWINNER;



3) Setting the Collation and Character Set at the Column Level
 
Example - In this example, create a table "Company" and set the character set and collation on a column.

    Create table Company(  
      C_column varchar(50) CHARACTER SET latin1 COLLATE latin1_german1_ci  
    );  
      
    DESCRIBE Company;

HostForLIFE.eu SQL Server 2019 Hosting
HostForLIFE.eu is a European Windows Hosting Provider which focuses on the Windows platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment, and personal use market segments.



AngularJS Hosting Europe - HostForLIFE.eu :: How To Install And Use jQuery In Angular?

March 30, 2021 07:12 by author Peter

In this article, we will learn how to install and use jQuery in an Angular app in Visual Studio Code.
 

Step 1
Create an Angular project using the below command, or however you usually create your Angular app:
ng new sample
 
Step 2
Now, we must install jquery in our Angular app. Open a new terminal and run the below command.
npm install jquery --save

Step 3
The jQuery package puts jquery.min.js under its 'dist' folder, which is not exposed to the application by default. To make jQuery available globally, we add a reference to the jQuery file path in the angular.json file.
 
When we reference a file path inside angular.json, it is resolved from the workspace root, which is also where the node_modules folder lives. So the correct path to define is:

node_modules/jquery/dist/jquery.min.js
 
Now we need to reference our jQuery script file in the scripts array, as shown below.
 
angular.json

    {  
      "$schema": "./node_modules/@angular/cli/lib/config/schema.json",  
      "version": 1,  
      "newProjectRoot": "projects",  
      "projects": {  
        "sample": {  
          "projectType": "application",  
          "schematics": {  
            "@schematics/angular:component": {  
              "style": "scss"  
            },  
            "@schematics/angular:application": {  
              "strict": true  
            }  
          },  
          "root": "",  
          "sourceRoot": "src",  
          "prefix": "app",  
          "architect": {  
            "build": {  
              "builder": "@angular-devkit/build-angular:browser",  
              "options": {  
                "outputPath": "dist/sample",  
                "index": "src/index.html",  
                "main": "src/main.ts",  
                "polyfills": "src/polyfills.ts",  
                "tsConfig": "tsconfig.app.json",  
                "aot": true,  
                "assets": [  
                  "src/favicon.ico",  
                  "src/assets"  
                ],  
                "styles": [  
                  "src/styles.scss"  
                ],  
                "scripts": [  
                  "../node_modules/jquery/dist/jquery.min.js"  
                ]  
              },  
              "configurations": {  
                "production": {  
                  "fileReplacements": [  
                    {  
                      "replace": "src/environments/environment.ts",  
                      "with": "src/environments/environment.prod.ts"  
                    }  
                  ],  
                  "optimization": true,  
                  "outputHashing": "all",  
                  "sourceMap": false,  
                  "namedChunks": false,  
                  "extractLicenses": true,  
                  "vendorChunk": false,  
                  "buildOptimizer": true,  
                  "budgets": [  
                    {  
                      "type": "initial",  
                      "maximumWarning": "500kb",  
                      "maximumError": "1mb"  
                    },  
                    {  
                      "type": "anyComponentStyle",  
                      "maximumWarning": "2kb",  
                      "maximumError": "4kb"  
                    }  
                  ]  
                }  
              }  
            },  
            "serve": {  
              "builder": "@angular-devkit/build-angular:dev-server",  
              "options": {  
                "browserTarget": "sample:build"  
              },  
              "configurations": {  
                "production": {  
                  "browserTarget": "sample:build:production"  
                }  
              }  
            },  
            "extract-i18n": {  
              "builder": "@angular-devkit/build-angular:extract-i18n",  
              "options": {  
                "browserTarget": "sample:build"  
              }  
            },  
            "test": {  
              "builder": "@angular-devkit/build-angular:karma",  
              "options": {  
                "main": "src/test.ts",  
                "polyfills": "src/polyfills.ts",  
                "tsConfig": "tsconfig.spec.json",  
                "karmaConfig": "karma.conf.js",  
                "assets": [  
                  "src/favicon.ico",  
                  "src/assets"  
                ],  
                "styles": [  
                  "src/styles.scss"  
                ],  
                "scripts": []  
              }  
            },  
            "lint": {  
              "builder": "@angular-devkit/build-angular:tslint",  
              "options": {  
                "tsConfig": [  
                  "tsconfig.app.json",  
                  "tsconfig.spec.json",  
                  "e2e/tsconfig.json"  
                ],  
                "exclude": [  
                  "**/node_modules/**"  
                ]  
              }  
            },  
            "e2e": {  
              "builder": "@angular-devkit/build-angular:protractor",  
              "options": {  
                "protractorConfig": "e2e/protractor.conf.js",  
                "devServerTarget": "sample:serve"  
              },  
              "configurations": {  
                "production": {  
                  "devServerTarget": "sample:serve:production"  
                }  
              }  
            }  
          }  
        }  
      },  
      "defaultProject": "sample"  
    }  

App.module.ts
Now let's look at app.module.ts, which keeps its default declarations:
    import { BrowserModule } from '@angular/platform-browser';  
    import { NgModule } from '@angular/core';  
    import { AppRoutingModule } from './app-routing.module';  
    import { AppComponent } from './app.component';  
      
    @NgModule({  
      declarations: [  
        AppComponent  
      ],  
      imports: [  
        BrowserModule,  
        AppRoutingModule  
      ],  
      providers: [],  
      bootstrap: [AppComponent]  
    })  
    export class AppModule { }  

Step 4
Now, we will write the integration markup in app.component.html:
    <h1>{{ name }} </h1>  
    Steps are as follows: <br><br>  
    Step 1) Install the required dependencies<br>  
    $ npm install jquery @types/jquery --save  
    Step 2)  
    <br><br>  
    <h3>{{isJqueryWorking}}</h3>  


Step 5
We need to declare the jQuery symbol in the app.component.ts file.
    declare var $: any;  

First, we import OnInit from Angular core so we can implement the ngOnInit lifecycle hook.
    import { Component, OnInit } from '@angular/core';  

Then, we implement the OnInit interface in the component class.
    export class AppComponent implements OnInit {  

 Next, we can open the app.component.ts and write some code.
    import { Component, OnInit } from '@angular/core';  
    declare var $: any;  
      
    @Component({  
      selector: 'app-root',  
      templateUrl: './app.component.html',  
      styleUrls: ['./app.component.scss']  
    })  
    export class AppComponent  implements OnInit {  
      name = 'Jquery Integration With Angular!';  
      isJqueryWorking: any;  
      ngOnInit()  
      {  
        $(document).ready(() => {  
            this.isJqueryWorking = 'Jquery is working !!!';  
        });  
      }  
    }   


Step 6
Now we will run the application
ng serve --port 1222

On successful execution of the above command, the application will open in the browser,



AngularJS Hosting Europe - HostForLIFE.eu :: How To Create Material Design In Angular?

March 24, 2021 06:40 by author Peter

In this article, we will learn how to create a new Angular 11 project using the ng new command and install Angular Material using the ng add command. After that, we will create a simple mat-tab example in Visual Studio Code.

Step 1
Create an Angular project using the below command, or however you usually create your Angular app:
ng new samplemat

Step 2
Open a new terminal and run the below command to install the Angular Material module in your app.
ng add @angular/material
 
The ng add command will install Angular Material, the Component Dev Kit (CDK), and Angular Animations.
 
The installer will ask some questions during installation,

    Choose a prebuilt theme name, or "custom" for a custom theme:

You can choose from prebuilt material design themes or set up an extensible custom theme.

    Set up global Angular Material typography styles:

Whether to apply the global typography styles to your application.

    Set up browser animations for Angular Material:

Importing the BrowserAnimationsModule into your application enables Angular's animation system. Declining this will disable most of Angular Material's animations.
 
The ng add command will additionally perform the following configurations,

    Add project dependencies to package.json
    Add the Roboto font to your index.html
    Add the Material Design icon font to your index.html
    Add a few global CSS styles to:
    Remove margins from body
    Set height: 100% on html and body
    Set Roboto as the default application font

We are done, and Angular Material is now configured to be used in our application.

 
Step 3 - App.module.ts
 
Now we will declare the Material modules in app.module.ts. Note that since Angular 9, Material components are imported from per-component entry points (e.g. '@angular/material/tabs') rather than the top-level '@angular/material' package:
    import { NgModule } from '@angular/core';  
    import { BrowserModule } from '@angular/platform-browser';  
    import { BrowserAnimationsModule } from '@angular/platform-browser/animations';  
    import { MatTabsModule } from '@angular/material/tabs';  
    import { MatButtonModule } from '@angular/material/button';  
    import { MatToolbarModule } from '@angular/material/toolbar';  
      
    import { AppComponent } from './app.component';  
    @NgModule({  
      imports:      [ BrowserModule, BrowserAnimationsModule, MatTabsModule, MatButtonModule, MatToolbarModule ],  
      declarations: [ AppComponent ],  
      bootstrap:    [ AppComponent ]  
    })  
    export class AppModule { }  


Step 4
Now, we will write the integration markup in app.component.html:
    <p>  
      Material Tabs ui  
    </p>  
    <mat-toolbar>  
      Get the change event!  
    </mat-toolbar>  
    <mat-tab-group style="margin-bottom: 20px;" #changeEvent (selectedIndexChange)="tabChanged($event)">  
      <mat-tab label="Tab 1">Tab 1</mat-tab>  
      <mat-tab label="Tab 2">Tab 2</mat-tab>  
    </mat-tab-group>  
    <mat-toolbar>  
      Get the tabs  
    </mat-toolbar>  
    <mat-tab-group #selectTabs>  
      <mat-tab label="Tab 1">Tab 1</mat-tab>  
      <mat-tab label="Tab 2">Tab 2</mat-tab>  
      <mat-tab label="Tab 3">Tab 3</mat-tab>  
    </mat-tab-group>  

Step 5
Next, we can open the app.component.ts and write some code.
    import { Component, OnInit, AfterViewInit, ViewChild, ViewChildren, QueryList } from '@angular/core';  
    import { MatTabGroup } from '@angular/material/tabs';  
    @Component({  
      selector: 'my-app',  
      templateUrl: './app.component.html',  
      styleUrls: [ './app.component.css' ]  
    })  
    export class AppComponent implements OnInit, AfterViewInit  {  
      ngOnInit() {  
      }  
      @ViewChildren("selectTabs") selectTabs: QueryList<any>    
      ngAfterViewInit() {  
        console.log('total tabs: ' + this.selectTabs.first._tabs.length);  
      }  
      tabChanged(tabChangeEvent: number) {  
        console.log('tab selected: ' + tabChangeEvent);  
      }  
    }  

In style.scss
    /* Add application styles & imports to this file! */  
    @import '~@angular/material/theming';  
    @include mat-core();  
    $candy-app-primary: mat-palette($mat-blue);  
    $candy-app-accent:  mat-palette($mat-pink, A200, A100, A400);  
    $candy-app-warn:    mat-palette($mat-red);  
    $candy-app-theme: mat-light-theme($candy-app-primary, $candy-app-accent, $candy-app-warn);  
    @include angular-material-theme($candy-app-theme);   


Step 6
 

Now we will run the application,
ng serve --port 1223

On successful execution of the above command, the application will open in the browser,

 

 



AngularJS Hosting Europe - HostForLIFE.eu :: How To Set HTML Meta Tags In Angular?

March 9, 2021 05:55 by author Peter

Meta tags are simply small snippets of HTML code that describe the content of your web page. Meta tags provide metadata about the HTML document and are typically used to provide information about webpage content, author, keywords, viewport settings, etc. Meta tags usually go inside the <head> section of your HTML document and are not visible on the webpage. These tags are primarily used by web browsers, search engines, and other web services to understand the content of the webpage; e.g., the viewport tag is used by web browsers to gather information about the webpage's dimensions and scaling.
 

Angular comes bundled with Meta service which we could use to work with the meta tags. In this article, we'll understand how we could utilize this service to add, remove, and update meta tags in your Angular application.
 
Importing the meta service
Before using the meta service, we need to import it from the platform browser package.
    import { Meta } from '@angular/platform-browser';  

Next, we'll inject it inside our component via the constructor.
    constructor(private meta: Meta) {}   

Adding Meta Tags
To add new meta tags, the Angular Meta service provides two methods addTag and addTags.
    this.meta.addTag({ name: 'description', content: 'This is an article about Angular Meta service' });  

The addTag method accepts a meta definition object as a parameter that is used to describe the meta tag.
 
The above code will result in following HTML meta tag element.
    <meta name="description" content="This is an article about Angular Meta service">  

You could also use addTags to add multiple meta tags at the same time.
    this.meta.addTags([
      { name: 'description', content: 'This is an article about Angular Meta service' },
      { name: 'keywords', content: 'angular, javascript, typescript, meta, seo' }  
    ]);
 

The above code will result in the following HTML meta tag elements:
    <meta name="description" content="This is an article about Angular Meta service">  
    <meta name="keywords" content="angular, javascript, typescript, meta, seo">  

One thing to keep in mind is both addTag and addTags methods also accept a second parameter forceCreation, which forces the methods to create a new meta tag element without checking if it already exists.
 
Retrieving Meta Tags
To read meta tags from the DOM, we could use the getTag or getTags methods provided by the Meta service. Both methods accept a string that represents an attribute selector and return the matching element(s) based on that string:
    const keywords = this.meta.getTag('name=keywords');    
    console.log(keywords.content);    
    // Output: angular, javascript, typescript, meta, seo
    

The getTag method returns an HTMLMetaElement, while getTags returns an array of HTMLMetaElement objects.
 
One thing to keep in mind is that getTag returns the first meta tag matching the selector argument, whereas the getTags method returns all the instances of meta tags that match the selector.
    let tags = this.meta.getTags('name');  

In the above code, getTags will return all the instances of meta tags that contain the name attribute and will save those instances, in the form of an array, inside the tags variable.
 
Updating Meta tags
To update existing meta tags we could use the updateTag method that comes bundled with the Angular Meta service.
    this.meta.addTag({ name: 'keywords', content: 'angular, javascript, typescript, meta, seo' });  
    setTimeout(() => {  
      this.meta.updateTag(  
        { name: 'keywords', content: 'angular, javascript, typescript, meta' },  
        'name=keywords'  
      )  
    }, 4000)  

In the above code snippet, we're first adding a new meta tag with name keywords. Next, we're updating the tag after 4 seconds using the updateTag method. One thing to keep in mind is if the tag with name keywords doesn't exist in the DOM, the updateTag method will create a new element.
 
Removing Meta Tags
The removeTag method accepts a selector string as an argument and removes the tag if the element matching the selector is found.
    this.meta.removeTag('name=keywords');  

The above code searches for a meta tag whose name is keywords, and if a match is found, it'll simply remove that meta tag from the DOM.
 
The Meta service also provides a removeTagElement method that could be used to remove a meta tag. However, unlike the removeTag method, it accepts HTMLMetaElement and removes that element from the DOM.
    const keywords = this.meta.getTag('name=keywords')  
    this.meta.removeTagElement(keywords);  


The above code first selects the keywords meta tag element from the DOM and then passes it as an argument to the removeTagElement method which removes the element from the DOM.
 
And that's it!
 
For more information,
    Meta Tags: https://developer.mozilla.org/en-US/docs/Web/HTML/Viewport_meta_tag
    Meta service: https://angular.io/api/platform-browser/Meta
    Meta Definition: https://angular.io/api/platform-browser/MetaDefinition

I hope you found this article useful.



SQL Server 2019 Hosting - HostForLIFEASP.NET :: SQL Index Creation Using DROP EXISTING ON

March 2, 2021 08:21 by author Peter

When you are making changes to an existing non-clustered index, SQL Server provides a wide variety of options. One of the more frequently used methods is DROP EXISTING; in this post you will learn all about that option. This option automatically drops the existing index after recreating it, without you having to drop the index explicitly. Let us take a moment to understand the behavior of this choice.

DROP_EXISTING = ON
 
This, my preferred method, will DROP the current index only after it finishes creating and building the index with the new definition. The pitfall is that if the index does not exist, you will get an error and must create it without the option or set it to OFF. However, the more important benefit of using this option is performance: the index will still be used by active queries until it is rebuilt with the new definition.
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = ON, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  


If the index does not exist, you will get a 7999 error.
 
Msg 7999, Level 16, State 9, Line 1
 
Could not find any index named 'dcacIDX_ServiceType' for table 'dbo.Accounts'.
 
There are a few exceptions to keep in mind per docs.microsoft.com.
 
With DROP_EXISTING, you can change,

    A nonclustered rowstore index to a clustered rowstore index.

With DROP_EXISTING, you cannot change,

    A clustered rowstore index to a nonclustered rowstore index.
    A clustered columnstore index to any type of rowstore index.

DROP and CREATE
 
This option is cleaner and won't error if the index doesn't already exist. However, I caution you when using this, especially on a large table. This option drops the index before it creates the new one, leaving your system without the previous index definition. This can create a huge performance issue while the system waits for the new index to be created. I know this firsthand, as I did this with a client a few years ago, during the day, while trying to fix a performance issue. I created a worse issue while waiting for the new index to be created. It took 45 minutes to create the new index with the new definition, which caused CPU to spike to 100% while active queries were trying to come through. Which, sadly, in turn slowed down the new index creation.

    DROP INDEX IF EXISTS [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    GO  
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  


Now I should also note that the DROP_EXISTING method is also faster when you must modify a clustered index. Every non-clustered index refers to the clustered index using what is called a clustering key, essentially a pointer to the row in the clustered index. When a clustered index is dropped and re-created, SQL Server must rebuild the non-clustered indexes on that table. In fact, it gets done twice: they are rebuilt on the drop and rebuilt again on the create of the clustered index. Using DROP_EXISTING = ON prevents you from having to rebuild all these indexes, as their keys will stay the same, thus making it significantly faster.
 
The reason I took the time to write this quick blog is to remind you to consider using DROP_EXISTING = ON rather than the DROP and CREATE method when possible. Do not introduce a performance issue when you can avoid it, and you can more efficiently make the desired changes you need. Just a friendly reminder.



Europe SQL Hosting - HostForLIFEASP.NET :: T-SQL Query Performance Tuning Tips

February 26, 2021 07:19 by author Peter

In this article, I will discuss some T-SQL query performance tips and tricks for SQL Server programmers. The tips mentioned in this article may sound obvious to most of you, but I have seen professional developers who don't think before using them.

My first tip is to use a WHERE clause in your SELECT statement to narrow the number of rows returned. If you don't use a WHERE clause, then SQL Server will perform a table scan of your table and return all of the rows. In some cases, you may want to return all rows, and not using a WHERE clause is appropriate in this case. But if you don't need all the rows returned, use a WHERE clause to limit the number of rows returned.

By returning data you don't need, you are causing SQL Server to perform I/O it doesn't need to perform, wasting SQL Server resources. In addition, it increases network traffic, which can also lead to reduced performance. And if the table is very large, a table scan will lock the table during the time-consuming scan, preventing other users from accessing it and hurting concurrency.

Another negative aspect of a table scan is that it will tend to flush out data pages from the cache with useless data, which reduces SQL Server's ability to reuse useful data in the cache, which increases disk I/O and hurts performance. [6.5, 7.0, 2000]
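As a minimal sketch of the difference (reusing the customers table that appears in later examples):

-- No WHERE clause: full table scan, every row returned
SELECT fname, lname FROM customers

-- WHERE clause narrows the rows returned and can use an index on state
SELECT fname, lname FROM customers
WHERE state = 'AP'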

To help identify long-running queries, use the SQL Server Profiler Create Trace Wizard to run the "TSQL By Duration" trace. You can specify the length of the long-running queries you are trying to identify (such as over 1000 milliseconds), and then have these recorded in a log for you to investigate later. [7.0]

When using the UNION statement, keep in mind that, by default, it performs the equivalent of a SELECT DISTINCT on the final result set. In other words, UNION takes the results of two like recordsets, combines them, and then performs a SELECT DISTINCT in order to eliminate any duplicate rows. This process occurs even if there are no duplicate records in the final recordset. If you know that there are duplicate records, and this presents a problem for your application then by all means use the UNION statement to eliminate the duplicate rows.

On the other hand, if you know that there will never be any duplicate rows, or if there are, and this presents no problem to your application, then you should use the UNION ALL statement instead of the UNION statement. The advantage of the UNION ALL is that it does not perform the SELECT DISTINCT function, which saves a lot of unnecessary SQL Server resources from being used. [6.5, 7.0, 2000]

Sometimes you might want to merge two or more sets of data resulting from two or more queries using UNION. For example:
SELECT column_name1, column_name2  
FROM table_name1  
WHERE column_name1 = some_value  
UNION  
SELECT column_name1, column_name2  
FROM table_name1  
WHERE column_name2 = some_value  

This same query can be rewritten, as in the following example, and when doing so, performance will be increased:
SELECT DISTINCT column_name1, column_name2  
FROM table_name1  
WHERE column_name1 = some_value OR column_name2 = some_value  


And if you can assume that neither criterion will return duplicate rows, then you can further boost the performance of this query by removing the DISTINCT clause. [6.5, 7.0, 2000].

Carefully evaluate whether your SELECT query needs a DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary. This is a bad habit that should be stopped.

The DISTINCT clause should only be used in SELECT statements if you know that duplicate returned rows are a possibility, and that having duplicate rows in the result set would cause problems with your application.

The DISTINCT clause creates a lot of extra work for SQL Server and reduces the physical resources that other SQL statements have at their disposal. Because of this, only use the DISTINCT clause if it is necessary. [6.5, 7.0, 2000]

In your queries, don't return column data you don't need. For example, you should not use SELECT * to return all the columns from a table if you don't need all the data from each column. In addition, using SELECT * prevents the use of covered indexes, further potentially hurting query performance. [6.5, 7.0, 2000]

If your application allows users to run queries, but you are unable in your application to easily prevent users from returning hundreds, even thousands of unnecessary rows of data they don't need, consider using the TOP operator within the SELECT statement. This way, you can limit how many rows are returned, even if the user doesn't enter any criteria to help reduce the number of rows returned to the client. For example, the statement:

SELECT TOP 100 fname, lname FROM customers  
WHERE state = 'AP'  


Limits the results to the first 100 rows returned, even if 10,000 rows actually meet the criteria of the WHERE clause. When the specified number of rows is reached, all processing on the query stops, potentially saving SQL Server overhead, and boosting performance.

The TOP operator works by allowing you to specify a specific number of rows to be returned, like the example above, or by specifying a percentage value, like this:

SELECT TOP 10 PERCENT fname, lname FROM customers  
WHERE state = 'AP'  


In the preceding example, only 10 percent of the available rows would be returned. Keep in mind that using this option may prevent the user from getting the data they need. For example, the data they are looking for may be in record 101, but they only get to see the first 100 records. Because of this, use this option with discretion. [7.0, 2000]

You may have heard of a SET command called SET ROWCOUNT. Like the TOP operator, it is designed to limit how many rows are returned from a SELECT statement. The SET ROWCOUNT and the TOP operator effectively perform the same function.

While in most cases, using either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than using SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query. [6.5, 7.0, 2000]
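For illustration, here is a SET ROWCOUNT version of the earlier TOP example; note that the setting stays in effect for the connection until it is reset to 0:

SET ROWCOUNT 100
SELECT fname, lname FROM customers
WHERE state = 'AP'
SET ROWCOUNT 0   -- reset, otherwise the limit also applies to later statements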

In a WHERE clause, the various operators used directly affect how fast a query is run. This is because some operators lend themselves to speed over other operators. Of course, you may not have any choice of which operator you use in your WHERE clauses, but sometimes you do.

Here are the key operators used in the WHERE clause, ordered by their performance. Those operators at the top will produce results faster than those listed at the bottom.

=
>, >=, <, <=
LIKE
<>


The lesson here is to use = as much as possible, and <> as little as possible. [6.5, 7.0, 2000]

In a WHERE clause, the various operands used directly affect how fast a query is run. This is because some operands lend themselves to speed over other operands. Of course, you may not have any choice of which operand you use in your WHERE clauses, but sometimes you do.

Here are the key operands used in the WHERE clause, ordered by their performance. Those operands at the top will produce results faster than those listed at the bottom.

A single literal used by itself on one side of an operator
A single column name used by itself on one side of an operator, a single parameter used by itself on one side of an operator
A multi-operand expression on one side of an operator
A single exact number on one side of an operator
Other numeric numbers (other than exact), date, and time
Character data, NULLs

The simpler the operand, and the more exact the numbers used, the better the overall performance. [6.5, 7.0, 2000]

If a WHERE clause includes multiple expressions, there is generally no performance benefit gained by ordering the various expressions in any particular order. This is because the SQL Server Query Optimizer does this for you, saving you the effort. There are a few exceptions to this, which are discussed on this web site. [7.0, 2000]

Don't include code that doesn't do anything. This may sound obvious, but I have seen this in some off-the-shelf SQL Server-based applications. For example, you may see code like this:

SELECT column_name FROM table_name  
WHERE 1 = 0  


When this query is run, no rows will be returned. Obviously, this is a simple example (and most of the cases where I have seen this done have been in very long queries); a query (or part of a larger query) like this doesn't do anything useful and shouldn't be run. It is just wasting SQL Server resources. In addition, I have seen more than one case where such dead code actually causes SQL Server to throw errors, preventing the code from even running. [6.5, 7.0, 2000]

By default, some developers, especially those who have not worked with SQL Server before, routinely include code similar to this in their WHERE clauses when they make string comparisons:

SELECT column_name FROM table_name  
WHERE LOWER(column_name) = 'name'  

In other words, these developers are making the assumption that the data in SQL Server is case-sensitive, which it generally is not. If your SQL Server database is not configured to be case sensitive then you don't need to use LOWER or UPPER to force the case of text to be equal for a comparison to be performed. Just leave these functions out of your code. This will speed up the performance of your query, as any use of text functions in a WHERE clause hurts performance.

But what if your database has been configured to be case-sensitive? Should you then use the LOWER and UPPER functions to ensure that comparisons are properly compared? No. The preceding example is still poor coding. If you need to deal with ensuring the case is consistent for proper comparisons, use the technique described below, along with appropriate indexes on the column in question:

SELECT column_name FROM table_name  
WHERE column_name = 'NAME' or column_name = 'name'  

This code will run much faster than the first example. [6.5, 7.0, 2000]

Try to avoid WHERE clauses that are non-sargable. The term "sargable" (which is in effect a made-up word) comes from the pseudo-acronym "sarg", which stands for "search argument," and refers to a WHERE clause that compares a column to a constant value. If a WHERE clause is sargable, this means that it can take advantage of an index (assuming one is available) to speed the completion of the query. If a WHERE clause is non-sargable, this means that the WHERE clause (or at least part of it) cannot take advantage of an index and instead performs a table/index scan, which may cause the query's performance to suffer.

Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'", generally prevent (but not always) the query optimizer from using an index to perform a search. In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against a column (not a constant), are not sargable.

But not every WHERE clause that has a non-sargable expression in it is doomed to a table/index scan. If the WHERE clause includes both sargable and non-sargable clauses, then at least the sargable clauses can use an index (if one exists) to help access the data quickly.

In many cases, if there is a covering index on the table that includes all of the columns in the SELECT, JOIN, and WHERE clauses in a query, then the covering index can be used instead of a table/index scan to return a query's data, even if it has a non-sargable WHERE clause. But keep in mind that covering indexes have their own drawbacks, such as producing very wide indexes that increase disk I/O when they are read.

In some cases, it may be possible to rewrite a non-sargable WHERE clause into one that is sargable. For example, the clause:

WHERE SUBSTRING(firstname,1,1) = 'm'  

Can be rewritten like this:

WHERE firstname LIKE 'm%'  

Both of these WHERE clauses produce the same result, but the first one is non-sargable (it uses a function) and will run slow, while the second one is sargable, and will run much faster.

WHERE clauses that perform some function on a column are non-sargable. On the other hand, if you can rewrite the WHERE clause so that the column and function are separate, then the query can use an available index, greatly boosting performance. For example, in the following example, a function acts directly on a column, and the index cannot be used:

SELECT member_number, first_name, last_name  
FROM members  
WHERE DATEDIFF(yy,dateofbirth,GETDATE()) > 21  

In the following example, a function has been separated from the column and an index can be used:

SELECT member_number, first_name, last_name  
FROM members  
WHERE dateofbirth < DATEADD(yy,-21,GETDATE())  


Each of the preceding queries produces the same results, but the second query will use an index because the function is not performed directly on the column, as it is in the first example. The moral of this story is to try to rewrite WHERE clauses that have functions so that the function does not act directly on the column.

WHERE clauses that use NOT are not sargable, but can often be rewritten to remove the NOT from the WHERE clause, for example:

WHERE NOT column_name > 5  

To

WHERE column_name <= 5  

Each of the preceding clauses produces the same results, but the second one is sargable. If you don't know if a particular WHERE clause is sargable or non-sargable, check out the query's execution plan in Query Analyzer. Doing this, you can very quickly see if the query will be using index lookups or table/index scans to return your results.
With some careful analysis and some clever thought, many non-sargable queries can be rewritten so that they are sargable. Your goal for best performance (assuming it is possible) is to get the left side of a search condition to be a single column name, and the right side an easy-to-look-up value. [6.5, 7.0, 2000]

If you run into a situation where a WHERE clause is not sargable because of the use of a function on the right side of an equality sign (and there is no other way to rewrite the WHERE clause) then consider creating an index on a computed column instead. This way, you avoid the non-sargable WHERE clause altogether, using the results of the function in your WHERE clause instead. Because of the additional overhead required for indexes on computed columns, you will only want to do this if you need to run this same query over and over in your application, thereby justifying the overhead of the indexed computed column. [2000]
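As a sketch of this technique (the computed column and index names here are hypothetical), you could persist the function's result as a computed column on the members table from the earlier example and index it:

ALTER TABLE members ADD first_initial AS SUBSTRING(first_name, 1, 1)
GO
CREATE INDEX IX_members_first_initial ON members (first_initial)
GO

-- The search is now sargable against the indexed computed column
SELECT member_number, first_name, last_name
FROM members
WHERE first_initial = 'm'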

If you currently have a query that uses NOT IN, it offers poor performance because the SQL Server optimizer has to use a nested table scan to perform this activity. Instead, try one of the following options, all of which offer better performance:

Use EXISTS or NOT EXISTS
Use IN
Perform a LEFT OUTER JOIN and check for a NULL condition
 
[6.5, 7.0, 2000]


When you have a choice of using the IN or the EXISTS clause in your Transact-SQL then you will generally want to use the EXISTS clause, since it is usually more efficient and performs faster. [6.5, 7.0, 2000]

If you find that SQL Server uses a TABLE SCAN instead of an INDEX SEEK when you use an IN or OR clause as part of your WHERE clause, even when those columns are covered by an index then consider using an index hint to force the Query Optimizer to use the index.

For example:

SELECT * FROM tblTaskProcesses WHERE nextprocess = 1 AND processid IN (8,32,45)  

Takes about 3 seconds, whereas:

SELECT * FROM tblTaskProcesses (INDEX = IX_ProcessID) WHERE nextprocess = 1 AND processid IN (8,32,45)  

returns in under a second. [7.0, 2000]

If you use LIKE in your WHERE clause, then try to use one or more leading characters in the clause, if at all possible. For example, use:

LIKE 'm%'

not:

LIKE '%m'   

If you use a leading character in your LIKE clause, then the Query Optimizer has the ability to potentially use an index to perform the query, speeding performance and reducing the load on SQL Server.

But if the leading character in a LIKE clause is a wildcard then the Query Optimizer will not be able to use an index, and a table scan must be run, reducing performance and taking more time.

The more leading characters you can use in the LIKE clause, the more likely the Query Optimizer will find and use a suitable index. [6.5, 7.0, 2000]

If your application needs to retrieve summary data often, but you don't want to have the overhead of calculating it on the fly every time it is needed, consider using a trigger that updates summary values after each transaction into a summary table.

While the trigger has some overhead, overall, it may be less than having to calculate the data every time the summary data is needed. You may need to experiment to see which methods are fastest for your environment. [6.5, 7.0, 2000]
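A minimal sketch of such a trigger, assuming hypothetical orders and order_summary tables (an amount column feeding a single running total):

CREATE TRIGGER trg_orders_summary ON orders
AFTER INSERT
AS
BEGIN
  -- Add the amounts of the newly inserted rows to the running total
  UPDATE order_summary
  SET total_amount = total_amount + ISNULL((SELECT SUM(amount) FROM inserted), 0)
END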

If your application needs to insert a large binary value into an image data column, perform this task using a Stored Procedure, not using an INSERT statement embedded in your application.

The reason for this is because the application must first convert the binary value into a character string (that doubles its size, thus increasing network traffic and taking more time) before it can be sent to the server. And when the server receives the character string, it then must convert it back to the binary format (taking even more time).

Using a Stored Procedure avoids all this because all the activity occurs on the SQL Server, and little data is transmitted over the network. [6.5, 7.0, 2000]
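As a sketch, assuming a hypothetical documents table with an image column, the Stored Procedure could be as simple as:

CREATE PROCEDURE usp_InsertDocument
  @doc_id INT,
  @doc_body IMAGE
AS
  -- The binary value arrives at the server in native format; no
  -- character-string conversion is needed on either side
  INSERT INTO documents (doc_id, doc_body)
  VALUES (@doc_id, @doc_body)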

When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, since it is much more efficient. For example:
SELECT customer_number, customer_name  
FROM customer  
WHERE customer_number in (1000, 1001, 1002, 1003, 1004)  

Is much less efficient than this:

SELECT customer_number, customer_name  
FROM customer  
WHERE customer_number BETWEEN 1000 and 1004  


Assuming there is a useful index on customer_number, the Query Optimizer can locate a range of numbers much faster (using BETWEEN) than it can find a series of numbers using the IN clause (that is really just another form of the OR clause). [6.5, 7.0, 2000]

If possible, try to avoid using the SUBSTRING function in your WHERE clauses. Depending on how it is constructed, using the SUBSTRING function can force a table scan instead of allowing the optimizer to use an index (assuming there is one). If the substring you are searching for does not include the first character of the column you are searching for, then a table scan is performed.

If possible, you should avoid using the SUBSTRING function and use the LIKE condition instead, for better performance.

Instead of doing this:

WHERE SUBSTRING(column_name,1,1) = 'b'  

Try using this instead:

WHERE column_name LIKE 'b%'  

If you decide to make this choice, keep in mind that you will want your LIKE condition to be sargable, which means that you cannot place a wildcard in the first position. [6.5, 7.0, 2000]

Where possible, avoid string concatenation in your Transact-SQL code, as it is not a fast process, contributing to the overall slower performance of your application. [6.5, 7.0, 2000]

Generally, avoid using optimizer hints in your queries. This is because it is generally very hard to outguess the Query Optimizer. Optimizer hints are special keywords that you include with your query to force how the Query Optimizer runs. If you decide to include a hint in a query, this forces the Query Optimizer to become static, preventing the Query Optimizer from dynamically adapting to the current environment for the given query. More often than not, this hurts, not helps performance.

If you think that a hint might be necessary to optimize your query, then be sure you first do all of the following:

Update the statistics on the relevant tables.
If the problem query is inside a Stored Procedure, recompile it.
Review the search arguments to see if they are sargable, and if not, try to rewrite them so that they are sargable.
Review the current indexes, and make changes if necessary.

If you have done all of the preceding, and the query is not running as you expect, then you may want to consider using an appropriate optimizer hint. If you haven't heeded my advice and have decided to use some hints, keep in mind that, as your data changes, and as the Query Optimizer changes (through service packs and new releases of SQL Server), your hard-coded hints may no longer offer the benefits they once did. So if you use hints then you need to periodically review them to see if they are still performing as expected. [6.5, 7.0, 2000]

If you have a WHERE clause that includes expressions connected by two or more AND operators, SQL Server will evaluate them from left to right in the order they are written. This assumes that no parentheses have been used to change the order of execution. Because of this, you may want to consider one of the following when using AND:

Locate the least likely true AND expression first. This way, if the AND expression is false then the clause will end immediately, saving time.
If both parts of an AND expression are equally likely to be false then put the least complex AND expression first. This way, if it is false, less work will need to be done to evaluate the expression.

You may want to consider using a Query Analyzer to look at the execution plans of your queries to see which is best for your situation. [6.5, 7.0, 2000]
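A small sketch of the first suggestion (the selectivities assumed here are purely for illustration):

-- Assume state = 'AP' is rarely true and lname LIKE 'm%' is often true:
-- putting the least likely expression first lets evaluation stop early
SELECT fname, lname
FROM customers
WHERE state = 'AP'
  AND lname LIKE 'm%'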

If you want to boost the performance of a query that includes an AND operator in the WHERE clause then consider the following:

Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
If at least one of the search criteria in the WHERE clause is not highly selective then consider adding indexes to all of the columns referenced in the WHERE clause.
If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
 
[7.0, 2000]

The Query Optimizer will perform a table scan or a clustered index scan on a table if the WHERE clause in the query contains an OR operator and any of the referenced columns in the OR clause are not indexed (or do not have a useful index).

Because of this, if you use many queries with OR clauses then you will want to ensure that each referenced column in the WHERE clause has a useful index. [7.0, 2000]

A query with one or more OR clauses can sometimes be rewritten as a series of queries that are combined with a UNION ALL statement, in order to boost the performance of the query. For example, let's have a look at the following query:

SELECT employeeID, firstname, lastname  
FROM names  
WHERE dept = 'prod' or city = 'Orlando' or division = 'food'  


This query has three separate conditions in the WHERE clause. In order for this query to use an index, there must be an index on all three columns found in the WHERE clause.

This same query can be written using UNION ALL instead of OR, as in this example:

SELECT employeeID, firstname, lastname FROM names WHERE dept = 'prod'  
UNION ALL  
SELECT employeeID, firstname, lastname FROM names WHERE city = 'Orlando'  
UNION ALL  
SELECT employeeID, firstname, lastname FROM names WHERE division = 'food'  


Each of these queries will produce the same results. If there is only an index on dept, but not on the other columns in the WHERE clause, then the first version will not use any index and a table scan must be performed. But the second version of the query will use the index for part of the query, though not for all of it.

Admittedly, this is a very simple example, but even so, it does demonstrate how rewriting a query can affect whether or not an index is used. If this query were much more complex, then the approach of using UNION ALL might be much more efficient, since it allows you to tune each part of the query individually, something that cannot be done if you use only ORs in your query.

Note that I am using UNION ALL instead of UNION. The reason for this is to prevent the UNION statement from trying to sort the data and remove duplicates, which hurts performance. Of course, if there is the possibility of duplicates and you want to remove them, then you can use just UNION.

If you have a query that uses ORs that do not make the best use of indexes then consider rewriting it as a UNION ALL, and then testing the performance. Only through testing can you be sure that one version of your query will be faster than another. [7.0, 2000]

Don't use ORDER BY in your SELECT statements unless you really need to, since it adds a lot of extra overhead. For example, perhaps it may be more efficient to sort the data at the client than at the server. In other cases, perhaps the client doesn't even need sorted data to achieve its goal. The key here is to remember that you shouldn't automatically sort data unless you know it is necessary. [6.5, 7.0, 2000]

Whenever SQL Server needs to perform a sorting operation, additional resources need to be used to perform this task. Sorting often occurs when any of the following Transact-SQL statements are executed:

  • ORDER BY
  • GROUP BY
  • SELECT DISTINCT
  • UNION
  • CREATE INDEX (generally not as critical, as it happens much less often)

In many cases, these commands cannot be avoided. On the other hand, there are a few ways that sorting overhead can be reduced. These include:

Keep the number of rows to be sorted to a minimum. Do this by only returning those rows that absolutely need to be sorted.
Keep the number of columns to be sorted to the minimum. In other words, don't sort more columns than required.
Keep the width (physical size) of the columns to be sorted to a minimum.
Sort columns with numeric datatypes instead of character datatypes.

When using any of the preceding Transact-SQL commands, try to keep the preceding performance-boosting suggestions in mind. [6.5, 7.0, 2000]

If you need to sort by a particular column often, consider making that column a clustered index. This is because the data is already presorted for you and SQL Server is smart enough not to re-sort the data. [6.5, 7.0, 2000]
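For instance (the index name is assumed for illustration, and remember that a table can have only one clustered index):

CREATE CLUSTERED INDEX IX_customers_lname ON customers (lname)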

If your SELECT statement includes an IN operator along with a list of values to be tested in the query then order the list of values so that the most frequently found values are placed at the first of the list, and the less frequently found values are placed at the end of the list. This can speed performance because the IN option returns true as soon as any of the values in the list produce a match. The sooner the match is made, the faster the query completes. [6.5, 7.0, 2000]

If you need to use the SELECT INTO option then keep in mind that it can lock system tables, preventing other users from accessing the data they need. If you do need to use SELECT INTO, try to schedule it when your SQL Server is less busy, and try to keep the amount of data inserted to a minimum. [6.5, 7.0, 2000]

If your SELECT statement contains a HAVING clause, write your query so that the WHERE clause does most of the work (removing undesired rows) instead of having the HAVING clause remove undesired rows. Using the WHERE clause appropriately can eliminate unnecessary rows before they get to the GROUP BY and HAVING clauses, saving some unnecessary work and boosting performance.

For example, in a SELECT statement with WHERE, GROUP BY, and HAVING clauses, here's what happens. First, the WHERE clause is used to select the appropriate rows that need to be grouped. Next, the GROUP BY clause divides the rows into sets of grouped rows and then aggregates their values. And last, the HAVING clause then eliminates undesired aggregated groups. If the WHERE clause is used to eliminate as many of the undesired rows as possible then this means the GROUP BY and the HAVING clauses will have less work to do, boosting the overall performance of the query. [6.5, 7.0, 2000]
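A short sketch of this division of labor (the state values are illustrative):

SELECT state, COUNT(*) AS customer_count
FROM customers
WHERE state IN ('AP', 'KA')   -- removes undesired rows before grouping
GROUP BY state
HAVING COUNT(*) > 100         -- eliminates undesired aggregated groups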

If your application performs many wildcard (LIKE '%') text searches on CHAR or VARCHAR columns, then consider using SQL Server's full-text search option. The Search Service can significantly speed up wildcard searches of text stored in a database. [7.0, 2000]

The GROUP BY clause can be used with or without an aggregate function. But if you want optimum performance, don't use the GROUP BY clause without an aggregate function. This is because you can accomplish the same end result by using the DISTINCT option instead, and it is faster.

For example, you could write your query in one of two ways:

USE Northwind  
SELECT OrderID  
FROM [Order Details]  
WHERE UnitPrice > 10  
GROUP BY OrderID  

or:

USE Northwind  
SELECT DISTINCT OrderID  
FROM [Order Details]  
WHERE UnitPrice > 10  

Both of the preceding queries produce the same results, but the second one will use fewer resources and perform faster. [6.5, 7.0, 2000]
The GROUP BY clause can be sped up if you follow these suggestions:

Keep the number of rows returned by the query as small as possible.
Keep the number of groupings as few as possible.
Don't group redundant columns.
If there is a JOIN in the same SELECT statement that has a GROUP BY, try to rewrite the query to use a subquery instead of using a JOIN. If this is possible, performance will be faster. If you need to use a JOIN, try to make the GROUP BY column from the same table as the column or columns on which the set function is used.

Consider adding an ORDER BY clause to the SELECT statement that orders by the same column as the GROUP BY. This may cause the GROUP BY to perform faster. Test this to see if it is true in your particular situation.

[7.0, 2000]

Sometimes perception is more important than reality. For example, which of the following two queries is faster:

A query that takes 30 seconds to run, and then displays all of the required results.
A query that takes 60 seconds to run, but displays the first screen full of records in less than 1 second.

Most DBAs would choose the first option since it takes fewer server resources and performs faster. But from many users' point of view, the second one may be more palatable. By getting immediate feedback, the user gets the impression that the application is fast, even though, in the background, it is not.

If you run into situations where perception is more important than raw performance, consider using the FAST query hint. The FAST query hint is used with the SELECT statement using this form:

OPTION(FAST number_of_rows)

Where number_of_rows is the number of rows that are to be displayed as fast as possible.

When this hint is added to a SELECT statement, it tells the Query Optimizer to return the specified number of rows as fast as possible, without regard to how long the overall query takes. Before rolling out an application using this hint, test it thoroughly to see that it performs as you expect. You may find that the query takes about the same amount of time whether the hint is used or not. If this is the case, don't use the hint. [7.0, 2000]
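A minimal sketch of the hint in use (the table and filter are hypothetical):

SELECT OrderID, CustomerID, OrderDate
FROM Orders
WHERE CustomerID = 'ALFKI'
OPTION (FAST 25)   -- stream the first 25 rows to the client as quickly as possible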

Instead of using temporary tables, consider using derived tables. A derived table is the result of a SELECT statement placed in the FROM clause of another SELECT statement. By using derived tables instead of temporary tables, we can reduce I/O and boost our application's performance. [7.0, 2000]
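As a sketch, instead of materializing per-order totals in a temporary table and joining to it, the totals can be computed in a derived table within the same statement:

SELECT o.OrderID, o.OrderDate, t.TotalQty
FROM Orders o
JOIN (SELECT OrderID, SUM(Quantity) AS TotalQty
      FROM [Order Details]
      GROUP BY OrderID) AS t
  ON o.OrderID = t.OrderID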

SQL Server 2000 offers a new data type called "table". Its main purpose is for the temporary storage of a set of rows. A variable, of type "table", behaves as if it is a local variable. And like local variables, it has a limited scope, which is within the batch, function, or Stored Procedure in which it was declared. In most cases, a table variable can be used as a normal table. SELECTs, INSERTs, UPDATEs, and DELETEs can all be made against a table variable.

For best performance, if you need a temporary table in your Transact-SQL code, try to use a table variable rather than creating a conventional temporary table. Table variables are typically created and manipulated in memory rather than in the tempdb database, making them much faster. In addition, table variables used in Stored Procedures result in fewer recompilations than temporary tables, and transactions using table variables only last as long as the duration of an update on the table variable, requiring less locking and logging resources. [2000]
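A minimal sketch of declaring and using one (the table and column names are hypothetical):

DECLARE @Recent TABLE (OrderID int, OrderDate datetime)

INSERT INTO @Recent (OrderID, OrderDate)
SELECT OrderID, OrderDate FROM Orders WHERE OrderDate >= '1998-01-01'

SELECT * FROM @Recent   -- used like an ordinary table within this batch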

It is a fairly common request to write a Transact-SQL query to compare a parent table and a child table and determine if there are any parent records that don't have a match in the child table. Generally, there are three ways this can be done.

Using a NOT EXISTS
SELECT a.hdr_key  
FROM hdr_tbl a  
WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)   

Using a Left Join
SELECT a.hdr_key  
FROM hdr_tbl a  
LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key  
WHERE b.hdr_key IS NULL  


Using a NOT IN
SELECT hdr_key  
FROM hdr_tbl  
WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)   


All three of the preceding queries return identical results. But which of these three variations produces the best performance? Assuming everything else is equal, they are listed from best performing to worst performing, top to bottom. In other words, the NOT EXISTS variation of this query is generally the most efficient.

I say generally, because the indexes found on the tables, along with the number of rows in each table, can influence the results. If you are not sure which variation to try yourself then you can try them all and see which produces the best results in your particular circumstances. [7.0, 2000]

Be careful when using OR in your WHERE clause; it is fairly easy to accidentally retrieve much more data than you need, which hurts performance. For example, look at the query below:

SELECT companyid, plantid, formulaid  
FROM batchrecords  
WHERE (companyid = '0001' AND plantid = '0202' AND formulaid = '39988773')  
   OR (companyid = '0001' AND plantid = '0202')  

As you can see, part of this WHERE clause is redundant, since:

companyid = '0001' and plantid = '0202' and formulaid = '39988773'   


Is a subset of:

companyid = '0001' and plantid = '0202'   


In other words, this query contains redundant work. Unfortunately, the SQL Server Query Optimizer isn't smart enough to know this and will do exactly what you tell it to: SQL Server will retrieve all of the data you have requested and then, in effect, perform a SELECT DISTINCT to remove the redundant rows it unnecessarily found.
In this case, if you drop the redundant first condition and reduce the WHERE clause to:

companyid = '0001' AND plantid = '0202'

and then run the query, you will receive the same results, but with much faster performance. [6.5, 7.0, 2000]

If you need to verify the existence of a record in a table, don't use SELECT COUNT(*) in your Transact-SQL code; it is very inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS to determine whether the record in question exists, which is much more efficient. For example, here's how you might use COUNT(*):

IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx') > 0  

Here's a faster way, using IF EXISTS:

IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')  


The reason IF EXISTS is faster than COUNT(*) is that the query can end as soon as the test is proven true, while COUNT(*) must count every matching record, whether there is only one or thousands, before the condition can be evaluated. [7.0, 2000]

Let's say that you often need to INSERT the same value into a column. For example, perhaps you need to perform 100,000 INSERTs a day into a particular table, and that 90% of the time the data inserted into one of the columns of the table is the same value.

If this is the case, you can reduce network traffic (along with some SQL Server overhead) by creating this particular column with a default value of the most common value. This way, when you INSERT your data and the data is the default value, you don't insert any data into this column; instead, you allow the default value to be filled in automatically for you. But when the value needs to be different, you will of course INSERT that value into the column. [6.5, 7.0, 2000]
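A sketch of the idea with hypothetical names, where 'A' is the value used 90% of the time:

CREATE TABLE CustomerOrders (
  OrderID int NOT NULL,
  Status char(1) NOT NULL DEFAULT 'A'
)

INSERT INTO CustomerOrders (OrderID) VALUES (1)              -- Status filled in as 'A' automatically
INSERT INTO CustomerOrders (OrderID, Status) VALUES (2, 'X') -- explicit value only when it differs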

Performing UPDATEs takes extra resources in SQL Server. When performing an UPDATE, try to follow as many of the recommendations below as you can to reduce the resources required.

The more of these suggestions you can apply, the faster the UPDATE will perform.

If you are updating a column of a row that has a unique index then try to only update one row at a time.
Try not to change the value of a column that is also the primary key.
When updating VARCHAR columns, try to replace the contents with contents of the same length.
Try to minimize the updating of tables that have update triggers.
Try to avoid updating columns that will be replicated to other databases.
Try to avoid updating heavily indexed columns.
Try to avoid updating a column that has a reference in the WHERE clause to the column being updated.

Of course, you may have very little choice when updating your data, but at least give the preceding suggestions a thought. [6.5, 7.0, 2000]

If you have created a complex transaction that includes several parts, and one part has a higher probability of rolling back the transaction than the others, you will get better performance by placing the part most likely to fail at the front of the transaction. This way, if that step fails and the transaction must be rolled back, no resources have been wasted on the other, less-likely-to-fail parts. [6.5, 7.0, 2000]

Transact-SQL Optimization Tips

Try to restrict the query's result set by using the WHERE clause
 
This can result in good performance benefits because SQL Server will return to the client only specific rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
 
Try to restrict the query's result set by returning only the particular columns you need from the table, not all of the table's columns
 
This can improve performance because SQL Server will return to the client only specific columns, not all of the table's columns. This can reduce network traffic and boost the overall performance of the query.
 
Use views and Stored Procedures instead of heavy-duty queries
 
This can reduce network traffic because your client will send the server only a Stored Procedure or view name (perhaps with some parameters) instead of the text of a large, heavy-duty query. This also facilitates permission management, because you can restrict users' access to table columns they should not see.
 
Try to avoid using SQL Server cursors, whenever possible
 
SQL Server cursors can result in some performance degradation in comparison with set-based SELECT statements. If you need to perform row-by-row operations, try to use a correlated subquery or a derived table instead.
 
If you need to return a table's total row count, you can use an alternative to the SELECT COUNT(*) statement
 
Because a SELECT COUNT(*) statement does a full table scan to return the total row count, it can take a long time on a large table. There is another way to determine the total row count: the sysindexes system table contains a ROWS column that holds the total row count for each table in your database. So, you can use the following SELECT statement instead of SELECT COUNT(*):
 
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
 
This can make such queries several times faster. (Note that the row count in sysindexes is maintained by the server and may not be exact at any given moment.)
 
Try to use constraints instead of triggers, whenever possible
 
Constraints are much more efficient than triggers and can boost performance. So, you should use constraints instead of triggers, whenever possible.
 
Use table variables instead of temporary tables
 
Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available in SQL Server 2000 only.
 
Try to avoid the HAVING clause, whenever possible
 
The HAVING clause restricts the result set produced by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.
 
Try to avoid using the DISTINCT clause, whenever possible
 
Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.
 
Include a SET NOCOUNT ON statement in your Stored Procedures to stop the message indicating the number of rows affected by a T-SQL statement
 
This can reduce network traffic because your client will not receive the message indicating the number of rows affected by a T-SQL statement.
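A sketch of its typical placement at the top of a procedure (the procedure and table names are hypothetical):

CREATE PROCEDURE usp_GetOrders
AS
SET NOCOUNT ON   -- suppress the "(n row(s) affected)" messages
SELECT OrderID, CustomerID FROM Orders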
 
Use select statements with a TOP keyword or the SET ROWCOUNT statement, if you need to return only the first n rows
 
This can improve the performance of your queries because the smaller result set will be returned. This can also reduce the traffic between the server and the client.
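For example (the table name is hypothetical):

-- Return only the 10 most recent orders
SELECT TOP 10 OrderID, OrderDate
FROM Orders
ORDER BY OrderDate DESC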
 
Use the FAST number_rows table hint if you need to quickly return 'number_rows' rows
 
 
You can quickly get the first n rows and begin working with them while the query continues executing and produces its full result set.
 
Try to use a UNION ALL statement instead of UNION, whenever possible
 
The UNION ALL statement is much faster than UNION because UNION ALL does not look for duplicate rows, while UNION removes duplicates, whether or not any exist.
 
Do not use optimizer hints in your queries
 
Because the SQL Server query optimizer is very good at choosing execution plans, it is very unlikely that you can improve your query by using optimizer hints; more often, they will hurt performance.



Crystal Reports for ASP.NET 4.x Hosting: How to Print a Crystal Report Programmatically in ASP.NET?

clock February 17, 2021 09:24 by author Peter

You can print a Crystal Report using the print option of Crystal Report Viewer. However, there are occasions when you want your application to print a report directly to the printer without viewing the report in Crystal Report Viewer.
 
The ReportDocument class provides the PrintToPrinter method, which can be used to print a Crystal Report directly to the printer. If no printer is selected, the printing pages are sent to the default printer.
 
The PrintToPrinter method takes four parameters.
nCopies : Indicates the number of copies to print.
collated : Indicates whether to collate the pages.
startPageN : Indicates the first page to print.
endPageN : Indicates the last page to print.
 
The following steps will guide you through it:
    Add a Crystal Report (.rpt) file to your ASP.NET application.
    Add a report instance on the page level.
        Dim report As MyReport = New MyReport

    Populate the report's data on Page_Init
        ' Get data in a DataSet or DataTable  
          
        Dim ds As DataSet = GetData()  
        ' Fill report with the data  
        report.SetDataSource(ds)

    Print Report
        report.PrintToPrinter(1, False, 0, 0)

If you wish to print a certain page range, change the last two parameters to the desired From and To page numbers.
 
If you want to set page margins, you need to create a PageMargins object and apply it to the PrintOptions of the ReportDocument.
 

The following code sets page margins and printer name:
    Dim margins As PageMargins = Report.PrintOptions.PageMargins  
    margins.bottomMargin = 200  
    margins.leftMargin = 200  
    margins.rightMargin = 50  
    margins.topMargin = 100  
    Report.PrintOptions.ApplyPageMargins(margins)  
      
    ' Select the printer name  
    Report.PrintOptions.PrinterName = printerName



Europe SQL Hosting - HostForLIFEASP.NET :: SQL Server Performance Tuning Tips

clock February 10, 2021 12:02 by author Peter

In this article, we will learn about SQL Server performance tuning tips with examples.
 

Database
The database is the most important and powerful part of any application. If your database is not working properly and takes a long time to compute results, something is going wrong in the database and it needs a tune-up; otherwise, the performance of the application will degrade.

I know a lot of articles have already been published on this topic, but in this article I have tried to provide a list of database tune-up tips that cover all aspects of the database. Database tuning is a critical and finicky process. It is true that database tuning is a database administrator's task, but we should have a basic level of knowledge of it too, because if we are working on a project with no admin role, then it is our responsibility to maintain the performance of the database. If the performance of the database degrades, it will have the worst effect on the whole system.
 
In this article, I will explain some basic database tuning tips that I learned from my experience and from my friends who work as database administrators. Using these tips, you can maintain or improve the performance of your database system. These tips are written for SQL Server, but most of them apply to other databases too, such as Oracle and MySQL. Please read them carefully and, at the end of the article, let me know if you find something wrong or incorrect.
 
Avoid Null value in the fixed-length field
We should avoid NULL values in fixed-length fields, because a NULL in a fixed-length field takes the same amount of space as any other value for that field. If we require a NULL value in a field, we should use a variable-length field, which takes less space for NULL. The use of NULLs in a database can also reduce performance, especially in WHERE clauses. For example, try to use varchar instead of char, and nvarchar instead of nchar.
Never use Select * Statement

When we require all the columns of a table, we usually use a “Select *” statement. This is not a good approach, because when we use “select *”, SQL Server converts * into all the column names before executing the query, which takes extra time and effort. So, always list the column names you need in the query instead of using “select *”.
 
Normalize tables in a database

Normalized, well-managed tables increase the performance of a database. So, always try to normalize at least to third normal form (3NF). Not every table must be in 3NF, but tables that are can be called well-structured tables.
 
Keep Clustered Index Small
A clustered index defines the physical order in which a table's rows are stored. Keep the clustered index key small: every non-clustered index carries the clustered key, so a large clustered key on a table with many rows increases the size of every index significantly and can reduce performance. Also, avoid indexing frequently changed columns, because every change to the data forces the index to be modified too, which can degrade performance.
 
Use Appropriate Datatype
Selecting an appropriate data type reduces storage space and improves performance; an inappropriate one does the opposite. So, select an appropriate data type according to the requirement. SQL Server contains many data types that can store the same kind of data, but each data type has its own limitations and advantages over the others, so choose carefully.
 
Store image path instead of the image itself

Many developers try to store images in the database instead of the image path. It may be a requirement of the application to store images in the database, but generally, we should store only the image path, because storing images in the database increases the database size and reduces performance.
 
USE Common Table Expressions (CTEs) instead of Temp table

We should prefer a CTE over a temp table because temp tables are stored physically in tempdb and deleted only when the session ends, while a CTE is just a named result set that exists for the duration of a single query. Executing a CTE is very fast compared to a temp table, and it is very lightweight too.
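A minimal sketch, with hypothetical table names:
    ;WITH OrderTotals AS  
    (  
        SELECT OrderId, SUM(Quantity) AS TotalQty  
        FROM OrderDetails  
        GROUP BY OrderId  
    )  
    SELECT o.OrderId, o.OrderDate, t.TotalQty  
    FROM Orders o  
    JOIN OrderTotals t ON o.OrderId = t.OrderId  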
 
Use Appropriate Naming Convention

The main goal of adopting a naming convention for database objects is to make it easy for users to identify the type and purpose of every object contained in the database. A good naming convention decreases the time required to search for an object, and a good name clearly indicates what the object is and does. For example:
    tblEmployees      -- Name of a table  
    vw_ProductDetails -- Name of a view  
    PK_Employees      -- Name of a primary key  


Use UNION ALL instead of UNION
We should prefer UNION ALL to UNION because UNION always performs a distinct sort to remove duplicates, which takes extra time. Also, UNION can't work with the text datatype, because text doesn't support sorting, so in that case UNION can't be used at all. Thus, always prefer UNION ALL where duplicates are acceptable.
 
Use Small data type for Index
It is very important to use a small data type for an index key, because a bigger data type reduces the performance of the index. For example, nvarchar(10) uses 20 bytes of data while varchar(10) uses 10 bytes, so an index on the varchar column works better. We can take another example with datetime and int: datetime takes 8 bytes and int takes 4 bytes. A smaller data type means less I/O overhead, which increases the performance of the index.

Use Count(1) instead of Count(*) and Count(Column_Name)

There is no difference in the performance of these three expressions, but the last two are not considered good practice. So, always use count(1) to get the number of records from a table.
 
Use Stored Procedure
Instead of using raw queries, we should use stored procedures, because stored procedures are fast, easy to maintain, and better for security and large queries.
 
Use Between instead of In
 
If Between can be used instead of IN, then always prefer Between. For example, you are searching for an employee whose id is either 101, 102, 103, or 104. Then, you can write the query using the In operator like this:
    Select * From Employee Where EmpId In (101,102,103,104)  

You can also use Between operator for the same query.
    Select * from Employee Where EmpId Between 101 And 104  

Use If Exists to determine the record
 
It has been seen many times that developers use "Select Count(*)" to get the existence of records. For example
    Declare @Count int;  
Set @Count=(Select Count(*) From Employee Where EmpName Like '%Pan%')  
    If @Count>0  
    Begin  
    //Statement  
    End  


But this is not the proper way to write such queries, because the above query performs a complete table scan. You can use If Exists for the same check instead, which will increase the performance of your query, as below.
    IF Exists(Select Emp_Name From Employee Where EmpName Like '%Pan%')  
    Begin  
    //Statements  
    End  


Never Use ” Sp_” for User Define Stored Procedure
Most programmers use “sp_” for user-defined Stored Procedures. I suggest never using the “sp_” prefix for user-defined Stored Procedures, because in SQL Server the master database has Stored Procedures with the "sp_" prefix. So, when we create a Stored Procedure with the "sp_" prefix, SQL Server always looks first in the master database and then in the user-defined database, which takes some extra time.
 
Practice to use Schema Name

A schema is an organization or structure for a database. We can define a schema as a collection of database objects that are owned by a single principal and form a single namespace. The schema name helps SQL Server find an object in a specific schema, which increases the speed of query execution. For example, use [dbo].TableName instead of just the table name.
 
Avoid Cursors

A cursor is a temporary work area created in system memory when a SQL statement is executed; it is a set of rows together with a pointer that identifies the current row, used to retrieve data from a result set one row at a time. Using a cursor is not good for performance, because it fetches data row by row and so takes a long time. We can often replace a cursor; a temporary table with a WHILE loop may be a replacement in some cases.
 
SET NOCOUNT ON

When an INSERT, UPDATE, DELETE, or SELECT command is executed, SQL Server returns a message with the number of rows affected by the query. Returning this message is usually unnecessary overhead; we can stop it by using SET NOCOUNT ON.
 
Use Try–Catch
In T-SQL, a TRY-CATCH block is very important for exception handling. Proper use of a TRY-CATCH block in SQL can save our data from undesired changes: we put the T-SQL statements in the TRY block, and the exception-handling code in the CATCH block.
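A minimal sketch (the table and column names are hypothetical):
    BEGIN TRY  
        BEGIN TRANSACTION  
        UPDATE Employee SET Salary = Salary * 1.10 WHERE Dept_Id = 5  
        COMMIT TRANSACTION  
    END TRY  
    BEGIN CATCH  
        IF @@TRANCOUNT > 0  
            ROLLBACK TRANSACTION  -- undo the change on any error  
        PRINT ERROR_MESSAGE()     -- report what went wrong  
    END CATCH  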
 
Remove Unused Index

Remove all unused indexes, because indexes are updated whenever the table is updated, so an index must be maintained even if it is never used.
 
Always create an index on the table
An index is a data structure for fast data retrieval. Indexes are special lookup tables that the database search engine can use to speed up data retrieval; simply put, an index is a pointer to the data in a table, and its main purpose is to increase the speed of data retrieval. So, always try to keep at least one index on each table, either clustered or non-clustered.
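A minimal sketch of both kinds, reusing the EMPLOYEE table from the index-hint example later in this article:
    -- Clustered index: one per table, defines the physical order of the rows  
    CREATE CLUSTERED INDEX Clus_Index ON dbo.EMPLOYEE (Emp_IId)  
    -- Non-clustered index on a frequently searched column  
    CREATE NONCLUSTERED INDEX IX_Employee_FirstName ON dbo.EMPLOYEE (First_Name)  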
 
Use Foreign Key with the appropriate action

A foreign key is a column or combination of columns that references the primary key of another table. Foreign keys are used to define a relationship and enforce integrity between two tables. In addition to protecting the integrity of our data, FK constraints also help document the relationships between our tables within the database itself. You should also define an action rule for the DELETE and UPDATE commands; you can select any action among NO ACTION, SET NULL, CASCADE, and SET DEFAULT.
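A sketch with hypothetical table names, attaching a delete rule to the relationship:
    ALTER TABLE OrderDetails  
    ADD CONSTRAINT FK_OrderDetails_Orders  
        FOREIGN KEY (OrderId) REFERENCES Orders (OrderId)  
        ON DELETE CASCADE    -- child rows are removed along with their parent  
        ON UPDATE NO ACTION  -- updates that would orphan child rows are rejected  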
 
Use Alias Name

Aliasing temporarily renames a table or a column by giving it another name within a specific SQL statement. Using aliases, we can give a short name to a long one, which saves typing and makes queries easier to read.
 
Use Transaction Management

A transaction is a unit of work performed against the database: a set of T-SQL statements that execute together as a single unit in a specific logical order. If all the statements execute successfully, the transaction is committed and the data is saved in the database permanently. If any single statement fails, the entire transaction fails and is canceled or rolled back.
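A minimal sketch with a hypothetical Account table; both updates succeed together or are rolled back together:
    BEGIN TRANSACTION  
    UPDATE Account SET Balance = Balance - 100 WHERE AccountId = 1  
    UPDATE Account SET Balance = Balance + 100 WHERE AccountId = 2  
    IF @@ERROR <> 0  
        ROLLBACK TRANSACTION  -- cancel the whole unit of work  
    ELSE  
        COMMIT TRANSACTION    -- make both changes permanent together  

Note that @@ERROR reflects only the last statement; the TRY-CATCH block shown earlier is the more robust pattern.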
 
Use Index Name in Query

Although in most cases the query optimizer will pick the appropriate index for a specific table based on statistics, sometimes it is better to specify the index name in your SELECT query.
 
Example

    SELECT  
    e.Emp_IId,  
    e.First_Name,  
    e.Last_Name  
    FROM dbo.EMPLOYEE e  
    WITH (INDEX (Clus_Index))  
    WHERE e.Emp_IId > 5  

Select Limited Data


We should retrieve only the required data and ignore unimportant data. The less data retrieved, the faster the query will run. Rather than filtering on the client, push as much filtering as possible to the server end. This results in less data being sent over the wire, and you will see results much faster.
 
Drop Index before Bulk Insertion of Data
We should drop the index before the insertion of a large amount of data. This makes the insert statement run faster. Once the inserts are completed, you can recreate the index again.
 
Use Unique Constraint and Check Constraint

A Check constraint checks a specific condition before data is inserted into a table. If the data passes all the Check constraints, it is inserted into the table; otherwise, the insert is rejected. The CHECK constraint ensures that all values in a column satisfy certain conditions.
 
A Unique Constraint ensures that each row has a unique value for the column. It is like a primary key, but it can accept one NULL value, and a table can contain more than one Unique Constraint. We should use Check and Unique constraints because they maintain integrity in the database.
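A minimal sketch combining both (the table and column names are hypothetical):
    CREATE TABLE Employees2 (  
        Emp_Id int PRIMARY KEY,  
        Email varchar(100) UNIQUE,       -- no two employees may share an email  
        Salary money CHECK (Salary > 0)  -- non-positive salaries are rejected  
    )  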
 
Importance of Column Order in index

If we are creating a non-clustered index on more than one column, we should consider the sequence of the columns. The order, or position, of a column in an index plays a vital role in improving SQL query performance: an index can help a query only if the query's criteria match the leftmost columns in the index key. So, we should place the most selective column on the leftmost side of a non-clustered index.
 
Recompiled Stored Procedure
We all know that a Stored Procedure executes a set of T-SQL statements in less time than the same statements executed individually, mainly because its query execution plan is compiled once, cached, and reused. Recompiling a Stored Procedure reduces SQL performance, but in some cases recompilation is required: after dropping or altering a column, index, or trigger of an underlying table; after updating the statistics used by the execution plan; or after altering the procedure itself, all of which cause SQL Server to create a new execution plan.
 
Use Sparse Column

Sparse columns are optimized for storing NULL values. If you have a column that will contain a large proportion of NULLs, prefer a sparse column over a regular one: a sparse column takes less space than a regular column (one without the SPARSE clause) when the value is NULL.
 
Example
    Create Table Table_Name  
    (  
    Id int,             -- Regular column  
    Group_Id int Sparse -- Sparse column  
    )  

Avoid Loops In Coding

Suppose you want to insert 10 records into a table. Instead of using a loop to insert the data row by row, you can insert all the data using a single insert query.
    declare @int int;  
    set @int=1;  
    while @int<=10  
    begin  
    Insert Into Tab values(@int,'Value'+Cast(@int as varchar(10)));  
    set @int=@int+1;  
    end  


The above method is not a good approach for inserting multiple records; instead, you can use a single statement like the one below.
    Insert Into Tab values(1,'Value1'),(2,'Value2'),(3,'Value3'),(4,'Value4'),(5,'Value5'),(6,'Value6'),(7,'Value7'),(8,'Value8'),(9,'Value9'),(10,'Value10');  

Avoid Correlated Queries
In a correlated query, the inner query takes input from the outer (parent) query, so the inner query runs once for each row, which reduces the performance of the database.
    Select Name, City,  
    (Select Company_Name From Company Where CustomerId = cs.CustomerId) As Company_Name  
    From Customer cs  


The best method is that we should prefer the join instead of the correlated query as below.
    Select cs.Name, cs.City, co.Company_Name  
    from Customer cs  
    Join  
    Company co  
    on  
    cs.CustomerId=co.CustomerId  


Avoid index and join hints

In some cases, an index or join hint may increase the performance of a database. But if you provide a join or index hint, the server will always try to use it, even when a better execution plan exists, so this approach can reduce performance. Use a join or index hint only if you are confident there is no better execution plan; if you have any doubt, let the server choose the execution plan itself.
 
Avoid Use of Temp table

Avoid the use of temp tables as much as you can, because a temp table is created in the tempdb database with a structure like any basic table, and after completing the task we have to drop it, which raises the load on the database. You can prefer a table variable instead.
 
Use Index for required columns
Indexes should be created on the columns that are used in WHERE, GROUP BY, ORDER BY, TOP, and DISTINCT clauses.
 
Don't use Index
It is true that an index usually makes data retrieval faster, but not always. In some cases an index doesn't improve query performance at all, and we can skip it:
    When the size of the table is very small.
    When the query optimizer does not use the index.
    When DML (INSERT, UPDATE, DELETE) operations are performed frequently.
    When the column contains TEXT or NTEXT data.

Use View for complex queries
If you join two or more tables and the result of the query is frequently used, it is better to create a View containing the result of the complex query. Then you can use this View multiple times without having to execute the underlying query each time to get the same result.
 
Make Transaction short

It is better to keep transactions as short as we can, because a long transaction keeps tables locked and reduces the database's concurrency. So, always try to make transactions shorter.
 
Use Full-text Index
If your query contains multiple wildcard searches using LIKE ('%...%'), a Full-text Index can increase performance. Full-text queries can include simple words and phrases or multiple forms of a word or phrase. A full-text query returns any document that contains at least one match (also known as a hit). A match occurs when a target document contains all the terms specified in the full-text query and meets any other search conditions, such as the distance between the matching terms.
 
Thanks for reading the article. As I have asked in the starting, if you have any doubt or I wrote something wrong, then write me back in the comments section.



SQL Server 2019 Hosting - HostForLIFEASP.NET :: Query to Repair Suspect Database In SQL Server

clock January 29, 2021 06:43 by author Peter

Query to Repair Suspect Database In SQL Server
This is the query to repair a suspect database in SQL Server.
If your database is marked as suspect, here are the steps to fix it.
In the query below, replace “DbName” with the name of the suspect database, and run the query in the master database.

Step 1
--command to recover suspected database

ALTER DATABASE DbName SET EMERGENCY

DBCC checkdb('DbName')

ALTER DATABASE DbName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

DBCC CheckDB ('DbName', REPAIR_ALLOW_DATA_LOSS)

ALTER DATABASE DbName SET MULTI_USER


Step 1 does not take care of the indexes created for that database, so now we have to rebuild them. The following query should be run in the suspect database itself.

Step 2
-- command to rebuild all indexes
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"


