Full Trust European Hosting

BLOG about Full Trust Hosting and Its Technology - Dedicated to European Windows Hosting Customer

AngularJS Hosting Europe - HostForLIFE.eu :: How To Set HTML Meta Tags In Angular?

clock March 9, 2021 05:55 by author Peter

Meta tags are simply small snippets of HTML code that describe the content of your web page. They provide metadata about the HTML document and are typically used to convey information about the page's content, author, keywords, viewport settings, etc. Meta tags usually go inside the <head> section of your HTML document and are not visible on the webpage. These tags are primarily used by web browsers, search engines, and other web services to understand the content of the webpage. For example, the viewport tag is used by web browsers to gather information about the webpage's dimensions and scaling.
 

Angular comes bundled with a Meta service that we can use to work with meta tags. In this article, we'll look at how to use this service to add, retrieve, update, and remove meta tags in your Angular application.
 
Importing the meta service
Before using the meta service, we need to import it from the platform browser package.
    import { Meta } from '@angular/platform-browser';  

Next, we'll inject it into our component via the constructor.
    constructor(private meta: Meta) {}   

Adding Meta Tags
To add new meta tags, the Angular Meta service provides two methods addTag and addTags.
    this.meta.addTag({ name: 'description', content: 'This is an article about Angular Meta service' });  

The addTag method accepts a meta definition object as a parameter that is used to describe the meta tag.
 
The above code will result in following HTML meta tag element.
    <meta name="description" content="This is an article about Angular Meta service">  

You could also use addTags to add multiple meta tags at the same time.
    this.meta.addTags([
      { name: 'description', content: 'This is an article about Angular Meta service' },
      { name: 'keywords', content: 'angular, javascript, typescript, meta, seo' }  
    ]);
 

The above code will result in the following HTML meta tag elements:
    <meta name="description" content="This is an article about Angular Meta service">  
    <meta name="keywords" content="angular, javascript, typescript, meta, seo">  

One thing to keep in mind is both addTag and addTags methods also accept a second parameter forceCreation, which forces the methods to create a new meta tag element without checking if it already exists.
 
Retrieving Meta Tags
To read meta tags from the DOM we can use the getTag or getTags methods provided by the Meta service. Both methods accept a string that represents an attribute selector and return the matching element(s) based on that string:
    const keywords = this.meta.getTag('name=keywords');    
    console.log(keywords.content);    
    // Output: angular, javascript, typescript, meta, seo
    

The getTag method returns an HTMLMetaElement, while getTags returns an array of HTMLMetaElement objects.
 
One thing to keep in mind is that getTag returns only the first meta tag matching the selector argument, whereas the getTags method returns all the instances of meta tags that match the selector.
    let tags = this.meta.getTags('name');  

In the above code, getTags will return all the instances of meta tags that contain the name attribute and will save those instances in the form of an array inside the tags variable.
 
Updating Meta tags
To update existing meta tags we could use the updateTag method that comes bundled with the Angular Meta service.
    this.meta.addTag({ name: 'keywords', content: 'angular, javascript, typescript, meta, seo' });  
    setTimeout(() => {  
      this.meta.updateTag(  
        { name: 'keywords', content: 'angular, javascript, typescript, meta' },  
        'name=keywords'  
      )  
    }, 4000)  

In the above code snippet, we're first adding a new meta tag with the name keywords. Next, we're updating the tag after 4 seconds using the updateTag method. One thing to keep in mind is that if a tag with the name keywords doesn't exist in the DOM, the updateTag method will create a new element.
 
Removing Meta Tags
The removeTag method accepts a selector string as an argument and removes the tag if the element matching the selector is found.
    this.meta.removeTag('name=keywords');  

The above code searches for a meta tag whose name is keywords and, if a match is found, removes that meta tag from the DOM.
 
The Meta service also provides a removeTagElement method that can be used to remove a meta tag. However, unlike the removeTag method, it accepts an HTMLMetaElement and removes that element from the DOM.
    const keywords = this.meta.getTag('name=keywords')  
    this.meta.removeTagElement(keywords);  


The above code first selects the keywords meta tag element from the DOM and then passes it as an argument to the removeTagElement method which removes the element from the DOM.
 
And that's it!
 
For more information,
    Meta Tags: https://developer.mozilla.org/en-US/docs/Web/HTML/Viewport_meta_tag
    Meta service: https://angular.io/api/platform-browser/Meta
    Meta Definition: https://angular.io/api/platform-browser/MetaDefinition

I hope you found this article useful.



SQL Server 2019 Hosting - HostForLIFEASP.NET :: SQL Index Creation Using DROP EXISTING ON

clock March 2, 2021 08:21 by author Peter

When you are making changes to an existing non-clustered index, SQL Server provides a wide variety of options. One of the more frequently used methods is DROP_EXISTING; in this post you will learn all about that option. It drops and rebuilds the existing index in a single statement, without you having to drop the index explicitly. Let us take a moment to understand the behavior of this choice.

DROP_EXISTING = ON
 
This is my preferred method. It drops the current index only after the index with the new definition has finished being created and built. The pitfall is that if the index does not exist, you will get an error and must create it without the option or with it set to OFF. However, the more important benefit of using this one is all about performance: the index can still be used by active queries until it is rebuilt with the new definition.
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = ON, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  


If the index does not exist, you will get a 7999 error.
 
Msg 7999, Level 16, State 9, Line 1
 
Could not find any index named 'dcacIDX_ServiceType' for table 'dbo.Accounts'.
 
There are a few exceptions to keep in mind per docs.microsoft.com.
 
With DROP_EXISTING, you can change,

    A nonclustered rowstore index to a clustered rowstore index.

With DROP_EXISTING, you cannot change,

    A clustered rowstore index to a nonclustered rowstore index.
    A clustered columnstore index to any type of rowstore index.

DROP and CREATE
 
This option is cleaner and won't error if the index doesn't already exist. However, I caution you when using it, especially on a large table. This option drops the index before it creates the new one, leaving your system without the previous index definition. This can create a huge performance issue while the system waits for the new index to be created. I know this firsthand, as I did this with a client a few years ago, during the day, while trying to fix a performance issue. I created a worse issue while waiting for the new index to be created. It took 45 minutes to create the new index with the new definition, which caused CPU to spike to 100% while active queries were trying to come through, which sadly, in turn, slowed down the new index creation even further.

    DROP INDEX IF EXISTS [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    GO  
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  


Now I should also note that the DROP_EXISTING method is also faster when you must modify a clustered index. Every non-clustered index refers to the clustered index using what is called the clustering key, essentially a pointer to the row in the clustered index. When a clustered index is dropped and re-created, SQL Server must rebuild the non-clustered indexes on that table. In fact, it happens twice: they are rebuilt when the clustered index is dropped and rebuilt again when it is created. Using DROP_EXISTING = ON prevents you from having to rebuild all these indexes, as their keys stay the same, making the operation significantly faster.
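
As a minimal sketch (the index name here is hypothetical), rebuilding a clustered index in place with DROP_EXISTING = ON looks like this; the non-clustered indexes are left untouched because the clustering key does not change:
    CREATE UNIQUE CLUSTERED INDEX [cIDX_Accounts_AccountId] ON [dbo].[Accounts]  
    (  
       [AccountId] ASC  
    )  
    WITH (DROP_EXISTING = ON, ONLINE = OFF) ON [PRIMARY]  
    GO  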
 
The reason I took the time to write this quick blog is to remind you to consider using DROP_EXISTING = ON rather than the DROP and CREATE method when possible. Do not introduce a performance issue when you can avoid it, and you can more efficiently make the desired changes you need. Just a friendly reminder.



Europe SQL Hosting - HostForLIFEASP.NET :: T-SQL Query Performance Tuning Tips

clock February 26, 2021 07:19 by author Peter

In this article, I will discuss some T-SQL query performance tips and tricks for SQL Server programmers. The tips mentioned in this article may sound obvious to most of you, but I have seen professional developers who don't think before using them.

My first tip is to use a WHERE clause in your SELECT statement to narrow the number of rows returned. If you don't use a WHERE clause, then SQL Server will perform a table scan of your table and return all of the rows. In some cases, you may want to return all rows, and not using a WHERE clause is appropriate in that case. But if you don't need all the rows returned, use a WHERE clause to limit the number of rows returned.

By returning data you don't need, you are causing SQL Server to perform I/O it doesn't need to perform, wasting SQL Server resources. In addition, it increases network traffic, which can also lead to reduced performance. And if the table is very large, a table scan will lock the table during the time-consuming scan, preventing other users from accessing it and hurting concurrency.

Another negative aspect of a table scan is that it will tend to flush out data pages from the cache with useless data, which reduces SQL Server's ability to reuse useful data in the cache, which increases disk I/O and hurts performance. [6.5, 7.0, 2000]

To help identify long-running queries, use the SQL Server Profiler Create Trace Wizard to run the "TSQL By Duration" trace. You can specify the length of the long-running queries you are trying to identify (such as over 1000 milliseconds), and then have these recorded in a log for you to investigate later. [7.0]

When using the UNION statement, keep in mind that, by default, it performs the equivalent of a SELECT DISTINCT on the final result set. In other words, UNION takes the results of two like recordsets, combines them, and then performs a SELECT DISTINCT in order to eliminate any duplicate rows. This process occurs even if there are no duplicate records in the final recordset. If you know that there are duplicate records, and this presents a problem for your application then by all means use the UNION statement to eliminate the duplicate rows.

On the other hand, if you know that there will never be any duplicate rows, or if there are, and this presents no problem to your application, then you should use the UNION ALL statement instead of the UNION statement. The advantage of the UNION ALL is that it does not perform the SELECT DISTINCT function, which saves a lot of unnecessary SQL Server resources from being used. [6.5, 7.0, 2000]

Sometimes you might want to merge two or more sets of data resulting from two or more queries using UNION. For example:
SELECT column_name1, column_name2  
FROM table_name1  
WHERE column_name1 = some_value  
UNION  
SELECT column_name1, column_name2  
FROM table_name1  
WHERE column_name2 = some_value  

This same query can be rewritten, as in the following example, and when doing so, performance will be increased:
SELECT DISTINCT column_name1, column_name2  
FROM table_name1  
WHERE column_name1 = some_value OR column_name2 = some_value  


And if you can assume that neither criteria will return duplicate rows then you can even further boost the performance of this query by removing the DISTINCT clause. [6.5, 7.0, 2000].

Carefully evaluate whether your SELECT query needs a DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary. This is a bad habit that should be stopped.

The DISTINCT clause should only be used in SELECT statements if you know that duplicate returned rows are a possibility, and that having duplicate rows in the result set would cause problems with your application.

The DISTINCT clause creates a lot of extra work for SQL Server and reduces the physical resources that other SQL statements have at their disposal. Because of this, only use the DISTINCT clause if it is necessary. [6.5, 7.0, 2000]

In your queries, don't return column data you don't need. For example, you should not use SELECT * to return all the columns from a table if you don't need all the data from each column. In addition, using SELECT * prevents the use of covered indexes, further potentially hurting query performance. [6.5, 7.0, 2000]

If your application allows users to run queries, but you are unable in your application to easily prevent users from returning hundreds, even thousands of unnecessary rows of data they don't need, consider using the TOP operator within the SELECT statement. This way, you can limit how many rows are returned, even if the user doesn't enter any criteria to help reduce the number of rows returned to the client. For example, the statement:

SELECT TOP 100 fname, lname FROM customers  
WHERE state = 'AP'  


Limits the results to the first 100 rows returned, even if 10,000 rows actually meet the criteria of the WHERE clause. When the specified number of rows is reached, all processing on the query stops, potentially saving SQL Server overhead, and boosting performance.

The TOP operator works by allowing you to specify a specific number of rows to be returned, like the example above, or by specifying a percentage value, like this:

SELECT TOP 10 PERCENT fname, lname FROM customers  
WHERE state = 'AP'  


In the preceding example, only 10 percent of the available rows would be returned. Keep in mind that using this option may prevent the user from getting the data they need. For example, the data they are looking for maybe in record 101, but they only get to see the first 100 records. Because of this, use this option with discretion. [7.0, 2000]

You may have heard of a SET command called SET ROWCOUNT. Like the TOP operator, it is designed to limit how many rows are returned from a SELECT statement. The SET ROWCOUNT and the TOP operator effectively perform the same function.

While in most cases, using either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than using SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query. [6.5, 7.0, 2000]
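
As a quick illustration using the customers example from above, the SET ROWCOUNT equivalent of TOP 100 looks like this (SET ROWCOUNT 0 resets the limit so later statements are not affected):

SET ROWCOUNT 100  
SELECT fname, lname FROM customers  
WHERE state = 'AP'  
SET ROWCOUNT 0  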

In a WHERE clause, the various operators used directly affect how fast a query is run. This is because some operators lend themselves to speed over other operators. Of course, you may not have any choice of which operator you use in your WHERE clauses, but sometimes you do.

Here are the key operators used in the WHERE clause, ordered by their performance. Those operators at the top will produce results faster than those listed at the bottom.

=
>, >=, <, <=
LIKE
<>


The lesson here is to use = as much as possible, and <> as least as possible. [6.5, 7.0, 2000]

In a WHERE clause, the various operands used directly affect how fast a query is run. This is because some operands lend themselves to speed over other operands. Of course, you may not have any choice of which operand you use in your WHERE clauses, but sometimes you do.

Here are the key operands used in the WHERE clause, ordered by their performance. Those operands at the top will produce results faster than those listed at the bottom.

A single literal used by itself on one side of an operator
A single column name used by itself on one side of an operator, a single parameter used by itself on one side of an operator
A multi-operand expression on one side of an operator
A single exact number on one side of an operator
Other numeric numbers (other than exact), date, and time
Character data, NULLs

The simpler the operand, and the more exact the number, the better the overall performance. [6.5, 7.0, 2000]

If a WHERE clause includes multiple expressions, there is generally no performance benefit gained by ordering the various expressions in any particular order. This is because the SQL Server Query Optimizer does this for you, saving you the effort. There are a few exceptions to this, which are discussed on this web site. [7.0, 2000]

Don't include code that doesn't do anything. This may sound obvious, but I have seen this in some off-the-shelf SQL Server-based applications. For example, you may see code like this:

SELECT column_name FROM table_name  
WHERE 1 = 0  


When this query is run, no rows will be returned. Obviously, this is a simple example (most of the cases where I have seen this done have been very long queries); a query like this, or such a part of a larger query, doesn't do anything useful and shouldn't be run. It is just wasting SQL Server resources. In addition, I have seen more than one case where such dead code actually causes SQL Server to throw errors, preventing the code from even running. [6.5, 7.0, 2000]

By default, some developers, especially those who have not worked with SQL Server before, routinely include code similar to this in their WHERE clauses when they make string comparisons:

SELECT column_name FROM table_name  
WHERE LOWER(column_name) = 'name'  

In other words, these developers are making the assumption that the data in SQL Server is case-sensitive, which it generally is not. If your SQL Server database is not configured to be case sensitive then you don't need to use LOWER or UPPER to force the case of text to be equal for a comparison to be performed. Just leave these functions out of your code. This will speed up the performance of your query, as any use of text functions in a WHERE clause hurts performance.

But what if your database has been configured to be case-sensitive? Should you then use the LOWER and UPPER functions to ensure that comparisons are properly compared? No. The preceding example is still poor coding. If you need to deal with ensuring the case is consistent for proper comparisons, use the technique described below, along with appropriate indexes on the column in question:

SELECT column_name FROM table_name  
WHERE column_name = 'NAME' or column_name = 'name'  

This code will run much faster than the first example. [6.5, 7.0, 2000]

Try to avoid WHERE clauses that are non-sargable. The term "sargable" (which is in effect a made-up word) comes from the pseudo-acronym "sarg", which stands for "search argument" and refers to a WHERE clause that compares a column to a constant value. If a WHERE clause is sargable, this means that it can take advantage of an index (assuming one is available) to speed the completion of the query. If a WHERE clause is non-sargable, this means that the WHERE clause (or at least part of it) cannot take advantage of an index, instead performing a table/index scan, which may cause the query's performance to suffer.

Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'" generally prevent (but not always) the query optimizer from using an index to perform a search. In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against a column (not a constant), are not sargable.

But not every WHERE clause that has a non-sargable expression in it is doomed to a table/index scan. If the WHERE clause includes both sargable and non-sargable clauses, then at least the sargable clauses can use an index (if one exists) to help access the data quickly.

In many cases, if there is a covering index on the table that includes all of the columns in the SELECT, JOIN, and WHERE clauses in a query, then the covering index can be used instead of a table/index scan to return a query's data, even if it has a non-sargable WHERE clause. But keep in mind that covering indexes have their own drawbacks, such as producing very wide indexes that increase disk I/O when they are read.

In some cases, it may be possible to rewrite a non-sargable WHERE clause into one that is sargable. For example, the clause:

WHERE SUBSTRING(firstname,1,1) = 'm'  

Can be rewritten like this:

WHERE firstname LIKE 'm%'  

Both of these WHERE clauses produce the same result, but the first one is non-sargable (it uses a function) and will run slow, while the second one is sargable, and will run much faster.

WHERE clauses that perform some function on a column are non-sargable. On the other hand, if you can rewrite the WHERE clause so that the column and function are separate, then the query can use an available index, greatly boosting performance. For example, in the following example, a function acts directly on a column, and the index cannot be used:

SELECT member_number, first_name, last_name  
FROM members  
WHERE DATEDIFF(yy,dateofbirth,GETDATE()) > 21  

In the following example, a function has been separated from the column and an index can be used:

SELECT member_number, first_name, last_name  
FROM members  
WHERE dateofbirth < DATEADD(yy,-21,GETDATE())  


Each of the preceding queries produces the same results, but the second query will use an index because the function is not performed directly on the column, as it is in the first example. The moral of this story is to try to rewrite WHERE clauses that have functions so that the function does not act directly on the column.

WHERE clauses that use NOT are not sargable, but can often be rewritten to remove the NOT from the WHERE clause, for example:

WHERE NOT column_name > 5  

To

WHERE column_name <= 5  

Each of the preceding clauses produces the same results, but the second one is sargable. If you don't know if a particular WHERE clause is sargable or non-sargable, check out the query's execution plan in Query Analyzer. Doing this, you can very quickly see if the query will be using index lookups or table/index scans to return your results.
With some careful analysis and some clever thought, many non-sargable queries can be written so that they are sargable. Your goal for best performance (assuming it is possible) is to get the left side of a search condition to be a single column name, and the right side an easy to look up the value. [6.5, 7.0, 2000]

If you run into a situation where a WHERE clause is not sargable because of the use of a function on the right side of an equality sign (and there is no other way to rewrite the WHERE clause) then consider creating an index on a computed column instead. This way, you avoid the non-sargable WHERE clause altogether, using the results of the function in your WHERE clause instead. Because of the additional overhead required for indexes on computed columns, you will only want to do this if you need to run this same query over and over in your application, thereby justifying the overhead of the indexed computed column. [2000]
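
Here is a rough sketch of that approach, reusing the members table from the earlier examples (the computed column and index names are made up): the function result is stored in a computed column, the computed column is indexed, and the WHERE clause then references the column directly:

ALTER TABLE members ADD first_initial AS SUBSTRING(firstname, 1, 1)  
CREATE INDEX IX_members_first_initial ON members (first_initial)  

SELECT member_number, first_name, last_name  
FROM members  
WHERE first_initial = 'm'  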

If you currently have a query that uses NOT IN, it will offer poor performance because the SQL Server optimizer has to use a nested table scan to perform this activity. Instead, try to use one of the following options, all of which offer better performance:

Use EXISTS or NOT EXISTS
Use IN
Perform a LEFT OUTER JOIN and check for a NULL condition
 
[6.5, 7.0, 2000]


When you have a choice of using the IN or the EXISTS clause in your Transact-SQL then you will generally want to use the EXISTS clause, since it is usually more efficient and performs faster. [6.5, 7.0, 2000]

If you find that SQL Server uses a TABLE SCAN instead of an INDEX SEEK when you use an IN or OR clause as part of your WHERE clause, even when those columns are covered by an index then consider using an index hint to force the Query Optimizer to use the index.

For example:

SELECT * FROM tblTaskProcesses WHERE nextprocess = 1 AND processid IN (8,32,45)  

Takes about 3 seconds, whereas:

SELECT * FROM tblTaskProcesses (INDEX = IX_ProcessID) WHERE nextprocess = 1 AND processid IN (8,32,45)  

returns in under a second. [7.0, 2000]

If you use LIKE in your WHERE clause, try to use one or more leading characters in the clause, if at all possible. For example, use:

LIKE 'm%'

not:

LIKE '%m'   

If you use a leading character in your LIKE clause, then the Query Optimizer has the ability to potentially use an index to perform the query, speeding performance and reducing the load on SQL Server.

But if the leading character in a LIKE clause is a wildcard then the Query Optimizer will not be able to use an index, and a table scan must be run, reducing performance and taking more time.

The more leading characters you can use in the LIKE clause, the more likely the Query Optimizer will find and use a suitable index. [6.5, 7.0, 2000]

If your application needs to retrieve summary data often, but you don't want the overhead of calculating it on the fly every time it is needed, consider using a trigger that updates the summary values in a summary table after each transaction.

While the trigger has some overhead, overall, it may be less than having to calculate the data every time the summary data is needed. You may need to experiment to see which methods are fastest for your environment. [6.5, 7.0, 2000]
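
As a hedged sketch of this idea (the orders and order_summary tables and their columns are hypothetical), such a trigger might keep a per-customer total current after every insert:

CREATE TRIGGER trg_orders_update_summary ON orders  
AFTER INSERT  
AS  
BEGIN  
    -- Add the newly inserted amounts to the matching summary rows  
    UPDATE s  
    SET s.total_amount = s.total_amount + i.amount  
    FROM order_summary s  
    INNER JOIN (SELECT customer_id, SUM(amount) AS amount  
                FROM inserted  
                GROUP BY customer_id) i ON i.customer_id = s.customer_id  
END  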

If your application needs to insert a large binary value into an image data column, perform this task using a Stored Procedure, not using an INSERT statement embedded in your application.

The reason for this is because the application must first convert the binary value into a character string (that doubles its size, thus increasing network traffic and taking more time) before it can be sent to the server. And when the server receives the character string, it then must convert it back to the binary format (taking even more time).

Using a Stored Procedure avoids all this because all the activity occurs on the SQL Server, and little data is transmitted over the network. [6.5, 7.0, 2000]

When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, since it is much more efficient. For example:
SELECT customer_number, customer_name  
FROM customer  
WHERE customer_number in (1000, 1001, 1002, 1003, 1004)  

Is much less efficient than this:

SELECT customer_number, customer_name  
FROM customer  
WHERE customer_number BETWEEN 1000 and 1004  


Assuming there is a useful index on customer_number, the Query Optimizer can locate a range of numbers much faster (using BETWEEN) than it can find a series of numbers using the IN clause (that is really just another form of the OR clause). [6.5, 7.0, 2000]

If possible, try to avoid using the SUBSTRING function in your WHERE clauses. Depending on how it is constructed, using the SUBSTRING function can force a table scan instead of allowing the optimizer to use an index (assuming there is one). If the substring you are searching for does not include the first character of the column you are searching for, then a table scan is performed.

If possible, you should avoid using the SUBSTRING function and use the LIKE condition instead, for better performance.

Instead of doing this:

WHERE SUBSTRING(column_name,1,1) = 'b'  

Try using this instead:

WHERE column_name LIKE 'b%'  

If you decide to make this choice, keep in mind that you will want your LIKE condition to be sargable, which means that you cannot place a wildcard in the first position. [6.5, 7.0, 2000]

Where possible, avoid string concatenation in your Transact-SQL code, as it is not a fast process, contributing to the overall slower performance of your application. [6.5, 7.0, 2000]

Generally, avoid using optimizer hints in your queries. This is because it is generally very hard to outguess the Query Optimizer. Optimizer hints are special keywords that you include with your query to force how the Query Optimizer runs. If you decide to include a hint in a query, this forces the Query Optimizer to become static, preventing the Query Optimizer from dynamically adapting to the current environment for the given query. More often than not, this hurts, not helps performance.

If you think that a hint might be necessary to optimize your query, be sure you do all of the following first:

Update the statistics on the relevant tables.
If the problem query is inside a Stored Procedure, recompile it.
Review the search arguments to see if they are sargable, and if not, try to rewrite them so that they are sargable.
Review the current indexes, and make changes if necessary.

If you have done all of the preceding, and the query is not running as you expect, then you may want to consider using an appropriate optimizer hint. If you haven't heeded my advice and have decided to use some hints, keep in mind that, as your data changes, and as the Query Optimizer changes (through service packs and new releases of SQL Server), your hard-coded hints may no longer offer the benefits they once did. So if you use hints then you need to periodically review them to see if they are still performing as expected. [6.5, 7.0, 2000]
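
The first two steps from the list above are one-liners; the object names here are placeholders only:

UPDATE STATISTICS dbo.Accounts  
EXEC sp_recompile 'dbo.usp_GetAccountsByServiceType'  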

If you have a WHERE clause that includes expressions connected by two or more AND operators, SQL Server will evaluate them from left to right in the order they are written. This assumes that no parentheses have been used to change the order of execution. Because of this, you may want to consider one of the following when using AND:

Locate the least likely true AND expression first. This way, if the AND expression is false then the clause will end immediately, saving time.
If both parts of an AND expression are equally likely to be false then put the least complex AND expression first. This way, if it is false, less work will need to be done to evaluate the expression.

You may want to consider using a Query Analyzer to look at the execution plans of your queries to see which is best for your situation. [6.5, 7.0, 2000]

If you want to boost the performance of a query that includes an AND operator in the WHERE clause then consider the following:

Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
If at least one of the search criteria in the WHERE clause is not highly selective then consider adding indexes to all of the columns referenced in the WHERE clause.
If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
 
[7.0, 2000]

The Query Optimizer will perform a table scan or a clustered index scan on a table if the WHERE clause in the query contains an OR operator and if any of the referenced columns in the OR clause are not indexed (or do not have a useful index).

Because of this, if you use many queries with OR clauses then you will want to ensure that each referenced column in the WHERE clause has a useful index. [7.0, 2000]

A query with one or more OR clauses can sometimes be rewritten as a series of queries that are combined with a UNION ALL statement, in order to boost the performance of the query. For example, let's have a look at the following query:

SELECT employeeID, firstname, lastname  
FROM names  
WHERE dept = 'prod' or city = 'Orlando' or division = 'food'  


This query has three separate conditions in the WHERE clause. In order for this query to use an index, there must be an index on all three columns found in the WHERE clause.

This same query can be written using UNION ALL instead of OR, as in this example:

SELECT employeeID, firstname, lastname FROM names WHERE dept = 'prod'  
UNION ALL  
SELECT employeeID, firstname, lastname FROM names WHERE city = 'Orlando'  
UNION ALL  
SELECT employeeID, firstname, lastname FROM names WHERE division = 'food'  


Each of these queries will produce the same results. If there is only an index on dept, but not the other columns in the WHERE clause, then the first version will not use any index and a table scan must be performed. But the second version of the query will use the index for part of the query, though not for all of it.

Admittedly, this is a very simple example, but even so, it does demonstrate how rewriting a query can affect whether or not an index is used. If this query were much more complex, then the approach of using UNION ALL might be much more efficient, since it allows you to tune each part of the query individually, something that cannot be done if you use only ORs in your query.

Note that I am using UNION ALL instead of UNION. The reason for this is to prevent the UNION statement from trying to sort the data and remove duplicates, which hurts performance. Of course, if there is the possibility of duplicates, and you want to remove them, then of course you can use just UNION.

If you have a query that uses ORs that do not make the best use of indexes then consider rewriting it as a UNION ALL, and then testing the performance. Only through testing can you be sure that one version of your query will be faster than another. [7.0, 2000]

Don't use ORDER BY in your SELECT statements unless you really need to, since it adds a lot of extra overhead. For example, perhaps it may be more efficient to sort the data at the client than at the server. In other cases, perhaps the client doesn't even need sorted data to achieve its goal. The key here is to remember that you shouldn't automatically sort data unless you know it is necessary. [6.5, 7.0, 2000]

Whenever SQL Server needs to perform a sorting operation, additional resources need to be used to perform this task. Sorting often occurs when any of the following Transact-SQL statements are executed:

  • ORDER BY
  • GROUP BY
  • SELECT DISTINCT
  • UNION
  • CREATE INDEX (generally not as critical since it happens much less often)

In many cases, these commands cannot be avoided. On the other hand, there are a few ways that sorting overhead can be reduced. These include:

Keep the number of rows to be sorted to a minimum. Do this by only returning those rows that absolutely need to be sorted.
Keep the number of columns to be sorted to the minimum. In other words, don't sort more columns than required.
Keep the width (physical size) of the columns to be sorted to a minimum.
Sort a column with number datatypes instead of character datatypes.

When using any of the preceding Transact-SQL commands, try to keep the preceding performance-boosting suggestions in mind. [6.5, 7.0, 2000]

If you need to sort by a particular column often, consider making that column a clustered index. This is because the data is already presorted for you and SQL Server is smart enough not to re-sort the data. [6.5, 7.0, 2000]
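
For example, if customer lists are constantly requested sorted by last name, a clustered index on that column (assuming the table does not already have a different clustered index) lets SQL Server read the rows in the order they are already stored:

CREATE CLUSTERED INDEX IX_customers_lname ON customers (lname)  

SELECT fname, lname FROM customers ORDER BY lname  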

If your SELECT statement includes an IN operator along with a list of values to be tested in the query, order the list of values so that the most frequently found values are placed at the front of the list, and the less frequently found values are placed at the end of the list. This can speed performance because the IN option returns true as soon as any of the values in the list produce a match. The sooner the match is made, the faster the query completes. [6.5, 7.0, 2000]

If you need to use the SELECT INTO option then keep in mind that it can lock system tables, preventing other users from accessing the data they need. If you do need to use SELECT INTO, try to schedule it when your SQL Server is less busy, and try to keep the amount of data inserted to a minimum. [6.5, 7.0, 2000]

If your SELECT statement contains a HAVING clause, write your query so that the WHERE clause does most of the work (removing undesired rows) rather than making the HAVING clause do the work of removing undesired rows. Using the WHERE clause appropriately can eliminate unnecessary rows before they get to the GROUP BY and HAVING clauses, saving some unnecessary work and boosting performance.

For example, in a SELECT statement with WHERE, GROUP BY, and HAVING clauses, here's what happens. First, the WHERE clause is used to select the appropriate rows that need to be grouped. Next, the GROUP BY clause divides the rows into sets of grouped rows and then aggregates their values. And last, the HAVING clause then eliminates undesired aggregated groups. If the WHERE clause is used to eliminate as many of the undesired rows as possible then this means the GROUP BY and the HAVING clauses will have less work to do, boosting the overall performance of the query. [6.5, 7.0, 2000]

If your application performs many wildcards (LIKE %) text searches on CHAR or VARCHAR columns then consider using SQL Server's full-text search option. The Search Service can significantly speed up wildcard searches of text stored in a database. [7.0, 2000]
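
A hedged sketch of the difference, assuming a hypothetical products table on which a full-text catalog and index have already been created for the description column:

-- Leading-wildcard LIKE forces a scan  
SELECT product_id, description FROM products WHERE description LIKE '%widget%'  

-- Full-text search can use the full-text index instead  
SELECT product_id, description FROM products WHERE CONTAINS(description, 'widget')  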

The GROUP BY clause can be used with or without an aggregate function. But if you want optimum performance, don't use the GROUP BY clause without an aggregate function. This is because you can accomplish the same end result by using the DISTINCT option instead, and it is faster.

For example, you could write your query in one of two ways:

USE Northwind  
SELECT OrderID  
FROM [Order Details]  
WHERE UnitPrice > 10  
GROUP BY OrderID  

or:

USE Northwind  
SELECT DISTINCT OrderID  
FROM [Order Details]  
WHERE UnitPrice > 10  

Both of the preceding queries produce the same results, but the second one will use fewer resources and perform faster. [6.5, 7.0, 2000]
The GROUP BY clause can be sped up if you follow these suggestions:

Keep the number of rows returned by the query as small as possible.
Keep the number of groupings as few as possible.
Don't group redundant columns.
If there is a JOIN in the same SELECT statement that has a GROUP BY, try to rewrite the query to use a subquery instead of using a JOIN. If this is possible, performance will be faster. If you need to use a JOIN, try to make the GROUP BY column from the same table as the column or columns on which the set function is used.

Consider adding an ORDER BY clause to the SELECT statement that orders by the same column as the GROUP BY. This may cause the GROUP BY to perform faster. Test this to see if it is true in your particular situation.

[7.0, 2000]

Sometimes perception is more important than reality. For example, which of the following two queries is the fastest:

A query that takes 30 seconds to run, and then displays all of the required results.
A query that takes 60 seconds to run, but displays the first screen full of records in less than 1 second.

Most DBAs would choose the first option since it takes fewer server resources and performs faster. But from many users' point of view, the second one may be more palatable. By getting immediate feedback, the user gets the impression that the application is fast, even though, in the background, it is not.

If you run into situations where perception is more important than raw performance, consider using the FAST query hint. The FAST query hint is used with the SELECT statement using this form:

OPTION(FAST number_of_rows)

Where number_of_rows is the number of rows that are to be displayed as fast as possible.

When this hint is added to a SELECT statement, it tells the Query Optimizer to return the specified number of rows as fast as possible, without regard to how long it will take to perform the overall query. Before rolling out an application using this hint, I would suggest you test it thoroughly to see that it performs as you expect. You may determine that the query takes about the same amount of time whether the hint is used or not. If this is the case, then don't use the hint. [7.0, 2000]

Instead of using temporary tables, consider using a derived table instead. A derived table is a result of using a SELECT statement in the FROM clause of an existing SELECT statement. By using derived tables instead of temporary tables, we can reduce I/O and boost our application's performance. [7.0, 2000]
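
For instance (using a hypothetical orders table), the intermediate result that might otherwise go into a temporary table can be written as a derived table directly in the FROM clause:

SELECT d.customer_id, d.order_count  
FROM (SELECT customer_id, COUNT(*) AS order_count  
      FROM orders  
      GROUP BY customer_id) AS d  
WHERE d.order_count > 10  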

SQL Server 2000 offers a new data type called "table". Its main purpose is for the temporary storage of a set of rows. A variable, of type "table", behaves as if it is a local variable. And like local variables, it has a limited scope, which is within the batch, function, or Stored Procedure in which it was declared. In most cases, a table variable can be used as a normal table. SELECTs, INSERTs, UPDATEs, and DELETEs can all be made against a table variable.

For best performance, if you need a temporary table in your Transact-SQL code then try to use a table variable instead of creating a conventional temporary table instead. Table variables are created and manipulated in memory instead of the tempdb database, making them much faster. In addition, table variables found in Stored Procedures result in fewer compilations (than when using temporary tables), and transactions using table variables only last as long as the duration of an update on the table variable, requiring less locking and logging resources. [2000]
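
A small sketch of declaring and using a table variable (the table and column names are made up):

DECLARE @RecentOrders TABLE  
(  
    order_id INT PRIMARY KEY,  
    order_date DATETIME  
)  

INSERT INTO @RecentOrders (order_id, order_date)  
SELECT order_id, order_date  
FROM orders  
WHERE order_date >= DATEADD(dd, -7, GETDATE())  

SELECT * FROM @RecentOrders  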

It is a fairly common request to write a Transact-SQL query to compare a parent table and a child table and determine if there are any parent records that don't have a match in the child table. Generally, there are three ways this can be done.

Using a NOT EXISTS
SELECT a.hdr_key  
FROM hdr_tbl a  
WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)   

Using a Left Join
SELECT a.hdr_key  
FROM hdr_tbl a  
LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key  
WHERE b.hdr_key IS NULL  


Using a NOT IN
SELECT hdr_key  
FROM hdr_tbl  
WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)   


In each case, the preceding query will return identical results. But, which of these three variations of the same query produces the best performance? Assuming everything else is equal, the best performing version through the worst performing version will be from top to bottom, as displayed above. In other words, the NOT EXISTS variation of this query is generally the most efficient.

I say generally, because the indexes found on the tables, along with the number of rows in each table, can influence the results. If you are not sure which variation to try yourself then you can try them all and see which produces the best results in your particular circumstances. [7.0, 2000]

Be careful when using OR in your WHERE clause; it is fairly easy to accidentally retrieve much more data than you need, which hurts performance. For example, have a look at the query below:

SELECT companyid, plantid, formulaid  
FROM batchrecords  
WHERE (companyid = '0001' AND plantid = '0202' AND formulaid = '39988773')  
   OR (companyid = '0001' AND plantid = '0202')  

As you can see from this query, the WHERE clause is redundant, since:

companyid = '0001' and plantid = '0202' and formulaid = '39988773'   


Is a subset of:

companyid = '0001' and plantid = '0202'   


In other words, this query is redundant. Unfortunately, the SQL Server Query Optimizer isn't smart enough to know this, and will do exactly what you tell it to. What will happen is that SQL Server will need to retrieve all the data you have requested, then in effect do a SELECT DISTINCT to remove redundant rows it unnecessarily finds.
In this case, if you drop this code from the query:

OR (companyid = '0001' AND plantid = '0202')


Then when you run the query, you will receive the same results, but with much faster performance. [6.5, 7.0, 2000]

If you need to verify the existence of a record in a table then don't use SELECT COUNT(*) in your Transact-SQL code to identify it, which is very inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS to determine if the record in question exists, which is much more efficient. For example, here's how you might use COUNT(*):

IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx') > 0  

Here's a faster way, using IF EXISTS:

IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')  


The reason IF EXISTS is faster than COUNT(*) is that the query can end immediately when the test is proven true, while COUNT(*) must count every record, whether there is only one or thousands, before the condition can be determined to be true. [7.0, 2000]

Let's say that you often need to INSERT the same value into a column. For example, perhaps you need to perform 100,000 INSERTs a day into a particular table, and that 90% of the time the data inserted into one of the columns of the table is the same value.

If this is the case, you can reduce network traffic (along with some SQL Server overhead) by creating this particular column with a default value of the most common value. This way, when you INSERT your data and the data is the default value, you don't insert any data into this column, instead allowing the default value to be filled in automatically for you. But when the value needs to be different, you will of course INSERT that value into the column. [6.5, 7.0, 2000]
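
Here is a rough sketch of the idea (the table, column, and constraint names are hypothetical): the column gets a default of the most common value, and the INSERT simply omits that column whenever the default applies:

ALTER TABLE orders ADD CONSTRAINT DF_orders_status DEFAULT ('open') FOR order_status  

-- 90% case: order_status is omitted and the default 'open' is filled in  
INSERT INTO orders (order_id, customer_id) VALUES (1001, 42)  

-- Exception case: a different value is supplied explicitly  
INSERT INTO orders (order_id, customer_id, order_status) VALUES (1002, 42, 'hold')  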

Performing UPDATES takes extra resources for SQL Server to perform. When performing an UPDATE, try to do as many of the following recommendations as you can to reduce the number of resources required to perform an UPDATE.

The more of the following suggestions you can do, the faster the UPDATE will perform.

If you are updating a column of a row that has a unique index then try to only update one row at a time.
Try not to change the value of a column that is also the primary key.
When updating VARCHAR columns, try to replace the contents with contents of the same length.
Try to minimize the updating of tables that have update triggers.
Try to avoid updating columns that will be replicated to other databases.
Try to avoid updating heavily indexed columns.
Try to avoid updating a column that has a reference in the WHERE clause to the column being updated.

Of course, you may have very little choice when updating your data, but at least give the preceding suggestions a thought. [6.5, 7.0, 2000]

If you have created a complex transaction that includes several parts, one of which has a higher probability of rolling back the transaction than the others, then better performance will be provided if you locate the part of the transaction most likely to fail at the front of the greater transaction. This way, if this more-likely-to-fail part must be rolled back because of a failure, no resources have been wasted on the other less-likely-to-fail parts. [6.5, 7.0, 2000]

Transact-SQL Optimization Tips

Try to restrict the query's result set by using the WHERE clause
 
This can result in good performance benefits because SQL Server will return to the client only specific rows, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
 
Try to restrict the query's result set by returning only the particular columns from the table, not all of the table's columns
 
This can improve performance because SQL Server will return to the client only specific columns, not all of the table's columns. This can reduce network traffic and boost the overall performance of the query.
 
Use views and Stored Procedures instead of heavy-duty queries
 
This can reduce network traffic because your client will send to the server only a Stored Procedure or view name (perhaps with some parameters) instead of the text of large, heavy-duty queries. This can also facilitate permission management, because you can restrict the user's access to table columns they should not see.
 
Try to avoid using SQL Server cursors, whenever possible
 
SQL Server cursors can result in some performance degradation in comparison with select statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.
 
If you need to return the table's total row count, you can use an alternative way instead of the SELECT COUNT(*) statement
 
Because a SELECT COUNT(*) statement makes a full table scan to return the total row count, it can take a very long time for a large table. There is another way to determine the total row count in a table. You can use the sysindexes system table in this case. There is a ROWS column in the sysindexes table. This column contains the total row count for each table in your database. So, you can use the following select statement instead of SELECT COUNT(*):
 
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2
 
So you can improve the speed of such queries several times.
 
Try to use constraints instead of triggers, whenever possible
 
Constraints are much more efficient than triggers and can boost performance. So, you should use constraints instead of triggers, whenever possible.
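
For example (hypothetical table and column names), a CHECK constraint can enforce a rule that might otherwise be implemented in a validation trigger:

ALTER TABLE Employee ADD CONSTRAINT CK_Employee_Salary CHECK (Salary >= 0)  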
 
Use table variables instead of temporary tables
 
Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available in SQL Server 2000 only.
 
Try to avoid the HAVING clause, whenever possible
 
The HAVING clause restricts the result set produced by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your select statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.
 
Try to avoid using the DISTINCT clause, whenever possible
 
Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.
 
Include a SET NOCOUNT ON statement in your Stored Procedures to stop the message indicating the number of rows affected by a T-SQL statement
 
This can reduce network traffic because your client will not receive the message indicating the number of rows affected by a T-SQL statement.
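
A minimal example (the procedure and table names are made up):

CREATE PROCEDURE usp_GetCustomerOrders  
    @CustomerId INT  
AS  
BEGIN  
    SET NOCOUNT ON  

    SELECT order_id, order_date  
    FROM orders  
    WHERE customer_id = @CustomerId  
END  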
 
Use select statements with a TOP keyword or the SET ROWCOUNT statement, if you need to return only the first n rows
 
This can improve the performance of your queries because the smaller result set will be returned. This can also reduce the traffic between the server and the client.
 
Use the FAST number_rows query hint if you need to quickly return 'number_rows' rows
 
 
You can quickly get the first 'number_rows' rows and work with them while the query continues execution and produces its full result set.
 
Try to use a UNION ALL statement instead of UNION, whenever possible
 
The UNION ALL statement is much faster than UNION, because the UNION ALL statement does not look for duplicate rows, and the UNION statement does look for duplicate rows, whether or not they exist.
 
Do not use optimizer hints in your queries
 
Because the SQL Server query optimizer is very clever, it is very unlikely that you can improve your query by using optimizer hints; more often than not, this will hurt performance.



Crystal Reports for ASP.NET 4.x Hosting: How to Print a Crystal Report Programmatically in ASP.NET?

clock February 17, 2021 09:24 by author Peter

You can print a Crystal Report using the print option of Crystal Report Viewer. However, there are occasions when you want your application to print a report directly to the printer without viewing the report in Crystal Report Viewer.
 
The ReportDocument class provides a PrintToPrinter method that can be used to print a Crystal Report directly to the printer. If no printer is selected, the pages will be sent to the default printer.
 
The PrintToPrinter method takes four parameters.
nCopies : Indicates the number of copies to print.
collated : Indicates whether to collate the pages.
startPageN : Indicates the first page to print.
endPageN : Indicates the last page to print.
 
The following steps will guide you to achieve the same:
    Add a crystal report (.rpt) file to your ASP.NET application.
    Add a report instance on the page level.
        Dim report As MyReport = New MyReport

    Populate reports data on Page_Init
        ' Get data in a DataSet or DataTable  
          
        Dim ds As DataSet = GetData()  
        ' Fill report with the data  
        report.SetDataSource(ds)

    Print Report
        report.PrintToPrinter(1, False, 0, 0)

If you wish to print a certain page range, change the last two parameters to the From and To page numbers.
 
If you want to set page margins, you need to create a PageMargins object and set the PrintOptions of the ReportDocument.
 

The following code sets page margins and printer name:
    Dim margins As PageMargins =  Report.PrintOptions.PageMargins  
       margins.bottomMargin = 200  
       margins.leftMargin = 200  
       margins.rightMargin = 50  
       margins.topMargin = 100  
       Report.PrintOptions.ApplyPageMargins(margins)  
       
       ' Select the printer name  
       Report.PrintOptions.PrinterName = printerName



Europe SQL Hosting - HostForLIFEASP.NET :: SQL Server Performance Tuning Tips

clock February 10, 2021 12:02 by author Peter

In this article, we will learn about SQL Server performance tuning tips with examples.
 

Database
The database is the most important and powerful part of any application. If your database is not working properly and is taking a long time to compute results, something is going wrong in the database. Database tune-up is then required; otherwise, the performance of the application will degrade.

I know a lot of articles have already been published on this topic. But in this article, I have tried to provide a list of database tune-up tips that cover all the aspects of the database. Database tuning is a very critical and fussy process. It is true that database tuning is a database admin task, but we should have a basic level of knowledge for doing this, because if we are working on a project where there is no admin role, then it is our responsibility to maintain the performance of the database. If the performance of the database degrades, it will have the worst effect on the whole system.
 
In this article, I will explain some basic database tuning tips that I learned from my experience and from my friends who work as database administrators. Using these tips, you can maintain or upgrade the performance of your database system. Basically, these tips are written for SQL Server, but we can apply them to other databases too, like Oracle and MySQL. Please read these tips carefully and, at the end of the article, let me know if you find something wrong or incorrect.
 
Avoid Null value in the fixed-length field
We should avoid the Null value in fixed-length fields because if we insert a NULL value in a fixed-length field, it will take the same amount of space as the desired input value for that field. So, if we require a null value in a field, then we should use a variable-length field, which takes less space for NULL. The use of NULLs in a database can reduce database performance, especially in WHERE clauses. For example, try to use varchar or nvarchar instead of char and nchar.
Never use Select * Statement

When we require all the columns of a table, we usually use a “Select *” statement. Well, this is not a good approach because when we use the “select *” statement, the SQL Server converts * into all column names before executing the query, which takes extra time and effort. So, always provide all the column names in the query instead of “select *”.
 
Normalize tables in a database

Normalized and well-managed tables increase the performance of a database. So, always try to normalize to at least third normal form (3NF). It is not necessary that all tables be in 3NF, but a table that is in 3NF can be called a well-structured table.
 
Keep Clustered Index Small
A clustered index determines the physical order in which the table's rows are stored. If the clustered index key is very large, it can reduce performance, because on a table with a large number of rows a large key increases the index size significantly. Also, avoid indexing frequently changed data, because every change to the table means the index has to be modified as well, which can degrade performance.
 
Use Appropriate Datatype
If we select an appropriate data type, it will reduce the space used and improve performance; an inappropriate one has the opposite effect. So, select the data type according to the requirement. SQL Server offers many data types that can store the same kind of data, but each has its own limits and advantages over the others, so choose carefully.
 
Store image path instead of the image itself

I have found that many developers try to store images in the database instead of just the image path. Storing the image itself may genuinely be a requirement of the application, but in general we should store only the path, because storing images in the database increases its size and reduces performance.
 
USE Common Table Expressions (CTEs) instead of Temp table

We should prefer a CTE over a temp table for simple intermediate results, because a temp table is physically created in TempDB and is only removed when it is dropped or the session ends, while a CTE is just a named query that is expanded inline when the statement executes. This makes a CTE very lightweight compared to a temp table.
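A minimal sketch, reusing the Customer table from the examples later in this article (the filter is illustrative):
    -- Instead of SELECT ... INTO #LondonCustomers, define the intermediate result inline
    ;WITH LondonCustomers AS
    (
        SELECT CustomerId, Name, City
        FROM Customer
        WHERE City = 'London'
    )
    SELECT Name
    FROM LondonCustomers;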
 
Use Appropriate Naming Convention

The main goal of adopting a naming convention for database objects is to make the type and purpose of every object in the database easily identifiable to its users. A good naming convention decreases the time required to search for an object, and a good name clearly indicates what an object is and what it does.
    * tblEmployees // Name of table  
    * vw_ProductDetails // Name of View  
    * PK_Employees // Name of Primary Key  


Use UNION ALL instead of UNION
We should prefer UNION ALL to UNION whenever duplicate rows are acceptable, because UNION always performs a sort to remove duplicates, which takes extra time. Also, UNION cannot be used with the text datatype, because text doesn't support sorting; in that case UNION ALL is the only option.
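For example, when you already know the two result sets cannot overlap (the table names here are hypothetical):
    -- UNION ALL skips the sort/duplicate-removal step that UNION performs
    SELECT EmpId, First_Name FROM dbo.CurrentEmployee
    UNION ALL
    SELECT EmpId, First_Name FROM dbo.FormerEmployee;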
 
Use Small data type for Index
It is very important to use a small data type for the index, because a bigger data type reduces index performance. For example, nvarchar(10) uses 20 bytes of data while varchar(10) uses 10 bytes, so an index on the varchar column works better. We can take another example with datetime and int: datetime takes 8 bytes and int takes 4 bytes. A smaller data type means less I/O overhead, which increases the performance of the index.
Use Count(1) instead of Count(*) and Count(Column_Name)

There is no difference in the performance of these three expressions, but the last two are not considered good practice. So, always use count(1) to get the number of records from a table.
 
Use Stored Procedure
Instead of sending raw ad-hoc queries, we should use stored procedures, because stored procedures are faster, easier to maintain, and better for security and for large queries.
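A minimal sketch of wrapping a frequently used query in a stored procedure (the procedure and column names are illustrative):
    CREATE PROCEDURE dbo.GetEmployeeById
        @EmpId int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT EmpId, First_Name, Last_Name
        FROM dbo.Employee
        WHERE EmpId = @EmpId;
    END
    GO

    -- Usage
    EXEC dbo.GetEmployeeById @EmpId = 101;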
 
Use Between instead of In
 
If Between can be used instead of IN, then always prefer Between. For example, you are searching for an employee whose id is either 101, 102, 103, or 104. Then, you can write the query using the In operator like this:
    Select * From Employee Where EmpId In (101,102,103,104)  

You can also use Between operator for the same query.
    Select * from Employee Where EmpId Between 101 And 104  

Use If Exists to determine the record
 
It has been seen many times that developers use "Select Count(*)" to get the existence of records. For example
    Declare @Count int;
    Set @Count = (Select Count(*) From Employee Where EmpName Like '%Pan%')
    If @Count > 0
    Begin
        --Statement
    End


But this is not the proper way to write such a query, because it scans and counts every matching row. Using If Exists instead lets SQL Server stop as soon as the first match is found, which improves the performance of your query, as below.
    IF Exists(Select Emp_Name From Employee Where EmpName Like '%Pan%')
    Begin
        --Statements
    End


Never Use "sp_" for User-Defined Stored Procedures
Most programmers use "sp_" for user-defined Stored Procedures. I suggest never using "sp_" for a user-defined Stored Procedure, because in SQL Server the master database has Stored Procedures with the "sp_" prefix. So, when we create a Stored Procedure with the "sp_" prefix, SQL Server always looks first in the master database and then in the user database, which takes some extra time.
 
Practice to use Schema Name

A schema is an organization or structure for a database. We can define a schema as a collection of database objects that are owned by a single principal and form a single namespace. The schema name helps SQL Server find the object in a specific schema, which speeds up query execution. For example, use [dbo] before the table name.
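For example, a schema-qualified query looks like this (the column names are illustrative):
    -- Schema-qualified object name, so SQL Server does not have to resolve the schema per caller
    SELECT EmpId, First_Name
    FROM dbo.Employee
    WHERE EmpId = 101;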
 
Avoid Cursors

A cursor is a temporary work area created in system memory when a SQL statement is executed. A cursor is a set of rows together with a pointer that identifies the current row; it is a database object used to retrieve data from a result set one row at a time. However, using a cursor is generally a bad idea because fetching data row by row takes a long time. A temporary table or a WHILE loop may replace a cursor in some cases, and a set-based statement is usually better still.
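As a sketch, a cursor that updates rows one by one can often be replaced with a single set-based statement (the Salary column here is hypothetical):
    -- Set-based alternative: one statement updates every qualifying row at once
    UPDATE dbo.Employee
    SET Salary = Salary * 1.10
    WHERE EmpId BETWEEN 101 AND 104;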
 
SET NOCOUNT ON

When an INSERT, UPDATE, DELETE, or SELECT command is executed, SQL Server returns a message with the number of rows affected by the query. Sending this message for every statement is unnecessary overhead, and we can suppress it with SET NOCOUNT ON.
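A small sketch (the City column is illustrative):
    SET NOCOUNT ON;
    UPDATE dbo.Employee
    SET City = 'London'
    WHERE EmpId = 101;
    -- No "(1 row affected)" message is sent back to the client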
 
Use Try–Catch
In T-SQL, a Try-Catch block is very important for exception handling. Using a Try-Catch block is a best practice that can save our data from undesired changes. We can put all T-SQL statements in a TRY block, and the exception-handling code goes in the CATCH block.
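A minimal sketch (the insert is illustrative; it might fail with a key violation if the row already exists):
    BEGIN TRY
        INSERT INTO dbo.Employee (EmpId, EmpName) VALUES (101, 'Peter');
    END TRY
    BEGIN CATCH
        -- Report the error instead of letting it abort the batch
        SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
    END CATCH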
 
Remove Unused Index

Remove all unused indexes, because every index is updated whenever the table is updated, so an index has to be maintained even if no query ever uses it.
 
Always create an index on the table
An index is a data structure for fast data retrieval: a special lookup structure that the database engine can use to speed up reads. In short, an index is a pointer to data in a table, and its main purpose is to increase the speed of data retrieval. So always try to keep at least one index, either clustered or non-clustered, on each table.
 
Use Foreign Key with the appropriate action

A foreign key is a column or combination of columns whose values match the primary key of another table. Foreign keys are used to define a relationship and enforce integrity between two tables. In addition to protecting the integrity of our data, FK constraints also help document the relationships between our tables within the database itself. Also, define an action rule for the DELETE and UPDATE commands; you can choose among No Action, Set NULL, Cascade, and Set Default.
 
Use Alias Name

Aliasing temporarily renames a table or column within a specific SQL statement. With an alias, we can give a long name a short one, which saves typing and makes queries easier to read.
 
Use Transaction Management

A transaction is a unit of work performed against the database: a set of T-SQL statements that execute together as a single unit in a specific logical order. If all the statements execute successfully, the transaction is committed and the data is saved permanently in the database. If any single statement fails, the entire transaction fails and is rolled back.
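A minimal sketch, assuming a hypothetical Account table, where a transfer must either fully succeed or fully fail:
    BEGIN TRY
        BEGIN TRANSACTION;
            UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;
            UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountId = 2;
        COMMIT TRANSACTION;       -- both updates are saved together
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION; -- or neither is applied
    END CATCH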
 
Use Index Name in Query

Although in most cases the query optimizer will pick the appropriate index for a specific table based on statistics, sometimes it is better to specify the index name in your SELECT query.
 
Example

    SELECT  
    e.Emp_IId,  
    e.First_Name,  
    e.Last_Name  
    FROM dbo.EMPLOYEE e  
    WITH (INDEX (Clus_Index))  
    WHERE e.Emp_IId > 5  
Select Limited Data


We should retrieve only the required data and ignore the rest. The less data retrieved, the faster the query will run. Rather than filtering on the client, push as much filtering as possible to the server end. This results in less data being sent over the wire and you will see results much faster.
 
Drop Index before Bulk Insertion of Data
We should drop the index before inserting a large amount of data. This makes the insert statement run faster. Once the inserts are completed, you can recreate the index.
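A sketch of the pattern, with a hypothetical table and index name:
    -- Drop the non-clustered index before the bulk load
    DROP INDEX IX_Sales_OrderDate ON dbo.Sales;

    -- ... perform the bulk insert here (BULK INSERT / INSERT ... SELECT) ...

    -- Recreate the index once the load has finished
    CREATE NONCLUSTERED INDEX IX_Sales_OrderDate ON dbo.Sales (OrderDate);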
 
Use Unique Constraint and Check Constraint

A Check constraint checks for a specific condition before inserting data into a table. If the data passes all the Check constraints, it is inserted into the table; otherwise the insert is rejected. The CHECK constraint ensures that all values in a column satisfy certain conditions.
 
A Unique Constraint ensures that each row has a unique value in that column. It is like a primary key, but it can accept one NULL value, and a table can have Unique Constraints on one or more columns. We should use Check and Unique Constraints because they maintain integrity in the database.
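A small sketch combining both constraints (the table, column, and constraint names are illustrative):
    CREATE TABLE dbo.EmployeeContact
    (
        EmpId int NOT NULL,
        Email varchar(100) NULL CONSTRAINT UQ_EmployeeContact_Email UNIQUE,
        Age int NOT NULL CONSTRAINT CK_EmployeeContact_Age CHECK (Age >= 18)
    );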
 
Importance of Column Order in index

If we are creating a non-clustered index on more than one column, we should consider the sequence of the columns. The order or position of a column in an index plays a vital role in SQL query performance: an index can help a query only if the query's criteria match the left-most columns of the index key. So we should place the most selective column on the left-most side of a non-clustered index.
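For example, assuming EmpId is the more selective of the two columns (the City column is illustrative), it should lead the index key:
    -- Queries filtering on EmpId can seek on this index
    CREATE NONCLUSTERED INDEX IX_Employee_EmpId_City
    ON dbo.Employee (EmpId, City);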
 
Recompiled Stored Procedure
We all know that stored procedures execute T-SQL statements in less time than the same set of statements executed individually, because the execution plan for a stored procedure is compiled once, cached, and then reused. Recompiling a stored procedure reduces performance, but in some cases it is required, for example:
    Dropping or altering a column, index, or trigger of a table used by the procedure.
    Updating the statistics used by the procedure's execution plan.
    Altering the procedure itself, which causes SQL Server to create a new execution plan.
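As a sketch, you can force a recompile explicitly (the procedure name is illustrative):
    -- Discard the cached plan so it is rebuilt on the next execution
    EXEC sp_recompile 'dbo.GetEmployeeById';
You can also create or alter a procedure WITH RECOMPILE if you want a fresh plan on every call, but use that option sparingly.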
 
Use Sparse Column

Sparse columns provide better storage for NULL and zero data. If you have a column that contains a large number of NULL or zero values, prefer a sparse column over a default SQL Server column. A sparse column takes less space than a regular column (one without the SPARSE clause).
 
Example
    Create Table Table_Name
    (
        Id int,             -- Default Column
        Group_Id int Sparse -- Sparse Column
    )

Avoid Loops In Coding

Suppose you want to insert 10 records into a table. Instead of using a loop to insert the rows one at a time, you can insert all of them with a single insert query.
    declare @int int;
    set @int = 1;
    while @int <= 10
    begin
        Insert Into Tab values(@int, 'Value' + CAST(@int AS varchar(10)));
        set @int = @int + 1;
    end


The above method is not a good approach for inserting multiple records; instead, you can use a single statement like the one below.
    Insert Into Tab values(1,'Value1'),(2,'Value2'),(3,'Value3'),(4,'Value4'),(5,'Value5'),(6,'Value6'),(7,'Value7'),(8,'Value8'),(9,'Value9'),(10,'Value10');  

Avoid Correlated Queries
In a correlated query, the inner query takes input from the outer (parent) query, so the inner query runs once for each row, which reduces the performance of the database.
    Select Name, City,
        (Select Company_Name From Company Where CompanyId = cs.CustomerId)
    From Customer cs


It is better to use a join instead of the correlated query, as below.
    Select cs.Name, cs.City, co.Company_Name
    From Customer cs
    Join Company co
        On cs.CustomerId = co.CompanyId


Avoid index and join hints

In some cases an index or join hint may increase the performance of a database, but once you provide a hint the server always tries to use it, even when it has a better execution plan available, so this approach can also reduce performance. Use a join or index hint only if you are confident there is no better execution plan; if you have any doubt, let the server choose the plan.
 
Avoid Use of Temp table

Avoid using temp tables as much as you can, because a temp table is created in TempDB just like any basic table, and after the task completes it has to be dropped, which adds load on the database. You can prefer a table variable instead.
 
Use Index for required columns
Indexes should be created on the columns used in WHERE, GROUP BY, ORDER BY, TOP, and DISTINCT clauses.
 
When not to use an Index
It is true that an index speeds up data retrieval, but not always. In some cases an index does not improve query performance at all, and in those cases we can avoid it:
    When the size of the table is very small.
    When the index is not used by the query optimizer.
    When DML (Insert, Update, Delete) operations are performed frequently on the table.
    When the column contains TEXT or nText data.

Use View for complex queries
If you join two or more tables and the result of the query is used frequently, it is better to create a View containing the result of the complex query. You can then use this View multiple times without having to execute the query again to get the same result.
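A minimal sketch, reusing the Customer/Company join from the correlated-query example above (the view name follows the naming convention mentioned earlier):
    CREATE VIEW dbo.vw_CustomerCompany
    AS
    SELECT cs.Name, cs.City, co.Company_Name
    FROM Customer cs
    JOIN Company co ON co.CompanyId = cs.CustomerId;
    GO

    -- Reuse the stored query wherever the joined result is needed
    SELECT Name, Company_Name
    FROM dbo.vw_CustomerCompany
    WHERE City = 'London';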
 
Make Transaction short

It is better to keep transactions as short as we can, because long transactions keep tables locked longer and reduce database concurrency. So, always try to keep transactions short.
 
Use Full-text Index
If your query contains multiple wild card searches using LIKE(%%), then the use of Full-text Index can increase the performance. Full-text queries can include simple words and phrases or multiple forms of a word or phrase. A full-text query returns any document that contains at least one match (also known as a hit). A match occurs when a target document contains all the terms specified in the Full-text query and meets any other search conditions, such as the distance between the matching terms.
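For example, assuming a full-text index has already been created on the column being searched (the table and column names here are illustrative), a full-text query looks like this:
    SELECT OrderId, OrderName
    FROM OrderDetails
    WHERE CONTAINS(OrderName, '"Mango" OR "Coffee"');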
 
Thanks for reading the article. As I asked at the start, if you have any doubts, or if I have written something incorrect, please let me know in the comments section.



SQL Server 2019 Hosting - HostForLIFEASP.NET :: Query to Repair Suspect Database In SQL Server

clock January 29, 2021 06:43 by author Peter

This is the query to repair a suspect database in SQL Server.
If your database is marked as suspect, here are the steps to fix it.
Replace "DbName" with the name of the suspect database and run the query below against the master database.

Step 1
--command to recover suspected database

ALTER DATABASE DbName SET EMERGENCY

DBCC checkdb('DbName')

ALTER DATABASE DbName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

DBCC CheckDB ('DbName', REPAIR_ALLOW_DATA_LOSS)

ALTER DATABASE DbName SET MULTI_USER


Step 1 does not restore the indexes created for that database, so we now have to rebuild them. The following query should be run inside the suspect database itself.

Step 2
-- command to rebuild all indexes
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"



Europe SQL Hosting - HostForLIFEASP.NET :: SQL Comments Statement

clock January 15, 2021 08:10 by author Peter

SQL comments can make your application easier to read and maintain. For example, we can include a comment in a statement that describes the purpose of that statement within the application. With the exception of hints, comments within a SQL statement do not affect the statement's execution. Please refer to the documentation on hints for that particular form of comment.
 

A comment can appear between any keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:
    Begin the comment with a slash and an asterisk (/*), proceed with the text of the comment (which can span multiple lines), and end the comment with an asterisk and a slash (*/). The opening and terminating characters need not be separated from the text by a space or a line break.
    Begin the comment with -- (two hyphens) and proceed with the text of the comment. This text cannot extend to a new line; the comment ends with a line break.

Some of the tools used to enter SQL have additional restrictions. For example, if you are using SQL*Plus, by default you cannot have a blank line inside a multiline comment.
 
For more information, please refer to the documentation for the tool you use as an interface to the database. A SQL statement can contain multiple comments of both styles. The text of a comment can contain any printable characters in your database character set.
 
The comment statement indicates the user-provided text. Comments can be inserted on a separate line, nested at the end of a SQL command line, or within a SQL statement. The server does not evaluate the comment.  
 
SQL comments use two hyphens (--) for single-line or nested comments. Comments inserted with -- are terminated by a newline, which is specified with a carriage return character (U+000D), a line feed character (U+000A), or a combination of the two.
 
There is no maximum length for comments.
 
Syntax
    -- text_of_comment     

Examples
The following example uses the -- commenting characters.
 
Syntax
    -- Choose the sample database.      
    USE sample;      
    GO      
    -- Choose all columns and all rows from the OrderDetails table.      
    SELECT *      
    FROM OrderDetails      
    ORDER BY OrderId  ASC; -- We do not have to specify ASC because       
    -- that is the default.      


SQL Single Line Comments
Single line comments start with --. Any text between -- and the end of the line will be ignored (will not be executed). The following example uses a single-line comment as an explanation.
 
Syntax  
    --Select all:    
    SELECT * FROM OrderDetails ;  


The following example uses a single-line comment to ignore the end of a line.
 
Syntax
    SELECT * FROM OrderDetails -- WHERE OrderName='Coffee';   

The following example uses a single-line comment to ignore a statement.
 
Syntax
    --SELECT * FROM OrderDetails;    
    SELECT * FROM OrderDetails ;   


SQL Multi-line Comments
SQL Multi-line comments start with /* and end with */. Any text between /* and */ will be ignored. The following example uses a multi-line comment as an explanation.
 
Syntax  
    /*Select all the columns    
    of all the records    
    in the OrderDetails table:*/    
    SELECT * FROM OrderDetails;   

The following example uses a multi-line comment to ignore many statements.
 
Syntax
    /*SELECT * FROM Customers;    
    SELECT * FROM Products;    
    SELECT * FROM Orders;    
    SELECT * FROM Categories;*/    
    SELECT * FROM OrderDetails; 
 

To ignore just a part of a statement, also use the /* */ comment. The following example uses a comment to ignore part of a line.
 
Syntax
    SELECT CustomerName, /*City,*/ Country FROM Customers;   

The following example uses a comment to ignore part of a statement
 
Syntax
    SELECT * FROM OrderDetails WHERE (OrderName LIKE 'L%'    
    OR OrderName LIKE 'R%' /*OR OrderName  LIKE 'S%'    
    OR OrderName LIKE 'T%'*/ OR OrderName LIKE 'W%')    
    AND OrderName ='Mango'    
    ORDER BY OrderAddress;   


In this article, you learned how to use a SQL Comments statement with various options.



Node.js Hosting - HostForLIFE.eu :: 10 Reasons Why "Node.js" Is A First Choice For Web-App Development

clock December 4, 2020 09:00 by author Peter

Node.js was created by Ryan Dahl in 2009 and his work was supported by Joyent. The core idea behind its development was extending Javascript into something that can not only run in the browser but also operate on the machine as a standalone application.
 
What can Node.JS do? Can you use it to build your first highly-secured application?
 
If you are asking these questions, then you are in the right place. Today, we are going to inform you why there’s so much hype among the developers when it comes to Node.js.
 
With so many technologies for development, it can be tough to choose the one which you can easily master yet it can give you better results. Besides, as a beginner, it’s way tougher to choose. So why should you go for Node.js? What makes it so special? Let’s get started from the basics.
 
 
Along with Javascript, Node.js runs on the specific Javascript runtime engine, i.e., V8. This runtime engine takes your code from Javascript and transforms it into rapid machine code.
 
Besides, several top-notch apps like Uber, PayPal, Netflix, etc. state that Node.js has powered their web applications and has provided a much faster interface.
 
Why Node.js?
 
Node.js is a Javascript runtime environment that promotes open-source and cross-platform functionality. It enables the execution of Javascript outside a browser. With the help of Node.js, one can create a dynamic web application or web page by writing and running server-side scripts that build the page before it is sent to the user.
 
It provides a unique blend of helpers, libraries, and other tools that make the web app development process efficient, easier, and simpler to operate. Besides, it offers a powerful base to develop web apps while securing an online presence.
 
Node.js uses a non-blocking, event-driven I/O Model that turns it light and efficient. It has one of the largest open-source libraries ecosystems, NPM. Besides, it uses push technology on web sockets that allows 2-way communication between server and client. One of the perfect examples of this feature of Node.js is Chatbots. You might have come across one of those while visiting a website’s customer service as well.
 
So now that you have a clear understanding of what you can do with Node.js, let’s get to the details that make it astounding!
 
Reasons that Make Node.js Exceptional!
 
Fast & Scalable
The scalability that Node.js provides has boosted the profits of many organizations. As we have already discussed, Node.js runs on V8, so its computing speed is hard to beat; converting JS code into native machine code gives a speed of operation that has impressed institutions both large and small.
Besides, Node.js can run a large batch of asynchronous operations simultaneously, so unlike many other development technologies it can complete reading, writing, or modifying a database in a shorter timeline.
 
Supremely Extensible
Another vital feature of Node.js is its extensibility. According to the requirements you have, the capabilities it has can be constructed and extended. For any developer who wants to share data among the web server and client, Node.js is there for your aid. It saves the coder from modulating differences in syntax while writing for the backend.
 
Easy To Learn & Code
From the very beginning, Javascript has been part of the coding world and has improved and evolved with the internet. That means almost every programmer or developer has at least a little Javascript knowledge. And for those who don't know what Javascript is, it's a basic, simple language that anyone can learn in minimal time.
 
As the V8 engine is created for JS coding and deployment by Google Chrome, it makes your work problem-free, and easy. So to get fabulous deployment results, all you need to do is code with JS along with Node.js and your stunning web-app is on its way!
 
Enhanced Productivity
Being entirely based on Javascript, Node.js removes the need for separate front-end and back-end developers: you can do both with Node.js instead of relying on other programming languages, which in turn increases productivity.
 
Pervasive Runtime
With the arrival of Node.js, Javascript has been freed from the limitation of the environment as well. Now you can use JS on the client-side along with the server-side.
 
Regardless of where you are manipulating with the files, the effects can easily be seen on the other side.
 
Data Streaming
When it comes to Data Streaming, Node.js can effectively handle both input and output requests to support the online streaming functionality. It uses data streams to run certain operations at the same time it processes data.
 
Single Codebase
As you can write code in JS on both server and client-side, Node.js makes code execution and deployment faster and easier. Moreover, as language conversion is not required in Node.js, the data can be easily transferred from client to server and vice-versa.
 
NPM
NPM, the Node.js package manager, lets you pull different packages into an existing project. It makes development and performance robust, consistent, and quicker. There are more than 6,000 modules available for Node.js, a number that competes with Ruby's and will soon surpass it.
 
Database Query Resolutions
With Node.js working for both front-end and back-end, there is no need for you to worry about the translation of codes which also promotes flawless streaming while easily solving the database queries by itself.
 
Proxy Server
Node.js acts like a proxy server that gathers data resources and gives the third-party app enough time to perform the requested/required actions.
 
Conclusion
Node.js comes with plenty of benefits which makes it an adequate choice for developing a web application. While using it in your next project, you can not only assure less turnaround time, but also ensure an amazing output level.
 
If you want to empower yourself as a developer and you want the user of your web application to utilize the application to its highest extent in order to yield desirable outcomes, then Node.js is an ideal alternative.
 
Overall, it would not be wrong to say that Node.js has become the first choice for web app developers. There are several reasons Node.js has flourished so much and will undoubtedly reach great heights in the application development industry. It gives you what you want so you can offer creative solutions.



Node.js Hosting - HostForLIFE.eu :: Uploading File in Node.js

clock November 18, 2020 07:38 by author Peter

In this article we will look at uploading a file to a web server built with Node.js. Streams in Node.js make it super simple to upload files or, for that matter, to handle any data exchange between a server and a client. To transfer a file we will work with two modules, http and fs. So let us begin by loading these two modules in an application:

var http = require('http');
var fs = require('fs')


Once the modules are loaded, go ahead and create a web server as below:
http.createServer(function(request,response){   
  }).listen(8080);


So far so good. Now we want to do two things:
Create a destination write stream into which the contents of the uploaded file will be written.
Write back to the client the percentage of data that has been uploaded.

The first requirement can be met using pipe. pipe is a method of readable streams in Node.js, and the request is a readable stream, so we can pipe the request into the destination write stream.
var destinationFile = fs.createWriteStream("destination.md");     
      request.pipe(destinationFile);


The second requirement is to send back the percentage of data uploaded. To do that, first read the total size of the file being uploaded from the content-length header. Then, in the data event of the request, update uploadedBytes (which starts at zero), calculate the percentage, and write it back in the response.

Now it's time to put it all together. Your app should contain the following code to upload a file and return the percentage uploaded.
var http = require('http');
var fs = require('fs');

http.createServer(function(request, response){
    response.writeHead(200);
    var destinationFile = fs.createWriteStream("destination.md");
    request.pipe(destinationFile);
    var fileSize = request.headers['content-length'];
    var uploadedBytes = 0;
    request.on('data', function(d){
        uploadedBytes += d.length;
        var p = (uploadedBytes / fileSize) * 100;
        response.write("Uploading " + parseInt(p) + " %\n");
    });
    request.on('end', function(){
        response.end("File Upload Complete");
    });
}).listen(8080, function(){
    console.log("server started");
});

On a command prompt, start the server with node.

Now let us use curl --upload-file to upload a file to the server.

While the file is being uploaded, the percentage of data uploaded is written back to the client. This is how you can upload a file to a server built with Node.js. Hope this tutorial works for you!



AngularJS Hosting Europe - HostForLIFE.eu :: Angular Data Binding

clock October 16, 2020 07:42 by author Peter

Binding is basically the process of connecting data between the view of your application and its code behind.
In Angular, the view of the application is the HTML page and the code behind is the Component class written in typescript code.
 
There are different types of data binding in Angular,
 
Component to View using interpolation
This is one of the ways of bindings provided by the Angular framework. For this one, we need to have a class level property in our Component class which we use in our HTML using double curly braces.
 
For example, the below code snippet shows a piece of code in the component class. There are 3 properties: department, imgURL and showSpinner, out of which imgURL and showSpinner are already initialized, whereas department is just declared.
    department: any;  
    imgURL: string = "assets/photos/Department.jpg";  
    showSpinner: boolean = false;   


In our HTML file, these properties are used inside double curly braces to render these values directly on the browser. In our case, the imgURL represents the source of the image so that has to be used in the below manner, as shown in the below code snippet.
    <img src="{{imgURL}}">  

When the application gets rendered on the browser, {{imgURL}} gets replaced by assets/photos/Department.jpg.
 
In the real time application, this property is initialized dynamically at runtime.
 
Component to View using property binding
Just like interpolation, this is another type of one way binding. Just like the prior one, in property binding also, we need a class level property that has to be bound with an HTML control on the View. However, the syntax is a little different.
 
Let's use the same example to apply to property binding.
    department: any;  
    imgURL: string = "assets/photos/Department.jpg";  
    showSpinner: boolean = false;  

To use the property binding we need to use the below syntax. We need to enclose the property of the HTML control inside the square braces and enclose the Component property inside the quotes.
    <img [src]="imgURL">  

Note
While rendering the data on UI, interpolation converts the data into string, whereas property binding does not change the type and renders it as it is.
 
View to Component using event binding
 
This type of binding is used to bind data from View to Component, i.e. from the HTML page to the Component class. It is similar to events in plain Javascript; it can be a simple click event, a keyup, or any other. The only difference is that events in Angular have to be put inside parentheses; the rest is the same.
 
As shown in the below code snippet, there are 3 buttons with their respective click events. The methods handling those events are in the code behind file.
    <button (click) = 'addDepartment()' >Add </button>  
    <button (click) = 'editDepartment()' >Edit </button>  
    <button (click) = 'deleteDepartment()' >Delete </button>  

To Bind View and Component Simultaneously (two-way binding)
 
This type of binding is a little different from other frameworks. The two-way binding keeps the property in the Component class and the value of the HTML control in sync. Whenever we change the value of HTML control, the value of the property of the Component class also
changes.
 
To implement this type of binding in Angular, we use a special directive with a little bit different of a syntax.
    <input required [(ngModel)] = 'departmentName' name = 'departmentName' >  

As you can see in the above code snippet, a directive ngModel has been used inside 2 types of braces. The two braces signify two different bindings. The square brace is for property binding that we discussed as the second type and the circular is for event binding, the third one
that we discussed.
 
Let's talk a little about this textbox. Whenever the value of the textbox changes, an event is triggered, and through event binding the value is passed to the Component property, which gets updated. Similarly, whenever the property value changes, the value of the textbox is updated through property binding.

The Component property associated with this textbox in the above code is departmentName.
 
Take an example, when we are fetching some data by making an API call, at the beginning the textbox won't have any value but as soon as the value of the property in the Component class gets updated, the value of textbox will also get updated simultaneously.
 
Note
In order to use [(ngModel)] for two-way binding, the name attribute is a must. The Angular framework internally uses the name attribute to map the value of HTML control with the Component property.

HostForLIFE.eu AngularJS Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment and personal use market segments.

 


