Full Trust European Hosting

BLOG about Full Trust Hosting and Its Technology - Dedicated to European Windows Hosting Customer

Crystal Reports for ASP.NET 4.x Hosting: How to Print a Crystal Report Programmatically in ASP.NET?

clock February 17, 2021 09:24 by author Peter

You can print a Crystal Report using the print option of Crystal Report Viewer. However, there are occasions when you want your application to print a report directly to the printer without viewing the report in Crystal Report Viewer.
 
The ReportDocument class provides the PrintToPrinter method, which can be used to send a Crystal Report directly to the printer. If no printer is specified, the report is sent to the default printer.
 
The PrintToPrinter method takes four parameters.
nCopies : Indicates the number of copies to print.
collated : Indicates whether to collate the pages.
startPageN : Indicates the first page to print.
endPageN : Indicates the last page to print.
 
The following steps will guide you to achieve the same:
    Add a Crystal Report (.rpt) file to your ASP.NET application.
    Add a report instance on the page level.
        Dim report As MyReport = New MyReport

    Populate reports data on Page_Init
        ' Get data in a DataSet or DataTable  
          
        Dim ds As DataSet = GetData()  
        ' Fill report with the data  
        report.SetDataSource(ds)

    Print Report
        report.PrintToPrinter(1, False, 0, 0)

If you wish to print a certain page range, set the last two parameters to the first and last page numbers you want to print.
 
If you want to set page margins, you need to create a PageMargin object and set PrintOptions of the ReportDocument.
 

The following code sets page margins and printer name:
    Dim margins As PageMargins = Report.PrintOptions.PageMargins  
    margins.bottomMargin = 200  
    margins.leftMargin = 200  
    margins.rightMargin = 50  
    margins.topMargin = 100  
    Report.PrintOptions.ApplyPageMargins(margins)  

    ' Select the printer name  
    Report.PrintOptions.PrinterName = printerName



Europe SQL Hosting - HostForLIFEASP.NET :: SQL Server Performance Tuning Tips

clock February 10, 2021 12:02 by author Peter

In this article, we will learn about SQL Server performance tuning tips with examples.
 

Database
The database is the most important and powerful part of any application. If your database is not working properly and takes a long time to return results, something is going wrong in the database and it needs to be tuned, otherwise the performance of the application will degrade.

I know a lot of articles have already been published on this topic, but in this article I have tried to provide a list of database tune-up tips that cover all the aspects of the database. Database tuning is a critical and fussy process. It is true that database tuning is a database administrator's task, but we should still have a basic level of knowledge, because if we are working on a project with no dedicated admin, it is our responsibility to maintain the performance of the database. If the performance of the database degrades, it will badly affect the whole system.
 
In this article, I will explain some basic database tuning tips that I learned from my experience and from my friends who work as database administrators. Using these tips, you can maintain or improve the performance of your database system. These tips are written for SQL Server, but most of them also apply to other databases such as Oracle and MySQL. Please read them carefully and, at the end of the article, let me know if you find something wrong or incorrect.
 
Avoid Null value in the fixed-length field
We should avoid NULL values in fixed-length fields, because a NULL stored in a fixed-length field takes the same amount of space as an actual value. If a field must allow NULL, use a variable-length field instead, which takes much less space for NULL. Heavy use of NULLs can also reduce database performance, especially in WHERE clauses. For example, try to use varchar instead of char, and nvarchar instead of nchar.
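As a rough sketch (the table and column names here are made up purely for illustration), a fixed-length char(100) column would reserve 100 bytes even for NULL, while varchar(100) stores only the actual data:
    -- Illustrative table: prefer variable-length types for nullable text
    Create Table dbo.CustomerNotes
    (
        Id int Identity(1,1) Primary Key,
        Note varchar(100) NULL,         -- variable-length: a NULL takes almost no space
        NoteUnicode nvarchar(100) NULL  -- use nvarchar only when Unicode is really required
    )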
Never use Select * Statement

When we require all the columns of a table, we usually use a "Select *" statement. This is not a good approach, because when we use "select *", SQL Server expands * into all the column names before executing the query, which takes extra time and effort, and it usually returns more data than the application needs. So, always list the required column names in the query instead of using "select *".
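For example, using the Employee columns that appear in the index-hint example later in this article, list only what you need:
    -- Instead of: Select * From dbo.Employee
    Select Emp_IId, First_Name, Last_Name
    From dbo.Employee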
 
Normalize tables in a database

Normalized and well-managed tables increase the performance of a database. So, always try to normalize tables to at least third normal form (3NF). Not every table needs to reach 3NF, but a table that does can be considered well-structured.
 
Keep Clustered Index Small
A clustered index determines the physical order in which the table's data is stored. If the clustered index key is very large, it can reduce performance, because on a table with a large number of rows a large key increases the index size significantly (and the clustered key is repeated in every non-clustered index). Also, avoid indexing frequently changed columns, because every change to the data also has to modify the index, which can degrade performance.
 
Use Appropriate Datatype
If we select an appropriate data type, it will reduce the space used and improve performance; an inappropriate data type has the opposite effect. So, select the data type according to the requirement. SQL Server contains many data types that can store the same kind of data, but each data type has its own limitations and advantages over the others, so choose carefully.
 
Store image path instead of the image itself

I have found that many developers store the image itself in the database instead of the image path. It may genuinely be a requirement of the application to store images in the database, but in general we should store only the image path, because storing images in the database increases the database size and reduces performance.
 
USE Common Table Expressions (CTEs) instead of Temp table

We should prefer a CTE over a temp table, because temp tables are stored physically in tempdb and only go away when they are dropped or the session ends, while a CTE is just a named query expression that exists only for the statement that follows it. For this kind of one-off use, a CTE is very lightweight and usually faster than a temp table.
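A small sketch of the same lookup written first with a temp table and then with a CTE (the Is_Active column is assumed for illustration):
    -- Temp table version: rows are materialized in tempdb and must be dropped
    Select Emp_IId, First_Name
    Into #ActiveEmployees
    From dbo.Employee
    Where Is_Active = 1

    Select * From #ActiveEmployees
    Drop Table #ActiveEmployees

    -- CTE version: just a named query expression for the statement that follows
    ;With ActiveEmployees As
    (
        Select Emp_IId, First_Name
        From dbo.Employee
        Where Is_Active = 1
    )
    Select * From ActiveEmployees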
 
Use Appropriate Naming Convention

The main goal of adopting a naming convention for database objects is to make it easy for users to identify the type and purpose of every object contained in the database. A good naming convention decreases the time required to search for an object, and a good name clearly indicates what kind of object it is and what it does.
    * tblEmployees // Name of table  
    * vw_ProductDetails // Name of View  
    * PK_Employees // Name of Primary Key  


Use UNION ALL instead of UNION
We should prefer UNION ALL to UNION, because UNION removes duplicate rows, which requires an extra sort/duplicate-removal step and increases execution time. Also, UNION cannot be used with the legacy text data type, because text does not support sorting. So, whenever duplicates are acceptable, prefer UNION ALL.
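A minimal sketch (the FormerEmployee table is assumed for illustration):
    -- UNION removes duplicate rows, which adds an extra sort/distinct step
    Select Emp_IId From dbo.Employee
    Union
    Select Emp_IId From dbo.FormerEmployee

    -- UNION ALL simply appends the rows, so it is cheaper when duplicates are acceptable
    Select Emp_IId From dbo.Employee
    Union All
    Select Emp_IId From dbo.FormerEmployee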
 
Use Small data type for Index
It is very important to use a small data type for an index, because a bigger data type reduces the performance of the index. For example, nvarchar(10) uses 20 bytes of data while varchar(10) uses 10 bytes, so an index on the varchar column works better. We can also take the example of datetime and int: datetime takes 8 bytes and int takes 4 bytes. A smaller data type means less I/O overhead, which increases the performance of the index.
Use Count(1) instead of Count(*) and Count(Column_Name)

There is little practical difference in the performance of these three expressions, but the last two are not considered good practice (and Count(Column_Name) skips NULL values, so it can return a different result). So, always use count(1) to get the number of records in a table.
 
Use Stored Procedure
Instead of sending raw queries from the application, we should use stored procedures, because stored procedures are fast (their execution plans are cached and reused) and are easier to maintain and secure for large queries.
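A minimal sketch of a stored procedure over the Employee table used elsewhere in this article (the procedure name is hypothetical); note the usp_ prefix and SET NOCOUNT ON, both of which are discussed below:
    Create Procedure dbo.usp_GetEmployeeById
        @EmpId int
    As
    Begin
        Set NoCount On

        Select Emp_IId, First_Name, Last_Name
        From dbo.Employee
        Where Emp_IId = @EmpId
    End
The application then calls EXEC dbo.usp_GetEmployeeById @EmpId = 101 instead of sending the raw SELECT statement.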
 
Use Between instead of In
 
If Between can be used instead of IN, then always prefer Between. For example, you are searching for an employee whose id is either 101, 102, 103, or 104. Then, you can write the query using the In operator like this:
    Select * From Employee Where EmpId In (101,102,103,104)  

You can also use Between operator for the same query.
    Select * from Employee Where EmpId Between 101 And 104  

Use If Exists to determine the record
 
It has been seen many times that developers use "Select Count(*)" to check whether any matching records exist. For example:
    Declare @Count int;  
    Set @Count=(Select Count(*) From Employee Where EmpName Like '%Pan%')  
    If @Count>0  
    Begin  
        -- Statements  
    End  


But this is not the proper way to write such a query, because counting all the matches scans the complete table. You can use If Exists instead, which can stop as soon as the first matching row is found; that will increase the performance of your query, as below.
    IF Exists(Select EmpName From Employee Where EmpName Like '%Pan%')  
    Begin  
        -- Statements  
    End  


Never Use "sp_" Prefix for User-Defined Stored Procedures
Most programmers use the "sp_" prefix for user-defined stored procedures. I suggest never doing this, because in SQL Server the master database contains system stored procedures with the "sp_" prefix. When we create a stored procedure with the "sp_" prefix, SQL Server always looks first in the master database and only then in the user database, which takes extra time.
 
Practice to use Schema Name

A schema is an organization or structure for a database; we can define a schema as a collection of database objects that are owned by a single principal and form a single namespace. Specifying the schema name helps SQL Server find the object in that specific schema and speeds up query execution. For example, try to use [dbo] before the table name (dbo.Employee instead of Employee).
 
Avoid Cursors

A cursor is a temporary work area created in system memory when a SQL statement is executed: a set of rows together with a pointer that identifies the current row, used to retrieve data from a result set one row at a time. However, using a cursor is usually a bad idea, because fetching data row by row takes a long time. In many cases a set-based statement, a temporary table, or a WHILE loop can replace the cursor.
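For example, instead of opening a cursor over Employee and updating one row per FETCH, a single set-based statement does the same work in one pass (the Status and Is_Active columns are assumed for illustration):
    -- Set-based replacement for a row-by-row cursor update
    Update dbo.Employee
    Set Status = 'Archived'
    Where Is_Active = 0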
 
SET NOCOUNT ON

When an INSERT, UPDATE, DELETE, or SELECT command is executed, SQL Server returns a message with the number of rows affected by the query. Returning this message for every statement is unnecessary overhead, and we can suppress it by using SET NOCOUNT ON.
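A minimal sketch:
    Set NoCount On   -- suppress the "(n rows affected)" messages

    Update dbo.Employee
    Set Last_Name = 'Smith'
    Where Emp_IId = 101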
 
Use Try–Catch
In T-SQL, a TRY-CATCH block is very important for exception handling, and using it properly can save our data from undesired changes. We can put all the T-SQL statements in a TRY block and the exception-handling code in a CATCH block.
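A minimal sketch that forces an error and handles it in the CATCH block:
    Begin Try
        -- Division by zero raises an error and control jumps to the CATCH block
        Select 1 / 0
    End Try
    Begin Catch
        Select Error_Number() As ErrorNumber,
               Error_Message() As ErrorMessage
    End Catch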
 
Remove Unused Index

Remove all unused indexes, because an index must be maintained every time the table is modified, even if no query ever uses that index.
 
Always create an index on the table
An index is a data structure used to retrieve data quickly. Indexes are special lookup structures that the database engine can use to speed up data retrieval; in essence, an index is a pointer to data in a table. So, always try to keep at least one index on each table; it may be either a clustered or a non-clustered index.
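For example (index and column names assumed), a non-clustered index on a frequently searched column:
    Create NonClustered Index IX_Employee_LastName
        On dbo.Employee (Last_Name)
The clustered index is usually created automatically along with the primary key.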
 
Use Foreign Key with the appropriate action

A foreign key is a column or combination of columns whose values match the primary key of another table. Foreign keys are used to define a relationship and enforce integrity between two tables. In addition to protecting the integrity of our data, FK constraints also help document the relationships between our tables within the database itself. Also, define an action rule for the DELETE and UPDATE commands; you can select any action among NO ACTION, SET NULL, CASCADE, and SET DEFAULT.
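A sketch of a foreign key with an explicit delete rule (the Department table and Department_Id column are assumed for illustration):
    Alter Table dbo.Employee
    Add Constraint FK_Employee_Department
        Foreign Key (Department_Id) References dbo.Department (Department_Id)
        On Delete Cascade
        On Update No Action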
 
Use Alias Name

An alias temporarily renames a table or a column within a specific SQL statement. Using aliases, we can give a short name to a long one, which keeps queries shorter and easier to read and write.
 
Use Transaction Management

A transaction is a unit of work performed against the database: a set of T-SQL statements that execute together, in a specific logical order, as a single unit. If all the statements execute successfully, the transaction is committed and the data is saved in the database permanently. If any single statement fails, the entire transaction fails and is rolled back.
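A minimal sketch, assuming a hypothetical Account table, in which two updates either succeed together or are rolled back together:
    Begin Try
        Begin Transaction
        Update dbo.Account Set Balance = Balance - 100 Where AccountId = 1
        Update dbo.Account Set Balance = Balance + 100 Where AccountId = 2
        Commit Transaction        -- both updates become permanent
    End Try
    Begin Catch
        If @@TranCount > 0
            Rollback Transaction  -- or neither update is applied
    End Catch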
 
Use Index Name in Query

Although in most cases the query optimizer will pick the appropriate index for a specific table based on statistics, sometimes it is better to specify the index name in your SELECT query.
 
Example

    SELECT  
    e.Emp_IId,  
    e.First_Name,  
    e.Last_Name  
    FROM dbo.EMPLOYEE e  
    WITH (INDEX (Clus_Index))  
    WHERE e.Emp_IId > 5  
Select Limited Data


We should retrieve only the required data and ignore the unimportant data. The less data retrieved, the faster the query will run. Rather than filtering on the client, push as much filtering as possible to the server side. This results in less data being sent over the wire and you will see the results much faster.
 
Drop Index before Bulk Insertion of Data
We should drop the index before the insertion of a large amount of data. This makes the insert statement run faster. Once the inserts are completed, you can recreate the index again.
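A rough sketch (the index name and file path are assumptions for illustration):
    -- Drop the non-clustered index, load the data, then recreate the index
    Drop Index IX_Employee_LastName On dbo.Employee

    Bulk Insert dbo.Employee
    From 'C:\Data\employees.csv'
    With (FieldTerminator = ',', RowTerminator = '\n')

    Create NonClustered Index IX_Employee_LastName
        On dbo.Employee (Last_Name)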
 
Use Unique Constraint and Check Constraint

A Check constraint checks for a specific condition before inserting data into a table. If the data passes all the Check constraints then the data will be inserted into the table otherwise the data for insertion will be discarded. The CHECK constraint ensures that all values in a column satisfy certain conditions.
 
A Unique constraint ensures that each row has a unique value in the constrained column. It is like a primary key, but it can accept a single NULL value, and a table can contain more than one Unique constraint. We should use Check constraints and Unique constraints because they maintain the integrity of the database.
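For example (the Email and Salary columns are assumed for illustration):
    Alter Table dbo.Employee
    Add Constraint UQ_Employee_Email Unique (Email)

    Alter Table dbo.Employee
    Add Constraint CK_Employee_Salary Check (Salary >= 0)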
 
Importance of Column Order in index

If we are creating a non-clustered index on more than one column, we should consider the sequence of the columns. The order or position of a column in the index key plays a vital role in SQL query performance: an index can only help a query if the query's criteria match the leftmost columns of the index key. So, we should place the most selective column on the leftmost side of a non-clustered index.
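A small sketch with assumed column names:
    Create NonClustered Index IX_Employee_LastName_FirstName
        On dbo.Employee (Last_Name, First_Name)
    -- A query filtering on Last_Name (or on Last_Name and First_Name) can seek on this index;
    -- a query filtering only on First_Name cannot.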
 
Recompiled Stored Procedure
We all know that stored procedures execute T-SQL statements in less time than the same set of T-SQL statements executed individually, because the execution plan of a stored procedure is cached and reused. We also know that recompiling a stored procedure reduces SQL performance, but in some cases recompilation is required:
    Dropping or altering a column, index, or trigger of a table used by the procedure.
    Updating the statistics used by the execution plan of the stored procedure.
    Altering the procedure itself, which causes SQL Server to create a new execution plan.
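If you do need to force a recompile, here is a sketch using the hypothetical procedure from the stored-procedure example above:
    -- Mark the procedure so it is recompiled on its next execution
    Exec sp_recompile N'dbo.usp_GetEmployeeById'

    -- Or recompile just for one call
    Exec dbo.usp_GetEmployeeById @EmpId = 101 With Recompile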
 
Use Sparse Column

Sparse columns provide better storage for NULL values. If a column contains a large number of NULLs, prefer a sparse column over a regular SQL Server column, because a sparse column takes less space for NULL than a regular column (one without the SPARSE attribute).
 
Example
    Create Table Table_Name  
    (  
    Id int,              -- Default column  
    Group_Id int Sparse  -- Sparse column  
    )  

Avoid Loops In Coding

Suppose you want to insert 10 records into a table. Instead of using a loop to insert the rows one by one, you can insert all of them with a single insert statement.
    declare @int int;  
    set @int=1;  
    while @int<=10  
    begin  
    Insert Into Tab values(@int,'Value'+Cast(@int As varchar(10)));  
    set @int=@int+1;  
    end  


The above method is not a good approach for inserting multiple records; instead, you can use a single statement like the one below.
    Insert Into Tab values(1,'Value1'),(2,'Value2'),(3,'Value3'),(4,'Value4'),(5,'Value5'),(6,'Value6'),(7,'Value7'),(8,'Value8'),(9,'Value9'),(10,'Value10');  

Avoid Correlated Queries
In a correlated query, the inner query takes its input from the outer (parent) query, so the inner query runs once for each row of the outer query, which reduces the performance of the database.
    Select Name, City,  
        (Select Company_Name From Company co Where co.CustomerId = cs.CustomerId)  
    From Customer cs  


The better method is to use a join instead of the correlated query, as below.
    Select cs.Name, cs.City, co.Company_Name  
    From Customer cs  
    Join Company co On cs.CustomerId = co.CustomerId  


Avoid index and join hints

In some cases an index or join hint may increase the performance of a query, but once you provide a hint the server always tries to use it, even when a better execution plan exists, so this approach may reduce database performance. Use a join or index hint only if you are confident that there is no better execution plan; if you have any doubt, leave the server free to choose the execution plan.
 
Avoid Use of Temp table

Avoid using temp tables as much as you can, because a temp table is created in tempdb with a structure just like a basic table, and after completing the task we need to drop it, which adds load on the database. In many cases you can prefer a table variable instead.
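A minimal table-variable sketch (the column list and Is_Active filter are assumed for illustration):
    Declare @ActiveEmployees Table
    (
        Emp_IId int Primary Key,
        First_Name varchar(50)
    )

    Insert Into @ActiveEmployees (Emp_IId, First_Name)
    Select Emp_IId, First_Name
    From dbo.Employee
    Where Is_Active = 1

    Select * From @ActiveEmployees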
 
Use Index for required columns
Indexes should be created for the columns that are used in WHERE, GROUP BY, ORDER BY, TOP, and DISTINCT clauses.
 
Don't use Index
It is true that using an index speeds up data retrieval, but that is not always the case. In some situations an index does not help the query at all, and then we can avoid creating one:
    When the size of the table is very small.
    When the query optimizer does not use the index.
    When DML operations (INSERT, UPDATE, DELETE) are performed on the table very frequently.
    When the column contains TEXT or NTEXT data.

Use View for complex queries
If you join two or more tables and the result of the query is used frequently, it is better to create a View containing the result of that complex query. You can then use this View multiple times, so you don't have to rewrite the complex query each time you need the same result.
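A sketch built on the Customer/Company join used earlier in this article (the view name is made up):
    Create View dbo.vw_CustomerCompany
    As
    Select cs.Name, cs.City, co.Company_Name
    From dbo.Customer cs
    Join dbo.Company co On cs.CustomerId = co.CustomerId
    GO

    Select * From dbo.vw_CustomerCompany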
 
Make Transaction short

It is better to keep transactions as short as possible, because a long transaction keeps tables locked for longer and reduces database concurrency. So, always try to make transactions shorter.
 
Use Full-text Index
If your query contains multiple wildcard searches using LIKE ('%...%'), a Full-Text index can increase the performance. Full-text queries can include simple words and phrases or multiple forms of a word or phrase. A full-text query returns any document that contains at least one match (also known as a hit). A match occurs when a target document contains all the terms specified in the full-text query and meets any other search conditions, such as the distance between the matching terms.
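A rough sketch, assuming full-text search is installed and that PK_Employee is the name of the Employee table's primary key index:
    -- One-time setup: a full-text catalog and a full-text index on the searched column
    Create FullText Catalog ftCatalog As Default

    Create FullText Index On dbo.Employee (EmpName)
        Key Index PK_Employee

    -- Full-text query instead of EmpName LIKE '%Pan%' (matches rows containing the word 'Pan')
    Select * From dbo.Employee
    Where Contains(EmpName, 'Pan')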
 
Thanks for reading the article. As I said at the beginning, if you have any doubts, or if I wrote something wrong, please write back in the comments section.



SQL Server 2019 Hosting - HostForLIFEASP.NET :: Query to Repair Suspect Database In SQL Server

clock January 29, 2021 06:43 by author Peter

Query to Repair Suspect Database In SQL Server
This is the query to repair a suspect database in SQL Server.
If your database is marked as suspect, here are the steps to fix it.
In this query, replace "DbName" with the name of the suspect database and run the query in the master database.

Step 1
--command to recover suspected database

ALTER DATABASE DbName SET EMERGENCY

DBCC checkdb('DbName')

ALTER DATABASE DbName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

DBCC CheckDB ('DbName', REPAIR_ALLOW_DATA_LOSS)

ALTER DATABASE DbName SET MULTI_USER


Step 1 does not rebuild the indexes of that database, so we now have to rebuild them. The following query should be run in the suspect database itself.

Step 2
-- command to rebuild all indexes
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"



Europe SQL Hosting - HostForLIFEASP.NET :: SQL Comments Statement

clock January 15, 2021 08:10 by author Peter

A SQL comment can make your application easier to read and maintain. For example, we can include a comment in a statement that describes the purpose of the statement within your application. With the exception of hints, comments within a SQL statement do not affect the statement execution. Please refer to the documentation on hints for that particular form of comment.
 

A comment can appear between any keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:
    Begin the comment with a slash and an asterisk (/*) and proceed with the text of the comment, which can span multiple lines. End the comment with an asterisk and a slash (*/). The opening and terminating characters need not be separated from the text by a space or a line break.
    Begin the comment with -- (two hyphens) and proceed with the text of the comment. This text cannot extend to a new line; the comment is ended by a line break.

Some of the tools used to enter SQL have additional restrictions. For example, if you are using SQL*plus, by default you cannot have a blank line inside a multiline comment.
 
For more information, please refer to the documentation for the tool you use as an interface to the database. A SQL statement can contain multiple comments of both styles. The text of a comment can contain any printable characters in your database character set.
 
The comment statement indicates the user-provided text. Comments can be inserted on a separate line, nested at the end of a SQL command line, or within a SQL statement. The server does not evaluate the comment.  
 
SQL comments use two hyphens (--) for single-line or nested comments. Comments inserted with -- are terminated by a newline, which is specified with a carriage return character (U+000D), a line feed character (U+000A), or a combination of the two.
 
There is no maximum length for comments.
 
Syntax
    -- text_of_comment     

Examples
The following example uses the -- commenting characters.
 
Syntax
    -- Choose the sample database.      
    USE sample;      
    GO      
    -- Choose all columns and all rows from the OrderDetails table.      
    SELECT *      
    FROM OrderDetails      
    ORDER BY OrderId  ASC; -- We do not have to specify ASC because       
    -- that is the default.      


SQL Single Line Comments
Single line comments start with --. Any text between -- and the end of the line will be ignored (will not be executed). The following example uses a single-line comment as an explanation.
 
Syntax  
    --Select all:    
    SELECT * FROM OrderDetails ;  


The following example uses a single-line comment to ignore the end of a line.
 
Syntax
    SELECT * FROM OrderDetails -- WHERE OrderName='Coffee';   

The following example uses a single-line comment to ignore a statement.
 
Syntax
    --SELECT * FROM OrderDetails;    
    SELECT * FROM OrderDetails ;   


SQL Multi-line Comments
SQL Multi-line comments start with /* and end with */. Any text between /* and */ will be ignored. The following example uses a multi-line comment as an explanation.
 
Syntax  
    /*Select all the columns    
    of all the records    
    in the OrderDetails table:*/    
    SELECT * FROM OrderDetails;   

The following example uses a multi-line comment to ignore many statements.
 
Syntax
    /*SELECT * FROM Customers;    
    SELECT * FROM Products;    
    SELECT * FROM Orders;    
    SELECT * FROM Categories;*/    
    SELECT * FROM OrderDetails; 
 

To ignore just a part of a statement, also use the /* */ comment. The following example uses a comment to ignore part of a line.
 
Syntax
    SELECT CustomerName, /*City,*/ Country FROM Customers;   

The following example uses a comment to ignore part of a statement
 
Syntax
    SELECT * FROM OrderDetails WHERE (OrderName LIKE 'L%'    
    OR OrderName LIKE 'R%' /*OR OrderName  LIKE 'S%'    
    OR OrderName LIKE 'T%'*/ OR OrderName LIKE 'W%')    
    AND OrderName ='Mango'    
    ORDER BY OrderAddress;   


In this article, you learned how to use a SQL Comments statement with various options.



Node.js Hosting - HostForLIFE.eu :: 10 Reasons Why "Node.js" Is A First Choice For Web-App Development

clock December 4, 2020 09:00 by author Peter

Node.js was created by Ryan Dahl in 2009 and his work was supported by Joyent. The core idea behind its development was extending Javascript into something that can not only run in the browser but also operate on the machine as a standalone application.
 
What can Node.JS do? Can you use it to build your first highly-secured application?
 
If you are asking these questions, then you are in the right place. Today, we are going to inform you why there’s so much hype among the developers when it comes to Node.js.
 
With so many technologies for development, it can be tough to choose the one which you can easily master yet it can give you better results. Besides, as a beginner, it’s way tougher to choose. So why should you go for Node.js? What makes it so special? Let’s get started from the basics.
 
 
Node.js runs on the V8 Javascript runtime engine. This engine takes your Javascript code and compiles it into fast machine code.
 
Besides, several top-notch apps like Uber, PayPal, Netflix, etc. state that Node.js has powered their web applications and has provided a much faster interface.
 
Why Node.js?
 
Node.js is an open-source, cross-platform Javascript runtime environment. It allows Javascript to be executed outside a browser. With the help of Node.js, one can create a dynamic web application or web page by writing and running server-side scripts before the page is sent to the user.
 
It provides a unique blend of helpers, libraries, and other tools that make the web app development process efficient, easier, and simpler to operate. Besides, it offers a powerful base to develop web apps while securing an online presence.
 
Node.js uses a non-blocking, event-driven I/O model that makes it lightweight and efficient. It has one of the largest open-source library ecosystems, NPM. Besides, it uses push technology over WebSockets, which allows two-way communication between server and client. One of the best-known examples of this capability is chatbots; you might have come across one while visiting a website's customer service page.
 
So now that you have a clear understanding of what you can do with Node.js, let’s get to the details that make it astounding!
 
Reasons that Make Node.js Exceptional!
 
Fast & Scalable
The scalability that Node.js provides to an organization has boosted their profits. As we have already discussed that Node.js runs on V8, its speed in terms of computing is unbeatable. With the new JS code conversion into the native language, the outcoming speed of operation has inspired several large and small institutions.
Besides, Node.js can help you with its ability to run a large batch of asynchronous processes simultaneously. Unlike other technologies for development, Node.js can complete reading, writing, or modifying a database in a shorter timeline.
 
Supremely Extensible
Another vital feature of Node.js is its extensibility. According to the requirements you have, the capabilities it has can be constructed and extended. For any developer who wants to share data among the web server and client, Node.js is there for your aid. It saves the coder from modulating differences in syntax while writing for the backend.
 
Easy To Learn & Code
From the very beginning, Javascript has been introduced in the coding world. It has improved and evolved itself with the internet. That means, almost every programmer or developer has a little bit of Javascript knowledge. But for those who don’t know what the heck is Javascript, it’s the basic and simple language that anyone can efficiently learn in minimum time.
 
As the V8 engine was created by Google for Chrome to compile and run JS quickly, it makes your work problem-free and easy. So, to get great deployment results, all you need to do is code with JS along with Node.js and your stunning web app is on its way!
 
Enhanced Productivity
Being entirely based on Javascript, Node.js removes the requirement for having different developers. Be it front-end or back-end, you can easily do it with Node.js instead of relying on other programming languages to complete the task which in return increases productivity.
 
Pervasive Runtime
With the arrival of Node.js, Javascript has been freed from the limitation of the environment as well. Now you can use JS on the client-side along with the server-side.
 
Regardless of where you are manipulating with the files, the effects can easily be seen on the other side.
 
Data Streaming
When it comes to Data Streaming, Node.js can effectively handle both input and output requests to support the online streaming functionality. It uses data streams to run certain operations at the same time it processes data.
 
Single Codebase
As you can write code in JS on both server and client-side, Node.js makes code execution and deployment faster and easier. Moreover, as language conversion is not required in Node.js, the data can be easily transferred from client to server and vice-versa.
 
NPM
NPM, the Node Package Manager, lets you pull packages from its ecosystem into your own project. It makes development robust, consistent, and quicker. There are hundreds of thousands of modules available through NPM, making it one of the largest package ecosystems of any language.
 
Database Query Resolutions
With Node.js working for both front-end and back-end, there is no need for you to worry about the translation of codes which also promotes flawless streaming while easily solving the database queries by itself.
 
Proxy Server
Node.js acts like a proxy server that gathers data resources and gives the third-party app enough time to perform the requested/required actions.
 
Conclusion
Node.js comes with plenty of benefits which makes it an adequate choice for developing a web application. While using it in your next project, you can not only assure less turnaround time, but also ensure an amazing output level.
 
If you want to empower yourself as a developer and you want the user of your web application to utilize the application to its highest extent in order to yield desirable outcomes, then Node.js is an ideal alternative.
 
Overall, it would not be wrong to say that Node.js has become the first choice for web app developers. There are several reasons Node.js has flourished so much and will undoubtedly reach great heights in the application development industry. It gives you what you want so you can offer creative solutions.



Node.js Hosting - HostForLIFE.eu :: Uploading File in Node.js

clock November 18, 2020 07:38 by author Peter

In this article we will look at uploading a file to a web server built with Node.js. Streams in Node.js make this task super simple, whether for uploading files or for any data exchange between a server and a client. To upload a file we will work with two modules, http and fs. So let us begin by loading these two modules in an application:

var http = require('http');
var fs = require('fs')


Once the modules are loaded, go ahead and create a web server as below:
http.createServer(function(request,response){   
  }).listen(8080);


So far so good; now we want to use the following approach:
Create a destination write stream. The content of the uploaded file will be written into this stream. We also need to write back to the client the percentage of data that has been uploaded.

The first requirement can be handled using a pipe. pipe() is a method of readable streams in Node.js, and the request is a readable stream. So we will use pipe() to write the request data into the destination write stream.
var destinationFile = fs.createWriteStream("destination.md");     
      request.pipe(destinationFile);


The second requirement is to give back the percentage of data uploaded. To do that, first read the total size of the file being uploaded from the content-length header (line number 1 in the following code snippet). Then, in the data event of the request, we update uploadedBytes, which starts at zero (line number 2), calculate the percentage, and write it back in the response.

Now it's time to put it all together. Your app should contain the following code to upload a file and return the percentage uploaded.
var http = require('http');
var fs = require('fs');
  http.createServer(function(request,response){    
    response.writeHead(200);
      var destinationFile = fs.createWriteStream("destination.md");      
      request.pipe(destinationFile);
      var fileSize = request.headers['content-length'];
      var uploadedBytes = 0 ;
      request.on('data',function(d){  
          uploadedBytes += d.length;
          var p = (uploadedBytes/fileSize) * 100;
          response.write("Uploading " + parseInt(p)+ " %\n");
     });
      request.on('end',function(){
            response.end("File Upload Complete");
          });
    }).listen(8080,function(){        
        console.log("server started");
         });

Start the server from a command prompt; you should see "server started" logged to the console.

Now let us use curl --upload-file to upload a file to the server.

As you can see, while the file is being uploaded, the percentage of data uploaded is returned to the client. This is how you can upload a file to a server built using Node.js. Hope this tutorial works for you!



AngularJS Hosting Europe - HostForLIFE.eu :: Angular Data Binding

clock October 16, 2020 07:42 by author Peter

Binding is basically the process of connecting data between the view of your application and it's code behind.
In Angular, the view of the application is the HTML page and the code behind is the Component class written in typescript code.
 
There are different types of data binding in Angular,
 
Component to View using interpolation
This is one of the ways of bindings provided by the Angular framework. For this one, we need to have a class level property in our Component class which we use in our HTML using double curly braces.
 
For example, the below code snippet shows a piece of code from the component class. There are 3 properties: department, imgURL and showSpinner, of which imgURL and showSpinner are already initialized, whereas department is just declared.
    department: any;  
    imgURL: string = "assets/photos/Department.jpg";  
    showSpinner: boolean = false;   


In our HTML file, these properties are used inside double curly braces to render these values directly on the browser. In our case, the imgURL represents the source of the image so that has to be used in the below manner, as shown in the below code snippet.
    <img src="{{imgURL}}">  

When the application gets rendered on the browser, {{imgURL}} gets replaced by assets/photos/Department.jpg.
 
In the real time application, this property is initialized dynamically at runtime.
 
Component to View using property binding
Just like interpolation, this is another type of one way binding. Just like the prior one, in property binding also, we need a class level property that has to be bound with an HTML control on the View. However, the syntax is a little different.
 
Let's use the same example to apply to property binding.
    department: any;  
    imgURL: string = "assets/photos/Department.jpg";  
    showSpinner: boolean = false;  

To use the property binding we need to use the below syntax. We need to enclose the property of the HTML control inside the square braces and enclose the Component property inside the quotes.
    <img [src]="imgURL">  

Note
While rendering the data on UI, interpolation converts the data into string, whereas property binding does not change the type and renders it as it is.
 
View to Component using event binding
 
This type of binding is used to bind data from View to Component i.e. from the HTML page to the Component class. This one is similar to the events of simple javascript. It can either be a simple click event, a keyup, or any other. The only difference is that the events in Angular have to be put inside circular braces, the rest all is same.
 
As shown in the below code snippet, there are 3 buttons with their respective click events. The methods handling those events are in the code behind file.
    <button (click) = 'addDepartment()' >Add </button>  
    <button (click) = 'editDepartment()' >Edit </button>  
    <button (click) = 'deleteDepartment()' >Delete </button>  

To Bind View and Component Simultaneously (two-way binding)
 
This type of binding is a little different from other frameworks. The two-way binding keeps the property in the Component class and the value of the HTML control in sync. Whenever we change the value of HTML control, the value of the property of the Component class also
changes.
 
To implement this type of binding in Angular, we use a special directive with a little bit different of a syntax.
    <input required [(ngModel)] = 'departmentName' name = 'departmentName' >  

As you can see in the above code snippet, a directive ngModel has been used inside 2 types of braces. The two braces signify two different bindings. The square brace is for property binding that we discussed as the second type and the circular is for event binding, the third one
that we discussed.
 
Let's talk a little about this textbox. Whenever there is any change in the value of this textbox, an event gets triggered, and by means of event binding the value gets passed to the Component property, which gets updated. Similarly, whenever there is any change in the property value, the value of the textbox also gets updated by means of property binding.

The Component property associated with this textbox in the above code is departmentName.
 
Take an example, when we are fetching some data by making an API call, at the beginning the textbox won't have any value but as soon as the value of the property in the Component class gets updated, the value of textbox will also get updated simultaneously.
 
Note
In order to use [(ngModel)] for two-way binding, the name attribute is a must. The Angular framework internally uses the name attribute to map the value of HTML control with the Component property.

HostForLIFE.eu AngularJS Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes. We have customers from around the globe, spread across every continent. We serve the hosting needs of the business and professional, government and nonprofit, entertainment and personal use market segments.

 



European Visual Studio 2017 Hosting - HostForLIFE.eu :: Exporting Comments In Visual Studio

clock October 15, 2020 10:08 by author Peter

In this blog, we will be talking about how transactions take place in Entity Framework. DbContext.Database.BeginTransaction() method creates a new transaction for the underlying database and allows us to commit or roll back changes made to the database using multiple SaveChanges method calls.

The following example demonstrates creating a new transaction object using BeginTransaction(), which is, then, used with multiple SaveChanges() calls.

using(var context = new SchoolContext()) { 
using(DbContextTransaction transaction = context.Database.BeginTransaction()) { 
    try { 
        var standard = context.Standards.Add(new Standard() { 
            StandardName = "1st Grade" 
        }); 
        context.Students.Add(new Student() { 
            FirstName = "Rama2", 
                StandardId = standard.StandardId 
        }); 
        context.SaveChanges(); 
        context.Courses.Add(new Course() { 
            CourseName = "Computer Science" 
        }); 
        context.SaveChanges(); 
        transaction.Commit(); //save the changes 
    } catch (Exception ex) { 
        transaction.Rollback(); //rollback the changes on exception 
        Console.WriteLine("Error occurred."); 
    } 
} 
} 

In the above example, we created new entities - Standard, Student, and Course and saved these to the database by calling two SaveChanges(), which execute INSERT commands within one transaction.

If an exception occurs, then the whole changes made to the database will be rolled back.

I hope it's helpful.



Windows Server 2016 SSD Hosting - - HostForLIFE.eu :: Dedicated Servers As The Secured Solutions

clock September 25, 2020 09:16 by author Peter

When it comes to the option of dedicated servers, you may find it costly in comparison to other web hosting options. But ultimately, the choice is worth making because plenty of commercial benefits are integrated into this web hosting plan. Let’s see how it is a better option than others web hosting plans.

Better uptime
In dedicated hosting arrangements, the service provider covers the SLA, including the handling of hardware failure. The service provider maintains a 24x7 support team. With expert skill sets and ITIL-compliant methods, you can be sure of high uptime.

Cost efficiency
This is a cost-efficient option. Under this plan, the dedicated hosting service provider is responsible for upgrades and maintenance of the hardware, for maintaining connectivity, and for offering a suitable physical environment. As a user, you have no obligation to pay for the entire server room or to employ a server administrator; you only pay for the services you actually use.

Reliable bandwidth
Under this web hosting plan you will get to enjoy higher internet speed. There is no chance to lose the speed as there is no risk of sharing the connection. This will help in faster communication, upload management, and uninterrupted business presence.

Complete control on applications
If you select dedicated web hosting, you will enjoy a complete monopoly of decisions about using site management tools and allied other applications to boost your hosting environment. However, about the tools, you need to get prior approval from your hosting service provider that they will be able to give you backend support to maintain them.

Better security arrangement
Dedicated hosting service offers uninterrupted access to physical server. The security arrangement includes supervision cameras, Biometric Access Control System, round-the-clock patrolling, etc. for improved security. Advanced service providers often provide additional supports like DDos guard, web application firewall, VAPT, and security event management.

These reasons clearly justify why a dedicated server is a better option. Although expensive, this category of web hosting service offers an excellent ROI (return on investment).



AngularJS Hosting Europe - HostForLIFE.eu :: How to Create Strong Password for AngularJS Pages?

clock September 18, 2020 08:28 by author Peter

In this post, let me explain how to add a strong-password check to AngularJS pages. A strong password needs a combination of special characters, capital letters, small letters, digits, etc. Write the following code:

    <!DOCTYPE html> 
    <html> 
    <head> 
        <title>Strong Password for Angular UI Pages</title>            
        <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.8/angular.min.js"></script>   
        <script> 
            var app = angular.module("myApp", []); 
            app.controller("myCtrl", function ($scope) {        
                var strongRegularExp = new RegExp("^(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[!@#\$%\^&\*])(?=.{8,})");        
                var mediumRegularExp = new RegExp("^(((?=.*[a-z])(?=.*[A-Z]))|((?=.*[a-z])(?=.*[0-9]))|((?=.*[A-Z])(?=.*[0-9])))(?=.{6,})");        
                $scope.checkpwdStrength = { 
                    "width": "150px", 
                    "height": "25px", 
                    "float": "right" 
                };        
                $scope.validationInputPwdText = function (value) { 
                    if (strongRegularExp.test(value)) { 
                        $scope.checkpwdStrength["background-color"] = "green"; 
                        $scope.userPasswordstrength = 'You have a Very Strong Password now'; 
                    } else if (mediumRegularExp.test(value)) { 
                        $scope.checkpwdStrength["background-color"] = "orange"; 
                        $scope.userPasswordstrength = 'Strong password, Please give a very strong password';  
                    } else { 
                        $scope.checkpwdStrength["background-color"] = "red"; 
                        $scope.userPasswordstrength = 'Weak Password , Please give a strong password'; 
                    }                  
};        
           }); 
        </script> 
    </head> 
    <body ng-app="myApp"> 
        <div ng-controller="myCtrl" style="border:5px solid gray; width:800px;"> 
            <div> 
                <h3>Strong Password for Angular UI Pages </h3> 
            </div> 
            <div style="padding-left:25px;">                  
<div ng-style="checkpwdStrength"></div> 
                <input type="password" ng-model="userPassword" ng-change="validationInputPwdText(userPassword)" class="class1" /> 
                <b> {{userPasswordstrength}}</b> 
            </div> 
            <br /> 
            <br /> 
            <br /> 
        </div> 
    </body> 
    </html> 




About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

