How fast you work with SQL often shows how familiar you are with it, and speed is directly proportional to the shortcuts you use. Let us discuss some useful keyboard shortcuts that help in daily activities with SSMS.

  1. Ctrl + N – New query window
  2. Ctrl + K, Ctrl + C/U – Comment the selected lines (C) / Uncomment (U)
  3. Ctrl + Shift + L/U – Make the selected word lower (L) / upper (U) case
  4. Ctrl + W – Select the current word (Ctrl + Shift + Right Arrow)
  5. Ctrl + U – Jump to the database dropdown to change the current database
  6. Ctrl + F5 – Parse the query
  7. Ctrl + E – Execute (F5)
  8. Ctrl + (Arrow) – Move the cursor in the arrow's direction to the next word or special character
  9. Ctrl + Space / Ctrl + J (Alt + Right Arrow) – List members (IntelliSense)
  10. Ctrl + M – Include the actual execution plan
  11. Ctrl + L – Display the estimated execution plan
  12. Ctrl + Shift + V – Cycle through the clipboard ring (paste any of the previously copied values)
  13. F8 – Object Explorer
  14. Ctrl + Alt + T – Template Explorer
  15. Ctrl + Tab – Move between SSMS query window tabs
  16. Ctrl + D/T – Change the results format [Grid (D), Text (T)]
  17. Ctrl + Q – Query Designer
  18. Ctrl + R – Show/hide the results pane
  19. Ctrl + H – Find and replace (Ctrl + Shift + H – you can experiment with it on your system)
  20. Ctrl + Shift + T – Transpose the words on either side of a special character
  21. Ctrl + F4 – Close the current query window
  22. Ctrl + \, Ctrl + E – Error List
  23. F10 – Step over (debugging shortcut)
  24. F11 – Step into (debugging shortcut)
  25. F9 – Toggle breakpoint (debugging shortcut)

SSMS also supports all the standard Windows shortcuts such as Ctrl + C, Ctrl + V, and so on.

Apart from these many built-in shortcuts, Microsoft also provides an option to define your own.


In the “Tools” menu, open “Options” -> “Keyboard” -> “Query Shortcuts”; here you can set your own keyboard shortcuts. For example, if you want to select 100 rows from a table, assign a key combination to “SELECT TOP 100 * FROM ”, then select any table name in the query window and press that combination to get the result. In the same way, you can map frequently used keywords to shortcuts.

Please do start playing with SSMS, find more shortcuts, and leave them in the comment box.


In my previous post we discussed the (non-clustered) columnstore index; two years later, in SQL Server 2014, Microsoft introduced the clustered columnstore index. The main limitation of columnstore-indexed tables was that they were read-only (not apt for an OLTP database); in the new version, Microsoft introduced the clustered columnstore index and eliminated that data-manipulation restriction.

Let us discuss how Microsoft achieved this feature.

Delta store (Insert): Insert operations are costly on a columnstore index, so to overcome this SQL Server introduced a new storage area called the delta store. In the delta store, data is kept in the traditional row-store format, and it is later moved into the columnstore by a mechanism called the tuple mover. The tuple mover runs like a background job every 5 minutes and moves the rows into the columnstore index without taking the table offline. The delta store is bypassed in the case of a large bulk insert.

The tuple mover waits for a delta store to reach 1,048,576 rows; once it does, that delta store is closed and becomes ready to move into the columnstore index. When the tuple mover finds such a closed delta store, it pulls the data out, compresses it, and moves it into the columnstore index. Meanwhile, new data can be inserted into another available delta store, so the whole process happens online.
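The flow above can be sketched in T-SQL. This is a minimal sketch with hypothetical table and index names, assuming a SQL Server 2014 instance; the `sys.column_store_row_groups` DMV shows whether each row group is still a delta store or already compressed.

```sql
-- Hypothetical names; assumes a SQL Server 2014 instance.
CREATE TABLE dbo.FactSales
(
    OrderId INT,
    Amount  MONEY
);

-- A clustered columnstore index turns the whole table into columnstore storage.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Trickle inserts land in the delta store (row-store format).
INSERT INTO dbo.FactSales VALUES (1, 100.00), (2, 200.00);

-- Row group states: OPEN = delta store accepting rows,
-- CLOSED = full and waiting for the tuple mover,
-- COMPRESSED = already moved into the columnstore.
SELECT state_description, total_rows, deleted_rows
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.FactSales');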

Delete bitmap (Delete): Performing an efficient delete from a columnstore index is a hard task, so SQL Server introduced a structure called the delete bitmap, which records which rows have been deleted. When a user requests rows, SQL Server returns only the records that do not appear in the delete bitmap. So no change actually happens in the columnstore index when a delete is performed, and the deleted data still exists in the index pages. The index pages are refreshed only when the index is rebuilt or the tuple mover runs.

Update: We have already discussed how delete and insert work in columnstore indexes; for an update, SQL Server uses both operations. It deletes the existing row via the delete bitmap (a virtual delete) and inserts the new row into the delta store as a normal insert. The next time a SELECT requests that row, the delete bitmap hides the old version and the new version is served from the delta store.
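The delete-plus-insert behavior can be observed from the row-group metadata. A hedged sketch, assuming an existing clustered columnstore table (the `dbo.FactSales` name here is hypothetical) with some row groups already compressed, on SQL Server 2014+:

```sql
-- Hypothetical names; assumes an existing clustered columnstore table.
UPDATE dbo.FactSales SET Amount = 250.00 WHERE OrderId = 2;

-- For a row already compressed into the columnstore, the old version is
-- only flagged in the delete bitmap (deleted_rows increases on the
-- compressed row group) and the new version goes into the delta store.
SELECT state_description, total_rows, deleted_rows
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.FactSales');
```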


Now that we understand how insert and delete operations work in a columnstore index, note that in SQL Server 2014 we can create only one columnstore index on a table.

Non-clustered columnstore indexes were introduced in SQL Server 2012; here we are going to discuss the structure, limitations, and advantages of the non-clustered columnstore index based on SQL Server 2012.

An index is nothing but an arrangement of data that lets us reach each row in the easiest way (we are not discussing selectivity and density here, since this is an overview of the columnstore index). When columnstore indexes came into the picture, people started thinking in terms of columns instead of rows. Whether the user is interested in a single column or all of them, SQL Server has to load the entire row into memory and show only the requested columns. If the user is interested in only one column, why should the system waste time reading the entire row?

Let us see how SQL Server deals with this and how data is stored in the case of a columnstore index.

Row store method data storage


Column store method data storage


From the pictures above you can see that with a columnstore index the data for a single column is stored together in a page. This improves compression (since the data is homogeneous), and if you are interested in a single column, the engine has to read only that column; the overhead caused by the row-store method is eliminated.


  1. The table becomes read-only once you create a columnstore index.
  2. Sparse columns are not supported.
  3. Cannot be created on a view or indexed view.
  4. Cannot be created by using the INCLUDE keyword.
  5. Cannot include the ASC or DESC keywords for sorting the index. Columnstore indexes are ordered according to the compression algorithms; sorting would eliminate many of the performance benefits.

Note: if you need to pull all the rows from a table, a columnstore index is definitely not going to help you in terms of performance.


The above screenshot is from a SQL Server 2012 instance running on my PC. I created a table and a non-clustered columnstore index on it. When I then tried to insert a row, I got the error shown.
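The flow in the screenshot can be sketched like this. A minimal sketch with hypothetical table and index names, assuming a SQL Server 2012 instance:

```sql
-- Hypothetical names; assumes a SQL Server 2012 instance.
CREATE TABLE dbo.SalesHistory
(
    OrderId   INT,
    OrderDate DATE,
    Amount    MONEY
);

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_SalesHistory
    ON dbo.SalesHistory (OrderId, OrderDate, Amount);

-- On SQL Server 2012 this INSERT fails, because the non-clustered
-- columnstore index makes the table read-only.
INSERT INTO dbo.SalesHistory VALUES (1, '2013-01-01', 100.00);

-- To load data, drop or disable the index first, then rebuild it.
ALTER INDEX NCCI_SalesHistory ON dbo.SalesHistory DISABLE;
INSERT INTO dbo.SalesHistory VALUES (1, '2013-01-01', 100.00);
ALTER INDEX NCCI_SalesHistory ON dbo.SalesHistory REBUILD;
```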

Now I suggest you create one more columnstore index on the same table and see what happens; please do comment the result below.

Note: this post describes the columnstore index based on SQL Server 2012; a few of these limitations have been eliminated in later versions.

Ref: MSDN, for more details about the columnstore index.

This is another interesting feature Microsoft introduced as part of the SQL Server 2016 release. We had traditional methods like magic tables and CDC to audit and log the history of a table; with this release Microsoft brings a new, more detailed feature called temporal tables. Note that this is not a replacement for CDC; this feature helps track the life span of each row in a table.

Temporal means “relating to time”; all we are going to log is the life span of each record. If you enable the property on a table, SQL Server creates a version table, and the history of each row is kept in that newly created table. With this feature I can easily identify that a row with particular values existed at a particular time.


In the above screenshot you can see the CREATE statement at the top of the query window; it is the same statement we use to create any table, but to enable the property we must specify WITH (SYSTEM_VERSIONING = ON) to create a history table, and we also have to add two extra columns to store the life span of each record (a start time and an end time).
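A create statement along those lines might look like this. A sketch with hypothetical column and history-table names, assuming SQL Server 2016+; the period columns and the SYSTEM_VERSIONING clause are the parts that differ from a normal CREATE TABLE:

```sql
-- Hypothetical column/history-table names; assumes SQL Server 2016+.
CREATE TABLE dbo.TestTemporal
(
    Id        INT PRIMARY KEY,
    Name      VARCHAR(50),
    -- The two extra columns holding each row's life span:
    StartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    EndTime   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (StartTime, EndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TestTemporalHistory));
```

If you omit the HISTORY_TABLE clause, SQL Server generates a history table name for you.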

In the above example I created a table named TestTemporal; since the system-versioning property is on, SQL Server creates a history table as shown in the figure. The history table contains the same columns, and the history is stored in it.

As shown in the picture, I inserted two rows into the table, then updated one and deleted the other. After these operations I selected from TestTemporal and its history table. In the history table's result we can see the old versions of the updated and deleted records, along with the period during which they were present in the main table. In the main table the record's end time shows as infinite (the maximum datetime2 value) because the record still exists in the table.
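The operations in the picture can be sketched as follows, assuming the TestTemporal table from the post with a hypothetical history table name:

```sql
-- Assumes a temporal table dbo.TestTemporal; the history table name
-- dbo.TestTemporalHistory is hypothetical.
INSERT INTO dbo.TestTemporal (Id, Name) VALUES (1, 'Alice'), (2, 'Bob');
UPDATE dbo.TestTemporal SET Name = 'Alicia' WHERE Id = 1;
DELETE FROM dbo.TestTemporal WHERE Id = 2;

-- Current rows: the end time is the maximum datetime2 value (9999-12-31).
SELECT * FROM dbo.TestTemporal;

-- Old row versions, with the period during which each was valid.
SELECT * FROM dbo.TestTemporalHistory;

-- Or ask for the state of the table as of a point in time.
SELECT * FROM dbo.TestTemporal
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';
```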


Now let us discuss design changes on a versioned table. If a table has the system-versioning property on, you will not be able to drop it or change its definition without first changing the property. If you change the versioned table to a normal table, the history table is converted to a normal table; once we turn the normal table back into a versioned table, a new history table is created.
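A sketch of that round trip, using the TestTemporal table name from the post:

```sql
-- Assumes the temporal table dbo.TestTemporal from the post.
-- 1. Stop versioning before changing the definition.
ALTER TABLE dbo.TestTemporal SET (SYSTEM_VERSIONING = OFF);

-- 2. Both tables are now normal tables, so schema changes are allowed.
ALTER TABLE dbo.TestTemporal ADD Email VARCHAR(100) NULL;

-- 3. Turn versioning back on; without naming the old history table,
--    SQL Server creates a fresh history table.
ALTER TABLE dbo.TestTemporal SET (SYSTEM_VERSIONING = ON);
```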

Hope you understand the feature, please do leave your comments.

Watch Demo to know more about temporal tables.

When we think of deleting duplicates, we are familiar with deleting data by generating a row number for each row (either using a derived table or a CTE). Here I would like to show you a method to identify each row among duplicated records without using the row-number or temp-table logic.

The method breaks down the very concept of duplicate rows in SQL Server, and the keyword involved works from SQL Server 2005 onwards. As I said, there are no truly duplicate rows in SQL Server: even when rows are duplicated, each row is stored separately and can be identified by an internal storage id, the row id. Let us see how to utilize this id to solve our problem.


Above I created a table and inserted duplicate rows; in the SELECT query I used the “%%physloc%%” keyword, which displays the row id of each row (a hexadecimal value of the location where the row is stored). We can use this row id to delete duplicate rows from the table with very simple logic.


Here I deleted the duplicate rows simply by executing the first query in the above screenshot.
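A sketch of the same logic, with hypothetical table and column names; note that %%physloc%% is undocumented, so treat this as an exploration technique rather than a production pattern:

```sql
-- Hypothetical names; %%physloc%% is an undocumented keyword.
CREATE TABLE dbo.DupDemo (Id INT, Name VARCHAR(50));

INSERT INTO dbo.DupDemo VALUES (1, 'A'), (1, 'A'), (2, 'B'), (2, 'B');

-- %%physloc%% returns the physical location of each row, so even
-- identical rows get distinct values.
SELECT %%physloc%% AS RowId, * FROM dbo.DupDemo;

-- Keep one row per duplicate group: delete any row for which an
-- identical row with a smaller physical location exists.
DELETE d1
FROM dbo.DupDemo AS d1
WHERE EXISTS (SELECT 1
              FROM dbo.DupDemo AS d2
              WHERE d2.Id = d1.Id
                AND d2.Name = d1.Name
                AND d2.%%physloc%% < d1.%%physloc%%);
```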

Take away: there are no duplicate rows in a table; each row can be identified by its row id. Please do leave your comments; we will discuss more about %%physloc%% in an upcoming post.

This is one of the coolest features I have seen while going through the new features of SQL Server 2016. When we analyze the performance of a query, we always seek the help of the actual execution plan. But the main disadvantage I have seen with the actual execution plan is that we have to wait until the query completes; if the query takes 10 minutes, we have to wait that long to see its details.

With the arrival of Live Query Statistics, Microsoft has overcome this issue. This new feature in SQL Server 2016 provides a real-time view of the execution plan while the query is running, showing which part of the plan is being executed at any point in time.

The below screenshot shows a preview of the query plan.


Note: the above picture is a merged one to show both the toolbar and the query execution; the toolbar is disabled while a query is running.

Whenever we need to test a scenario on a lower version of SQL Server, Microsoft provides an option to change the compatibility level of the database to achieve it.


Compatibility values and their respective versions


Once you change the compatibility level of a database, the features available to it are restricted to the level you set.

As we all know, LAG and TRY_CONVERT were introduced in SQL Server 2012; let us see what error SQL Server shows if we execute these statements under a lower compatibility level.
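A sketch of the experiment, with a hypothetical database name, run against a SQL Server 2012+ instance:

```sql
-- Hypothetical database name; assumes a SQL Server 2012+ instance.
ALTER DATABASE TestDb SET COMPATIBILITY_LEVEL = 100;  -- behave like SQL 2008

USE TestDb;

-- Under level 100, TRY_CONVERT fails with an error along the lines of
-- "'TRY_CONVERT' is not a recognized built-in function name."
SELECT TRY_CONVERT(INT, 'abc');

-- Restore the native level (110 = SQL Server 2012).
ALTER DATABASE TestDb SET COMPATIBILITY_LEVEL = 110;
```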


Ref :